
About

I am a Philosopher. I was working on Phenomenology, Metaphysics, and Ontology – on Heidegger, Husserl, and Kant, before turning to AI. AI cannot and should not be ignored.

I have found that AI researchers often overstate the achievements and the potential of AI. They mistake claims about how AI functions for substantial assessments of what AI is. Even if AI functions similarly to human beings, it does not follow that AI is human. There is, and should be, a limit to how “human” we want our AI to be. From an engineering point of view, this limit should be: how similar does AI have to be in order to function properly? There are many cases in which copying the mechanisms by which humans operate in an environment is the best way for AI to operate. But if the goal is functionality, there is no need for an identical copy.

What do we want our AIs to be? Tools? Upgraded versions of ourselves? Companions and friends? This is ultimately a question of what we want our future to look like.

As to my own perspective on AI…

John Haugeland, in his comments on Hubert Dreyfus’ famous book on AI, What Computers Can’t Do, stated the following, with which I can fully identify:

I started by proposing a return to Part III of [Hubert Dreyfus’ book] What Computers Can’t Do, attributing to it three principal theses: that human intelligence is essentially embodied; that intelligent bodies are essentially situated (embedded in the world); and that the relevant situation (world) is essentially human. And I suggested that these all come to the same thing. What they all come to, we can now see, is the radical idea that intelligence abides bodily in the world. If this is right – as I believe it is – and if science is ever to understand it, then the research agenda must expand considerably. Not only is symbolic reasoning too narrow, but so is any focus on internal representation at all. When cognitive science looks for its closest kin, they will not be formal logic and information processing, but neurobiology and anthropology.

John Haugeland, “Body and world: a review of What Computers Still Can’t Do: A Critique of Artificial Reason (Hubert L. Dreyfus)”, in: Artificial Intelligence 80 (1996), 119–128 (my own emphasis).

Embodied, situated AI is what we need if we want AI to function properly in human-like contexts. That does not mean that this kind of AI will ever be intelligent, conscious, sentient, or capable of understanding. It only means that this kind of AI would function most efficiently in human-like contexts. (Sorry for repeating myself.)

Haugeland and Dreyfus, of course, take their ideas from the phenomenological tradition and mainly from (the early) Heidegger. What it means that the relevant situations are essentially human can only be explained by going back to Heidegger’s core ideas of what the human being is.

There is a difference between building AI that is “just like” human beings and building AI that functions in a similar way. Not just AI researchers and scientists, but all people have the tendency to mistake concepts of function (Funktionsbegriffe) for concepts of essence (Wesensbegriffe). The idea that I’d like to develop in future texts is that these concepts of function are in themselves only formalized concepts of essence. What AI research needs, then, is a kind of self-reflectivity. AI needs philosophy.
