Hubert L. Dreyfus, "Why Heideggerian AI Failed and How Fixing It Would Require Making It More Heideggerian," Artificial Intelligence 171 (2007) 1137–1160

The debates between philosophy and AI have always been a rather personal matter for Dreyfus. Not that his philosophical legacy depended on it – he has been far more influential, and is more widely known, as a Heidegger scholar – but it seems that since the early sixties, as he describes the situation at the beginning of this article, there was always this fight between his department and the AI Lab / Computer Science department. "Fight" might be a bit too strong, but neither was it a neutral debate between calm and uninvested protagonists.
Dreyfus describes the decades-long debates, which always orbited around the central problem known as the "frame problem."

Dreyfus points out that AI research – having failed to solve this problem, and without acknowledging the failure – simply sidestepped it and turned instead to what they called "micro-worlds": miniatures of the actual world in which all possibly relevant facts of a situation could be set in advance. The claim was that the mechanisms used in these micro-worlds could be generalized, and that it was just a matter of time (and compute) until they would work in the real world (1139).
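As a toy illustration (mine, not Dreyfus's), a micro-world can be thought of as a small, closed set of facts fixed by the programmer in advance – everything that could possibly be relevant is already listed, which is exactly what the real world refuses to do. The blocks-world facts and the `can_pick_up` check below are invented for this sketch.

```python
# Hypothetical blocks-world micro-world: every possibly relevant fact is
# enumerated in advance by the programmer. (Toy sketch, not from the article.)

facts = {
    ("on", "A", "B"),       # block A sits on block B
    ("on", "B", "table"),
    ("clear", "A"),         # nothing is on top of A
}

def can_pick_up(block):
    # "Relevance" is settled beforehand: only the facts listed above exist,
    # so the check is trivial inside the micro-world.
    return ("clear", block) in facts

print(can_pick_up("A"))  # True
print(can_pick_up("B"))  # False: A is on B, and ("clear", "B") was never listed
```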
The problem, according to Dreyfus, is as much a frame problem as it is a commonsense-knowledge problem. He quotes Minsky, who said that AI "has been braindead since the early 70s" (1139). AI research, Dreyfus concludes, became a degenerating research program. We now speak of it, using Haugeland's term, as Good Old Fashioned AI (GOFAI). There are, however, sparks of something emerging out of that graveyard of multi-million-dollar research funds: Heideggerian AI and Heideggerian cognitive science – giving hope to the living-dead AI researchers (cf. 1139).
Heideggerian AI, Dreyfus claims (together with Wheeler in Reconstructing the Cognitive World), has become a new paradigm. See, for example, "Rodney Brooks' behaviorist approach at MIT, Phil Agre's pragmatist model, and Walter Freeman's neurodynamic model" (1139). What these approaches share is the critique of the Cartesian representational model and, on the positive side, the assumption that cognition must be embedded and embodied (1139; see also John Haugeland, "Mind Embodied and Embedded," in Having Thought: Essays in the Metaphysics of Mind (Cambridge, MA: Harvard University Press, 1998), 218).
Dreyfus then cites Winograd to describe the situation at MIT and how, under Dreyfus's influence, AI researchers there slowly began to take up Heideggerian ideas.
According to Dreyfus, this took effect, for example, in robotics, when Rodney Brooks developed a new approach in which a robot relies not only on formal representations of the world, but mainly on the direct input of its sensors (sensory feedback) (1140).
The problem, however, was still that these robots could not respond to new or changing situations. They did not learn and had no memory. The world of these robots was limited to what they could receive through their sensors. But the sensors did not solve the frame problem: by themselves, they did not structure the world according to the ever-changing relevance of the situation (1141).
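To make the contrast concrete, here is a minimal sketch (my own illustration, not code from Dreyfus or Brooks) of a purely reactive, sensor-driven control loop: behaviour is wired directly from sensor readings to motor commands, with no world model, no memory, and no mechanism for deciding which features of the situation currently matter. All names (read_sensors, drive, reactive_step) are invented for this sketch.

```python
# Hypothetical illustration of a purely reactive control loop.

def read_sensors():
    """Stub: return current distance readings (metres) from three rangefinders."""
    return {"left": 1.2, "front": 0.3, "right": 2.0}

def drive(turn, speed):
    """Stub: send a motor command; here we just print it."""
    print(f"turn={turn:+.1f} speed={speed:.1f}")

def reactive_step():
    s = read_sensors()
    # Fixed sensor-to-action rules: no internal representation of the world,
    # no memory of past steps, no learning.
    if s["front"] < 0.5:  # obstacle ahead -> turn toward the freer side
        drive(turn=+1.0 if s["left"] > s["right"] else -1.0, speed=0.1)
    else:                 # otherwise keep moving forward
        drive(turn=0.0, speed=0.5)

# The agent is only ever as "aware" as its current sensor frame: nothing here
# decides which facts are *relevant* when the situation changes, which is the
# frame problem Dreyfus says such robots leave untouched.
for _ in range(3):
    reactive_step()
```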

Dreyfus does not pass up the chance to make fun of his colleagues in the AI research labs. He quotes Dennett, who, at the end of the century, had high expectations for the MIT robot "Cog" – a project that then failed miserably. (The first and the last quote are from Dennett, here p. 1141.)

Even though giving up the Cartesian representational model allowed for some progress, the problem – something AI researchers like Dennett "didn't get" – was still that our coping with the world cannot be adequately understood merely as a direct response to sensory inputs. Brooks' "Empiricist model," his new robots that were able to function in restricted environments, still ran into trouble when facing changing situations and having to frame them according to relevance (1142).
Dreyfus gives a hint on page 1142 as to what was missing from AI research in the late 20th century: humans are not "imposing meaning on a meaningless given," nor are their brains "converting stimulus input into reflex responses" (1142). There is, in other words, more to the picture: input-output feedback systems are, by themselves, not enough. "Not enough," we have to clarify, means that this kind of AI does not yet have "intelligence, emotional interactions, long term stability and autonomy, or general robustness" (1142, citing Brooks). Dreyfus suggests that what is missing comes from the embodiment of the mind. Earlier, he had mentioned embeddedness as the second requirement for this kind of AI.
Chapter 4: Heideggerian AI, stage 2: Programming the ready-to-hand
We will continue from here as soon as we can.