Authors: Tom Froese, Tom Ziemke
In: Artificial Intelligence 173 (2009), 466-500
Open Archive, Accessible at: https://www.sciencedirect.com/science/article/pii/S0004370208002105

This was one of the first articles on Embodied AI / Enactive AI that I read. Initially, I thought this article might already be outdated, reading it in 2022, almost 15 years after its first draft was sent to the journal Artificial Intelligence. However, what I wanted was an overview and a first impression of the engagement with Dreyfus’ critique and with the current issues of engineering AI.
The authors, Froese and Ziemke, provided a detailed list of references that I will certainly use for further research on this topic. Here are some things I noticed in my non-expert first read of their paper:
What they call “current limitations of embodied AI” is explained in terms of “providing fuller models of natural embodied cognition”. It becomes clearer in the following pages that the authors follow the “understanding by building” principle: the limitations of embodied AI are not limitations per se, just limitations for this approach of building something in order to explain the underlying structures.
However, at the same time the authors claim to pursue the goal to “specify more precisely what actually constitutes […] a fully enactive AI. The aim of this paper is to provide some initial steps toward the development of such an understanding” (468). Let’s see how this unfolds.
It is interesting to see their engagement with Hubert Dreyfus’ (at that time recent) statements about the failure of Heideggerian or embodied AI. The main problem they call the problem of “grounding meaning”. We find an explanation when they quote Dreyfus: “the ‘big remaining problem’ is how to incorporate into current embodied AI an account of how we ‘directly pick up significance and improve our sensitivity to relevance’ since this ability ‘depends on our responding to what is significant for us'” (470). They also speak of the “problem of meaning” in this regard, asking how it is possible for an agent “to appropriately pick up relevance according to its situation” – referring to the famous “frame problem” (ibid.).
Here, I find, the authors do not respond directly to Dreyfus’ criticism; instead they do so indirectly, by giving a more moderate (harmless?) version of Dreyfus’ claim, based on their “understanding by building” principle: we only need to build an AI agent that is good enough for us to analyze and study, one that is “simple enough for us to actually construct and analyze” (471). The authors seem to hold the opinion that the purpose of building AI is to build models that can help us understand certain phenomena (471). What is not addressed, in my opinion, is the more fundamental question of whether “being responsive to meaning” is a necessary and sufficient condition of functioning embodied AI. Let’s see if this is also true for the remaining 26 (!) pages of this article.
Sidenote: In the following, the authors speak of “evolutionary algorithms” and “artificial evolution”. As far as I understand, this refers to population-based search inspired by natural selection (evaluation, selection, mutation over generations), as used in evolutionary robotics to evolve the parameters of simple controllers (often neural networks), rather than to machine learning in the sense of gradient-based training. That would make sense in this context.
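To make the sidenote concrete, here is a minimal sketch of what such an evolutionary algorithm might look like. The task, genome size, and hyperparameters are my own illustrative assumptions, not anything taken from the paper; the point is only the loop of evaluation, selection, and mutation over a population of candidate controllers.

```python
import random

# Minimal evolutionary-algorithm sketch: evolve the weights of a tiny
# reactive "controller" so that its output approximates a toy target
# behavior. Genome size, task, and hyperparameters are illustrative
# assumptions, not the setup discussed by Froese & Ziemke.

GENOME_SIZE = 8
POPULATION_SIZE = 30
GENERATIONS = 100
MUTATION_STD = 0.1

def controller_output(weights, sensor_input):
    """Trivial controller: weighted sum of the sensor input, clipped to [-1, 1]."""
    activation = sum(w * sensor_input for w in weights)
    return max(-1.0, min(1.0, activation))

def fitness(weights):
    """Toy fitness: reward controllers whose output approximates -input
    (a simple 'turn away from the stimulus' behavior). Higher is better."""
    error = 0.0
    for i in range(10):
        sensor = i / 10.0
        target = -sensor
        error += abs(controller_output(weights, sensor) - target)
    return -error

def mutate(weights):
    """Gaussian mutation of every weight in the genome."""
    return [w + random.gauss(0.0, MUTATION_STD) for w in weights]

def evolve():
    # Start from a random population of genomes (weight vectors).
    population = [[random.uniform(-1, 1) for _ in range(GENOME_SIZE)]
                  for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:POPULATION_SIZE // 2]    # truncation selection
        offspring = [mutate(p) for p in parents]   # asexual reproduction
        population = parents + offspring
    best = max(population, key=fitness)
    return best, fitness(best)

if __name__ == "__main__":
    best_genome, best_fitness = evolve()
    print("best fitness:", round(best_fitness, 3))
```

Note that the experimenter writes the fitness function; the evolved agent never chooses what counts as success. This detail becomes relevant below, when the authors discuss whose goals the agent’s goals really are.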
The claim is, furthermore, that the great advantage of embodied AI over GOFAI (good old-fashioned AI) is that it can “make sense” of the world in terms of sensory-motor feedback loops. The remaining questions are whether this approach is good enough: 1. to solve the Dreyfus problem of “grounding meaning”, and 2. whether we need to solve (1) at all in order to have “intelligent”, that is: fully functioning, embodied AI. (At this point, it is still not exactly clear why AI needs to be responsive to meaning in order to function properly. In a footnote, they mention “high-level cognitive tasks”, without specifying what is meant by that (471).) What I want to know is: Why does embodied AI need to be able to “engage in purposeful behavior”? Why does it need to “care” about its environment? Again, if the whole point of building AI is not to create a model identical to human intelligence, then why, from an engineering standpoint, is it necessary to integrate “care”, “intention”, and “purpose”? In Dreyfus’ work, his philosophical considerations always overlapped with these more fundamental engineering questions. We ask, again: what is this “problem of meaning” other than that AI does not experience meaning?
There is a very good (in my opinion) paragraph about the meaning problem that explains it in terms of meaning emerging “from the inside” versus being attributed “from the outside”.

This might be the link between the “meaning problem” and the aforementioned “frame problem”. In order to navigate a real-world situation, it might not be enough to have sensory-motor feedback loops. Navigation is a kind of orientation that implies a purposeful structuring of the environment into the meaningful (relevant) and the not-meaningful (irrelevant). Navigation then becomes the task of linking this sensory-motor apparatus with the affordances of the situation. It becomes apparent, I think, that this kind of “structuring” can either happen externally, when the machine is given a task – then this task becomes the principle of the structuring; or it can emerge internally, if the agent assigns a task to him-, her-, or itself. In phenomenology, this has been studied and analyzed as the relation between our motivations (incl. feelings, knowledge, language, and even “instincts”) and the environment. Obviously, this kind of self-assignment of tasks, which is constitutive of our autonomy, is not yet (maybe never will be?) part of any AI.
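To illustrate the “external structuring” case, here is a toy sketch of a sensory-motor loop in which what counts as relevant is fixed entirely from the outside by a designer-supplied scoring function. All names and numbers are my own illustrative assumptions, not anything from the paper; the agent merely closes the loop between sensing and acting, while the task that divides its world into relevant and irrelevant is imposed externally.

```python
import random

# Toy sensorimotor loop: an agent on a 1-D line senses the distance to a
# "light" source and moves toward it. What counts as relevant (being near
# the light) is not the agent's own concern; it is imposed from the outside
# by the designer's scoring function. Purely illustrative assumptions.

LIGHT_POSITION = 7.0

def sense(agent_position):
    """Sensor: signed distance to the light source."""
    return LIGHT_POSITION - agent_position

def act(sensor_value):
    """Motor response: a simple reactive policy, step toward the stimulus."""
    return 0.5 if sensor_value > 0 else -0.5

def designer_score(agent_position):
    """Externally imposed 'relevance': the designer decides that proximity
    to the light is what matters. The agent itself has no stake in this."""
    return -abs(LIGHT_POSITION - agent_position)

def run_loop(steps=20):
    position = random.uniform(0.0, 10.0)
    for _ in range(steps):
        sensor = sense(position)    # world -> sensor
        motor = act(sensor)         # sensor -> motor
        position += motor           # motor -> world (the loop closes)
    return position, designer_score(position)

if __name__ == "__main__":
    final_position, score = run_loop()
    print(f"final position: {final_position:.2f}, designer score: {score:.2f}")
```

Nothing in this loop would change if the scoring function were swapped for its opposite; the “task” that structures the environment into relevant and irrelevant lives entirely in the designer’s description, not in the agent. That, as I read it, is exactly the gap between external attribution and internal emergence of meaning.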
The authors make clear that this distinction between external and internal tasks leads to further problems: “We are thus faced with the problem of determining what kind of embodiment is necessary so that we can reasonably say that there is such a concern for the artificial agent just like there is for us and other living beings. What kind of body is required so that we can say that the agent’s goals are genuinely its own?” (472). So, for the authors, the problem of “autonomy” is essentially the problem of “what kind of embodiment”.
Interrupted by other tasks; will be continued soon!