
Ned Block discusses three common assumptions in AI research regarding AI consciousness:
1) computational functionalism: “implementing computations of a certain kind is necessary and sufficient for consciousness…”
This becomes relevant because we will soon have AI systems that resemble us computationally but not in their underlying biological material.
Block is aware of the “behavioristic aspect” that shapes our judgments and assessments of AI; but he is more interested in the conceptual framework, that is, he wants to probe the arguments for and against “computational functionalism”. He calls his own hypothesis the “meat hypothesis”: that our consciousness depends directly on subcomputational (= biological) properties (even if it also depends on computational properties).
In order to clarify this, Block tells us it is useful to look at these distinctions in terms of causal roles and their realizers. His point seems to be that causal roles can have different realizers, and that these realizers are the “material”/“biological” underpinnings. Block points out that we might soon have AI systems that can copy the functional roles of humans (in this case: consciousness) but with very different realizers (silicon-based, neural networks, etc.). At the same time, we have animals that share the same “realizers” but cannot display specific human cognitive functions.
But Block clarifies that by “realizers” he means more than just material composition, which is why he chooses the term “subcomputationalism”. He does not want to (or need to) specify what kind of material is needed for Xyz; instead, by distinguishing between realizers and functional roles, he can point us to our ignorance regarding what is crucial for our consciousness: meat (biology) or simply the computation that is going on.
“And in those terms, we do not know from our own consciousness whether it is the meat or the role that is crucial for our consciousness. […] The role and the realizer hypotheses are equally plausible, but most major theories of consciousness assume functionalism, that is, the role hypothesis (Box 1). The main reason is that it is easier to study what consciousness does than what it is – that is why David Chalmers called the former the ‘easy problems’ and the latter ‘the hard problem’.” (p. 4)
What I like is that Block points out that the basic assumption for testing both hypotheses seems to be functionalism, since even when we check “what aspects of our biology are important to consciousness, we focus on those that carry out the information processing roles that we think consciousness has/is” (4). As Block correctly points out: “How can this practice be understood without supposing that the information processing roles are what make the biological state conscious?”
This is not a minor issue: it assumes that the contribution of the “meat part” must itself be computational, that is, that it must consist in information processing, because that is what consciousness is taken to be about.
Block therefore continues:
“From the point of view of the meat hypothesis, functionalism has it backwards: it is biologically grounded consciousness that is in part responsible for the information processing roles. The functionalists are saying that what consciousness is and what it does amount to the same thing. While the meat hypothesis acknowledges that the information processing role is important, specifically for the functions consciousness performs, it maintains that consciousness itself is something different from the role, based in biology. It may appear simpler to collapse what consciousness does and what it is into one thing, but that simplicity is illusory if that role can be accomplished by something other than biology (which may not be possible, as Rosa Cao points out [12]) or if consciousness might not have that role in some circumstances.”
Block seems to address this in the next chapter, “But are realizers not computational too?”, but before that he speaks about what Chalmers calls the “hard problem of consciousness”, describing it as an “explanatory gap” between brain activation and experience. Block thinks that functionalism can only solve the Hard Problem if “you think that all there is to explain about conscious experience is its functional role and not ‘what it is like’ to experience it” (5).
