In modern cognitive science and philosophy of mind, it has become common to regard the human brain as analogous to, or even identical with, a digital computer. This view has a certain prima facie plausibility. Our mental states can seem to be nothing more than functional, mechanical states of cause and effect, input and output. Moreover, our minds seem to exhibit “computational” and algorithmic behavior that could be compared to that of highly complex digital computers. On this view, known as computationalism, the brain is simply a piece of hardware, and the mind acts as a kind of computer software running on it. Naturalist Richard Carrier is one adherent of such a theory (emphasis mine): “Cognitive science has established that the brain is a computer that constructs and runs virtual models.”
Although computationalism is a widely held view, though not as widely held as Carrier insinuates, I maintain that it faces major defeaters. The next few posts will deal with what I take to be the insurmountable problems of computationalism, and with why no potential advance in cognitive science is likely to overcome them. That is to say, the problems with computationalism are metaphysical problems, and therefore, I will argue, no findings in cognitive science will ever resolve them.
The first objection to computationalism is the absolute dependence of computers on intentional consciousness for their meaning. David Bentley Hart, in his book The Experience of God, explains:
A computer does not even really compute. We compute, using it as a tool. We can set a program in motion to calculate the square root of pi, but the stream of digits that will appear on the screen will have mathematical content only because of our intentions, and because we—not the computer—are running algorithms. The computer, in itself, as an object or a series of physical events, does not contain or produce any symbols at all; its operations are not determined by any semantic content but only by binary sequences that mean nothing in themselves.
What Hart is articulating is that computers are not computers at all apart from our own intentional consciousness. The number “2” that appears on a calculator only represents the mathematical concept two because we have invested it with that meaning. The symbol on the calculator has no intrinsic meaning in and of itself. Were all humans to vanish from the face of the earth, the symbol “2” would be rendered nothing more than a meaningless pixelated squiggle.
This point holds even with regard to programs and algorithms run on computers. When I type “1”, “+”, “1”, “=”, and the computer yields the symbol “2”, one might interpret this as the computer “thinking” in some sense. For hasn’t the computer itself added one and one to yield the correct answer of two? Well, yes, the computer has yielded a correct answer, but this was not done by any “thinking” on the part of the computer. For if we instead decided to give the symbol “2” the meaning “waffle”, the computer would still be running the same mechanical program, and yet the semantics behind the program would be incoherent! The original program of adding one and one to yield two only works because we have invested those symbols, and therefore the program itself, with the correct meanings; but, without our own intentional consciousness grounding the meaning, the program, and the computer itself, are literally meaningless. Hart articulates once more that “[s]oftware no more ‘thinks’ than a minute hand knows the time or the printed word ‘pelican’ knows what a pelican is.” A computer only computes because we give it that role; anything could in principle be used as a computational device.

So, what exactly is the significance of the above objection? Philosopher Edward Feser articulates:
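The “waffle” point can be made concrete with a toy sketch (a hypothetical illustration of my own, not anything drawn from Hart or Feser): the machine performs exactly one physical transition, and which “answer” that transition counts as depends entirely on the interpretation map we, the observers, lay over its output.

```python
# Toy illustration: the same mechanical process, read under two
# observer-imposed symbol maps. All names here are hypothetical.

def adder_circuit(a_bits: str, b_bits: str) -> str:
    """Mechanically combine two bit patterns; no 'meaning' involved."""
    return bin(int(a_bits, 2) + int(b_bits, 2))[2:]

raw = adder_circuit("1", "1")  # the machine just yields the pattern "10"

# The semantics live in the mapping we choose, not in the machine:
arithmetic_reading = {"10": "two"}
arbitrary_reading = {"10": "waffle"}

print(arithmetic_reading[raw])  # prints "two"    (the usual convention)
print(arbitrary_reading[raw])   # prints "waffle" (same physics, new meaning)
```

Nothing in the circuit changed between the two readings; only the conventions of the interpreters did, which is precisely the observer-relativity the argument trades on.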
If computation is observer-relative, then that means that its existence presupposes the existence of observers, and thus the existence of minds; so obviously, it cannot be appealed to in order to explain observers or minds themselves. […] [I]t is computation that must get explained in terms of the human mind, not the human mind in terms of computation.
What Feser is demonstrating is that all computation is dependent on intentional consciousness. Therefore computationalism, which attempts to explain the mind, and thereby consciousness, cannot provide the ontological foundation for consciousness, since computation of any kind already requires consciousness in order to count as computation in the first place. Computationalism is trying to account for the very thing it presupposes! Has a journey ever seemed so wrong-headed?
In the next post in this series it will be demonstrated that computationalism fails to account for human reason itself.