I’m continuing my series on the pitfalls and
shortcomings of computationalism—the view that the mind is a kind of computer
software implemented on the hardware of the brain. Last post we saw that computers have no semantic significance in and of themselves; they receive such significance only in the presence of intentional consciousness. Any analysis of the semantic content of computers leads right back to our minds, where we began.
Yet, this is not even the worst objection to be
hurled at computationalism. I maintain that computationalism suffers from an
even greater handicap than the inability to explain the mind: the denial of human rationality itself. That
is to say, if the computationalist theory of the mind is correct, then there is
no ontological ground for human rationality. Let me explain.
Remember, from the previous post, that computers are
simply big hunks of plastic and metal with electrical currents running through
them. They are by themselves devoid of any intrinsic meaning and significance.
Furthermore, recall that when one types “1 + 1 =” on a computer and receives the symbol “2”, this is only because we have endowed the symbols “1”, “+”, “=”, and “2” with their meanings; moreover, the program runs correctly only because engineers have programmed it to do so. We could just as easily have programmed it to yield a different answer. But this
means that computers don’t really “compute” at all; it is we who compute, only
utilizing the computer as a tool. This, in turn, leads us to the conclusion that
computers don’t really “think” either. At least not in any sense of what we
mean by rational thought—the act or process of forming conclusions, judgments,
or inferences based on previous premises or propositions.
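To make the earlier point concrete, that it is we who compute while the machine merely follows whatever rules we gave it, here is a minimal sketch in Python. The rule tables are my own hypothetical illustration, not how any real calculator is built:

```python
# "Addition" as nothing more than a lookup over uninterpreted symbols.
# The machine follows whatever rule table it is given; nothing about the
# symbols themselves forces one table rather than another.

RULES = {("1", "+", "1"): "2"}        # the convention we happened to adopt
OTHER_RULES = {("1", "+", "1"): "3"}  # an equally executable convention

def run(rules, symbols):
    """Return whatever output symbol the rule table pairs with the input."""
    return rules[symbols]

print(run(RULES, ("1", "+", "1")))        # -> 2
print(run(OTHER_RULES, ("1", "+", "1")))  # -> 3
```

Both tables execute equally well; only our interpretation makes the first “correct” and the second “wrong”.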
Yet computers certainly give the illusion of thought. They have been engineered to beat chess grandmasters, carry out difficult mathematical computations, and at least behave as if they were conscious. In fact the illusion is so
great that naturalist Richard Carrier states that “there is nothing a brain
does that a mindless machine like a computer can’t do even today, in fact or in
principle, except engage in the very process of recognizing itself”. Carrier
goes so far as to claim that there is no
illusion, and that computers do in fact think and reason just like humans. However, I maintain that once we shed some light on what is actually going on when computers “think”, we will see that no thinking is going on at all. An example given last post suffices to demonstrate this.
Recall that a calculator would still yield “2” after the input “1 + 1 =” even if we changed the meaning of “2” to “waffle”, so that the semantic content would now be equivalent to “one plus one equals waffle”. Surely this is an incoherent logical sequence. Yet the calculator, devoid of intrinsic meaning, is none the wiser: the change in semantic content has no effect on its operation or computation. We could even change the meaning of “1 + 1 =” to “please display the message ‘waffle’”, and the calculator would still display the same seemingly “logical” sequence of symbols. I owe these examples to philosopher Edward Feser, who expounds them in his books Philosophy of Mind and The Last Superstition.
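The sketch from above can be extended to make the point vivid. Again, this is a hypothetical illustration rather than real calculator internals; the point is that the reinterpretation lives entirely on our side:

```python
# Reinterpret "2" as "waffle". The new meaning is modeled as a separate
# dictionary that the rule table never consults, so the calculator's
# operation is untouched by the semantic change.

RULES = {("1", "+", "1"): "2"}
MEANINGS = {"1": "one", "+": "plus", "=": "equals", "2": "waffle"}

output = RULES[("1", "+", "1")]
print(output)  # still "2": the computation is unaffected
print(" ".join(MEANINGS[s] for s in ("1", "+", "1", "=")), MEANINGS[output])
# -> "one plus one equals waffle": incoherent, but only to us
```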
The above example demonstrates that a calculator fails to think in any sense of the word. If we change the semantic content of the symbols in a logical sequence so that the sequence loses its coherence, then any medium capable of rational thought should adjust its output accordingly. But a computer will not do this (unless reprogrammed to account for the semantic change), because its causal efficacy is not based on semantic content; rather, it is based purely on electrical properties governed by physical laws. The meaning of a particular state of a computer seems to play no role whatsoever in yielding any subsequent state. Therefore, the reason a computer transitions from one symbol to another has nothing to do with the meanings of those symbols; it transitions from one symbol, or state, to another only because of the electrical properties its programmers have arranged it to exploit. Thus, everything that makes
thought rational—the act of proceeding from one premise to another based on
the semantic content of each premise in order to arrive at a logical inference
coherent with each previous premise—is
completely absent from the behavior of a computer.
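What such a transition looks like can be sketched as a toy finite-state machine, once more a hypothetical illustration of the point rather than real hardware:

```python
# A finite-state machine for the keystrokes "1 + 1 =". Transitions are
# keyed on the bare identity of states and input symbols, never on any
# meaning attached to them; renaming every state to gibberish would yield
# a machine with exactly the same behavior.

TRANSITIONS = {
    ("START", "1"): "GOT_1",
    ("GOT_1", "+"): "GOT_PLUS",
    ("GOT_PLUS", "1"): "GOT_SUM",
    ("GOT_SUM", "="): "SHOW_2",  # "display 2" is just another state label
}

def step(state, symbol):
    return TRANSITIONS[(state, symbol)]

state = "START"
for symbol in ["1", "+", "1", "="]:
    state = step(state, symbol)
print(state)  # -> SHOW_2, reached by rule-following alone, not by meaning
```

The machine lands in SHOW_2 because of the table’s structure, not because it grasps that one and one make two.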
So, if the brain is simply a type of computer,
different in degree but not in kind, then there is no ontological foundation
for human rationality. Computationalism is, in a sense, self-defeating: the computationalist upholds his theory because he believes it rational to do so, yet if computationalism is true then the brain cannot exhibit rationality; therefore any conclusion reached through logical inference, including computationalism itself, is non-rational and hence unjustified.
Let it be emphasized, as a quick digression, that the above objection is not effective against computational theories of mind alone. Rather, I maintain, it deals a fatal blow to any naturalistic framework that attempts to ground human cognition in purely physical, mechanical properties. Therefore, even if a naturalist is not a computationalist, he still faces the problem of explaining rational thought in terms of non-rational causes, a difference in quality, not quantity.
In the next post we will examine John Searle’s
Chinese Room Argument to demonstrate that although a computer can behave like a human brain, the ontology behind the former differs wholly in kind from that of the latter.