Let us return to our investigation of the theory of
computationalism. I feel that our next topic of contention is best introduced
by returning to naturalist Richard Carrier’s quote regarding the success of
computationalism: “there is nothing a brain does that a mindless machine like a
computer can’t do even today, in fact or in principle, except engage in the
very process of recognizing itself.” Carrier’s assertion is, to a point, well
taken. There are many things human beings can do that computers are (seemingly)
also capable of (e.g., playing chess, carrying on a conversation, etc.).
However, this immediately raises the question of whether computers actually
carry out these processes, or in principle ever could, in the same
manner a human brain does. Does a computer really “play” chess, or does it just seem to? Does a computer carry on a
conversation, or does it just seem to?
(Does anybody really believe they are having a conversation with Siri on their
iPhones?) The foremost defender of the view that a computer does not function similarly to human
cognition is the naturalist philosopher John Searle. His thought experiment
that attempts to demonstrate this assertion is known as the Chinese Room
Argument.
For those unfamiliar with the Chinese Room Argument,
it goes something like this. Imagine a monolingual English-speaking man in a
locked room. The room has a small “in” slot and an “out” slot. Chinese symbols
are slipped through the “in” slot. The man in the room has a rulebook that tells
him, in English, which Chinese symbols to send through the “out” slot based on
the symbols received. The rulebook does not tell him the meaning of the
symbols; it only tells him that if he receives a symbol that looks like “such
and such”, he should respond with a certain other symbol. To
native Chinese speakers it would seem that the man inside the room speaks and
understands Chinese. But, obviously, the
man speaks no Chinese whatsoever.
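The rulebook procedure can be sketched as a bare lookup table. This is a deliberately toy illustration: the symbols, the canned replies, and the `room` function are invented for this sketch, not taken from Searle.

```python
# A toy "Chinese Room": pure symbol-shuffling with no semantics anywhere.
# The rulebook maps an incoming string of symbols to an outgoing one;
# nothing in the program represents what any symbol *means*.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，说得很好。",  # "Do you speak Chinese?" -> "Yes, very well."
}

def room(symbols_in: str) -> str:
    """Return whatever reply the rulebook dictates for the input symbols."""
    # .get() with a default plays the role of "if you don't recognize the
    # shape, send back this symbol" -- here, "Please say that again."
    return RULEBOOK.get(symbols_in, "请再说一遍。")

print(room("你好吗？"))  # the room "answers" without understanding a word
```

To a Chinese speaker at the “out” slot the replies look competent, yet the program consults only the shapes of the input, which is precisely the point of the thought experiment.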
So what exactly does the Chinese Room demonstrate
about the nature of digital computers? Well, in the thought experiment the man
is doing exactly what a computer does—though admittedly in a much simpler
form—namely, manipulating symbols in accordance with an algorithm
in which only syntax is emphasized. But the man in the Chinese Room does not
understand the semantics behind the symbols he is manipulating; similarly, if a
computer is simply programmed to manipulate symbols based on pure syntax then
how can a computer be said to understand the meaning behind said symbols? We
have once again run into a problem of semantics—something that humans
intrinsically possess and produce, but that computers do not. Edward Feser
summarizes the consequences: “running a program, of whatever level of
complexity, cannot suffice for understanding or intelligence; for if it did
suffice, then [the man in the Chinese Room] would, simply by virtue of
‘running’ the Chinese language program, have understood the language.”
So, once again, we see that digital computers
differ in kind from the human brain. A computer runs
syntactically defined programs and algorithms, yet does not develop any type of
semantic understanding of the symbols it’s manipulating. In contrast, the human
mind can run syntactically defined programs and algorithms, yet it maintains,
throughout, a complete semantic comprehension—and much of this semantic
comprehension is antecedent to any syntactical manipulation!
Let us return to Carrier’s comment above regarding
the “abilities” of computers. While a computer can run programs that simulate
and mimic human behavior, this does not mean that the ontology behind the
computer is the same as that of a human brain. A computer can simulate the moves in a
game of chess, but it is not “playing” in any relevant sense of the word. A
computer can simulate a conversation with a person, but it is not really
“conversing” in any relevant sense of the word. A computer doesn’t understand
what a pawn does, nor does it understand what a person is saying to it. It is
only programmed to give a certain range (sometimes thousands) of syntactical
outputs based on a certain range of syntactical inputs. This is not
intelligence; it is a pure mechanical process of manipulation.
Not only do computers not think or reason, as we
have seen, in any relevant sense, but they don’t really show any signs of
intelligence at all. The very phrase “Artificial Intelligence” is
nothing more than a misnomer. University of Oxford philosopher Luciano
Floridi states that, “we have no intelligence whatsoever to speak in terms of
AI today as you would expect it from a cognitive science perspective.” I
maintain that, in principle, we never will—at least not so long as computers
differ from the man in the Chinese Room only in degree.
In the next post we will examine the question
regarding whether or not computers exhibit intentionality—obviously one of the
most important features of human cognition.