
Monday, March 3, 2014

Is the brain a computer? Part IV


Let us conclude our series on the theory of computationalism. The last few posts (parts I, II, and III) have, at least to my mind, hurled some pretty devastating objections at the view that our minds are simply a type of software program run on the hardware of the brain—different in degree, but not in kind, from a digital computer. These objections have demonstrated that computers, even in principle, are unable to produce the defining characteristics of human mental experience. Yet there is still one more aspect of human cognition that computationalism falls entirely short of accounting for: intentionality.

In the philosophy of mind, intentionality refers to the capacity of something to be about, refer to, or represent something beyond itself. For instance, my writing this post exhibits intentionality in a multitude of ways: I am thinking about the post and about the words I type; I am referring to many concepts beyond myself (e.g., intentionality); I have a desire to write this post; and so on. Seeing as all cognition consists in a subject entertaining the presence of an object, and therefore in the subject having thoughts about said object, it should be apparent that intentionality is present in every act of cognition. In fact, intentionality is so intimately bound up with the mental that Franz Brentano—the philosopher who pioneered the treatment of intentionality as a central philosophical topic—labeled it the “mark of the mental”. Now, if intentionality is such a crucial aspect of what constitutes the mental, then surely any worthwhile theory in the philosophy of mind should be able, at the very least, to account for it. So the question becomes whether or not computationalism can account for “the mark of the mental” (i.e., intentionality).

It might seem that computationalism can indeed account for intentionality, for computers seem to exhibit all kinds of it. A computer might generate a picture of a lake, and surely the computer is then representing something beyond itself, namely, a lake. Or a computer could produce a game of chess, and surely the computer is now about something, namely, chess. We could think of hundreds more examples in which a computer seems to demonstrate the marks of intentionality. But do these instances really exhibit intentionality on the part of the computer? No, they do not; at least, not the kind of intentionality that the mental exhibits.

For instance, picture a piece of paper with the word “bunny” written on it. We obviously know what the paper is referring to, namely, a bunny. Yet does the paper really have intentionality as we would characterize it? Surely the paper refers to something beyond itself, which is exactly what constitutes the intentional. However, I maintain that, upon closer examination, no such intentionality is really going on. To illuminate this, imagine that all humans on the face of the earth were suddenly to go extinct. Would the paper still refer to a bunny? No—the word “bunny” on the paper is now just a meaningless set of ink blots. “Bunny” is a term we humans invented to represent an animal. It is we who grounded the term “bunny” with its meaning and, therefore, its intentionality. Without human minds, the term “bunny” is utterly vacuous with regard to meaning.

However, it is still true that while the human mind grounds the semantics of the term “bunny”, the term does in fact refer beyond itself. But this intentionality holds only insofar as it derives its meaning from a mind—again, the term is meaningless apart from a mind that imparts such meaning to it. The intentionality is therefore not intrinsic to the paper or the term. What we have here is what John Searle calls “derived intentionality”. And if we look around the world at inanimate objects that seem to exhibit intentionality, we will see that all such intentionality is derived. The Mona Lisa represents the woman it depicts only because the painter intended it to. The numeral “2” represents the mathematical concept of two only because we endowed it with that semantics. Thus, any intentionality not constitutive of a mind is merely derived intentionality; only the mind has intrinsic intentionality.

The ramifications for computationalism should now be obvious. Computers exhibit only derived intentionality; any intentionality present in a computer is imparted to it by its programmers and users. The only reason a computer produces a game of chess is that programmers have written “produce a game of chess” into its programs; moreover, a game of chess is a game of chess only relative to chess players. But this poses an insurmountable problem for computationalism. If computers can exhibit only derived intentionality, then they cannot produce any kind of mind, for minds, by their very nature, exhibit intrinsic intentionality. A mind would already have to exist for a computer even to appear to produce one. The computationalist, once again, has to presuppose the very thing he is trying to account for! Oh, what a tangled web we weave.

Once again, we see that computationalism is unable to account for the very characteristics that make a mind what it is, and therefore it is an unsuccessful theory in the philosophy of mind.

Lastly, as I similarly argued in the post on rationality, the dead end that computationalism runs into when trying to ground intentionality is no different from the dead end that any naturalist account of the mind runs into. If the physical can exhibit only derived intentionality, then it cannot account for intrinsic intentionality, and therefore it cannot account for the existence of the mind.

Wednesday, February 19, 2014

Is the brain a computer? Part III


Let us return to our investigation of the theory of computationalism. I feel our next topic of contention is best introduced by returning to the naturalist Richard Carrier's quote regarding the success of computationalism: “there is nothing a brain does that a mindless machine like a computer can’t do even today, in fact or in principle, except engage in the very process of recognizing itself.” Carrier's point is well taken. There are many things human beings can do that computers are (seemingly) also capable of (e.g., playing chess, carrying on a conversation, etc.).

However, the question immediately arises whether computers actually carry out these processes—or in principle ever could—in the same manner a human brain does. Does a computer really “play” chess, or does it just seem to? Does a computer carry on a conversation, or does it just seem to? (Does anybody really believe they are having a conversation with Siri on their iPhone?) The foremost defender of the view that a computer does not function the way human cognition does is the naturalist philosopher John Searle. His thought experiment intended to demonstrate this is known as the Chinese Room Argument.

For those unfamiliar with the Chinese Room Argument, it goes something like this. Imagine a monolingual English-speaking man in a locked room. The room has a small “in” slot and an “out” slot. Through the “in” slot are slipped cards bearing Chinese symbols. The man has a rulebook that tells him, in English, which Chinese symbols to send through the “out” slot based on the symbols he receives. The rulebook does not tell him the meaning of any symbol; it only tells him that if he receives a symbol that looks like such-and-such, he should respond with a different symbol that looks like such-and-such. To native Chinese speakers outside the room, it would seem that the man inside speaks and understands Chinese. But, obviously, the man understands no Chinese whatsoever.
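
To make the purely syntactic character of the rulebook concrete, here is a minimal sketch in Python. The particular symbols and canned replies are invented for illustration; they simply stand in for whatever pairings the rulebook happens to contain. The point is that nothing in the procedure ever consults the meaning of a symbol, only its shape.

# A hypothetical "rulebook": incoming strings of Chinese characters are paired
# with outgoing strings purely by pattern matching, never by meaning.
rulebook = {
    "你好吗": "我很好",
    "你叫什么名字": "我叫小明",
}

def chinese_room(symbols_in):
    # The "man in the room" matches the incoming shapes against the book and
    # hands back whatever they are paired with; unmatched input gets a default
    # squiggle. At no point is any meaning consulted or required.
    return rulebook.get(symbols_in, "对不起")

print(chinese_room("你好吗"))  # looks like fluent Chinese from outside the room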

So what exactly does the Chinese Room demonstrate about the nature of digital computers? In the thought experiment, the man is doing exactly what a computer does—though admittedly in a far simpler form—namely, manipulating symbols in accordance with an algorithm defined purely over syntax. But the man in the Chinese Room does not understand the semantics behind the symbols he is manipulating; likewise, if a computer is simply programmed to manipulate symbols on the basis of syntax alone, how can it be said to understand the meaning behind those symbols? We have once again run into a problem of semantics—something that humans intrinsically possess and produce, but that computers do not. Edward Feser summarizes the consequences: running a program, of whatever level of complexity, cannot suffice for understanding or intelligence; for if it did suffice, then [the man in the Chinese Room] would, simply by virtue of “running” the Chinese language program, have understood the language.

So, once again, we see that digital computers differ in kind from the human brain. A computer runs syntactically defined programs and algorithms, yet develops no semantic understanding of the symbols it is manipulating. In contrast, the human mind can run syntactically defined programs and algorithms while maintaining, throughout, a complete semantic comprehension—and much of this semantic comprehension is antecedent to any syntactical manipulation!

Let us return to Carrier's comment above regarding the “abilities” of computers. While a computer can run programs that simulate and mimic human behavior, this does not mean that the ontology behind the computer is the same as that of a human brain. A computer can simulate the moves in a game of chess, but it is not “playing” in any relevant sense of the word. A computer can simulate a conversation with a person, but it is not really “conversing” in any relevant sense of the word. A computer does not understand what a pawn does, nor does it understand what a person is saying to it. It is only programmed to give a certain range (sometimes thousands) of syntactical outputs based on a certain range of syntactical inputs. This is not intelligence; it is a purely mechanical process of symbol manipulation.

Not only do computers not think or reason in any relevant sense, as we have seen, but they do not really show any signs of intelligence at all. The very name “Artificial Intelligence” is nothing more than a misnomer. The University of Oxford philosopher Luciano Floridi states that “we have no intelligence whatsoever to speak in terms of AI today as you would expect it from a cognitive science perspective.” I maintain that, in principle, we never will—at least not so long as computers differ from the man in the Chinese Room only in degree.

In the next post we will examine the question regarding whether or not computers exhibit intentionality—obviously one of the most important features of human cognition.

Wednesday, February 12, 2014

Is the brain a computer? Part II


I'm continuing my series on the pitfalls and shortcomings of computationalism—the view that the mind is a kind of computer software implemented on the hardware of the brain. In the last post we saw that computers have no semantic significance in and of themselves; they receive such significance only in the presence of intentional consciousness. Any analysis of the semantic content of computers leads right back to our own minds, where we began.

Yet, this is not even the worst objection to be hurled at computationalism. I maintain that computationalism suffers from an even greater handicap than the inability to explain the mind: the denial of human rationality itself. That is to say, if the computationalist theory of the mind is correct, then there is no ontological ground for human rationality. Let me explain.

Remember, from the previous post, that computers are simply big hunks of plastic and metal with electrical currents running through them. In themselves they are devoid of any intrinsic meaning or significance. Furthermore, recall that when one types “1 + 1 =” on a computer and receives the symbol “2”, this happens only because we have endowed the symbols “1”, “+”, “=”, and “2” with their meanings; moreover, the program runs correctly only because engineers have programmed it to do so—we could have programmed it to yield a different answer. But this means that computers don't really “compute” at all; it is we who compute, using the computer as a tool. And this leads to the conclusion that computers don't really “think” either—at least not in the sense of rational thought: the act or process of forming conclusions, judgments, or inferences from prior premises or propositions.

Yet computers certainly give the illusion of thought. Computers have been built sophisticated enough to beat chess masters, carry out difficult mathematical computations, and at least behave as if they were conscious. In fact, the illusion is so great that the naturalist Richard Carrier states that “there is nothing a brain does that a mindless machine like a computer can’t do even today, in fact or in principle, except engage in the very process of recognizing itself”. Carrier goes so far as to claim that there is no illusion, and that computers do in fact think and reason just as humans do. However, I maintain that once we shed some light on what exactly is going on when computers “think”, we will see that there is no thinking going on at all; and I believe we need only return to an example from the last post to demonstrate this.

Recall that a calculator would still yield “2” after the input “1 + 1 =” even if we changed the meaning of “2” to mean “waffle”, so that the semantic content would now be equivalent to “one plus one equals waffle”. Surely this is an incoherent logical sequence. Yet the calculator, devoid of intrinsic meaning, is none the wiser. The change in semantic content has no effect on the operation and computation of the calculator. We could even change the meaning of “1 + 1 =” to “please display the word waffle”, and the calculator would still continue to display this seemingly “logical” sequence of symbols. (I owe these examples to the philosopher Edward Feser, who develops them in his books Philosophy of Mind and The Last Superstition.)
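
To make the example vivid, here is a minimal sketch in Python (the tiny one-rule “program” and the meaning table are my own illustrative stand-ins, not anything Feser provides). The device's behavior is fixed entirely by the syntactic rule; the meanings sit in a separate table that we supply and can change without the device noticing.

# A purely syntactic transition rule: this is all the "calculator" consults.
program = {("1", "+", "1", "="): "2"}

# The meanings live in a separate, observer-supplied table.
meanings = {"1": "one", "+": "plus", "=": "equals", "2": "two"}

def run(keys):
    # The device only matches the shapes of the keys against its rule table.
    return program[tuple(keys)]

output = run(["1", "+", "1", "="])
print(output, "means", meanings[output])   # 2 means two

# Reassign the meaning of "2" to "waffle": the device's behavior is identical;
# only our interpretation has changed, and the "logic" collapses.
meanings["2"] = "waffle"
print(output, "means", meanings[output])   # 2 means waffle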

The above example demonstrates that a calculator fails to think in any sense of the word. If we change the semantic content of the symbols in a logical sequence such that the sequence no longer remains coherent, then any medium capable of rational thought should adjust its sequence accordingly. But a computer will not do this (unless it is reprogrammed to account for the semantic change), because its causal efficacy is not based on semantic content; it is based purely on physical, electronic properties, governed by physical laws. The meaning of a particular state of a computer plays no role whatsoever in producing any subsequent state. Therefore, the reason a computer transitions from one symbol to another has nothing to do with the meanings of those symbols; it transitions from one symbol, or state, to another only because of the physical properties its programmers have arranged it to exploit. Thus, everything that makes thought rational—the act of proceeding from one premise to another on the basis of the semantic content of each premise, in order to arrive at an inference that coheres with the previous premises—is completely absent from the behavior of a computer.

So, if the brain is simply a type of computer, different in degree but not in kind, then there is no ontological foundation for human rationality. Computationalism is, in a sense, self-defeating—the computationalist upholds his theory because he believes it rational to do so, yet if computationalism is true then the brain cannot exhibit rationality; therefore, any conclusions reached through logical inference, such as computationalism, are non-rational, and are therefore unjustified.

Let it be emphasized, through a quick digression, that the above objection against computationalism is not only efficacious against computational theories of mind. Rather, I maintain, it deals a fatal blow to any naturalistic framework that attempts to ground human cognition in any purely physical, mechanical properties. Therefore, even if a naturalist is not a computationalist, he still runs into the problem of explaining rational thought in terms of non-rational causes—a difference in quality, not quantity.

In the next post we will examine John Searle’s Chinese Room Argument to demonstrate that although computers can act like a human brain, the ontology behind the former is wholly different, in kind, from the latter.

Friday, February 7, 2014

Is the brain a computer? Part I


In modern cognitive science and philosophy of mind, it has become quite commonplace to regard the human brain as analogous, or even identical, to a digital computer. This might seem plausible at first glance. Our mental states can seem to be nothing more than functional, mechanical states of cause and effect, input and output. Moreover, our minds seem to exhibit “computational”, algorithmic behavior comparable to that of highly complex digital computers. On this view, known as computationalism, the brain is simply a piece of hardware on which the mind runs as a kind of computer software. The naturalist Richard Carrier is one adherent of such a theory (emphasis mine): “Cognitive science has established that the brain is a computer that constructs and runs virtual models.”

Although computationalism is a widely held view—though not as widely held as Carrier insinuates—I maintain that it is not without major defeaters. The next few posts will deal with what I take to be the insurmountable problems of computationalism, and with why these problems will most likely never be overcome by any potential advances in cognitive science. That is to say, the problems with computationalism are metaphysical problems, and therefore, I argue, no findings in cognitive science will ever resolve them.

_____________________________________________________________

The first objection to computationalism is the absolute dependence of computers on intentional consciousness for their meaning. David Bentley Hart, in his book The Experience of God, explains:

A computer does not even really compute. We compute, using it as a tool. We can set a program in motion to calculate the square root of pi, but the stream of digits that will appear on the screen will have mathematical content only because of our intentions, and because we—not the computer—are running algorithms. The computer, in itself, as an object or a series of physical events, does not contain or produce any symbols at all; its operations are not determined by any semantic content but only by binary sequences that mean nothing in themselves.


What Hart is articulating is that computers are not computers at all apart from our own intentional consciousness. The numeral “2” that appears on a calculator represents the mathematical concept of two only because we have endowed it with that meaning. The symbol on the calculator has no intrinsic meaning in and of itself. Were all humans to vanish from the face of the earth, the symbol “2” would be rendered nothing more than a meaningless pixelated squiggle.

This point holds even with regard to the programs and algorithms run on computers. When I type “1”, “+”, “1”, “=”, and the computer yields the symbol “2”, one might interpret this as the computer “thinking” in some sense. For hasn't the computer itself added one and one to yield the correct answer of two? Well, yes, the computer has yielded the correct answer, but not by any “thinking” on its part. For if we instead decided to endow the symbol “2” with the meaning “waffle”, the computer would still be running the same mechanical program, and yet the semantics behind the program would now be incoherent! The original program of adding one and one to yield two works only because we have endowed the symbols, and therefore the program itself, with the correct meanings; without our own intentional consciousness grounding that meaning, the program, and the computer itself, are literally meaningless. Hart articulates this once more: “[s]oftware no more ‘thinks’ than a minute hand knows the time or the printed word ‘pelican’ knows what a pelican is.” A computer computes only because we assign it that value. Anything could, in principle, be used as a computational device.
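
The observer-relativity that Hart describes, and that Feser draws out below, can be illustrated with a small sketch in Python: one and the same bit pattern counts as a number, or as the character “2”, depending entirely on the decoding convention we bring to it. (The example is my own, offered only as an illustration of the point.)

# One physical bit pattern; what it "means" depends on the observer's decoding.
bits = 0b00110010

as_integer = bits          # read as a binary numeral: 50
as_character = chr(bits)   # read under the ASCII convention: the character "2"

print(as_integer, as_character)   # 50 2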
So what exactly is the significance of the above objection? The philosopher Edward Feser explains:

If computation is observer-relative, then that means that its existence presupposes the existence of observers, and thus the existence of minds; so obviously, it cannot be appealed to in order to explain observers or minds themselves.[…][I]t is computation that must get explained in terms of the human mind, not the human mind in terms of computation.


What Feser is demonstrating is that all computation depends on intentional consciousness. Therefore computationalism, which attempts to explain the mind, and thereby consciousness, cannot provide the ontological foundation for consciousness, since computation of any kind already requires consciousness in order to count as computation in the first place. Computationalism is trying to account for the very thing it presupposes! Has a journey ever seemed so wrong-headed?


In the next post in this series it will be demonstrated that computationalism fails to account for human reason itself.