Let us conclude our series on the theory of computationalism. The last few posts (parts I, II, and III) have, at least to my mind, raised some pretty devastating objections to the view that our minds are simply a type of software program run on the hardware of the brain, different in degree but not in kind from a digital computer. These objections have shown that computers are unable, even in principle, to produce the defining characteristics of human mental experience. Yet there is still one more aspect of human cognition that computationalism falls entirely short of accounting for: intentionality.
In the philosophy of mind, intentionality refers to the capacity of something to be about, refer to, or represent something beyond itself. For instance, my writing this post exhibits intentionality in many ways: I am thinking about the post and about the words I type; I am referring to concepts beyond myself (e.g., intentionality); I have a desire to write this post; and so on. Since all cognition consists in a subject entertaining the presence of an object, and therefore in the subject having thoughts about that object, it should be apparent that intentionality is present in every act of cognition. In fact, intentionality is so intimately bound up with the mental that Franz Brentano, the philosopher who pioneered the modern study of intentionality, called it the “mark of the mental”. Now, if intentionality is such a crucial aspect of what constitutes the mental, then surely any worthwhile theory in the philosophy of mind should be able, at the very least, to account for this phenomenon. So the question becomes whether or not computationalism can account for “the mark of the mental” (i.e., intentionality).
It might seem that computationalism can indeed account for intentionality, for computers seem to exhibit it in all sorts of ways. A computer might generate a picture of a lake, and surely it is then representing something beyond itself, namely, a lake. Or a computer could produce a game of chess, and surely its output is then about something, namely, chess. We could think of hundreds more examples in which a computer seems to display the marks of intentionality. But do these instances really exhibit intentionality on the part of the computer? No, they do not; at least, not the kind of intentionality that the mental exhibits.
For instance, picture a piece of paper with the word “bunny” written on it. We obviously know what the paper is referring to, namely, a bunny. Yet does the paper really have intentionality as we have characterized it? The paper does seem to refer to something beyond itself, which is exactly what constitutes the intentional. However, I maintain that closer examination shows that no such intentionality is really going on. To see this, let’s imagine that every human on the face of the earth suddenly went extinct. Would the paper still refer to a bunny? No; the word “bunny” on the paper would now be just a meaningless set of ink marks. “Bunny” is a term that we humans invented to represent an animal. It is we who endowed the term with its meaning and, therefore, with its intentionality. Without human minds, the term “bunny” is utterly vacuous with regard to meaning.
Still, it is true that while the human mind grounds the semantics of the term “bunny”, the term does in fact refer beyond itself. But this intentionality obtains only insofar as it derives its meaning from a mind; again, the term is meaningless apart from a mind that imparts meaning to it. Therefore, the intentionality is not intrinsic to the paper or to the term. What we have here is what John Searle calls “derived intentionality”. And if we look around the world at inanimate objects that seem to exhibit intentionality, we will see that every such form of intentionality is derived. The Mona Lisa represents the lady who was painted only by derivation from the painter. The numeral “2” represents the mathematical concept of two only because we endowed it with that semantics. Thus we see that anything other than a mind exhibits only derived intentionality; only the mind has intrinsic intentionality.
The ramifications for computationalism should now be obvious. Computers exhibit only derived intentionality; whatever intentionality is present in a computer is imparted to it by its programmers and users. The only reason a computer produces a game of chess is that programmers wrote “produce a game of chess” into its programs; moreover, a game of chess is only a game of chess relative to chess players. But this poses an insurmountable problem for computationalism. If computers can exhibit only derived intentionality, then they surely cannot produce any kind of mind, for minds, by their very nature, exhibit intrinsic intentionality. So a mind would already have to exist for a computer even to seem to produce a mind. The computationalist, once again, has to presuppose the very thing he is trying to account for! Oh, what a tangled web we weave.
Once again, we see that computationalism is unable to account for the very characteristics that make a mind what it is, and it is therefore an unsuccessful theory in the philosophy of mind.
Lastly, as I argued in the post on rationality, the dead end that computationalism runs into when trying to ground intentionality is no different from the dead end facing any naturalist account of the mind. If the physical can exhibit only derived intentionality, then it cannot account for intrinsic intentionality, and therefore it cannot account for the existence of the mind.