The Three Stances: Dennett, Consciousness, and Meaning

by Paul O'Brien

     Many people hold the belief that our humanity is somehow
threatened when thinkers offer biological explanations about the
human race, such as when the theory of evolution is used to account
for the origin of species or the science of neurology tries to explain
the workings of the mind. There appears to be something
disheartening about a theory that tells us that happiness may
merely be the product of chemicals in the brain, or that a sense of
love may actually be a veiled desire to propagate the species.
These kinds of theories can seem inhuman and insensitive to many
people, perhaps because they paint a picture of the human race that
seems too detached from our intuitive feelings about ourselves. We
tend to regard our lives as much more special than these theories
appear to intimate, and we would like to think that our
personal humanity is more within our own control. As a result, many
people find biological theories about humanity unconvincing on a
personal level despite whatever scientific success they may
achieve.
     Perhaps this is too hasty a dismissal, however. First of all,
I would like to argue that biological theories (or any variety of
physical theories) do not automatically exclude the personal, more
nonrational qualities that we cherish. For instance, it may be true
that we are both the product of an expression of our parents' love
(when they made love) and the product of an intricate and blind
biological process of combining and reproducing genes (when they
had sex and combined their genetic codes). Secondly, the biological
theories about humanity are quite humanistic. They are made for our
sake; knowing about the physical aspects underlying certain
phenomena allows us to make predictions and arrive at more useful
explanations. These theories improve our lives rather than degrade
them, but first one must understand certain features of the
interaction between such theories and our everyday sense of the
world. That is what I hope to show now.
     In an essay called "Mechanism and Responsibility" (from Essays
on Freedom of Action, 1973), Daniel C. Dennett offers an insightful
distinction between three different types of stances or
perspectives when regarding something. He applies these different
stances to the consideration of how to beat a computer in a game of
chess. Since that application makes the stances extremely clear, I
would like to use that example here. First of all, one way to beat
the computer is to look at how its program works, which is looking
at the computer's actions from a "design stance." Someone designed
a program that includes a series of conditional rules that command
the computer to move its pieces in a certain way in a given
situation (for example, if one's king is vulnerable, then protect
the king). As Dennett points out, this stance is helpful
when we want to make predictions about the computer's moves,
because if you have an understanding of how something is designed,
you can predict its actions based upon that design. To understand
the computer's gameplay from the design stance, we would study the
chess program's "blueprints," guiding principles, or governing laws.
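     To make this concrete, here is a minimal sketch of what a chess
program might look like from the design stance. It is written in
Python purely for illustration; the board facts and rules are my own
invention rather than those of any real chess program, and an actual
design would be vastly more elaborate. The point is only that, at
this level of description, the "program" is an ordered series of
conditional rules, and predicting the machine's move means knowing
which rule fires.

    # A toy illustration of the design stance (hypothetical, not a real
    # chess engine): the "program" is an ordered list of (condition, move)
    # rules, and the first rule whose condition holds decides the move.

    # A deliberately simplified board state: just the facts the rules test.
    state = {
        "king_in_check": True,
        "queen_attackable": False,
    }

    # Ordered conditional rules, e.g. "if the king is vulnerable, protect it."
    rules = [
        (lambda s: s["king_in_check"],    "move the king to safety"),
        (lambda s: s["queen_attackable"], "capture the opponent's queen"),
        (lambda s: True,                  "make any legal developing move"),
    ]

    def choose_move(s):
        """Return the move dictated by the first matching rule."""
        for condition, move in rules:
            if condition(s):
                return move

    print(choose_move(state))  # -> "move the king to safety"
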
     Another way to try to beat the computer is to look at the
computer's hardware itself, which is to open up the machine and see
how it physically operates. This is the "physical stance," and it
involves physical laws and factors, such as how electricity powers
the device that plays the game and how certain physical
capabilities of the machine affect its gameplay (like memory
storage capacity). This stance attempts to see how something
operates on its most basic, physical level.
     But a third way to address a computer opponent in a chess
match is to assume that the computer is just like any other human
player; it has a desire to win and possesses the rationality to
pursue that end. This is looking at the computer from an
"Intentional stance," as Dennett puts it.
     We know that computers can play chess, and we know that they
can win too. This is not a hypothetical example, then, and it is
one that clearly shows three potential ways of regarding this
chess-playing computer. But which is more real? Which is more
accurate? Ontologically, I'm not prepared to say that any one of
these stances is any more accurate than the others. Perhaps the
greatest temptation is to say that though a computer needs a
designed program and physical hardware to play chess, it does not
possess rationality and intentions like humans do; therefore the
Intentional stance is not as real as the other two stances in this
case. But I want to be careful about the assumption that this
response makes. It assumes that rationality really only belongs to
humans, yet I would hesitate to say such a thing, if only because I
have a very incomplete grasp of just what "rationality" is. Do you
know what rationality is? Are you sure that computers can't have
it? Or don't already have it? How do you know, for instance, that
the author of this paper isn't a computer? (The famous Turing test
comes to mind: How could you tell a human from a very sophisticated
computer if all you could do was ask them questions without ever
seeing them?)
     Personally, I do not find the question about which stance is
more real than the others as interesting as the following question:
Which stance is better than the others? After all, my goal is to
beat this thing, not contemplate its ontology. I want to use
whichever stance helps me the most to defeat the computer;
therefore, I'm seeking a pragmatic answer which doesn't look at
what a thing is so much as what a thing does. The best stance will
be the most useful stance in this case.
     What makes one stance more useful than another will depend on
how the computer plays its game. Suppose the chess game is being
depicted on a computer screen; if the game keeps resetting, that
is, if all the pieces suddenly appear in their original positions,
I would think that a malfunction is probably taking place and I
would assume a physical stance. Perhaps a part of the disk or drive
storing the chess program has malfunctioned, or power fluctuations
are interfering with the computer's continuity. Could the error be
explained as the product of design? Possibly. Maybe someone
designed the program to restart whenever it senses a losing game
unfolding. Could the error be explained as the product of
rationality? Possibly, because maybe this is not an error at all,
but the tactic of a very timid chess-player who forfeits game after
game the minute they dislike the way it's going.
     Now suppose that turn after turn, the computer never moves its
queen diagonally. I would probably assume the design stance in this
situation, because the regularity of such a move suggests to me
that this is how the computer is programmed to play. It would
certainly assist me in beating the computer if I understood that it
was designed to act like this; for example, I could easily predict
that the queen would never capture a piece of mine that lies
diagonal to it, which clearly gives me more freedom to move around
the board. Could the moves be the product of something physical and
not designed? It's certainly possible; maybe the computer chips
that relay coded commands about the queen moving diagonally are
faulty, and thus never get the chance to convey the information.
Could the moves be the product of rationality? That too is
possible. Maybe the player believes that this is actually a good
strategy (and how good a strategy it would be if it captured my
king after I had assumed that its queen would never move
diagonally).
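     As a toy illustration of how such a design-stance prediction
pays off, consider the following sketch (again in Python, and again
entirely hypothetical in its details): if I know the program never
moves its queen diagonally, then any square the queen could reach
only along a diagonal is, by my prediction, safe from it.

    # Hypothetical sketch: exploiting design-stance knowledge that the
    # program's queen never moves diagonally. Squares are (file, rank)
    # pairs; blocking pieces are ignored to keep the example short.

    def queen_threatens(queen_sq, target_sq, moves_diagonally=True):
        """Would the queen threaten target_sq, given what we know of its design?"""
        qf, qr = queen_sq
        tf, tr = target_sq
        same_line = (qf == tf) or (qr == tr)          # rook-like reach
        same_diagonal = abs(qf - tf) == abs(qr - tr)  # bishop-like reach
        if moves_diagonally:
            return same_line or same_diagonal
        return same_line  # design-stance prediction: diagonal squares are safe

    # My bishop sits only on a diagonal from the computer's queen:
    print(queen_threatens((3, 2), (5, 4)))                          # True for a normal queen
    print(queen_threatens((3, 2), (5, 4), moves_diagonally=False))  # False: predicted safe
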
     Now suppose one last thing: Suppose that after a few games of
chess, the computer moves its pieces in ways that reveal no
malfunctions nor suggest any simple designed moves. Wouldn't it be
just as well to treat the computer as if it were really some human
playing the game? After all, if the computer plays a complex enough
game of chess, the effort to chart its design or hardware and make
predictions based upon that would become incredibly difficult; just
imagine the complexity of calculations that it would require. It
would be far easier and faster to just pretend it's human, wouldn't
it? And imagine trying to assume the physical or design stance with
a normal human opponent; would you chart all of their firing
neurons and brain states, or track the intricacies of the person's
training and psychology regarding chess moves in order to win the
game, even if you could?
     Well, yes, I actually believe you would if given the proper
situation. If a human opponent kept knocking over your pieces, even
when they weren't captured, there might be something wrong with
their physical dexterity or mental capacity, or maybe someone
taught them how to play chess the wrong way. Unless this person was
trying to make you mad, the Intentional stance just doesn't seem to
stand out as the best stance in this situation, even though we
believe that they really have rationality. Or imagine that a human
opponent thoroughly defeated you every single time you played; if
some neurologist explained to you that this person's brain works
far faster and more efficiently than yours given the way its
chemicals and neurons work, then that would be a good explanation
for why you were beaten. If someone else explained to you that this
person was the star pupil of the great Bobby Fischer, that would
also be a good explanation for their victories; they were simply
taught by a master and learned his winning strategies accordingly.
And if someone simply told you that this person has a great
intellect, sense of timing, intuition, and strategy, then you would
probably believe that to be a good explanation too. 
     But it isn't much of a stretch to ask whether rationality
itself is the product of design or of physical factors. After all,
I can make someone feel very happy by giving them a certain amount
of a certain chemical, can't I? And can't I make someone feel happy
if, say, they psychologically associate the ocean with happiness,
and I show them the ocean? But one could also consider whether the
design of something is the product of physics or intentions. A
computer program certainly needs physics to operate; what good is
a computer program without the ability to be physically stored or 
physically executed? And from a different stance, how could a
computer program ever have been made without a human programmer who
had the rationality and intention to create it in the first place?
Lastly, consider just how often we grant physical objects designs
and intentions, and not in any irrational way. When moving bodies
collide and interact a certain way, laws governing their motions
seem very much like the stuff we see from the design stance. And
many times, forces of nature act so strangely to us that we
practically believe they're acting from intentions. Isn't it
revealing how humans used to worship aspects of nature as if they
were connected to something sentient, like Thor when lightning
struck? And isn't it peculiar to think that even today we still
give proper names to hurricanes? In a way, then, any one stance
could potentially do all the explanatory work you wanted, and yet
no one stance seems automatically to ontologically precede or
dominate the others.
     The most interesting aspect of these three stances is that
their real value lies in their explanatory power and not in what
something or someone "really" is. What is really real about
something as opposed to another thing, anyway? Is my mental life
the product of rationality, intentionality, and consciousness? Or
is it the product of what I was taught to believe and how my
experience teaches me to think? Or is it the product of how my
brain and body operate? In reality, it could be any of those
things, or
all of those things. It may be none of those things for all I know!
I simply don't know for sure, but I believe that all three stances
play a part. I don't believe, for example, that someone could be
conscious without any physical hardware, be it a brain or computer
processor; take away my brain, and I believe you have taken away
the mind as I know it. I also don't believe that someone could be
conscious without some sort of program; take away the way I process
information and think, and my brain is useless and my consciousness
never gets enough organization to form. But take away my
rationality and I believe you get a creature that I don't identify
myself with; something with just a brain and a program that is not
the "me" I regard as myself. (Though controversial, rationality
does not need to really exist as something beyond physical form and
program for its idea or belief to serve practical, explanatory
purposes.) All three stances seem to make their contributions.
     Biological theories succeed in offering us as good a physical
stance as possible. This is very useful; if I have a headache, I
want a chemical that can make it go away. But notice three things:
First of all, the physical stance does not automatically explain
everything. There is probably a lot more that we don't know about
the brain, the origin of species, and the physical world in general
than we do. And even if we could explain things in physical terms,
it may require such a Herculean undertaking that it wouldn't be
worth the trouble. The design stance and Intentional stance still
explain many things better and in more useful ways than the
physical one. Therefore my second point is that the physical stance
does not necessarily exclude the other two stances at all. We can
beat the chess computer three different ways, or all three ways, or
with any combination thereof. This suggests to me that evolution
need not destroy my belief that humanity is good, and that
neurology need not threaten my feelings of love and passion. As
long as I understand that there are different ways of looking at
the same thing, then there is no need to feel worried. Since
I know so little about what anything really is anyway, I feel
satisfied in the belief that what something does is a good enough
way to look at the world philosophically.
     Finally, notice that understanding things from the physical
stance is beneficial. It explains many things far better than the
other two stances do, and that makes life better, not worse.
Scientists aren't out to burst people's bubbles; they want to
explore one aspect of reality that will ultimately help humankind.
They do what they do because it pushes humanity up, not down.
Perhaps people's negative feelings toward biological theories stem
from their belief that one stance really is better than the other
two; perhaps people become disheartened by the physical stance when
they look upon it from the wrong perspective; perhaps the physical
stance seems insensitive only because many of its advocates have
tried to make it seem like the only real or sensible stance
(extreme reductionists come to mind).
     But these opinions simply seem myopic to me. Which stance
reflects the real world? Is the real world anything more than the
various perspectives and their respective contents (and how would
we know it if it was)? Is any one stance ultimately better than any
other?
     It depends on how you look at it.