Chapter Nine

Do You Think Artificial Intelligence Will Equal Or Surpass Human Intelligence?

Adams (U.C. Los Angeles): I think the first thing I would say is that 
I do not have a very informed opinion on artificial intelligence. I 
assume that in some respect artificial intelligence has already 
surpassed human intelligence.  My computer can do some things that I 
can't.  But do I think that someday computers will be able to do 
every task that humans can handle?  No, I don't really think that. 
But that may be as much of an expression of prejudice as anything 
else.  It's not based on a particular knowledge of what computers can 
do. That's a question really outside my area of expertise. 

Arntzenius (University of Southern California): It depends a bit on 
what you mean by intelligence.  If you mean the ability to do IQ 
tests well then I certainly think that we will be able to design 
computers that will do the tests better than we can. Why?  Well we've 
been able to program computers to do arithmetic better than we do; 
they certainly play chess better than I do. In practice we seem to be 
a very complicated and very efficiently designed machine. I doubt 
very much whether, by explicit design, we will be able to construct 
something that outperforms us in almost all areas.  I'm not even 
convinced that
the hardest problem is to design something that has the mental 
capacity that we do. 

Beckman (Harvey Mudd): There are several different issues to be 
considered in artificial intelligence.  As you've phrased your 
question in terms of "intelligence," we are forced to ask how 
intelligence should be assessed.  If, for example, we accept the idea 
that intelligence should be measured by how rapidly something can 
perform complex mathematical tasks or store and retrieve mathematical 
information, then computers have already beaten the human mind by a 
long shot.  However, if we interpret intelligence in some more 
complex way--say, translating between human natural languages or 
making design decisions based on more than technical factors--it's 
not entirely clear whether computers will ever beat the human mind.  
If we ask the question in terms of "consciousness" rather than 
"intelligence," then I don't hold out much hope that computers will 
ever replicate human consciousness; in other words, I think that 
artificial intelligence will always be "artificial" in significant 
ways. 

Churchland, Paul (U.C. San Diego):  I think that it will surpass 
human intelligence.  I think that in some dimensions it surpassed 
human intelligence twenty years ago.  However, it surpassed it in 
only a very narrow capacity--the capacity for sheer repetitive 
computation, like doing long division, or multiplication, or 
addition, things like that.  But intelligence is a very much broader 
capacity than that.   I think it will take fifty or a hundred years 
before we understand the human brain fully.  When we understand how 
the human brain works, I think it will then be a conceptually 
straightforward, though difficult, technological matter to make an 
artificial system which can do the things that we do.   I don't think 
we will do that however.   It's too easy to make human intelligence 
already.  You only need a loving couple to do it.  So we're not going 
to put up millions of dollars to make artificial humans.  What we 
will do instead is to create artificial intelligence systems for some 
scientific purpose, systems that realize some subset of the human 
capacities or perhaps show some cognitive feature that we don't have 
at all.  After all, there are many more kinds of brains possible than 
just the human brain.  I fully expect artificial intelligence, in the 
fairly near future, to exceed human intelligence in many dimensions.  
I don't know how this is all 
going to come out.  I think it's going to be a very exciting and 
interesting adventure and I'm not entirely comfortable with every 
aspect of it, but I think it will happen.

Cohon (Stanford University):  I can't predict what kinds of machines 
will be built in the future. On the one hand, I am inclined to think 
that, since the human brain is made of matter and it can think, it is 
possible to make other things out of matter that do the same things. 
On the other hand, much of what we classify as intelligent is 
socially defined and can only occur within a social context; this is 
especially true of speech. Consequently, it may be that no real 
machine intelligence is possible in the absence of some sort of 
machine community or society of machines. Anyway, while scholars are 
talking about science fiction scenarios, real researchers in machine 
intelligence are very far from understanding what human intelligence 
is, so at present it is impossible to predict whether machine 
intelligence can be made to equal it. For example, it is not 
understood how human beings recognize faces or understand speech of 
unknown persons, and psychologists are only beginning to figure out 
how people make sense of information that is presented to them in 
written form. Until we 
know what human intelligence is it is impossible to say whether 
machines will be able to duplicate it or surpass it.

Copp (U.C. Davis):  In some respects, yes; in others no, I doubt it 
will surpass human creativity. But I am only projecting up to 50 
years. Beyond that, who knows?

Dreyfus (U.C. Berkeley):  I have written two books on this subject. 
There are two kinds of artificial intelligence. The first kind, which 
started in about 1960, was devoted to using computers that were 
called physical symbol systems. The computer would have in it symbols 
that represented features of the world and the programs of the 
computer would be used to make inferences and deduce conclusions from 
this representation of features. I said in 1965, and in my book in 
1972, that this kind of artificial intelligence would not work, 
because our way of being in the world is not a matter of having 
representations of features in our minds. It turns out that I think 
(and lots of other people now are beginning to think) that I was 
right, that it is failing. There was an article that had A.I. on the 
cover and quoted me and agreed with me that symbolic A.I. did not 
succeed. But now 
there is a new kind of A.I. using computers doing what is called 
simulated neural networks. I think that that will never produce full 
human intelligence. It is not philosophically wrong in the way that 
symbolic A.I. was, but I do not think that it will work, because the 
brain is too complicated and we do not know how it is wired up, so we 
can't make a simulated network that is enough like it even if we 
tried. I 
think that the fact that we have bodies and move around in a world 
and have a culture is part of the way that our neural network gets 
tuned the way it is tuned and a computer that just had a neural 
network and passively took in what is called input vectors and paired 
them with output vectors [will not] have our kind of intelligence.

Fischer (U.C. Riverside): In certain ways, such as calculations, 
computers are already equal,  if not better.   They are also 
continually progressing in mechanics.  However, I remain skeptical as 
to whether computers will ever be as insightful or as creative as the 
human mind. 

Friedman (U.C. Davis):  No, never! Because we learn from it, and so 
we will be that much smarter. I've believed in man-machine relations 
for a long time.

Griesemer (U.C. Davis):  I'm not convinced that artificial 
intelligence is intelligence, so I don't think there's yet a question 
about whether it will bypass human intelligence. I think intelligence 
is a property of certain biological entities, so whatever computers 
are capable of, it isn't intelligence (unless computers are capable 
of being certain sorts of biological entities!). It's merely by 
analogy that we call what computing machines do thinking.

Jolley (U.C. San Diego): I am not well versed in this debate, but no 
I don't think that artificial intelligence will equal or bypass human 
intelligence except in very limited spheres (such as the ability to 
perform calculations at fantastic speed). My reasons are those which 
Descartes gives in the Discourse on Method, Part V.

Jubien (U.C. Davis):  If "artificial intelligence" just refers to the 
capacities of computers, then I think it already exceeds the 
capacities of human intelligence in certain ways (e.g., speed of 
computation). I don't think computers will ever have fully "human" 
intelligence because I don't think they will ever have mental 
experiences akin to those that humans have.

Kalish (U.C. Los Angeles): There are things which computers can do 
now which human beings can't do and the speed with which you can do 
computations and things like that is incredible. Also computers can 
store an enormous amount of information in their memories and you can 
get it back. On the other hand, it is quite well known that there are 
problems for which you can prove that no algorithm will ever solve 
them. Human ingenuity is the only way they will ever be solved. So 
these are two respects in which the human mind and the 
artificial machine differ. They both have enormous qualities and it 
is not a matter of trying to say that there are two things in which 
one is a little better than the other. There are certain things that 
one can do that the other can't and we are getting better and better. 
My gosh the things that can be done now and the way you can 
communicate with people is fascinating. So let me put it this way: I 
don't think that any person of your generation who doesn't become 
computer competent is going to be able to compete in this life. There 
are mathematical problems that only the human mind will be able to 
answer, because we can prove that there is no way we can program a 
machine to answer the question.

Kaplan (U.C. Los Angeles): There is the so-called Turing test, in 
which you carry on a conversation and can't tell whether you are 
talking to a machine or a person.  I don't really have a view as to 
whether we will be able to create machines that will pass the Turing 
test.  It's clear that machines can already do tasks which require a 
kind of intelligence much better than we can.  I use a spell checker 
because it is a better speller than I am, and quicker.  I don't know 
of any machine that is as creative as I am.  I am very skeptical as 
to whether we will be able to do it, unless we start to build 
biological machines.

Lambert (U.C. Irvine): Well, there are several things to think about 
here.  First, it's difficult to say what human intelligence is, and 
so it's hard to tell whether artificial intelligence will surpass 
human intelligence or not.  We are not even clear what human 
intelligence is.  But if it means, for example, that machines will be 
able to do certain intellectual things better than human beings do, 
well, they can already do things better than humans.  For example, 
the new computers, set up in parallel, can solve differential 
equations infinitely faster than human beings can.  Now, whether 
you're going to call that a case of surpassing human intelligence, I 
don't know.  They certainly can do things faster.  There are respects 
in which computers just don't even come close to human beings.  So 
I'm inclined to view the whole question of whether computers will 
ultimately surpass human beings' intelligence as not a clear 
question.  As I've suggested, if you look at intelligence in one way, 
they already have; in another, they're not even close.  So it's not a 
well-formed question for me. 

Lloyd (U.C. Berkeley):  I will give you a typical philosopher's 
answer for this. It depends on what you mean by intelligence. Already 
computers are able to do certain tasks which we take to be cognitive 
tasks much better than we can. I don't think this makes them more 
intelligent. I think that there are many kinds of human intelligence. 
There is artistic intelligence, there is mathematical intelligence, 
there is a kind of verbal ability, there is the ability to see the 
whole picture, the ability to see both sides of an issue. There are 
just so many aspects of human intelligence which are vital. I don't 
see artificially constructed machines as being able to perform all of 
the functions which we would naturally attribute to human 
intelligence. I do think that machines will be able to surpass us on 
some of these tasks, but not on intelligence per se, not on 
intelligence overall.

Matson (U.C. Berkeley):  No; at any rate, not using any conceivable 
refinement of the Turing machine (digital computer). Turing machines 
necessarily follow context-free algorithms; that is not the way we 
[think].

McCann (University of Southern California): I guess it would depend 
on what factors you have in mind. In terms of calculating lots of big 
columns of numbers quickly, obviously computers can do that. Although 
of course there is a question whether they are actually computing or 
calculating, as opposed to what is really happening: a bunch of 
electrical states flip-flopping inside the machine, with the results 
being interpreted in certain ways. The thing that stands most in the 
way of getting a straightforward yes or no answer is just that I 
think we do not have much of a hint of what human intelligence is. I 
am very persuaded by Howard Gardner's work on multiple intelligences. 
He claims that the sorts of capabilities we call intelligence in our 
culture, capabilities for doing certain tasks quickly that get 
measured on the standard intelligence tests and things like that, are 
just a very narrow range of human competencies that are artificially 
selected out or artificially highlighted. In a South Pacific island 
culture, for example, the ability to navigate by the stars is a 
crucial part of intelligence, but it is not exactly noted by us. In 
fact, I think there is a big 
indeterminacy in the notion of intelligence, whether human or 
artificial. And then once you go on to say what are the comparisons 
and contrast of human intelligence and artificial intelligence the 
questions are sort of fatally infected with the multiple ambiguities 
involved in the notion of intelligence in general. There is no doubt 
that machines can do some things that we count as intelligent tasks 
better than we can, but there is no doubt that there are a lot of 
things that we can do with the results of some of these processes 
that the machines cannot and maybe would not be able to do.

McGray (University of San Diego):  The answer is "yes" and "no." 
Computers are much more adept than we are at certain kinds of 
consistency tests and certain kinds of expert systems. But some other 
sorts of questions, even some simple problems in first-order 
predicate logic, cannot be decided by any machine.

Pippin (U.C. San Diego): The real question is a philosophical one: 
What is human intelligence?

Rosenberg (U.C. Riverside):  Yes, because  human intelligence is the 
result of the operations of a machine. There is no reason why better 
machines can't be made. 

Ross (Claremont Scripps College): No, because a computer needs a 
programmer to teach it what it needs to know.  A computer doesn't 
have an imagination, and without an imagination the computer will be 
unable to formulate questions and answers itself, or ways of solving 
problems.  A programmer has to do these things, so a computer will 
always be dependent on programmers.

Roth (Claremont McKenna College): That's a really interesting 
question.  The first response I would give, I think, is that if we 
are thinking about human intelligence at its best, my guess is that 
artificial intelligence will not be capable of surpassing or even 
equalling human intelligence, especially if we look at the subtlety 
and the kind of nuances, the imaginative potential, that there is in 
human intelligence. I'm looking more on the side of creativity, on 
the side of our intelligence that is laced with feeling, with 
aesthetic qualities, things of this kind, and it seems to me, at 
least as I'm sitting here now, that it would be difficult to imagine 
that we could artificially create something that would be equal to 
that kind of subtlety in terms of intelligence.  The other part is a 
little fictitious, but not entirely so.  If human beings fail to 
develop the potential of their own intelligence, it's conceivable to 
me that we might create artificial intelligence that would be 
superior to ours.  It might be more rational in some ways. So I think 
this is another thing: human intelligence is not a fixed element; 
it's something that could become better or worse, depending on what 
we choose to do with it, how we develop it. Sometimes we are not 
nearly as intelligent as we think we are, or as we could be, but I 
guess I'm impressed when I look at what the human mind has been 
capable of doing.  It seems to have a range and a scope on the one 
hand, and also a subtle dimension of creativity that I find hard to 
define. 

Schwyzer (U.C. Santa Barbara): It's such a frightening concept. 
Intelligence by itself is not very interesting. I think that some 
human should go along with that intelligence. It makes no sense to 
just have intelligence and nothing more. It's like having weight 
without size. We can have machines, but intelligence is a human 
attribute. I suppose I am a humanist. I fail to be fascinated with 
non-human things. I do have a computer in my office;  however, it 
hasn't been used yet. It's good decoration.

Shalinsky (U.C. San Diego): It's not precisely clear what this 
question is asking. To claim that some intelligence "equals" or 
"bypasses" another is quite vague. It is certainly clear that many 
forms of artificial intelligence surpass human intelligence: the 
calculator can perform functions that humans cannot, a plain old 
digital computer can perform functions that humans cannot, and a 
super computer can perform functions that humans could not even dream 
of performing. The question ought to be phrased differently: will 
artificial intelligence equal or bypass human intelligence in the 
realms in which the latter is now superior? This is an important 
question, because it rests on an important distinction between the 
kinds of intelligent behavior (e.g., number-crunching) better 
performed by a very fast computer processing in serial fashion, and 
the kinds of intelligent behavior which are best performed by 
parallel processors. Humans process in parallel, and this accounts 
for their ability to perform and understand in complicated contexts. 
Currently, for example, the prospect of 
writing a computer program which will model even the simplest kinds 
of human behavior is quite dim. Consider, for example, the human 
capacity to interpret utterances: while we understand the meaning of 
"Mr. Smith watched the fireworks go up in his pajamas last July 4th," 
the computer has considerable difficulty. While we manage to 
recognize even as many as hundreds of different faces, the computer 
has considerable [trouble with face recognition]. In my view, there 
is absolutely no reason to think that parallel computers will not 
equal human intelligence (even in the domains in which the latter 
currently surpasses the most advanced artificial intelligence), but 
this is just a bet, after all!

Sircello (U.C. Irvine): Machines will be able to do more, but will 
not be more intelligent than humans.

Suppes (Stanford University):  Already in certain respects, of 
course, computers can do things better than human beings. For 
example, computation. Other things they can't do as well. So I think 
what will happen will be an increasingly complicated comparison. 
Computers will continue to acquire capabilities they don't now have, 
and so the comparison, the kinds of tasks they can do and how well 
they do them relative to how well humans do, will continue to change.

Wollheim (U.C. Davis):  The problem that confronts us first is how to 
introduce consciousness and meaning. I don't have any conviction that 
this can be done.

Woodruff (U.C. Irvine): I think that hardware, as well as wetware, as 
it's called, can do these things, in principle.  So artificial 
intelligence of this sort is possible.  I think to some extent it 
already exists.  But what people have in mind, I suppose, when they 
ask this question is that they imagine it being like an alien coming 
and saying these things that we just can't understand and saying, "Oh,
 he's so smart, he's smarter than we could ever. . ." and so on.   Do 
I think that sort of thing will happen?  I can imagine it happening.  
I think that for a long time, humans would understand how it 
happened.  That is, one would have to create some kind of 
quasi-evolutionary mechanisms that would allow machines to evolve, so 
they could go beyond humans actually programming them.  Although even 
now, computers have certain abilities such that, even though we 
program them to do these things, what they do is so complex (because 
it's so large scale) that there is a certain sense in which we can't 
understand what 
they're doing.  I think that we can understand what artificial 
intelligence is, and that it's not, in principle, different in kind 
from what humans do. If there are differences, they have to do with 
the fact that we are different kinds of machines than electronic 
computers.  We are massively parallel, and we have all these 
interconnections in the brain which people are now trying to 
understand, stuff called neural net computing.  But it's not any kind 
of ontological difference, not different kinds of stuff or substance 
in mind and matter.  So do I think that artificial 
intelligence will equal or bypass human intelligence?  I'm not sure, 
but I certainly wouldn't be surprised.   The reason I wouldn't be 
surprised is that we create machines that are much more powerful 
physically.  I don't see any reason why we cannot create thinking 
machines that are more powerful than we are.  In fact, we've already 
done it in certain respects.  I especially don't think that human 
intelligence is something that is essentially different from machine 
intelligence. Our brains thinking or electronics thinking are 
essentially the same thing.