The Case For and Against AGI

The following is an excerpt from GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity. You can purchase the book here.

The Fourth Age explores the implications of automation and AI on humanity, and has been described by Ethernet inventor and 3Com founder Bob Metcalfe as framing “the deepest questions of our time in clear language that invites the reader to make their own choices. Using 100,000 years of human history as his guide, he explores the issues around artificial general intelligence, robots, consciousness, automation, the end of work, abundance, and immortality.”

One of those deep questions of our time:

Is an artificial general intelligence, or AGI, even possible? Most people working in the field of AI are convinced that an AGI is possible, though they disagree about when it will happen. In this excerpt from The Fourth Age, Byron Reese considers it an open question and explores whether it is possible.


The Case for AGI

Those who believe we can build an AGI operate from a single core assumption. While granting that no one understands how the brain works, they firmly believe that it is a machine, and therefore our mind must be a machine as well. Thus, ever more powerful computers eventually will duplicate the capabilities of the brain and yield intelligence. As Stephen Hawking explains:

I believe there is no deep difference between what can be achieved by a biological brain and what can be achieved by a computer. It therefore follows that computers can, in theory, emulate human intelligence—and exceed it.

As this quote indicates, Hawking would answer our foundational question about the composition of the universe as a monist, and therefore as someone who believes that AGI is certainly possible. If nothing happens in the universe outside the laws of physics, then whatever makes us intelligent must obey the laws of physics. And if that is the case, we can eventually build something that does the same thing. He would presumably answer the foundational question of “What are we?” with “machines,” thus again believing that AGI is clearly possible. Can a machine be intelligent? Of course! You are just such a machine.

Consider this thought experiment: What if we built a mechanical neuron that worked exactly like the organic kind? And what if we then duplicated all the other parts of the brain mechanically as well? This isn’t a stretch, given that we can make other artificial organs. Then, if you had a scanner of incredible power, it could make a synthetic copy of your brain right down to the atomic level. How in the world could you argue that this copy wouldn’t have your intelligence?

The only way, the argument goes, you get away from AGI being possible is by invoking some mystical, magical feature of the brain that we have no proof exists. In fact, we have a mountain of evidence that it doesn’t. Every day we learn more and more about the brain, and not once have the scientists returned and said, “Guess what! We discovered a magical part of the brain that defies all laws of physics, and which therefore requires us to throw out all the science we have based on that physics for the last four hundred years.” No, one by one, the inner workings of the brain are revealed. And yes, the brain is a fantastic organ, but there is nothing magical about it. It is just another device.

Since the beginning of the computer age, people have come up with lists of things that computers will supposedly never be able to do. One by one, computers have done them. And even if there were some magical part of the brain (which there isn’t), there would be no reason to assume that it is the mechanism by which we are intelligent. Even if you proved that this magical part is the secret sauce in our intelligence (which it isn’t), there would be no reason to assume we can’t find another way to achieve intelligence.

Thus, this argument concludes, of course we can build an AGI. Only mystics and spiritualists would say otherwise.

The Case against AGI

Let’s now explore the other side.

A brain, as was noted earlier, contains a hundred billion neurons with a hundred trillion connections among them. But just as music is the space between the notes, you exist not in those neurons, but in the space between them. Somehow, your intelligence emerges from these connections.

We don’t know how the mind comes into being, but we do know that computers don’t operate anything at all like a mind, or even a brain for that matter. They simply do what they have been programmed to do. The words they output mean nothing to them. They have no idea if they are talking about coffee beans or cholera. They know nothing, they think nothing, they are as dead as fried chicken.

A computer can do only one simple thing: manipulate abstract symbols in memory. So it is incumbent on the “for” camp to explain how such a device, no matter how fast it can operate, could, in fact, “think.”

We casually use language about computers as if they are creatures like us. We say things like, “When the computer sees someone repeatedly type in the wrong password, it understands what this means and interprets it as an attempted security breach.”

But the computer does not actually “see” anything. Even with a camera mounted on top, it does not see. It may detect something, just like a lawn system uses a sensor to detect when the lawn is dry. Further, it does not understand anything. It may compute something, but it has no understanding.

Colloquially, we use language that treats computers as if they were alive, but we should keep in mind that it is not really true. The distinction matters now, because with AGI we are talking about machines going from computing something to understanding something.

Joseph Weizenbaum, an early thinker about AI, built a simple computer program in 1966 called ELIZA, a natural-language program that roughly mirrored what a psychologist might say. You might type a statement like “I am sad,” and ELIZA would ask, “What do you think made you sad?” Then you might say, “I am sad because no one seems to like me,” and ELIZA might respond, “Why do you think that no one seems to like you?” And so on. This approach will be familiar to anyone who has spent much time with a four-year-old who continually and recursively asks why, why, why to every statement.
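To see how little machinery this takes, here is a minimal sketch in Python of the kind of pattern-matching-and-reflection trick ELIZA relied on. The rules and phrasings below are invented for illustration; Weizenbaum’s actual program used a far more elaborate keyword script, and nothing here is his code.

```python
import re

# Illustrative ELIZA-style rules: a pattern to match against the user's
# statement and a template that reflects the user's own words back as a question.
RULES = [
    (re.compile(r"(.*) because (.*)", re.IGNORECASE),
     "Why do you think that {1}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE),
     "What do you think made you {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE),
     "How long have you felt {0}?"),
]

# Swap first- and second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(statement: str) -> str:
    text = statement.strip().rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.fullmatch(text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please tell me more."  # fallback when no rule matches

print(respond("I am sad"))
# -> What do you think made you sad?
print(respond("I am sad because no one seems to like me"))
# -> Why do you think that no one seems to like you?
```

Even this toy version anticipates the point that follows: the program matches surface patterns and echoes the user’s own words back, and nothing in it stands for sadness or loneliness.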

When Weizenbaum saw that people were actually pouring out their hearts to ELIZA, even though they knew it was a computer program, he turned against it. He said that in effect, when the computer says “I understand,” it tells a lie. There is no “I” and there is no understanding.

His conclusion is not simply linguistic hairsplitting. The entire question of AGI hinges on this point of understanding something. To get at the heart of this argument, consider the thought experiment offered up in 1980 by the American philosopher John Searle. It is called the Chinese room argument. Here it is in broad form:

There is a giant room, sealed off, with one person in it. Let’s call him the Librarian. The Librarian doesn’t know any Chinese. However, the room is filled with thousands of books that allow him to look up any question in Chinese and produce an answer in Chinese.

Someone outside the room, a Chinese speaker, writes a question in Chinese and slides it under the door. The Librarian picks up the piece of paper and retrieves a volume we will call book 1. He finds the first symbol in book 1, and written next to that symbol is the instruction “Look up the next symbol in book 1138.” He looks up the next symbol in book 1138. Next to that symbol he is given the instruction to retrieve book 24,601, and look up the next symbol. This goes on and on. When he finally makes it to a final symbol on the piece of paper, the final book directs him to copy a series of symbols down. He copies the cryptic symbols and passes them under the door. The Chinese speaker outside picks up the paper and reads the answer to his question. He finds the answer to be clever, witty, profound, and insightful. In fact, it is positively brilliant.

Again, the Librarian does not speak any Chinese. He has no idea what the question was or what the answer said. He simply went from book to book as the books directed and copied what they directed him to copy.

Now, here is the question: Does the Librarian understand Chinese?

Searle uses this analogy to show that no matter how complex a computer program is, it is doing nothing more than going from book to book. There is no understanding of any kind. And it is quite hard to imagine how there can be true intelligence without any understanding whatsoever. He states plainly, “In the literal sense, the programmed computer understands what the car and the adding machine understand, namely, exactly nothing.”
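To make that claim concrete, here is a deliberately tiny, hypothetical rendering of the room as a program: the books become lookup tables, and the Librarian becomes a loop that follows them. The book numbers echo the story above, but the table contents and the sample exchange are invented placeholders, not anything Searle specified.

```python
# A drastically simplified "room": the Librarian is a loop that follows
# instructions from one lookup table ("book") to the next. Neither the
# tables nor the loop represent what any of the symbols mean.
BOOKS = {
    1:     {"你": ("goto", 1138)},
    1138:  {"好": ("goto", 24601)},
    24601: {"吗": ("emit", "我很好，谢谢")},
}

def librarian(question: str) -> str:
    book = 1          # start, as in the story, with book 1
    answer = []
    for symbol in question:
        action, value = BOOKS[book].get(symbol, ("emit", ""))
        if action == "goto":
            book = value            # fetch the next book, as instructed
        else:
            answer.append(value)    # copy down whatever the book says
    return "".join(answer)

print(librarian("你好吗"))  # prints 我很好，谢谢 ("I am fine, thanks")
```

However many books and rules you add, the loop only matches symbols and copies symbols; nowhere does it represent what the question means.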

Some try to get around the argument by saying that the entire system understands Chinese. While this seems plausible at first, it doesn’t really get us very far. Say the Librarian memorized the contents of every book, and could produce responses from those books so quickly that as soon as you wrote a question down, he could write the answer. But still, the Librarian has no idea what the characters he is writing mean. He doesn’t know if he is writing about dishwater or doorbells. So again, does the Librarian understand Chinese?

So that is the basic argument against the possibility of AGI. First, computers simply manipulate ones and zeros in memory. No matter how fast you do that, it doesn’t somehow conjure up intelligence. Second, the computer just follows a program that was written for it, just like the Chinese room. So no matter how impressive it looks, it doesn’t really understand anything. It is just a party trick.

It should be noted that many people in the AI field would most likely scratch their heads at the reasoning of the case against AGI and find it all quite frustrating. They would say that of course the brain is a machine—what else could it be? Sure, computers can only manipulate abstract symbols, but the brain is just a bunch of neurons that send electrical and chemical signals to each other. Who would have guessed that would have given us intelligence? It is true that brains and computers are made of different stuff, but there is no reason to assume they can’t do exactly the same things. The only reason, they would say, that we think brains are not machines is that we are uncomfortable thinking we are only machines.

They would also be quick to offer rebuttals of the Chinese room argument. There are several, but the one most pertinent to our purposes is what I call the “quacks like a duck” argument. If it walks like a duck, swims like a duck, and quacks like a duck, I am going to assume it is a duck. It doesn’t really matter if in your opinion there is no understanding, for if you can ask it questions in Chinese and it responds with good answers in Chinese, then it understands Chinese. If the room can act like it understands, then it understands. End of story. This was in fact Turing’s central thesis in his 1950 paper on the question of whether computers can think. He states, “May not machines carry out something which ought to be described as thinking but which is very different from what a human does?” Turing would have seen no problem at all in saying the Chinese room can think. Of course it can. It is obvious. The idea that it can answer questions in Chinese but doesn’t understand Chinese is self-contradictory.


To read more of GigaOm publisher Byron Reese’s new book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, you can purchase it here.



from Gigaom https://gigaom.com/2018/05/29/case-for-and-against-agi/
