
Voices in AI – Episode 91: A Conversation with Mazin Gilbert


About this Episode

Episode 91 of Voices in AI features Byron speaking with Mazin Gilbert from AT&T Labs about the nature of intelligence and why we have so much trouble defining it.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by Gigaom. I’m Byron Reese. Today my guest is Mazin Gilbert. He’s a VP at AT&T Labs, leading their advanced technologies. He holds a PhD in electrical engineering from Liverpool John Moores University, and, if that weren’t enough, an MBA from Wharton as well. Welcome to the show, Mazin.

Mazin Gilbert: Thank you for the invitation, Byron.

I always just like to kind of start out talking about what intelligence is, and maybe a little different [question]: why do we have such a hard time defining what intelligence is? Yeah, that’s where I’ll start.

We always think of intelligence, certainly machine intelligence… we always compare machine intelligence to human intelligence, and we sometimes have a challenge in equating machines to humans. The intelligence of machines is radically different from that of humans. Intelligence is basically the ability to perform functions that may, one, be beyond what any basic system can do; or two, require some form of context, some form of interpretation, some form of prediction that is not straightforward to do.

We really use machine intelligence for anything from its most basic form, which could be as simple as moving data from one place to another, all the way to its most advanced form: being able to process petabytes of data to tell us how to best optimize traffic in our network. Both of those forms of intelligence, the most basic and the most advanced, are absolutely essential to running a communication network.

But I mean, why do you think AI is so hard? Because we have a lot of people who have been working on it for a whole lot of time, we’ve got a bunch of money in it, and yet it seems that we still don’t have machines able to do even the simplest, most rudimentary, common-sense things. I haven’t ever found an AI bot that can answer the question: ‘What’s bigger, a nickel or the sun?’ Why is that so hard?

I think we segregate AI into two classes. One class of AI is sort of rule-based systems. These are expert systems that we’ve been using as a society for decades. These are rudimentary bots. We actually have over 1,500 of those deployed in AT&T. They do the rudimentary tasks; think of ‘if-then’ type statements. They are very basic, but they do an amazing job of automating functions that humans would otherwise have to do at scale, and in some cases there aren’t enough humans to do those jobs.
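To make the ‘if-then’ idea concrete, here is a minimal, hypothetical sketch in Python of a rule-based bot of the kind Gilbert describes; the keywords and canned answers are invented for illustration and are not AT&T’s actual rules.

```python
# Minimal sketch of a rule-based ("if-then") support bot.
# The keywords and canned answers below are illustrative only.

RULES = [
    ("password", "To reset your password, choose 'Forgot password' on the sign-in page."),
    ("bill", "Your latest bill is available under 'Billing' in your account."),
    ("outage", "We'll check for known outages in your area and send you an update."),
]

def respond(message: str) -> str:
    """Return the first canned answer whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES:
        if keyword in text:       # the "if" part of the rule
            return answer         # the "then" part
    return "Let me connect you with an agent."  # fallback when no rule fires

print(respond("I forgot my password again"))
```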

Where it gets harder to understand is this sort of new wave of AI, of machine learning and deep learning based AI. Those are harder to understand because people equate them to some robot having the intelligence of a human, thinking like a human, making decisions like a human, and those don’t really exist today. Even what exists today is still in its rudimentary, early form. The machine learning type of AI that exists today, even in deployment (and we have a bunch of those already), is hard because it is very data driven. That’s the basic concept of an AI machine learning system today: data driven.

We deployed our first commercial AI system for customer care, called ‘How May I Help You,’ in about 2000, and then we had to go collect large amounts of data from our call centers to do the most basic thing. As a result, there are only a few of these systems you can build: if you have to go and collect large amounts of data and have that data checked, evaluated, and labeled by humans, which could take weeks, months, or years before the system can learn to do a function, that makes it really hard. So even when you think about the largest commercial deployments of AI today, Siri and Alexa and others, there are hundreds, if not thousands, of people behind them…
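As a rough illustration of the data-driven, supervised approach being described here (human-labeled utterances feeding a narrow classifier), below is a minimal sketch using scikit-learn; the tiny labeled dataset and category names are invented for the example, and a real call-routing system would need vastly more labeled data.

```python
# Minimal sketch of supervised, data-driven intent classification.
# Humans label example utterances; the model learns only this narrow routing task.
# The utterances and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_calls = [
    ("my bill is higher than last month", "billing"),
    ("I want to dispute a charge", "billing"),
    ("my internet keeps dropping", "tech_support"),
    ("the router light is blinking red", "tech_support"),
    ("I want to cancel my service", "retention"),
    ("is there a cheaper plan available", "retention"),
]
texts, labels = zip(*labeled_calls)

# Bag-of-words features plus a linear classifier: simple, but entirely
# dependent on the quality and quantity of the human-labeled data.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["why did my charges go up this month"]))
```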

But that just kind of kicks the can down the street a little, doesn’t it? I guess then I would say, “Why is building an unsupervised learner so hard?” Why haven’t we been able to make something that you could just point at the internet, and it would crawl around and sort it all out? Why do you think that’s so hard?

So the concept of generalized artificial intelligence means that you build intelligence into a system and that system can do anything you want: it can classify internet traffic, it can recognize what you say, it can tell you what kind of image this is, a cat or a dog. Those systems do not exist, not in research, not in any commercial arena. They don’t exist.

What exists today are systems that have been developed and trained by humans to do one narrow function, and those systems are not easy to develop, because not only do you need to collect large amounts of data, you need to teach the system what the truth is and what the right action to take is. I think of them as babies. You don’t train a baby in two hours or overnight. You don’t. It takes years to train a baby, with a lot of feedback, sometimes supervised feedback, on what is right, what is wrong, what is a picture, what is not a picture, what’s a word, what’s not a word, how to pronounce something.

That’s sort of where we are: these systems require years of data collection, with a lot of supervision and knowing the truth (just like any baby), for them to even get close to understanding and operating a simple function.

Listen to this episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.



from Gigaom https://gigaom.com/2019/07/11/voices-in-ai-episode-91-a-conversation-with-mazin-gilbert/
