About this Episode
Episode 83 of Voices in AI features host Byron Reese and Margaret Mitchell discussing the nature of language and its impact on machine learning and intelligence.
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Transcript Excerpt
Byron Reese: This is Voices in AI brought to you by GigaOm and I’m Byron Reese. Today my guest is Margaret Mitchell. She is a senior research scientist at Google doing amazing work. And she studied linguistics at Reed College and Computational Linguistics at the University of Washington. Welcome to the show!
Margaret Mitchell: Thank you. Thank you for having me.
I’m always intrigued by how people make their way to the AI world, because a lot of times what they study in university [is so varied]. I’ve seen neuroscientists, I’ve seen physicists, I’ve seen all kinds of backgrounds. [It’s] like all roads lead to Rome. What was the path that got you from linguistics to computational linguistics and to artificial intelligence?
So I followed a path similar to, I think, some other people who’ve had sort of linguistics training and then go into natural language processing, which is sort of [the] applied field of AI, focusing specifically on processing and understanding text as well as generating it. And so I had been kind of fascinated by noun phrases when I was an undergrad. So that’s things that refer to people, places, objects in the world and things like that.
I wanted to figure out: is there a way that I could, like, analyze things in the world and then generate a noun phrase? So I was kind of playing around with just this idea of ‘How could I generate noun phrases that are humanlike?’ And that was before I knew about natural language processing, that was before this new wave of AI interest. I was just kind of playing around with trying to do something that was humanlike, from my understanding of how language worked. Then I found myself having to code to get that to work, like mocking up some basic examples of how it could work if you had different knowledge about the kinds of things you’re trying to talk about.
And once I started doing that, I realized that I was doing essentially what’s called natural language generation. So generating phrases and things like that based on some input data or input knowledge base, something like that. And so once I started getting into the natural language generation world, it was a slippery slope to get into machine learning and then what we’re now calling artificial intelligence because those kinds of things end up being the methods that you use in order to process language.
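As a concrete illustration of that kind of natural language generation, here is a minimal, hypothetical sketch of producing a noun phrase from structured input. The attribute names and the template are assumptions made purely for illustration, not anything from Mitchell’s actual work.

```python
# A minimal, hypothetical sketch of template-based noun phrase generation:
# structured attributes in, a human-readable noun phrase out. All names here
# are illustrative assumptions, not code from the systems discussed.

def generate_noun_phrase(attributes: dict) -> str:
    """Build a noun phrase such as 'a small red ball' from attribute data."""
    determiner = attributes.get("determiner", "a")
    modifiers = " ".join(attributes.get("modifiers", []))
    head_noun = attributes["head"]
    parts = [determiner, modifiers, head_noun]
    return " ".join(part for part in parts if part)  # drop empty slots

if __name__ == "__main__":
    print(generate_noun_phrase({"modifiers": ["small", "red"], "head": "ball"}))
    # -> a small red ball
```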
So my question is: I always hear these things that say “computers have x-ty-nine point whatever percent accuracy in transcription,” and I fly a lot. My frequent flyer number of choice has an A, an H and an 8 in it.
Oh no.
And I would say it never gets it right.
Right.
And it’s only got 36 choices.
Right.
Why is it so awful?
Right. So that’s speech processing. And that has to do with a bunch of different things, including just how well the speech stream is being analyzed, and the frequencies that are picked up are going to be different depending on what kind of device you’re using. And a lot of times the higher frequencies are cut off. And so words or sounds that we hear really easily face to face get muddled more when we’re using different kinds of devices. So, especially on things like telephones, that ends up cutting off a lot of these higher frequencies that really help those distinctions. And then there are just general training issues, so depending on who you’ve trained on and what the data represents, you’re going to have different kinds of strengths and weaknesses.
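To put rough numbers on that band-limiting point: narrowband telephony passes roughly 300–3400 Hz, while sounds like /s/ and /f/ are distinguished largely by energy above about 4 kHz. The sketch below uses a synthetic signal rather than real speech, and the sample rate and filter settings are assumptions chosen only to illustrate how a telephone-style low-pass filter strips most of that high-band energy.

```python
# Illustrative only: a synthetic "speech" signal, not a real recording.
import numpy as np
from scipy import signal

sample_rate = 16_000                      # assumed wideband sample rate, Hz
t = np.arange(0, 0.5, 1 / sample_rate)    # half a second of audio

# Toy stand-ins: a low-frequency vowel-like tone plus broadband fricative-like noise.
vowel = np.sin(2 * np.pi * 300 * t)
fricative = 0.3 * np.random.randn(t.size)
audio = vowel + fricative

# Approximate a telephone channel with a 3.4 kHz low-pass Butterworth filter.
b, a = signal.butter(N=4, Wn=3400, btype="low", fs=sample_rate)
telephone_audio = signal.lfilter(b, a, audio)

def high_band_energy(x):
    """Fraction of signal energy above 4 kHz, estimated via an FFT."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / sample_rate)
    return spectrum[freqs > 4000].sum() / spectrum.sum()

print(f"high-band energy, original:  {high_band_energy(audio):.3f}")
print(f"high-band energy, telephone: {high_band_energy(telephone_audio):.3f}")
```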
Well I also find that, in a way, our ability to process language is ahead of our ability, in many cases, to do something with it. I can’t say the names out loud because I have two of these popular devices on my desk and they’ll answer me if I mention them, but they always understand what I’m saying. But the degree to which they get it right is another matter: if I say “What’s bigger, a nickel or the sun?” they never get it. And yet they usually understand the sentence.
So I don’t really know where I’m going with that other than: do you feel like you could say your area of practice is one of the more mature ones? Like, hey, we’re doing our bit; the rest of you, the common sense people over there, the models-of-the-world people over there, the transfer learning people, y’all are falling behind, but the computational linguistics people, we have it all together?
I don’t think that’s true. And the things you’re mentioning aren’t actually mutually exclusive either, so in natural language processing you often use common sense databases or you’re actually helping to do information extraction in order to fill out those databases. And you can also use transfer learning as a general technique that is pretty powerful in deep learning models right now.
Deep learning models are used in natural language processing as well as image processing as well as a ton of other stuff.
So… everything you’re mentioning is relevant to this task of saying something and having the device on your desktop understand what you’re talking about. And that whole process isn’t just simply recognizing the words, but taking those words and then mapping them to some sort of user intent and then being able to act on that intent. That whole pipeline, that whole process involves a ton of different models and requires being able to make queries about the world and extract information based on… usually it’s going to be the content words of the phrase: so nouns, verbs, things that are conveying the main ideas in your utterance, and using those in order to find information relevant to them.
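A heavily simplified sketch of that words-to-intent step follows, with a few hand-written rules standing in for the learned models a real assistant would use; the stop-word list, intent names, and example utterance are purely illustrative assumptions.

```python
# Hypothetical sketch of the "content words -> intent" stage of the pipeline.
# Real assistants use trained models; hard-coded rules here keep it concrete.

STOP_WORDS = {"the", "a", "an", "is", "are", "what", "what's",
              "of", "to", "in", "or", "please"}

def content_words(utterance: str) -> list[str]:
    """Keep the words that carry the main ideas (roughly, nouns and verbs)."""
    tokens = [tok.strip(",.?!") for tok in utterance.lower().split()]
    return [tok for tok in tokens if tok not in STOP_WORDS]

def map_to_intent(words: list[str]) -> str:
    """Map content words to a coarse user intent (illustrative rules only)."""
    if "bigger" in words or "larger" in words:
        return "compare_size"
    if "weather" in words:
        return "get_weather"
    return "unknown"

if __name__ == "__main__":
    words = content_words("What's bigger, a nickel or the sun?")
    print(words)                 # ['bigger', 'nickel', 'sun']
    print(map_to_intent(words))  # compare_size
```

A real system would then ground an intent like compare_size against a knowledge source (the sizes of a nickel and of the sun) before generating an answer, which is where the common sense and knowledge base pieces mentioned above come in.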
So the Turing test… if I can’t tell whether I’m talking to a person or a machine, you’ve got to say the machine is doing a pretty good job. It’s thinking, according to Turing. Do you think passing the Turing test would actually be a watershed event? Or do you think that’s more like marketing and hype, and it’s not the kind of thing you even care about one way or the other?
Right. So the Turing test as it was originally construed has this basic notion that the person who is judging can’t tell whether the output is human-generated or machine-generated. And there’s lots of ways to do that. That’s not exactly what we mean by human-level performance. So, for example, you could trivially pass the Turing test if the machine were pretending to be a person who doesn’t understand English well, right? So you could say, “Oh, there’s a person behind this, they’re just learning English for the first time; they might get some things mixed up.”
Listen to this one-hour episode or read the full transcript at www.VoicesinAI.com
Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.