
Voices in AI – Episode 103: A Conversation with Ben Goertzel


About this Episode

On Episode 103 of Voices in AI, Byron Reese discusses AI with Ben Goertzel of SingularityNET, diving into the concepts of a master algorithm and AGI.

Listen to this episode or read the full transcript at www.VoicesinAI.com

Transcript Excerpt

Byron Reese: This is Voices in AI brought to you by GigaOm, I’m Byron Reese. Today, my guest is Ben Goertzel. He is the CEO of SingularityNET, as well as the Chief Scientist over at Hanson Robotics. He holds a PhD in Mathematics from Temple University. And he’s talking to us from Hong Kong right now where he lives. Welcome to the show Ben!

Ben Goertzel: Hey thanks for having me. I’m looking forward to our discussion.

The first question I always throw at people is: “What is intelligence?” And interestingly you have a definition of intelligence in your Wikipedia entry. That’s a first, but why don’t we just start with that: what is intelligence?

I actually spent a lot of time working on the mathematical formalization of a definition of intelligence early in my career and came up with something fairly crude which, to be honest, at this stage I’m no longer as enthused about as I was before. But I do think that that question opens up a lot of other interesting issues.

The way I came to think about intelligence early in my career was simply: achieving a broad variety of goals in a broad variety of environments. Or as I put it, the ability to achieve complex goals in complex environments. This tied in with what I later distinguished as AGI versus narrow AI. I introduced the whole notion of AGI and that term in 2004 or so. That has to do with an AGI being able to achieve a variety of different or complex goals in a variety of different types of scenarios, unlike the narrow AIs that we have all around us that basically do one type of thing in one kind of context.

I still think that is a very valuable way to look at things, but I’ve drifted more toward a systems theory perspective. I’ve been working with a guy named David (Weaver) Weinbaum, who recently did a thesis at the Free University of Brussels on the concept of open-ended intelligence, which looks at intelligence more as a process of exploration and information creation in interaction with an environment, rather than just goal pursuit. And in this open-ended intelligence view, you’re really looking at intelligent systems as complex self-organizing systems, and the creation of goals to be pursued is part of what an intelligent system does, but isn’t necessarily the crux of it.

So I would say understanding what intelligence is, is an ongoing pursuit. And I think that’s okay. Like in biology, the goal isn’t to define what life is in a ‘once and for all’ formal sense before you can do biology, and in art, the goal isn’t to define what beauty is before you can proceed. These are sort of umbrella concepts which can then lead to a variety of different particular innovations and formalizations of what you do.

And yet I wonder, because you’re right, biologists don’t have a consensus definition for what life is, or even death for that matter. You wonder at some level if maybe there’s no such thing as life. I mean, maybe it isn’t really… and so maybe you’d say that’s not really even a thing.

Well, this is one of my favorite quotes of all time, [from] former President Bill Clinton: “That all depends on what the meaning of IS is.”

There you go. Well, let me ask you a question about goals, which you just brought up. I guess when we’re talking about machine intelligence or mechanical intelligence, let me ask point blank: is a compass’ goal to point north? Or does it just happen to point north? And if it isn’t its goal to point north, what is the difference between what it does and what it wants to do?

The standard example used in systems theory is the thermostat. The thermostat’s goal is to keep the temperature above a certain level and below a certain level, or in a certain range, and in that sense the thermostat does have, you know, a sensor, it has an actuation mechanism, and a very simple local control system connecting the two. So from the outside, it’s pretty hard not to attribute a goal to the thermostat in a heating system: it has a sensor, an actuator, and a decision-making process in between.
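(As an illustrative aside, not part of the interview: the sensor/actuator/decision-rule structure Goertzel describes can be sketched in a few lines of Python. All names and temperature thresholds here are hypothetical.)

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    current_temp: float  # what the thermostat "perceives"

@dataclass
class Heater:
    heating: bool = False  # the actuator it drives

def thermostat_step(sensor: Sensor, heater: Heater,
                    low: float = 19.0, high: float = 21.0) -> None:
    """One control cycle: read the sensor, decide, drive the actuator."""
    if sensor.current_temp < low:
        heater.heating = True    # too cold: switch heating on
    elif sensor.current_temp > high:
        heater.heating = False   # warm enough: switch heating off
    # inside the band, leave the heater as it is

# From the outside the loop looks goal-directed ("keep the room near 20 °C"),
# but nothing in the code represents that goal to itself.
sensor, heater = Sensor(current_temp=17.5), Heater()
thermostat_step(sensor, heater)
print(heater.heating)  # True: the reading was below the lower threshold
```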

Again, the word “goal” is a natural language concept that can be used for a lot of different things. I guess that some people have the idea that there are natural definitions of concepts that have profound and unique meaning. I sort of think that only exists in the mathematics domain, where you can say the definition of a real number is something natural and perfect because of the beautiful theorems you can prove around it, but in the real world things are messy and there is room for different flavors of a concept.

I think from the view of the outside observer, the thermostat is pursuing a certain goal. And the compass may be also, if you go down into the micro-physics of it. On the other hand, an interesting point is that from its own point of view, the thermostat is not pursuing a goal: the thermostat lacks a deliberative, reflective model of itself as a goal-achieving agent. It’s only to an outside observer that the thermostat is pursuing a goal.

Now for a human being, once you’re beyond the age of six or nine months or something, you are pursuing your goal relative to the observer that is yourself. You have a sense of pursuing that goal, and I think this gets at the crucial connection between reflection and meta-thinking, self-observation and general intelligence, because it’s the fact that we represent within ourselves the fact that we are pursuing some goals that allows us to change and adapt the goals as we grow and learn in a broadly purposeful and meaningful way. Like if a thermostat breaks, it’s not going to correct itself and go back to its original goal or something, right? It’s just going to break, and it doesn’t even make a halting and flawed attempt to understand what it’s doing and why, like we humans do.

So we could say that something has a goal if there’s some function which it’s systematically maximizing, in which case you can say of a heating system or a compass that it does have a goal. You could say that it has a purpose if it represents itself as the goal-maximizing system and can manipulate that representation somehow. That’s a little bit different, and then we also get to the difference between narrow AIs and AGIs. I mean, AlphaGo has a goal of winning at Go, but it doesn’t know that Go is a game. It doesn’t know what winning is in any broad sense. So if you gave it a version of Go with, like, a hexagonal board and three different players or something, it doesn’t have the basis to adapt its behaviors in this weird new context and figure out what the purpose of doing stuff in this weird new context is, because it’s not representing itself in relation to the Go game and the reward function in the way a person playing Go does.
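(Another illustrative aside, not from the interview or from AlphaGo: the distinction Goertzel draws between having a goal, systematically maximizing some function, and having a purpose, additionally representing and being able to revise that function, can be sketched roughly as follows. All names and objectives are hypothetical.)

```python
from typing import Callable

# "Has a goal" in the outside-observer sense: behavior that systematically
# maximizes some function over the available options.
def greedy_choice(options: list[float], objective: Callable[[float], float]) -> float:
    return max(options, key=objective)

# "Has a purpose" in the stronger sense: the system also carries an explicit
# representation of its objective, which it can inspect and revise.
class ReflectiveAgent:
    def __init__(self, objective: Callable[[float], float], description: str):
        self.objective = objective
        self.description = description  # the agent's own model of what it pursues

    def act(self, options: list[float]) -> float:
        return greedy_choice(options, self.objective)

    def revise_goal(self, new_objective: Callable[[float], float],
                    new_description: str) -> None:
        # Because the goal is represented explicitly, it can be adapted when
        # the original goal stops making sense in a changed world.
        self.objective = new_objective
        self.description = new_description

# Usage: the agent starts out keeping a value near 20, then switches objectives.
agent = ReflectiveAgent(lambda x: -abs(x - 20.0), "keep x near 20")
print(agent.act([17.5, 19.0, 23.0]))  # 19.0
agent.revise_goal(lambda x: x, "maximize x")
print(agent.act([17.5, 19.0, 23.0]))  # 23.0
```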

If I’m playing Go, I’m much worse than AlphaGo; I’m even worse than, say, my oldest son, who’s like a one-dan Go player. I’m way down in the hierarchy, and I know that it’s a game of manipulating little stones on the board, by analogy to human warfare. I know how to watch a game between two people, and that winning is decided by counting stones and so forth. So being able to conceptualize my goal as a Go player in the broader context of my interaction with the world is really helpful when things go crazy and the world changes and the original detailed goals don’t make any sense anymore, which has happened throughout my life as a human with astonishing regularity.

Listen to this episode or read the full transcript at www.VoicesinAI.com


Byron explores issues around artificial intelligence and conscious computers in his new book The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity.



from Gigaom https://gigaom.com/2019/12/26/voices-in-ai-episode-103-a-conversation-with-ben-goertzel/
