
AI Today Podcast #004 – Guest Expert: James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era”.


Show Notes:

On today’s show we interview James Barrat, author of “Our Final Invention: Artificial Intelligence and the End of the Human Era”. We discuss why James wrote the book four years ago, how far away he really thinks we are from artificial human intelligence, the warning bells recently being sounded about artificial intelligence, and why he thinks there will not be another AI winter.

Links to articles and topics discussed:

___________________________________________

A transcript of the podcast is available below:

Kathleen Walch: [00:00:22] Hello and welcome to the AI today podcast. I’m your host Kathleen Walch.

Ron Schmelzer: [00:00:26] And I’m your host Ronald Schmelzer. Our guest today is James Barrat, author of the book “Our Final Invention: Artificial Intelligence and the End of the Human Era”. Let’s get things started.

Kathleen Walch: [00:00:36] Hi James. How are you?

James Barrat: [00:00:37] I’m great. It’s good to be here. Thanks for inviting me.

Kathleen Walch: [00:00:39] Great. I’d like to get started by having you introduce yourself to our listeners, tell us a little bit about your book and what additional things you’re doing in the field of AI, and let’s go from there.

James Barrat: [00:00:52] OK, well, I’m a documentary filmmaker primarily, and an author and speaker. I got into artificial intelligence, or the study of artificial intelligence and the critique of AI, because I made a film about 17 years ago now about artificial intelligence. I interviewed Ray Kurzweil and Rodney Brooks and Arthur C. Clarke, among others; Ray Kurzweil, of course, who is now a chief engineer at Google and the Google Brain project. He was very optimistic about AI and thought that it would bring in a utopian period when most of mankind’s problems would be defeated, including mortality. Rodney Brooks was not quite that rosy, but he was still very optimistic. He thought robots and AI would be our partners, never our competitors. But Arthur C. Clarke, who was a scientist before he was a science fiction writer, said something like: we steer the future not because we’re the fastest creature or the strongest creature, but because we’re the most intelligent. And when we share the planet with something more intelligent than we are, it will steer the future. Up until then I had been pretty besotted with AI, and I still am; I still think it’s a terrific set of technologies with a great deal of potential good. But at that point some skepticism entered my mind, and it just festered, and I started interviewing people who make AI, and ultimately came out with my book, “Our Final Invention: Artificial Intelligence and the End of the Human Era”.

Ron Schmelzer: [00:02:06] Sounds great. We’ve read a number of books on the topic, and we’ve seen a lot of folks like Nick Bostrom, and of course Elon Musk most recently, start to be more vocal about some of the dangers of AI. That being said, many notable AI researchers and technologists in the field, such as Rodney Brooks, as you just mentioned, say that we’re really not that close to this vision of super intelligence, if it is even possible, especially in areas like self-awareness, survival instinct, and the desire to self-improve that many of the warnings about super intelligence are based on. How do you respond to the industry experts who disagree with this long term vision of AI?

James Barrat: [00:02:40] Yeah. Well, you know, I do see those remarks all of the time, and they’re always very contextual. Gary Marcus, who’s an NYU psychology professor and also an AI maker, reviewed my book for the New Yorker, and he said, does it really matter how long it takes? If it takes 50 years or 100 years to get to machines that are smarter than us, we will still be faced with the same dilemma: is it safe, and have we prepared for that time? As Stephen Hawking said, if we knew that a vastly smarter alien race was going to land on our planet in 20 years, would we just, you know, say “come on by”? Or would we get ready? And there have been a lot of people since I wrote my book getting on the AI risk and AI skepticism bandwagon for really good reasons. Elon Musk is one. Stephen Hawking is another. Bill Gates, a lifelong programmer, is another. Stuart Russell, who coauthored the standard text of AI, “Artificial Intelligence: A Modern Approach”, is another, and I could go on and on. These are people who know a lot about AI. My skeptical sense is that the people who are defending it the most vociferously have a giant economic stake in the outcome. And you know, Mark Zuckerberg came out and said Elon Musk’s comments were irresponsible. Well, Mark Zuckerberg has the biggest financial stake of all in the outcome of AI. Google, you know, when they don’t like something that’s been written about them, they have it erased. They have 400 lawyers. Google does not tolerate a lot of, you know, sort of insurrection from its own people. And it’s not going to come out and be supportive of people who are talking about AI risk. So I think the lines are drawn pretty clearly: there are those who are going to make a whole lot of money from AI, and those people are going to have to put up a bit of a defense against people like me.

Ron Schmelzer: [00:04:16] Good point.

Kathleen Walch: [00:04:17] Yeah. Now, I know in your book you said that we’re about 10 years away from artificial human intelligence, and your book was written four years ago, so do you think that we’re still going to create artificial human intelligence by 2023? And if so, what are the key signs to indicate we’re reaching that point?

James Barrat: [00:04:36] You know, I’m not a Futurist, and being a Futurist would be the worst job in the world, because, you know, as they say, super intelligent machines are always 20 years away. One place where I really do follow the guidance is with Ray Kurzweil, who is an excellent technologist. He won the Edison Award for inventing; before he became, you know, sort of a preacher for the singularity, he was an extremely accomplished inventor. He thinks that by 2029 we’ll have human level intelligence at the price of a computer. So, human level intelligence in a machine that’s cheap. He also said he wants to create a machine that makes 300 trillion calculations per second and share that with a billion people. And so what that is is an online service that’s intelligent. The applications for that would be amazing. Imagine chaining together a bunch of super intelligences and then tackling things like climate change or drug research or cancer research or, unfortunately, weapons development. So we’re headed for that, and 2029 does not seem anymore to be too close. I took a poll of AI makers at a conference. The mean date for coming up with human level intelligence in a machine was 2045. I think it’s gonna be sooner than that, probably 2029 or 2030. But I tell ya, what happened with Go and deep learning was a little unexpected, and I think other people are speaking up now because they see the potential for accelerating advances towards that goal of human level intelligence in a machine.

Ron Schmelzer: [00:06:00] We actually talked directly to that point on our most recent podcast, “Should We Be Scared of AI”, which we published not too long ago. In that episode we say that we could be one major innovation away from greatly accelerating the pace of AGI, and I think the point that we make is that nobody really knows. Just as you mentioned with deep learning, nobody really knows how close or how far we are from achieving the vision of superintelligence. We could be sooner than we think, we could be farther than we think. And I think the reason is that it’s all based on innovations: if some smart person at some university or a company somewhere comes up with some major innovation, then everything can be accelerated a lot faster than we were expecting.

James Barrat: [00:06:38] Well, you know, it’s like Ben Goertzel. If you don’t know him, he’s worth looking up, because he’s a fascinating person. Ben Goertzel is trying to make AGI, artificial general intelligence, and he said to me that they were waiting for a breakthrough: just as calculus was a breakthrough that provided a lot of mathematical shorthands, and algebra was a breakthrough that provided a lot of mathematical shorthands, they were waiting for the next giant innovation. And frankly, I think deep learning is as close to a giant innovation as we’ve had in some time.

Ron Schmelzer: [00:07:05] Well, that actually brings up the next question, which is especially for the folks who are sounding the warning bells about artificial intelligence. It’s just like research into nuclear fission or fusion, which everybody knows has catastrophic, humanity-ending potential. But if this were the 1920s and 30s, it would have been very hard to get researchers and governments and other folks to stop their research on nuclear fission and fusion, because they could see all the other benefits of nuclear energy, or at least the benefits they were chasing. So what should we do about all this research and attention and money being focused on AI now? Can we really expect to stop or even slow the pace of AI research?

James Barrat: [00:07:41] This is where I get to at the end of my book “Our Final Invention”. It’s basically that there’s such a huge economic wind propelling the development of AI that there’s no way we can relinquish this technology or slow it down. The amount of money invested has doubled every year since 2009. Gartner and company reckon that by 2025 the value of AI and automation will be five trillion dollars, which would make it the largest sector of the economy. So there’s too much money to be made for this to slow down. And there are groups like the Future of Life Institute and MIRI, the Machine Intelligence Research Institute. MIRI has actually been at this for 10 years, trying to raise awareness and develop AI that’s reliably friendly. But at the same time it’s full speed ahead for so many companies. You know, Google has a $200 billion war chest. The NSA (the National Security Agency), which has a long track record of abusing our rights, has a $50 billion a year war chest. How does a nonprofit like the Future of Life Institute or MIRI compete with this much money and this much talent being thrown at the goal of making machines as intelligent as humans? I don’t know how you slow it down. I think the Future of Life Institute has the right idea by trying to get the AI makers and the policy makers and the ethicists all together, but then how do you bring China to the table? How do you bring Russia to the table?

Ron Schmelzer: [00:09:00] Yeah, I think one of the feedback comments about the book was that you’re definitely highlighting a lot of the challenges and a lot of the problems, showing us the potential path to this vision of super intelligence, but you don’t really talk too much about solutions. And to that point, it sounds like one of the things you’re saying is that it may be really hard to put the cork back in this bottle. We’ve already sort of released the genie, and now basically it’s a matter of dealing with what will inevitably occur at some point in the future, right? So what do you say to the people who are asking for solutions to this problem?

James Barrat: [00:09:30] Well, as you said, and I use this example a lot as well, it is like fission. In the 1920s and 30s the biggest, most respected physicists didn’t think nuclear fission was possible, and then it was, and then it was weaponized, and we incinerated two cities with bombs, and we held a gun at our own heads as a species throughout the whole nuclear arms race. And what do we have today? We have this insane dictator in North Korea threatening to use nuclear weapons. We had no maintenance plan for that technology. Right now we have no maintenance plan for this [AI] technology, and this technology is actually more sensitive than fission. This is the technology that invents technology. So I didn’t have any solutions, and I don’t pretend to have a lot of solutions now. I think one of the keys is probably getting the word out to a lot of people so that the government at some point steps in with regulation, and I’m the last person to recommend that for anything. First of all, how do you educate Congress about these technologies, and then how do you get any meaningful legislation passed? And right now it’s probably too early anyway. But if we just horse around like we did with fission, before we know it we’ll have some cataclysmic disaster.

Kathleen Walch: [00:10:35] I mean, Putin recently warned that the one who becomes the leader in the sphere of AI will be the ruler of the world. That is a pretty strong statement.

James Barrat: [00:10:45] (laughter) He’s no dummy, he’s a thug. He is a murderer but he’s no dummy.

Kathleen Walch: [00:10:50] No, Putin is no dummy.

James Barrat: [00:10:51] You know, it’s absolutely true that there is a huge first mover advantage, and it’s not… you know, I used to think it was just a competition among companies: Google, IBM, Amazon, Baidu. It’s really going to be a competition among governments, because whoever makes superintelligence first will be able to control the other intelligences. This concept, for anybody who wants to look it up, is called the Singleton concept, and it’s extremely dangerous having one super intelligence that can control all the other AIs. If you do a Google search for artificial intelligence and the singleton principle, you can read about the problem. This is why some people like Elon Musk recommend… this is why he ostensibly started a company called OpenAI: to make AI development known and transparent, and to grow an ecology of AIs simultaneously around the world so no one gets super dominance. And it’s either a good idea or a really bad idea; I can’t figure out which. If you make AI development transparent, don’t you put it in the hands of bad actors? And that’s a big problem: bad actors who didn’t have insight and don’t have enough talent to develop it on their own.

Kathleen Walch: [00:11:56] Another point that we brought up before is that people want to assume that good actors are using AI to do good things, so they’ll feed it good data. But what happens when bad actors feed AI bad data, and by bad data I mean malicious data, and it learns off that, so that what it learns isn’t for the greater good? What happens, and what should we be doing about getting AI in the hands of bad actors?

James Barrat: [00:12:21] … Getting it out of the hands of bad actors. First of all, the good actors aren’t that good. I mean, as you mentioned, there are huge biases in data sets. There are sexist biases and there are racial biases; there are biases that keep minorities from getting bank loans. If you feed a neural net the pictures we have at hand, it will believe that all doctors are white men. So there’s a huge amount of potential abuse just in creating giant data sets and using big data. And then the good actors are not that good. Google has 400 lawyers because they get sued all the time. If you are thinking about studying law, go through the case law of lawsuits against Google. I mean, there have been privacy lawsuits, there have been copyright lawsuits. Google is a gigantic corporation that seems to have no real head. There are so many units to it that act independently. How do you keep that under control? And they have so much money that they shut up dissent. When critique comes from inside, those people get fired. And they have had press taken down: Forbes published an article that was critical of them, and Google had them take it down, because they seemed to own a lot of Forbes. So I’m not sure we have a lock on who the good actors are.
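
To make the data set bias point concrete, here is a minimal sketch in Python of the kind of audit that surfaces this skew before a model is trained. The data, labels, and demographic tags below are hypothetical illustrations, not anything from the interview or a real data set:

```python
# A minimal sketch of auditing a labeled image data set for demographic skew.
# The (label, tag) pairs are hypothetical stand-ins for real image metadata.
from collections import Counter

dataset = [
    ("doctor", "white_man"), ("doctor", "white_man"), ("doctor", "white_man"),
    ("doctor", "white_man"), ("doctor", "white_woman"),
    ("nurse", "white_woman"), ("nurse", "black_woman"),
]

# Count how each demographic tag is represented within the "doctor" class.
counts = Counter(tag for label, tag in dataset if label == "doctor")
total = sum(counts.values())
for tag, n in counts.most_common():
    print(f"doctor images tagged {tag}: {n}/{total} ({n / total:.0%})")

# If one group dominates a class, rebalance or reweight before training;
# otherwise the model learns the skew as if it were ground truth.
```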

Ron Schmelzer: [00:13:26] Yeah, I think that’s a good point. When somebody controls the search algorithm, as you were just mentioning, it’s easier to intimidate publishers, because you can make them effectively invisible on the web.

James Barrat: [00:13:35] (haha) Yeah, and that’s virtual murder, and that’s a really serious thing. You know, Internet invisibility is the same as not existing, in a commercial sense. So they have awesome power. These companies are becoming more like nation states than corporations in my mind, and none of them are particularly virtuous.

Ron Schmelzer: [00:13:52] Sounds like it could be an interesting follow-up book to all this.

James Barrat: [00:13:55] You could write a book about sensitive technologies and corporations. You know, start with Union Carbide and Bhopal. Our innovation runs way ahead of our stewardship. Union Carbide decided to build a chemical plant in an intensely, densely populated area, and I think it was 18,000 people who died at Bhopal. Then they renamed the company and sold it off so it wouldn’t keep coming back to haunt them. But we tend to have accidents, we’re a little bit chastened, and then we move on. Some accidents are recoverable, but superintelligence won’t be like a bomb; it won’t be something that blows up and then you clean it up. It will be something that’s widely distributed around the planet, and you simply can’t turn it off.

Ron Schmelzer: [00:14:32] Well, I think that brings us to the last question, and that is that AI has been around for a little while. It’s been around for several decades, basically since the beginning of computing, and it’s been through several waves of interest and investment followed by declines of interest, the so-called AI winters. There was the period of the 60s and 70s and then a decline, a resurgence of interest in the 80s and 90s and then another decline, and now we are in this new resurgence of interest. A lot of people attribute these AI winters to an inability of AI to live up to expectations. It didn’t do what people were claiming it would do, so funding dried up and interest dried up. So what is different about where we are in this latest cycle of interest and investment that will let AI not only live up to its expectations but surpass them, especially in the dangerous ways that you’ve written about in your book?

James Barrat: [00:15:19] Three things: giant data sets; faster, better processors, namely graphics processing units, which turn out to be really good at powering neural nets; and clever, innovative techniques. Data sets, graphics processing units, and new techniques. Those three things have completely opened the door to rapid innovation and success with AI tools. So yeah, I think the AI winters are over.
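
As a rough illustration of the second ingredient, here is a minimal sketch, assuming PyTorch (our choice of library; none is named in the episode), of why graphics processing units pair so well with neural nets: training is dominated by large matrix multiplications, which GPUs parallelize, and moving the whole computation from CPU to GPU is a one-line device change:

```python
# A minimal sketch (assuming PyTorch) of one training step for a tiny neural
# net. The `device` line is the only change needed to run on a GPU.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784, device=device)         # a fake batch of 64 "images"
y = torch.randint(0, 10, (64,), device=device)  # fake class labels

optimizer.zero_grad()
loss = loss_fn(model(x), y)  # forward pass (mostly matrix multiplies)
loss.backward()              # backward pass
optimizer.step()             # weight update
print(f"device={device} loss={loss.item():.4f}")
```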

Ron Schmelzer: [00:15:43] So if I can just summarize: you think there were some technological hurdles. Back in the 50s, 60s, and 70s we obviously had very little computing power. Then in the 80s and 90s we had better computing power, obviously not as good as we have now, but there was a data challenge, especially around the expert systems. So you think basically it’s the confluence of surpassing some technological hurdles, the ability now to deal with a massive, almost infinite amount of data, combined with these new techniques.

James Barrat: [00:16:09] Techniques, big data, and these new kinds of processors. If anyone wants to google it, Kevin Kelly the Futurist wrote a really good article about the confluence of these three things and why there’s so much excitement about AI right now. And I want to say that I am absolutely fascinated by AI. I think it’s a wonderful technology, and I do see its potential for great good. I think if we manage to survive the next 20 years, we could solve a lot of our problems with AI. It’s this profoundly inward looking technology that asks us who we are in a way that no other technology ever has, because it combines psychology with neuroscience and logic, and all the things we do, the science of AI is trying to do better. So despite the title of my book and the tone of my rhetoric, I’m actually a big, big fan of AI.

Ron Schmelzer: [00:16:59] Sounds great, and I think we’re on the same page. Obviously, one of the things we’re trying to do is keep track. Our mission is to be aware of what is happening, keep track of it, and be influential outside observers; that’s sort of our role.

James Barrat: [00:17:10] What you’re doing is very important, and that’s getting the word out, because one of the reasons I wrote “Our Final Invention” is that there was no text out there that explained in layman’s terms what was going on. And you know, this is technology that will impact everyone, and it ultimately threatens everyone or ultimately benefits everyone. It would behoove everyone to know about it, to get involved with the discussion, and probably ultimately to be writing your Senators and Congressmen about how we keep this wild technology safe.

Kathleen Walch: [00:17:41] OK, James, I think this is a great place to wrap it up. Thank you very much for joining us for today’s podcast.

James Barrat: [00:17:47] The pleasure was mine. Thank you very much.

Kathleen Walch: [00:17:49] And listeners, we’ll post links to the articles and concepts discussed in the Show Notes for today’s podcast.

Ron Schmelzer: [00:17:53] And thank you all for joining us. We really appreciate your participation, and once again I want to thank James Barrat for joining us on this podcast.

James Barrat: [00:17:59] Thank you. I look forward to speaking with you again.

Kathleen Walch: [00:18:02] Likewise. All right, listeners, we’ll catch you on the next podcast! And that’s a wrap for today. To download this episode, find additional episodes and transcripts, subscribe to our newsletter, and more, please visit our Web site at Cognilytica.com. Join the discussion in between podcasts on the AI Today Facebook group, and make sure to follow the Cognilytica Facebook page for updates on this and future podcasts. Also, subscribe to our podcast on iTunes, Google Play, and elsewhere to get notified of future episodes.

Ron Schmelzer: [00:18:32] Want to support this podcast and get your message out to our listeners? Then become a sponsor. We offer significant benefits for AI Today sponsors, including promotion in the podcast and on the landing page, and opportunities to be a guest on the AI Today show. For more information on sponsorship, visit the Cognilytica Web site and click on the podcast link. As always, thanks for listening to AI Today, and we’ll see you at the next podcast.

 
