AI Today Podcast #006: Should We Regulate AI? If So, How?

As the pursuit of AI continues, notable technologists and tech titans are sounding warning bells about what could happen in a future world dominated by superintelligent AI systems. Cognilytica wrote about this extensively in our Should We Be Scared of AI? [Cognilytica Research CG02] document and companion podcast. In that article, we wrote about how some are seeking to mitigate those fears through laws and regulations that can be enforced upon adopters and developers of AI technology. But what sort of regulations and laws would these be? And just how practical would they be?

In this podcast, Cognilytica analysts Ronald Schmelzer and Kathleen Walch hypothesize about the various types of regulations that could apply to AI and what form they would take.

Supporting Links:

___________________________________________

A transcript of the podcast is available below:

Kathleen Walch: [00:00:22] Hello and welcome to the AI today podcast. I’m your host Kathleen Walch.

Ron Schmelzer: [00:00:27] And I’m your host Ronald Schmelzer. And our topic today is: should we regulate AI, and if so, how?

Kathleen Walch: [00:00:33] Certain people like Bill Gates and Elon Musk have been sounding the alarm bells on artificial intelligence, and in our podcast “Should We Be Scared of AI” we talk about that a little. But Elon Musk says he’s urging governors to regulate artificial intelligence, quote unquote, “before it’s too late”. So he thinks that we need to create laws right now and that we need to preemptively stop whatever bad things could come about from AI.

Ron Schmelzer: [00:01:01] So sort of the challenge, of course, is that there are no AI laws and regulations right now. So what would these regulations be? And when I saw him say that, when he has been quoted in the press saying we need these laws and regulations, well, what are you referring to? Are you referring to laws that would prevent people from doing research on AI? Are you talking about laws that would prevent us from using AI? So what we decided to do here is to theorize a little bit about what these kinds of laws and regulations would be. And there’s been a recent article in The New York Times that talked a little bit about what this sort of regulation could be.

Kathleen Walch: [00:01:33] Right. So there are three laws of robotics that the writer Isaac Asimov introduced in 1942, and these rules are: number one, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Number two, a robot must obey the orders given to it by human beings except where such orders would conflict with the first law. And number three, a robot must protect its own existence as long as such protection does not conflict with the previous two laws. The author of the article goes a little bit further to say that we should now make more AI-specific laws: number one, an AI system must be subject to the full gamut of laws that apply to its human operator. Number two, an AI system must clearly disclose that it is not a human. And number three, an AI system cannot retain or disclose confidential information without explicit approval from the source of that information. So that got us thinking: OK, well, how can we classify these laws? We feel that is a regulation on the capability of the actual AI system itself. So an AI system must be subject to the full gamut of laws that apply to its human operator. Well, how does it interpret the law? Are humans putting the law in and then the AI is interpreting it, and is that interpretation unambiguous? So, you know, we have questions with that.

Ron Schmelzer: [00:02:56] Yeah, I think the thought we have here is that if we’re talking about what the AI systems are doing, we could talk about some of the technologies that people are using too. As I mentioned, one of these things has to do with privacy. You know, if you’re talking to Alexa or you’re talking to Siri or talking to your Google phone and you’re disclosing all this information and it’s going back to Amazon, well, we could have a law similar to the COPPA law that protects minors, to say, OK, companies like Amazon and Google or Apple are obligated to not disclose any confidential information that’s been disclosed to them. And that makes Apple, Google and those guys liable. But what if I’m talking to it and it’s interacting with some other application or some other piece of information; who is liable then for that disclosure of information? I think that’s sort of the trick. The other thing is that we can create some new laws around that, but it’s kind of hard to write laws on some of the other things because the question is who is really liable. Is it the human that’s liable? Is it the machine that’s liable? And I know we’re going to get into this later in the podcast, but one of the things we thought about adding to this list that the New York Times writer wrote was: should there be laws on how people can use AI systems to commit crimes? So let’s say right now you can’t rob a bank, but can you program a drone to rob a bank or something like that? I’m just making something up right now. And should there be any new laws that say, even if you are not specifically setting foot on the premises, if you are sending some autonomous machine to do the job, it should be illegal? Which you would think would make sense, but maybe that’s a loophole right now in the laws. I don’t know.

Kathleen Walch: [00:04:25] And another thing that we brought up with autonomous vehicles is: if there’s nobody physically driving the car, say an autonomous Uber for example, and it hits a pedestrian, who is liable? Is it the person who is in the car that’s liable? Is it Uber that’s liable? Is it the company that built the autonomous vehicle that’s liable? And right now I don’t think that there are rules and regulations in place to say who is criminally liable for that.

Ron Schmelzer: [00:04:56] Yeah. And this is actually an issue I think we’ve already started to see. There was this issue about how Toyota had these gas pedals that would get stuck. This was obviously not an autonomous vehicle situation, and the owners were saying, I wasn’t pressing down that gas pedal, it just got stuck. And I think the courts found Toyota to be liable for the damage and deaths; I think there may have been a few. The owners could basically claim, hey, that wasn’t me, the car was driving itself. Maybe in 2007 that might have been the butt of a joke, that the car was driving itself. It isn’t so much a joke anymore now that cars really are driving themselves. So I think rather than go to the courts, there may need to be some rules and regulations about how these systems are built, to basically say that if you use this autonomous vehicle, in certain cases the owner or driver or operator or occupant is liable, in some cases the car manufacturer is liable, and in some cases a car service is liable. I think this is pretty complicated, to be totally honest with you.

Kathleen Walch: [00:05:56] And I think that one thing we really need to be mindful of when doing this is we have to figure out who is liable and then make sure that that doesn’t stifle innovation and stifle further development. Because if we make a rule now that the car companies who build these autonomous vehicles are liable for whatever injuries or deaths are caused by these vehicles, I don’t want it to be a hindrance to them, where they say, hey, this could be a potential issue, let’s not even go there, and then they just totally stop R&D on this.

Ron Schmelzer: [00:06:27] That is a good segue. So right now we’re thinking about laws and regulations about what these systems do, how these AI systems behave in the real world. Maybe there should also be regulations on how we develop AI systems. A good analogy of this is that nobody has put weapons in space because we have a treaty with all these countries that says we will not put weapons in space, and of course the first party to violate that rule violates the Outer Space Treaty. And we have other technologies too: we have laws and regulations on human cloning, we have laws and regulations on stem cell research and on gene editing so we can’t create designer babies, and we had those laws in place before we’d actually done those things. So this is the argument: do you put the regulations in place before the technology is developed, just in case somebody does it? Or do you put them in later, after it’s been done? There are certain cases where we may have to put them in place before, and I can think of two situations where that may be relevant here. One is laws about weaponizing AI: should people not be allowed to build autonomous destruction-causing machines? And then the other one is: should there be laws around merging AI technology with humans, the whole cyborg idea? Should it be illegal for someone to implant something in their brain that gives them some sort of advantage in a way that is illegal, like taking the S.A.T. or playing blackjack or something? Those are the thoughts I have about limitations before the systems are developed.

Kathleen Walch: [00:07:51] Right, and I think that some of the things you brought up are more ethical questions. Human cloning, for example, or designer babies, those are ethical questions, and I think that a lot of people can get behind the ethics of it. So you put a law in place that prevents you from doing something that, you know, people don’t always agree with. But with artificial intelligence I think that it’s a lot more cloudy. So you could put a law in place where you’re not allowed to modify a human’s body with a system, a cyborg for example. I couldn’t put a chip in my brain that helps me cheat on the SATs and get a perfect score. But what do I do with something else that’s not human-related? If it’s autonomous vehicles, or if it’s a computer system, if it’s a bot, if it’s something like that, I think this is where it starts to get a little bit more cloudy, where people don’t always want to put a regulation in place before it’s been built, because the ethics aren’t as clear.

Ron Schmelzer: [00:08:47] In fact, I didn’t want to jump in because you gave me an interesting idea about the whole body modification thing. So it is very cloudy. You could say, OK, in certain instances you’re not allowed to build an AI system that gives you some capability. I recall somebody was talking about what if someone who was paraplegic or disabled had basically an exoskeleton or replacement parts. Should there be regulation about how strong it could be? Somebody was talking about whether you can board a plane with a robotic arm that has the capability of basically smashing through the plane. And how would you even possibly regulate that? You could say you’re not allowed to bring guns on a plane, but you can bring a bionic arm that could smash the cockpit door, and you can’t tell people to take their arm off. So, man, this is tricky. This is really tricky, the whole human-computer interface part of it. And I think the whole weaponizing of AI is very difficult. What some company may think is a competitive advantage of AI, let’s say some autonomous marketing robot, Facebook could deem to be a fake news robot. So I think we’re really in tricky waters on not just what the AI systems can do but what people are allowed to build. And can you really stop some prosthetics researcher at Johns Hopkins University and say, you’re not allowed to build that arm because you’re building it too strong? That’s tricky. I don’t know about that one.

Kathleen Walch: [00:10:00] I know. And then another thing that I think we need to be mindful of is: do we develop the laws and regulations before the technology and kind of predict and estimate where things are going? Or do we develop them after a system’s already been built, so that we regulate and maybe pull back the reins a little on it? We brought this up in a previous podcast, I think the “Should We Be Scared of AI” podcast, about cell phones and laws around cell phones. Back in the 80s when people had cell phones in their cars, it was a handheld device plugged into the car; that’s how you talked. And then in the 90s people started to get cell phones that were not plugged into their car anymore, but there weren’t as many people using cell phones and there weren’t as many people driving with cell phones. Then as they started to become more mainstream and basically everybody had one, the regulators said, whoa, hold on a minute, we have a ton of distracted drivers, the instances of accidents are going up, we need to do something about this. So they looked at what the problems were and they put laws in place to try and prevent accidents from distracted driving. Now, really, the problem is distracted driving; it’s not necessarily the cell phone, because people have put Bluetooth devices in cars and they’re able to talk through their cars now, and I don’t think that has taken away the distracted driver aspect, but it’s technically now legal. So holding a cell phone up to my ear is illegal, but talking on a cell phone is not if it’s on speaker or it’s through Bluetooth in the car.

Ron Schmelzer: [00:11:24] I recall when Google Glass was a thing. Maybe it’s still a thing, I don’t know. Some driver was pulled over for distracted driving while using Google Glass, and it was very confusing: was it really distracted driving to be wearing Google Glass? I forget what happened in that situation; I don’t know if he got fined or not. So I think we’ve already talked about some of the actual real challenges here. As much as Elon Musk and Bill Gates say we’ve got to put regulations on this thing, man, we could think of like 20 to 30 problems immediately with making real, practical laws that don’t interfere with development or just cause all sorts of liability issues. So let’s get into some of the limitations of these rules. Even if we put some laws in place, we somehow get some agreement between the two parties in our government, we manage to make some regulation happen and it isn’t weakened by whatever lobby group, they’re still just laws in the United States, right? So what can we do about getting parties in Russia and China and Canada to approve them?

Kathleen Walch: [00:12:21] Right. So I think that we have a few things going on here. One, we have to build laws and regulations within the United States. That could be, you know, our traffic laws. That doesn’t affect people outside the country because they have their own traffic laws, so we can put our own traffic laws in place. But then there are other issues, weaponizing for example, and that’s not just a U.S. problem, that’s a world problem. So even if the United States comes to some sort of agreement on this, we still need to get the rest of the world on board. We would need some sort of world summit where we can come to some sort of resolution and agreement that all countries involved will follow these laws.

Ron Schmelzer: [00:12:59] The other challenge, even if we do manage to get some agreement on some equivalent to the space treaty where we agree not to build AI systems that have certain capabilities: we would put some limitations into the AI systems and make sure that all the big AI platform vendors agree to build that into their systems so that they don’t allow this capability. I guess that’s the way to do it. Facebook, Google, Microsoft, IBM, Amazon, all those guys can say, look, we promise to build in some sort of control.

Kathleen Walch: [00:13:24] What happens when they get hacked?

Ron Schmelzer: [00:13:25] Yeah, exactly. That still only applies, as people like to say, to the law-abiding. We won’t get into the whole gun control argument here, but that’s usually the argument. So what do you do with the bad actor scenario, where you’re like, OK, great, all those law-abiding people have agreed to limit their systems. But here I am back in my bat cave, actually the opposite of the bat cave, my little volcano island, and I’m building an AI system and intentionally disregarding the laws. So thank you guys for playing by the rules, but I’m not going to play by the rules. What do we do about that?

Kathleen Walch: [00:13:56] Right. So that can either be, you know, rogue agents, where it’s not necessarily a nation state doing it but an individual, or you have a nation state who says, I don’t want to follow this anymore, bye, I’m leaving. Kind of like the Paris agreement. I mean, what do you do? They just pull out, and maybe the rest of the world isn’t happy about it, but too bad, they’re not the ones in charge. So there are limitations with laws. And then even another limitation is that we can come to some sort of agreement and then there’s a breakthrough that we were not expecting, or didn’t think would happen as quickly as it did, and basically the laws either become obsolete or need to be changed right away to incorporate whatever it is that we just built that we did not foresee happening within the next three or five years.

Ron Schmelzer: [00:14:43] That is a good segue, because that seems to be the one thing that Elon Musk and others are really concerned about. They’re not really thinking about the bad actors, or companies like Tesla making bad AI systems that cause liability. What they’re afraid of is a superintelligent system; it’s not a bad actor or a country or even a person, it’s just that the system itself becomes so strong that we can no longer control it, we lose control, and all of a sudden it’s the end of humanity as we know it. So what they want to do is put in some laws and regulations to prevent that from happening. Now, we talked about this on the “Should We Be Scared of AI” podcast. And one question is: will the superintelligent system respect the rules that we may have built into the system to begin with? Because that is Elon Musk’s idea, and I’m not so sure. I don’t know.

Kathleen Walch: [00:15:31] I think that also goes back to the point I brought up earlier about how the system interprets the law. Is it a strict interpretation of the law? Is it a loose interpretation, where it may think that it is law-abiding and really we don’t think that it is, but it’s interpreting the law in such a way that it believes it is?

Ron Schmelzer: [00:15:49] Yeah. A good example of that is “thou shalt do no harm,” one of those three fundamental rules of robotics. So the computer is like, OK, I’m going to put all humans into a cryogenic sleep state, like The Matrix. We are technically alive, we have not been harmed, but we are also completely incapacitated, and none of the rules say anything about “thou shalt not incapacitate.” So that’s a good example of a computer saying, I’m following the rules, I’m superintelligent, and I’m smarter than you. So the question is, will these laws and regulations have any impact on a superintelligent system?

Kathleen Walch: [00:16:21] The answer really is we don’t know, because we’re not at a superintelligent system yet, and we don’t know what we don’t know. You know that saying, we don’t know what we don’t know, so we don’t know how the system is going to react and respond to things. And we’ve said this before: bad actors doing bad things. We hope that these systems are created by good, honest companies and people who have the greater good of humanity in mind when they’re building this. But what if one superintelligent system isn’t, and it starts creating other superintelligent systems that aren’t, and now there is a little army of bad superintelligent systems attacking the good ones?

Ron Schmelzer: [00:16:58] So the alternative to laws and regulations here would be to build countermeasures, right, just like we have cybersecurity and counter-cybersecurity. You can make all the laws and regulations you want about cybersecurity and you will still have an Equifax. So that’s the reality we’re in. Laws and regulations help, but they can only go so far. We have two of our own thought points on this. The first one, speaking at the meta level, is that we’re talking a lot about these concerns, and we did talk about this also on the “Should We Be Scared of AI” podcast and on our AI winters podcast, which you will hopefully hear as well: will all this abundance of caution cause researchers to pull back, governments to pull back funding, and venture capitalists to pull back funding? And it’s not because of anything that the AI system can do or not do, it’s just because people don’t want the liability. So we’re a little worried that an overabundance of caution will cause some issues here.

Kathleen Walch: [00:17:49] And overregulating before things actually come about could cause a decline. I’m not necessarily sure that it would cause an AI winter, but I think that it could definitely cause a decline.

Ron Schmelzer: [00:18:01] And that leads us to what we recommend for our enterprise customers and listeners and our vendor listeners. It’s just so hard to predict the future of AI technology and how it’s going to develop. It’s hard to predict the usage patterns. How is AI going to be used?

Kathleen Walch: [00:18:16] Or how and when it will develop, too? You know, is this five years or is this 50 years from now, such that we’re building laws 50 years in advance?

Ron Schmelzer: [00:18:23] It’s hard to predict what good people will do. It’s hard to predict what bad people will do. If we had any impact here, if we get called for expert witness testimony for example, we would say: take a minimalist approach. Address the problems that are clearly happening now, the ones you can clearly provide some guidance on, like maybe there should be regulations about how strong a bionic arm should be. OK, that’s a reasonable thing to do. But other regulations, about a superintelligent system taking control of the universe, that may be difficult to do.

Kathleen Walch: [00:18:55] Yeah, I think that we should hold off on building any sort of regulation on that right now. So that’s where we stand: we feel that taking a minimalist approach right now is the best approach. And not to be ignorant of the points that we brought up in the podcast, but to also know that we are not there yet. So let’s not sound the alarm bells on all the potential possibilities right now, and let’s focus on continuing to develop and build good intelligent systems. And let’s see where this goes.

Ron Schmelzer: [00:19:26] OK. Thank you very much for joining us on this podcast, we really enjoyed this conversation.

Kathleen Walch: [00:19:31] And listeners we’ll post any articles and concepts that we discussed in the show notes as well. So thanks for joining us and we’ll catch you at the next podcast.

 
