
AI Today Podcast #005: The AI Winters



Show Notes:

Like all technologies, Artificial Intelligence (AI) is not immune to the waves of obscurity, hyped promotion, plateauing of interest, and decline. In fact, the AI industry has been through two such major waves of interest, hype, plateau, and decline, commonly referred to as the “AI Winters”. Now that we’re on our third major wave of AI interest and hype, will we enter a new renaissance where AI becomes an entrenched reality, or will we face another AI Winter? Are we still overpromising what AI can deliver? Are we still too dependent on single sources of funding that can dry up tomorrow when people realize AI’s limitations? Or are we appropriately managing expectations this time around, and are companies deep-pocketed enough to weather another wane in AI interest?

This podcast goes over some of the history of how AI has gotten to where it is today and outlines the various causes of AI Winters, and some of the reasons why we may be thawing out now.

Supporting Links and Research:

___________________________________________

A transcript of the podcast is available below:

Kathleen Walch: [00:00:22] Hello and welcome to the AI today podcast. I’m your host Kathleen Walch.

Ron Schmelzer: [00:00:27] And I’m your host Ronald Schmelzer. Our topic today is the AI winters. What are they? And is another one coming?

Kathleen Walch: [00:00:34] All right. So let’s get started. With the change of seasons upon us, since it’s just recently become fall, it got us thinking about seasons and life cycles of technology as well. Technology goes through some sort of life cycle, but not every technology goes through the same life cycle. If you take the lightbulb or the car, for example, these were adopted and used right from the beginning and they’ve continued to be used since their adoption. But on the other hand, there’s technology that consistently has trouble with adoption, like supersonic transport or immersive 3-D. So this brings us to AI and where we are with AI. Is it going to fall into the lightbulb category or the supersonic jet category? In order to understand where artificial intelligence stands right now, we need to understand how we got here and where we currently are.

Ron Schmelzer: [00:01:27] So let’s talk about our history and how we got here. The first major wave of AI interest and investment occurred in the early to mid 1950s through the early 1970s, and much of this AI research and development stemmed from the very beginnings of computer science. We were developing computers at the very beginning with relays and vacuum tubes and core memory, even liquid memory, and by the end of the 1970s we had integrated circuits and microprocessors. We had fast systems, and over that 20 year period of time, those 20 years of computing were the heyday of computer development. It was also the heyday of the first wave of artificial intelligence. AI research basically built upon these exponential improvements in computing technology and combined that with funding from government, academic, and military sources to produce some of the earliest and most impressive advancements in AI. But as we all know, that wave of AI innovation came to a halt, or at least a dramatic slowdown, by the mid-1970s, and that caused a lot of issues. So when people use the term AI winter, Kathleen, what are we usually referring to when we say AI winter?

Kathleen Walch: [00:02:32] The AI winter is a period where it’s dramatically harder to get funding and support and assistance. Ron and I at Cognilytica like to refer to it as more of an AI hibernation than a winter, but in general it’s called an AI winter. So what are the reasons that AI winters happen? One main reason is that people over promise and under deliver on the technology. They over promise and say that we will have human-to-machine conversations, we will have accurate translations, we will have all this stuff, and they get the general public very excited. They get companies very excited, and then they end up under delivering. And so people end up cutting funding and losing their enthusiasm for AI research.

Ron Schmelzer: [00:03:26] In particular, the early days of AI were really funded a lot by the government. It emerged out of the Manhattan Project and the space program, in particular Project Apollo and getting the man on the moon, and all that good stuff really generated a lot of technology, and the government was very involved in technology funding. They spent billions of dollars, DARPA in particular spent billions of dollars, and had a very loose rein on the technology. As they like to say, they invested in people, not ideas, so they would give people money and just let them roll with it because they had good results. But in artificial intelligence they found they were not getting the results that they were expecting. There are a couple of really interesting and sometimes comedic examples of this. The Defense Department funded a lot of work happening around machine translation. By that we mean the defense analysts wanted to be able to read Russian news articles or television or radio, and they wanted, of course, the non-Russian speakers to be able to process all that intelligence very quickly, and they were really hoping that these artificial intelligence systems would be able to listen to these things and translate. But that clearly didn’t work. Kathleen, what’s the funny example of that?

Kathleen Walch: [00:04:30] Yeah. So one sentence that they were trying to translate was “the spirit is willing but the flesh is weak” and it translated into Russian as “the vodka is good but the meat is rotten”. So taken word for word I can see where they got that translation, but that does not convey the meaning of the original sentence. And so that was where it over promised and then ended up under delivering.

Ron Schmelzer: [00:04:52] Another area that was developed in the 50s through 70s was this idea of connected independent systems, the idea that became perceptrons, where you have small systems that individually could do some task; they were smart at their task. But the idea is that if you combine them all together you get something more. Think of how the senses work: you have your eyes, your ears, your tongue, your various sensory organs, and if you combine them all together your brain can perceive the world that it’s living in. The idea of the connected independent system is the same: if you take all these little inputs and you combine them together, you get a really smart output. In reality that didn’t really work. I think they found that, despite all the research going into connected systems, the way they were building them the output just did not exhibit the sort of overall intelligence that they were expecting, and that caused some real problems, and of course those initiatives ended up not getting continued funding. Another idea was the whole idea of human conversation and human reasoning, that systems could do this, even in the 1960s. In the 60s aircraft cockpits were getting very complicated. We had introduced radar, we now had missiles on the planes, we had all this communication equipment that we didn’t have during World War II. So if you think about what the cockpit looked like in the 1950s and 60s, there were a lot of buttons, a lot of knobs, a lot of dials, a lot of display systems. And the Defense Department was getting worried that pilots would be so distracted by just getting all those things to work that they wouldn’t be able to fly, navigate, and counter enemy pilots. So one of the things they were hoping AI systems would do is create a system that you could talk to, sort of like a Siri for cockpits, where you could say “hey airplane, tell me if there’s another enemy approaching and launch a missile if it gets within two miles”. They were really thinking about doing this in the 1960s and 70s, and there was a big project out of Carnegie Mellon, the speech understanding project, and it was a big failure; it just could not achieve that goal.
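To make the perceptron idea a little more concrete, here is a minimal, purely illustrative sketch in Python: a handful of weighted inputs get combined into a single yes-or-no output. The inputs, weights, and threshold are made up for this example; it captures the gist of the idea, not the historical systems Ron is describing.

```python
# A minimal sketch of the perceptron idea: several weighted inputs are
# combined into one yes/no output. Values below are hypothetical.

def perceptron(inputs, weights, bias):
    """Return 1 if the weighted sum of inputs crosses the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Example: three made-up "sensor" readings combined into one decision.
sensors = [1.0, 0.0, 1.0]   # hypothetical sensor inputs
weights = [0.6, 0.4, 0.5]   # how much each input matters
print(perceptron(sensors, weights, bias=-0.8))  # prints 1
```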

Kathleen Walch: [00:06:40] So another reason for the AI winters is that there’s a lack of diversity in funding. As we mentioned, DARPA had a huge fund for AI, and governments in particular had funds, but other institutions did not. So when a government or a university is funding research and initiatives, if interest in the technology they’re funding wanes, usually the funding goes with it, because they only have a finite amount of money and they need to spread it out.

Ron Schmelzer: [00:07:11] And this was particularly the case in the US but also the United Kingdom, and those who have done research on the AI winters will see the example of Sir James Lighthill, who produced a report that went back to the UK Parliament to talk about their investment in AI. His conclusion was that AI just did not achieve its grandiose objectives, and that the problems AI was solving were either too trivial, narrow examples that he considered toy cases and therefore not useful, or so complicated that they resulted in a combinatorial explosion, an intractability, that made them unreasonable to solve. So he basically caused the end of investment by the UK government in AI. And the combination of the decrease in funding, especially after the Vietnam War was over in the US, and the end of the space program in the US, all kind of ended in that 70s period. All of that just dropped funding considerably for AI. And that pretty much was the first AI winter.

Kathleen Walch: [00:08:07] So since then, you ask, OK, but where are we now and what happened between then and now? After the 70s we came back into a period you could call the spring, and it was the second wave of AI adoption. AI interest rekindled in the mid-1980s with the development of the expert system, and it was adopted by corporations. Now, with corporations funding it instead of just governments, we didn’t have the funding issue to worry about. And it was a system that leveraged the emerging power of desktop computers and cheap servers to do the work that had previously been assigned to expensive mainframes. So we had technology coming that was able to help. And we were hoping that we were not overhyping things again. Now companies were embracing this, which was great, so we had diversity in funding again.

Ron Schmelzer: [00:08:55] The idea of expert systems basically coincided with the emergence of the desktop computer in the corporate environment. So if we want to think again, put your time clock back a little bit to the 50s and 60s, when we had the first wave of AI, there really weren’t that many computers in businesses. If a business did have a computer, it was maybe one big computer and it did something like accounting or some other specialized task. But by the 1980s, especially with the emergence of personal computers and business computers and the IBM PC, there was pretty much a computer on every desk. That’s when companies were like, oh wait a second, we have all this computing power just sitting on our desks, what can we do with it? And that’s when someone said, aha, let’s create these expert systems and distribute autonomous decision-making logic and process flow out to the ends, so that instead of relying on some management person for all of the decision making, maybe we can have these computers, which you don’t turn off at night, just leave them on at night and let them chew on these problems. That was really the birth of the expert system, and people thought you could do all these cool things. What happened was the whole expert system idea became really strong and really powerful. But we ran into some hurdles, and that started to cause the second AI winter, so let’s get into some of that.

Kathleen Walch: [00:10:03] So one hurdle that we ran into was a technological hurdle. Expert systems are very dependent on data. In order to create a logical decision path, you need data as inputs for those paths as well as data to define and control those paths. But in the 1980s storage was still very expensive. We didn’t have cloud storage, and people still had to have servers and racks in their office. This was compounded by the fact that each corporation and application needed to develop its own data and decision flows and was unable to leverage the data and learnings of other corporations and research institutions. So in addition to having expensive storage, we also weren’t able to easily share and connect the information with others.

Ron Schmelzer: [00:10:50] And so we were sort of pre-cloud and pre-Internet; storage was in the megabytes, not the gigabytes, let alone terabytes of information. This clearly was a problem. What emerged to solve that problem were some new specialized AI-focused companies. We had some LISP-based software and LISP hardware, LISP and symbolic machines, and they came up just to solve this problem on the expert system side. They got money back in the early days of the expert systems when the promise was there of the really intelligent enterprise making solid decisions facilitated by this artificial intelligence, although they may not have called it artificial intelligence; there was still a bit of a stigma from the first AI winter. Companies were raising millions of dollars, billions of dollars; people were spending billions of dollars. But then we ran into these technological hurdles of limited storage capacity and lack of interconnectedness of machines, and people started to realize, if I’m spending all this money, what’s the return I’m getting? And what ended up eroding that was that software that was not intelligent started incorporating little bits and pieces of the AI stuff. Hence the emergence of something called enterprise resource planning, which of course became SAP and Oracle. They’re like, well, we can do a little bit of rules-based here and there, we can do a little bit of automation, maybe it’s not artificial intelligence, but it’s enough. And so companies did the evaluation: should I spend hundreds of millions of dollars on this specialized AI system, or should I just use this generic piece of software that may not be AI but is sort of good enough? That’s what happened, so one of the lessons to be learned is that sometimes the good-enough systems can erode the value proposition of AI. Do you need a chatbot, for example, if you could solve it with an FAQ? That’s a really interesting question that’s still very valid. But that wasn’t the only reason; we had some other issues with expert systems.

Kathleen Walch: [00:12:29] So another reason why we went into the AI winter was complexity and brittleness. The expert systems developed a reputation of being too brittle; depending on the specific inputs and desired outputs people wanted, the systems just were not able to handle it.

Ron Schmelzer: [00:12:47] What was happening was that as the systems became more and more complicated and had to deal with more and more different kinds of inputs, the results were not consistent, and that was the biggest issue with these expert systems. An expert system is basically a big rules-based system. For those who aren’t familiar with expert systems, the idea is to model what an expert would do. Let’s say you’re trying to build the perfect aircraft. You go to the expert designer and say, OK, what would you do to build it, what are all the things you would think about, what are all the considerations you have? How much does it cost, how do you manufacture it, where do you manufacture it? You model all that in a system and say, OK, great, now I don’t need the expert anymore, I’ve got it all in software. So the next time I build an aircraft I’m going to put it into the system and it’ll ask me all the right questions and hopefully give me the right answer based on the expert system. I think what people realized is that when you’re trying to model more complicated things, especially things that depend on variables like what’s the weather like, what is the currency going to be like, what’s the political situation in China, these became very, very difficult problems to model. You needed so much data and it became so complicated that, first of all, the rules became really complicated, which means that trying to modify the rules became very complicated, and just managing all that data you needed became very complicated. As a matter of fact, people have said that some of these problems are just computationally very hard. How do you predict what customer demand is going to be? How can you predict what the impact of some resource is going to be? If you have a billing system that depends on the price of petroleum, what’s the price of petroleum in three years? How do you compute your way around that one? It’s a very difficult problem. And so the combination of things getting too complicated, things getting very brittle, and things getting very expensive really put a damper on continued innovation around that.
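As a rough illustration of the rules-based approach Ron describes, here is a toy sketch in Python. The facts and rules are hypothetical and invented for this example; real expert systems of that era encoded thousands of hand-written rules, which is part of what made them so complicated and brittle.

```python
# A toy illustration of the rules-based expert system idea: encode an
# expert's considerations as condition/advice rules, then fire every rule
# whose condition matches the current facts. All values are hypothetical.

facts = {"passenger_count": 180, "range_km": 6000, "budget_millions": 90}

rules = [
    (lambda f: f["passenger_count"] > 150, "use a wide fuselage"),
    (lambda f: f["range_km"] > 5000, "add long-range fuel tanks"),
    (lambda f: f["budget_millions"] < 100, "prefer off-the-shelf engines"),
]

recommendations = [advice for condition, advice in rules if condition(facts)]
print(recommendations)
```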

Kathleen Walch: [00:14:24] So now we’ve come out of the second AI winter. So the question is how did we do that, where are we now, and do we think that a third winter is coming? One way that we got out of the past AI winter was that we now have more computing power. We now have the cloud, we now have more storage, and we have something called big data which has really helped and we’re now able to technologically get over some of the hurdles that we just couldn’t get past in the 80s.

Ron Schmelzer: [00:14:53] Yeah, I think what we realized is there have been a couple of changes. One of the reasons why we say we’re thawing our way out of the AI winter, or have thawed our way out of it, as Kathleen just mentioned, is this technology, specifically big data, really an almost infinite amount of data. Not only do we have enough storage capacity, but it’s really cheap. If we have a desire to store petabytes of data, we can do so, as long as we can afford that AWS bill or whatever. But it’s not just the access to the data, it’s the ability to combine that data with other people’s data sources. As a matter of fact, it’s also that a lot of that work may have been chewed on before; for example, we have TensorFlow and Keras. We also have the stuff in the cloud from Amazon and Google and IBM and Microsoft and Facebook. If you wanted to start an AI project tomorrow, you could just open up an account with any of those providers, put in a credit card, and start getting access to more data than you could ever even hope to access in the 60s, 70s, or 80s. And of course we have GPUs, which are able to chew on that data with a lot more efficiency than we could with our little desktop servers or even the mainframes we had back in the day. So we could make an argument that we don’t really have technological hurdles for AI right now. If you have a hurdle that depends on just needing more computing power or more storage capacity, that argument isn’t there anymore. So if one of the reasons for a winter before, reason number three, was technological hurdles, we can say okay, hurdle hurdled. But we also have the emergence of deep learning, and deep learning has also changed things by itself.
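As an illustration of how low the barrier has become, here is a minimal sketch using TensorFlow’s Keras API, one of the frameworks mentioned above. It assumes TensorFlow is installed, and the data is random and purely illustrative; the point is only that defining and training a small neural network now takes a few lines of code rather than specialized hardware.

```python
# Minimal, illustrative Keras sketch. Requires: pip install tensorflow numpy
import numpy as np
import tensorflow as tf

# Made-up data: 1,000 examples with 20 features and a toy binary label.
x = np.random.rand(1000, 20)
y = (x.sum(axis=1) > 10).astype("int")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=3, verbose=0)  # trains in seconds on a laptop
```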

Kathleen Walch: [00:16:23] So another reason that we are coming out of the winter, the thaw as we’re calling it, is that there’s been an acceptance of human and machine interaction. It’s not uncommon now for people to have a smart assistant in their home and to interact with their phone, whether that’s Siri or a Google phone. So it’s become part of our everyday life, and it’s less intrusive, I think, and less of a novelty. It’s now almost expected. So we have an acceptance of that, which is leading people to now do a lot more research and at least build skills and technologies around this. And as people use these systems, I think they get better.

Ron Schmelzer: [00:17:06] Basically the idea is that people just accept talking to machines. So if people accept talking to machines, then asking an investor for a lot of money to talk to a machine doesn’t seem as silly as it might have seemed in the 80s, the 70s, the 60s, maybe even the 90s. That’s really an important reason, because if that was holding people back from investing, that reason’s gone. The third thing, building off of that, is that we’re now starting to see AI in everyday technology. We’re starting to see it in our cars. We’re certainly seeing it in our phones. We’re calling up call centers and talking to bots. What makes this different than in the past is that in the first wave of AI, in the 50s and 60s and 70s, it was very specific use cases that the average person would probably not see, because they’re not in the cockpit, they’re not intelligence analysts in the Defense Department, they’re not in a research setting. So they’re not going to see that AI; most people would see it in a movie, and that’s about it. During the second wave you would only really see that AI, an expert system, if you were working in a company; if you were a business person, probably in management, you’d see the AI, and most times maybe you wouldn’t even see that much there, because it’s not like it was adopted by every company. But this time it’s different, because literally the average Joe with a smartphone, or calling a call center, or driving one of the latest cars, a Tesla or a car that self-parks, is interacting with AI. People are interacting with AI every day. So the question about the AI winter is: have we gone past the point where people doubt the future of AI, so that investors and companies and governments and all these people can feel more comfortable about AI and we won’t have some of these problems?

Kathleen Walch: [00:18:42] And what I think is different about this cycle as well is that AI is intertwined where you can’t see it as easily. With a chatbot, for example, you don’t know if it’s a human or a bot behind the scenes, and with self-parking cars it’s integrated into the system in a way that feels natural. It’s happening in a way that you don’t notice. That’s different than in the past; we were not able to do that. And I think that’s why this is different as well: it’s starting to really get intertwined into everyday life, where it’s a lot harder for it to suddenly just disappear. Someone related it to the Internet, Nick Bostrom did; he said you can’t just pull the plug on the Internet. If artificial intelligence becomes so intertwined in everything that we have, you can’t just pull the plug on it. So having another winter is going to be hard in that respect. We’ve never had a winter with the Internet. You don’t just pull the plug on it.

Ron Schmelzer: [00:19:35] Good point. And as a matter of fact, to circle back to one of the previous things we talked about, the emergence of deep learning, some of that kind of came out of the blue. It’s not just that we have the technology capability, all the storage capability, all the computing power; it’s the fact that we also came up with a different way of dealing with some of these hard problems. If you recall from the conversation about expert systems, we had all this computational hardness, where it’s like, how do I account for all these things? I just don’t know. And if you think about it, doing image recognition or some of these things is also very computationally hard. When I’m looking at a picture of Kathleen on the Internet, how do I know it’s Kathleen and not just some other picture? How does a computer recognize it? And, as I mentioned, out of the blue came this development of deep learning in the mid to late 80s, which was foundational research, and that research took a little time, and then people figured out how to apply it. Now we’re like, oh, I can apply deep learning to these petabytes of data that I have, so I can train it with an amazing amount of data, and I have all this computing power so I can do it quickly. That works. And so you can say that AI winter problem number four, the complexity and brittleness problem, kind of went away, because we’re not trying to program rules, we’re not dealing with brittle systems that can’t deal with complexity. So those problems went away. We’ve been really positive here in this last little bit. We say, okay, we’ve got the computing power, people are comfortable with AI, they’re talking to computers all the time, we’ve got deep learning, we have an infinite amount of storage and an infinite amount of computing. So how could we possibly have an AI winter? Well, we still have a couple of scenarios that might still happen, right?

Kathleen Walch: [00:21:07] Right. And one in particular is research, where there hasn’t been a lot of research on AI in the past 10 to 20 years and we’re using old technology to build these new products. If we don’t continue to have foundational research and funding for it, then we can run into a knowledge problem where we just don’t have the capability to move forward, and that will take us back to our overhyped and under-delivered problem.

Ron Schmelzer: [00:21:35] And building upon that, what you mentioned was AI winter problem number one, which is over promising and under delivering. But the second one, of course, is the lack of, or dependence on, such a small number of funding sources for AI. And just to clarify, it’s not that we mean that there hasn’t been any AI research over the past 20 years; there surely has been a lot, especially if you look at what’s happening at MIT and Stanford and Carnegie Mellon and a million different universities. But I think the point that was made, and one that was made by James Hendler in 2008, is that the resources being applied to AI are being taken away from general AI research and focused on application-specific stuff. Which is not a problem, obviously; the fact that it’s been applied is the reason why we can talk to our phones to begin with. But there’s a concern that that well is going to run dry, that the pipeline, as they say, will run dry. And if it runs dry and we’re not doing any new AI research, we’re not coming up with any new major innovations like the next big thing like deep learning, then we will run into some of the very same challenges that we ran into before; they’ll just be different versions of the same challenge. And so all this concern that we have about AI taking over the world may end up not happening because we’re just overestimating what’s happening. Let’s wrap up here a little bit. How do we think this is going to play out? Do we think there’s going to be another winter? Do we think that there won’t be an AI winter? Where do we land here?

Kathleen Walch: [00:22:50] You know, it’s always hard to predict the future and see where things stand. But based on looking at the past winters, what happened with them, and what came out of them, I think that because we are now so intertwined with AI and the mainstream has become more comfortable with interacting with machines, I don’t think that we’ll have another AI winter, at least not in the form that we’ve had in the past. We now have companies that are heavily investing in this technology, which goes to the funding problem: if we don’t have just governments and large research institutions funding this, and we have companies and their R&D departments researching this, then I think that we will not have another winter. I think it also depends on what other countries are doing, and if Russia and China and other countries are investing heavily in AI, I do not see how the United States would not invest. I just don’t see how we can say, no, we’re not investing in that, we’re not going to be a leader in this, and we’ll let other countries take over.

Ron Schmelzer: [00:23:52] And I think that’s one of the things that Elon Musk is scared about. He’s worried about this AI arms race, basically, and what the outcome will be. We spent a lot of time talking about this in one of our recent podcasts, on whether we should be scared of AI, so I would encourage you to go back and listen to that, where we talk about these ideas. But I think it’s kind of funny, because here we are talking about the AI winter, which is kind of the opposite of should we be scared of AI; it’s more like should we be worried that AI won’t happen. So on the one hand we’re really worried that AI is going to take over the universe, and on the other hand we’re worried that the funding is going to dry up and we’re going to stop doing development on it. So clearly there is some sort of balance between the two. I think the logical thing for companies to recognize, and we’re speaking to you, our enterprise listeners and vendor listeners, is that there is consistent and significant investment in AI happening now. People are definitely doing development now, and it seems that people think that AI is a competitive advantage, whether it’s a competitive advantage for your company or your product or your country. So in that universe, to scale back your own investment in AI based on a future fear that may or may not happen does not seem to us, at the moment, to make sense. We’re going to continue to keep track of what’s happening. We’re going to pay attention to what’s happening with the funding sources, so you should pay attention to us, because we’re paying attention to that. But right now it seems that we’re just going to continue cruising here without any bumps.

Kathleen Walch: [00:25:09] Yeah. And listeners, we’ll post the articles and concepts that we discussed in the show notes so that you can reference them there. Thank you for joining us, and we’ll catch you at the next podcast. If you’d like to be a guest on the AI Today show, or for more information on sponsorship, visit the Cognilytica website and click on the podcast link.

 
