

Will There Be Another AI Winter?


Just like the change of the seasons, it's natural to think in terms of birth, growth, plateau, and then decline. Technology goes through a similar cycle: from initial invention, to wide-eyed hype and enthusiasm, to the realization of shortcomings, and then to inevitable downfall and replacement by something new. Yet not every technology follows the same path. Some technologies are adopted widely from the start and never exhibit an early decline or a misalignment of expectations with reality. The light bulb, motor car, phonograph, airplane, Internet, mobile computing, databases, and the cloud, for example, have all been enthusiastically embraced, adopted, and retained by billions of people around the planet.

On the other hand, there are waves of technologies that consistently struggle with adoption. Immersive 3D realities seem to come and go without sticking. Likewise, flying cars, supersonic transport, and living in space all remain frustratingly beyond our grasp. We can see, technically, how to implement these things, but in practice the challenges keep getting in the way of making them a reality.

The relevant question here is which future Artificial Intelligence (AI) holds for us. Will it be one of those great ideas we can never seem to attain, or will it finally clear the technological, economic, and societal hurdles to achieve widespread adoption? In this research, we look at the previous cycles of AI adoption, their ebbs and flows, and analyze whether we are facing an incline, a plateau, or a decline in future adoption.

The First Wave of AI Adoption, and the First AI Winter

The first major wave of AI interest and investment occurred from the early-to-mid 1950s through the early 1970s. Much of the early AI research and development stemmed from the burgeoning field of computer science, which was going through its rapid growth from vacuum tubes and core memory to integrated circuits and the first microprocessors. In fact, the twenty or so years of computing from 1955 to 1975 were the heyday of computer development, producing many of the innovations we still use today. AI research built upon these exponential improvements in computing technology and, combined with funding from government, academic, and military sources, produced some of the earliest and most impressive advancements in AI.

Yet while computing technology continued to mature, with increasing levels of adoption and funding, the AI innovations developed during those heady early decades of computing ground to a near halt in the mid-1970s. This period of decline in interest, funding, and research is known in the industry as the AI Winter: a period when it became dramatically harder to get the funding, support, and assistance necessary to keep AI progressing.

Winter Reason #1: Overpromising, Underdelivering

The early days of AI seemed to promise everything: computers that could play chess, navigate their surroundings, hold conversations with humans, and practically think and behave as people do. It's no wonder that HAL in 2001: A Space Odyssey didn't seem so far-fetched to audiences in 1968. Yet as it turned out, those over-promises eventually collided with the misaligned expectations of the backers footing the bill.

Winter Reason #2: Lack of Diversity in Funding

Government institutions in the US, UK, and elsewhere provided millions of dollars of funding with very little oversight or restriction on how those funds were used, an outgrowth of Manhattan Project and space program style funding. This was especially the case with DARPA, which had seen gains from space projects and nuclear research spill over into all areas of technology. However, it did not see the same sort of general, or even specific, returns from its AI investments. Indeed, it was a practical death-knell for the UK AI research establishment when Sir James Lighthill delivered his 1973 report, commissioned by the British Science Research Council, deriding AI's failure to achieve its "grandiose objectives." His conclusion was that work in AI either ran into "combinatorial explosion" and "intractability" (for Artificial General Intelligence in particular) or was too trivial when confined to specific (narrow) applications.

Furthermore, AI funding in general was too dependent on government and other non-commercial sources. When governments worldwide pulled back on academic research in the mid-1970s, driven by budgetary cutbacks and changes in strategic focus, AI suffered the most. In research settings this was made worse by the fact that AI tends to be highly interdisciplinary, spanning departments in computing, philosophy and logic, mathematics, brain and cognitive sciences, and others: when funding dropped in one department, it affected AI research as a whole. This is perhaps the most important lesson learned from that era: find more consistent and reliable sources of funding so that research doesn't come to an end.

The Second Wave of AI Adoption, and the Second AI Winter

Interest in AI research was rekindled in the mid-1980s with the development of Expert Systems. Adopted by corporations, expert systems leveraged the emerging power of desktop computers and cheap servers to do work that had previously been assigned to expensive mainframes. Expert systems helped industries across the board automate and simplify decision-making on Main Street and juice up the electronic trading systems on Wall Street. Soon, people saw the intelligent computer on the rise again. If it could be a trusted decision-maker in the corporation, surely we could have the smart computer in our lives again.

All of a sudden, it no longer seemed a dumb idea to assume the computer could be intelligent. Over a billion dollars was pumped back into AI research and development, and universities around the world with AI departments cheered. Companies developed new software (and hardware) to meet the needs of new AI applications. Venture capital firms, which had played little role in the previous cycle, emerged to fund new startups and tech companies with visions of billion-dollar exits. Yet just as in the first cycle, AI adoption and funding ground to a near halt.

Winter Reason #3: Technological Hurdles

Expert systems are heavily dependent on data. In order to create logical decision paths, you need data as inputs for those paths as well as data to define and control the paths themselves. In the 1980s, storage was still expensive, often sold in megabyte increments. This was compounded by the fact that each corporation and application needed to develop its own data and decision flows, unable to leverage the data and learnings of other corporations and research institutions. Without a global, connected, almost infinite store of data and of knowledge gleaned from that data, corporations were hamstrung by technology limitations.
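To make the data dependence concrete, here is a minimal, hypothetical sketch of the kind of hand-encoded decision path an expert system relied on; the fields, thresholds, and rules are invented for illustration. Every fact and rule had to be supplied and maintained by the organization itself, with nothing shared or learned from outside data.

```python
# Hypothetical sketch of an expert-system-style rule base.
# Every field, threshold, and decision path is hand-encoded and must be
# maintained by the organization that owns it; nothing is learned from data.

RULES = [
    # (condition over the known facts, conclusion)
    (lambda f: f["credit_score"] >= 700 and f["debt_ratio"] < 0.35, "approve"),
    (lambda f: f["credit_score"] < 600, "deny"),
]

def decide(facts):
    """Walk the decision paths in order; fall through to manual review."""
    for condition, conclusion in RULES:
        if condition(facts):
            return conclusion
    return "refer to a human analyst"

print(decide({"credit_score": 720, "debt_ratio": 0.30}))  # -> approve
```

Each new product line, regulation, or edge case means another hand-written rule, which is exactly the maintenance burden described under Winter Reason #4 below.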

Compounding these data issues was the still limited computing power available. While new startups emerged with AI-specialized hardware (Lisp machines, from companies such as Symbolics) that could run AI-specialized languages (Lisp, again), the cost of that hardware outweighed the promised business returns. Indeed, companies realized they could get away with less intelligent systems for far less money, with business outcomes that weren't much worse. If only there were a way to get access to almost infinite data at much lower cost, and computing power that could be purchased on an as-needed basis without having to procure your own data centers…

Winter Reason #4: Complexity and Brittleness

As expert systems became more complex, handling ever greater amounts of data and logic flows, maintaining those data and flows became increasingly difficult. Expert systems developed a reputation for being too brittle, depending on very specific inputs to produce the desired outputs, and ill-suited to more complex problem-solving requirements. The combination of the labor required for updates and the increasingly demanding applications led businesses to re-evaluate their need for expert systems. Bit by bit, non-intelligent software such as the emerging Enterprise Resource Planning (ERP) systems and various process and rules-based applications started eating away at the edges of what could previously only be done with expert systems. Combined with the cost and complexity of Lisp machines and software, the value proposition for continuing down the expert system path grew harder to justify. Simply put, expensive, complex systems were replaced by cheaper, simpler ones, even though the latter could not meet overall AI goals.

One possible warning sign for the new wave of interest in AI is that expert systems were unable to solve certain computationally "hard" logic problems. These sorts of problems, such as predicting customer demand or determining the impact on resources of multiple, highly variable inputs, require vast amounts of computing power that simply weren't available in the past. Will new systems face similar computationally "hard" limits, or is the fact that AlphaGo recently surmounted the computationally hard game of Go a sign that we've figured out how to handle computational "hardness"?

The Third Wave of AI Adoption… Where We Stand Now

Given these two past waves of AI overpromising and underdelivering, with their rising and then collapsing levels of interest and funding, why are we here now with resurging interest in AI? In our "Why Does AI Matter?" podcast and follow-on research, we conclude that the resurgence of interest in AI revolves around three key factors: advancement in technology (big data and GPUs in particular), acceptance of human-machine interaction in our daily lives, and integration of intelligence in everyday devices from cars to phones.

Thawing Reason #1: Advancement in Technology

Serving as a direct answer to Winter Reason #3, the dramatic growth of Big Data and our ability to handle almost infinite amounts of data in the cloud, combined with the specialized computing power of Graphics Processing Units (GPUs), is producing a renaissance in our ability to deal with previously intractable computing problems. Not only does the average technology consumer now have access to almost limitless computing power and data storage at ridiculously cheap rates, but organizations also have increasing access to large, shared pools of data that let them build upon each other's learnings at exceptionally fast rates.

With just a few lines of code, companies have access to enormous data sets and training data, to technologies such as TensorFlow and Keras, to cloud-based machine learning toolsets from Amazon, Google, IBM, Microsoft, and Facebook, and to all sorts of capabilities that would previously have been ridiculously difficult or expensive to attain. It would seem that there are no longer long-term technical hurdles for AI. This reduction in the cost of access to technical capabilities gives investors, companies, and governments an increasing appetite for AI investment.
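As an illustration of the "few lines of code" point, a sketch like the following (assuming TensorFlow and its bundled Keras API are installed) downloads a public training dataset and trains a small neural network on commodity hardware, a task that would have required specialized machines and hand-built data pipelines in earlier eras.

```python
# Illustrative sketch: a few lines of Keras that fetch a public dataset
# and train a small neural network on an ordinary laptop or cloud instance.
import tensorflow as tf

# Publicly hosted training data, fetched with one call.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=1)
model.evaluate(x_test, y_test)
```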

Furthermore, the emergence of Deep Learning and other new AI approaches is producing a revolution in AI capabilities. Problems that previously seemed intractable for AI are now much more accessible. Indeed, computing and data capabilities alone can't explain the rapid emergence of these capabilities; rather, Deep Learning and related research developments have enabled organizations to harness the almost limitless supply of compute and data to solve problems that had previously been too difficult (Winter Reason #4).

Thawing Reason #2: Acceptance of Human-Machine interaction

In addition, ordinary non-technical people are getting accustomed to talking to and interacting with computer interfaces. The growth of Siri, Amazon Alexa, Google Assistant, chatbots, and other technologies has shown that people accept human-like intelligence and interaction in their daily experiences. This acceptance gives investors, companies, and governments confidence in pursuing AI-related technologies. If it's been proven that the average Joe or Jane will gladly talk to a computer and interact with a bot, then further development down that path makes sense.

Thawing Reason #3: Integration of Intelligence in Everyday Technology

Continuing on that theme, we're now starting to see evidence of more intelligent, AI-enabled systems everywhere. Cars are starting to drive and park themselves. Customer support is becoming bot-centric. Problems of all shapes and sizes are being given AI capabilities, even where they aren't necessarily warranted. Just as in the early days of the Web and mobile computing, we're starting to see AI everywhere. Is this evidence that we're in another hype wave, or is this different? Perhaps we've crossed some threshold of acceptance.

The Cognilytica Take: Will There Be Another AI Winter?

It is important to note that the phenomenon of the AI Winter is primarily a psychological one. While technology hurdles certainly limit AI advancement, they don't explain the rapid uptick and then decline in funding in the first two cycles. As Marvin Minsky and Roger Schank warned in the 1980s, an AI winter is caused by a chain of pessimism that infects the otherwise rosy outlook on AI: it starts within the AI research community, percolates out to the press and general media, then reaches investors, who cut back, and eventually feeds back to the beginning of the cycle, depressing interest in research and development.

While we may have successfully addressed Winter Reasons #3 and #4, we still have Winter Reasons #1 and #2 to grapple with. Are we still overpromising what AI can deliver? Are we still too dependent on single sources of funding that can dry up tomorrow when people realize AI's limitations? Or are we appropriately managing expectations this time around, and are companies deep-pocketed enough to weather another wane in AI interest?

James Hendler famously observed in 2008 that the cause of the AI winters was not just technological or conceptual challenges, but the lack of basic funding for AI research that would allow those challenges to be surmounted. He rings the warning bell that we are now diverting much of our resources and attention away from AI research toward AI applications, and that we will once again hit a natural limit on what we can do with AI. This is the so-called AI research pipeline, which Hendler warns is already starting to run dry, and which will then inevitably cause the next AI winter. We worry, similarly, that an over-focus on doomsday scenarios for AI could bring on an AI Winter without our even reaching those limitations.

Indeed, a recent article described how much of our current AI advancement is based on decades-old research that, while bearing much fruit now, will soon begin to show diminishing returns. The challenge is to invest again in basic AI research to discover new methods and approaches, so that we can continue the current thaw and reach an eternal summer of AI rather than fall into the coldness of another AI winter.

Our expectation and hope is that the next AI winter will never come. Many companies are now taking an AI-first approach, which we hope will continue to drive advancements in AI research as well as push practical AI solutions forward. And with AI becoming integrated into everyday use cases, which was not true in the past, it will be far harder to simply pull the plug on AI as happened in AI Winters past. For these reasons, we expect, and hope, that another AI winter will not come.
