One of the main reasons why AI Projects fail: Overpromising & Underdelivering
The world of AI is full of exciting possibilities, but some of the claims being made about AI veer into the realm of breathless overpromising about what AI can really do. Here are a few examples:
“AI systems are becoming sentient! Watch out for the Superintelligence”
Claims that AI is already, or will soon be, sentient or conscious are often overblown. While AI can mimic human-like behavior and responses, there’s no evidence of true consciousness or subjective experience existing in machines yet.
If your voice assistant still struggles with basic questions a 5-year-old can answer, it’s pretty clear we’re far from the Singularity.
“Generative AI can do anything”
So many people see AI as a general-purpose solution for everything, especially Generative AI, which is so easy for anyone to use. The idea that a single “general AI” will solve all our problems, from climate change to healthcare, is unrealistic. AI is a powerful tool, but it’s not a magic bullet.
“All our jobs will be replaced by robots!”
While automation certainly affects some jobs, the fear of mass unemployment caused by AI is often exaggerated. New jobs will be created in fields related to AI development, maintenance, and ethical oversight. Additionally, many existing jobs will likely be transformed rather than eliminated.
While these claims sit at the far ends of the spectrum, one of the most common reasons why 80% of all AI projects fail is that people continuously overpromise what AI is capable of doing, and then underdeliver on those promises in their AI solutions.
Why do we keep Overpromising and Underdelivering on AI Projects?
When people start their AI projects, which implement one or more of the Seven Patterns of AI, the initial goal is to solve some direct business problem. Or at least it’s supposed to be…
One of the other main reasons why AI projects fail is that they don’t set a realistic, practical, short-term goal to solve an immediate business problem for which AI is even the right solution. The projects that have a chance of succeeding set realistic goals.
But even with a well-defined business problem and an AI solution identified that can solve the problem, there’s something about AI that gets people to think in grandiose ways. Simple projects quickly become complicated projects with difficult goals. Projects that can have narrow project scopes rapidly expand scope to tackle projects that even advanced researchers would have difficulty achieving.
The number one reason we see projects fail is overpromising and underdelivering on what that AI project can do.
If you’re not familiar with the history of AI: AI is actually a fairly old concept. AI is the oldest new technology, as they say.
This reason for AI failure goes back to the very beginnings of AI in the 1950s when the media and researchers breathlessly claimed that we were just months away from sentient machines. Governments invested in AI projects with the promise that they could automatically translate and understand foreign texts, autonomously fly airplanes, and do so many other things. While decades later some of those capabilities are here today, many of those capabilities are still far away from acceptable levels of performance.
Regardless, those projects were canceled back in the 1960s because we simply didn’t have decades to invest in what might not have been possible. So, they failed.
AI feels futuristic because it gets people thinking about the ways machines can think and act without being told what to do, process the real world around them just as we can, deal with ambiguity, and be agile, flexible, and adaptive.
So when people see glimpses of these capabilities, our imaginations run wild. How can we push AI even further? Make it do even more amazing things? If it can do some of these simple things now, maybe it can do even more powerful and complicated things?
Even Alan Turing, the originator of the concept of the programmable computer, believed that sentient and capable AI-powered machines were just around the corner.
Yes, for sure AI seems powerful and capable, so let’s expand our projects and increase the scope! What could go wrong?
Keeping AI Enthusiasm in Check
The sad reality is that when AI projects fail to deliver, enthusiasm and interest in AI also decline. We had two prior waves of interest in AI: from the 1950s to the early 1970s, and from the middle to late 1990s. And yes, despite all the money and interest that went into AI with all that promise, we had two subsequent AI Winters in which interest and funding dried up.
With enough AI project failures, not only do individuals and organizations start to withdraw interest in AI, but perhaps the whole industry pulls back. It’s happened twice before, so it’s possible it could happen again.
Where we are right now is that AI is still in its honeymoon phase. Generative AI has re-invigorated the market with possibilities for applying large language models and increasingly powerful foundation models to a wide range of problems. Not only is the technology available today, but it’s accessible to almost anyone who wants to access it. You don’t need to be a data scientist or ML engineer to get immediate and positive value from ChatGPT, Midjourney, Bard, or any number of Gen AI-powered solutions.
As a result, people’s eyes are widening with all the possibilities of AI. Once again, we’re in the breathless phase of AI that can do anything and promises to be just at the cusp of superintelligent sentience. Or at the very least will make our own applications enormously powerful and valuable. For sure, anything is possible now.
But beware.
This over-enthusiasm easily leads to overpromising what your AI solution can do, which will lead to underdelivering on what it actually does. And when that happens, your AI project will fail.
There’s something about AI that always gets us here. And this is one of those traps that we fall into so many times with our AI projects.
Getting out of the Overpromising and Underdelivering Trap
You might be thinking that, well, I’m not promising the world, so how can I fall into this trap?
So many large companies with well-funded and highly skilled teams have fallen exactly into this trap.
Walmart set out to create a robot to handle inventory and a range of other tasks, and had to pull back when it overpromised what the robot could do and underdelivered on its actual capabilities.
Olive shut down because it couldn’t deliver on its grand AI healthcare promises.
Tesla and other vendors keep promising that we’re just inches away from fully autonomous vehicles, and yet here we are, many years past the promised 2020 deadline for a million autonomous taxis on the road.
Overpromise. Underdeliver. Every time.
The key to getting out of this trap is to focus on the most immediate, short-term need of the organization that can be fulfilled with the smallest-scope AI project possible and the quickest iterative delivery, while making sure that the ROI is well defined at the beginning of each iteration and achieved at the end.
Whew, I just said a lot there. But there’s power in each one of those words.
Think Big. Start Small. Iterate (and Succeed) Often.
The mantra you can keep in mind when aiming to be successful with AI projects and avoiding the overpromising and underdelivering trap is to Think Big, but Start Small, and Iterate Often.
It’s ok to have a big plan and project in mind. In fact, that’s a good thing – have a big plan to solve a big problem.
However, it is not a good thing to throw all the resources you have at that big problem. The odds of underdelivering are huge, since that Think Big plan might actually be an overpromise.
So, start with the very smallest iteration on that big idea that you can achieve in a very quick iteration.
And while your Think Big plan might have an overall ROI, each of your micro-iterations should have its own well-defined ROI that can be achieved within that small iteration, with each small ROI contributing to the big one.
This is how you succeed. One step at a time. Another way to put it: the best way to succeed is to not fail.
Moving Forward with AI Best Practices: CPMAI
While overpromising and underdelivering is a major cause of AI project failure, there are still many ways to fall into the traps of AI project failure. Sometimes it’s the lack of data, or poor quality data. Sometimes it’s misaligned ROI with AI capability or a poorly defined business objective. Other times it’s the vendors who oversell their capabilities, or falling into the “uncanny valley”.
There are many causes of AI project failure. The best way to learn from failure is to learn from other people’s failures, and then consciously work to avoid those failure reasons.
In fact, the core of our CPMAI Methodology training & certification, which provides a step-by-step approach to running and managing AI projects, is to follow the well-defined Seven Patterns of AI in a logical progression across six key phases of AI project development for each AI iteration, applying the lessons learned from the top 10 reasons for AI project failure.
If you want to be successful, learn from the successes and failures of others, apply the industry’s best practices in AI project management, and join the growing group of those who are CPMAI Trained & Certified.
Be a success statistic, not a failure. This is something we can promise and deliver!
Want to learn more about overpromising and underdelivering? Listen to the AI Today Podcast on this topic!
Hear more details about this and other AI project failure reasons on the AI Today podcast.