Top 10 Reasons Why AI Projects Fail

Join Thousands of Others Who are Certified in AI Best-Practices

AI is everywhere these days, but the truth is that many organizations are struggling to implement AI successfully. The rate of AI project failure is estimated to be around 70-80%. According to Gartner, 85% of Machine Learning (ML) projects fail. According to TechRepublic, 85% of AI projects eventually fail to bring their intended results to the business. With all the smart minds, resources, and effort being put into these projects, the failure rate should not be so high. It’s not bad technology or bad people that’s the problem; it’s the lack of following best practices for managing AI projects. After observing thousands of AI projects, Cognilytica has identified the top 10 reasons why AI projects fail and how you can learn from others to avoid becoming another AI failure statistic.

1. Applying application development approaches to data-centric AI 

AI projects are not like traditional application development projects. In fact, the code for AI projects is generally fairly simple and standard across different AI implementations. What distinguishes one chatbot from another, or a recognition system from a predictive analytics system, isn’t so much the code as the data that trains the machine learning model to perform its task.

As such, AI projects need to be treated as data-centric projects, as should all data science and machine learning projects. The code is just a small part of making AI work, and it’s not even the most important part. Data is the heart of AI, so data should also be at the heart of your AI projects. If you run your AI projects the way you run your application development projects, you’re going to find out the hard way that it won’t work. Don’t use application-development-specific methodologies such as Agile, or data-mining-specific methodologies such as CRISP-DM. AI project managers should use proven methodologies such as the Cognitive Project Management for AI (CPMAI) methodology that were built specifically for AI projects.

2. ROI Misalignment of AI solution to problem

So many AI projects fail because they fail to deliver their promised benefits or returns. Before you embark on your AI project, the first question you need to ask is: what problem are we attempting to solve? If you don’t have a good answer to this question, or can’t figure out what business problem you’re actually going to address, then you should not move forward with your AI project. This may seem obvious, but far too often teams face pressure from upper management, colleagues, or external stakeholders to get started, and projects move forward without a clear answer to the problem they are actually trying to solve or the ROI they expect to see. This is one of the top 10 reasons why AI and data science projects fail.

In Phase I of the CPMAI methodology, teams need to address Business Understanding. AI project owners and teams need to answer critical questions such as: 

  • Should we solve this problem with AI / Cognitive Technology?
  • What portions of the project require / do not require AI?
  • What AI pattern(s) are we implementing?
  • What are the criteria for project success?
  • What requirements are needed to complete the project?

Far too many teams power forward with AI projects without answering these questions up front. For example, after heavily investing in shelf-scanning robots, Walmart ended its contract with the robotics company at the end of 2020 in favor of humans performing this work again: real ROI was never achieved, and humans turned out to be both more accurate and cheaper. By addressing the above questions up front, you can avoid building an AI system that does not return the value you were seeking, only to find out that a human could have done the job more accurately, or that a simple non-cognitive Level 0 automation solution was a quick and effective alternative.

3. Lack of sufficient quantity of data

If data is at the heart of AI, then it should come as no surprise that AI and ML systems need enough good-quality data to “learn”. In general, a large volume of data is needed, especially for supervised learning approaches, in order to properly train the AI or ML system. The amount of data needed may vary depending on your project, the algorithm you’re using, and other factors such as in-house versus third-party data. For example, neural nets need a lot of data to be trained, while decision trees or k-means clustering don’t need as much data to still produce high-quality results.
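The idea that different model families have very different data appetites can be sketched as a simple up-front sanity check. This is an illustrative heuristic only, not a CPMAI prescription: the model families and minimum-sample thresholds below are assumptions chosen for demonstration, not validated rules.

```python
# Illustrative data-quantity sanity check. The thresholds are invented
# rules of thumb for demonstration, NOT validated minimums: the point is
# to ask "do we have enough data for this model family?" before building.
RULES_OF_THUMB = {
    "decision_tree": 50,      # tree-based models tolerate small datasets
    "kmeans": 100,            # clustering needs modest amounts of data
    "neural_net": 10_000,     # neural nets are notoriously data-hungry
}

def enough_data(n_samples: int, model_family: str) -> bool:
    """Return True if the dataset plausibly meets the rough minimum
    for the chosen model family."""
    return n_samples >= RULES_OF_THUMB[model_family]

print(enough_data(500, "decision_tree"))  # True
print(enough_data(500, "neural_net"))     # False
```

Running a check like this during Data Understanding forces the "how much data do we actually have?" conversation before resources are committed.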

Far too often, organizations jump right into AI projects without first addressing and assessing their data: figuring out how much data they have, where the data sources are coming from (in-house or third-party), how the data will be accessed, what is required to augment existing data, and other crucial questions that should be answered before beginning the project. In fact, in the CPMAI methodology, Data Understanding is Phase II. If you skip this Data Understanding phase entirely, you’ll find out further down the project timeline, after a lot of time, resources, people, and money have been invested, that you’re missing critical data, and your project will either need to be put on hold or be scrapped completely. For example, Amazon created an ML-powered recruiting tool. After four years of development, once deployed it was discovered that the recruiting tool was biased against women applicants. This was obviously a very costly mistake for Amazon. If data quantity issues had been addressed earlier in the project, this might have been mitigated.

4. Lack of sufficient quality of data

Another critical factor for AI projects is the quality of your data. The adage “garbage in, garbage out” applies in a significant way to AI projects. The quality of the data being fed to the AI system is extremely important if the system is to learn and produce an accurate model. During CPMAI Phase III: Data Preparation, you must prep your data to make sure it’s in a usable state. This means cleaning, transforming, and manipulating the data, making any needed modifications to third-party data, deciding whether human-involved data annotation (“labeling”) is needed, and performing any additional data augmentation steps as required.
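The cleaning step described above can be sketched with a minimal, stdlib-only validation pass. The field names and validation rules here are hypothetical, chosen only to illustrate the kind of basic hygiene (type checks, missing-value handling) that Data Preparation involves:

```python
# A minimal data-preparation sketch: validate raw records and drop rows
# that fail basic checks before they ever reach model training.
# Field names ("age", "income", "label") are hypothetical examples.
raw_rows = [
    {"age": " 34 ", "income": "52000", "label": "approved"},
    {"age": "29",   "income": "",      "label": "denied"},    # missing income
    {"age": "abc",  "income": "61000", "label": "approved"},  # corrupt age
]

def clean(rows):
    """Return only the rows whose fields parse into usable values."""
    cleaned = []
    for row in rows:
        try:
            age = int(row["age"].strip())
            income = int(row["income"].strip())
        except ValueError:
            continue  # drop rows that fail basic validation
        cleaned.append({"age": age, "income": income, "label": row["label"]})
    return cleaned

print(clean(raw_rows))  # only the first record survives
```

Real pipelines add far more (imputation, deduplication, outlier handling, labeling workflows), but even a pass this simple surfaces how much of the raw data is actually usable.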

By jumping ahead in projects and skipping this step, you will only set yourself and the project up for failure. In October 2020, a GPT-3 based chatbot created with the intent of decreasing doctors’ workloads actually advised a fake patient to commit suicide. Had this been deployed in the real world, there could have been catastrophic consequences. According to Cognilytica research, 80% of the work in AI projects is data engineering. It’s never a good thing to find out that your AI model is not performing the way it should because you lacked good-quality data to train the system.

5. Applying proof of concept thinking to real-world pilots

Proof of concept projects are often AI failures. Why? Because when you run a test in a controlled environment, as a proof of concept usually is, you miss all the challenges presented by the real world. What does the real-world data actually look like? How will this model perform on our systems? A controlled environment does not give an accurate depiction of the problems and obstacles the model will face when it is actually being used, so companies and organizations should evaluate the model’s performance by placing it in a realistic setting first to gain a more accurate understanding.

Rather than purposeless, wasteful proofs of concept, CPMAI Phase IV: Model Development focuses on real-world AI pilots that use real-world data in real-world scenarios. In this phase, you need to address critical factors for success such as model training and model optimization activities, algorithm selection, and model development as appropriate for the selected machine learning techniques, among other things. By following the CPMAI methodology and running pilots, you can quickly iterate your way to success, rather than spending time developing a model in a bubble only to find it does not work in the real world.

6. Misalignment of real world data and interaction against training data and models

Another mistake often made in AI projects is a misalignment between the ideal world of AI training data and the real world of messy data and interactions. Assuming you are running a pilot, and not a proof of concept, you know that your model needs to be in a real-world environment to be accurately tested and measured. However, just as critical is thinking about where the model is actually going to be used. In CPMAI Phase V: Model Evaluation, you need to address critical questions around the performance and use of the model.

During this phase you need to determine and evaluate concerns about overfitting and underfitting of models, evaluate training, validation, and test curves for overall acceptability, and evaluate models against business Key Performance Indicators (KPIs). You also need to determine model suitability with regard to the operationalization approach. If you need to deploy your model on an edge device such as a phone, does it always need to be connected to the internet to work? If so, is this feasible? During Phase V you are also determining the means for model monitoring, iteration, and versioning. When any of these critical questions are skipped, we see AI projects failing to meet the desired level of performance and accuracy, or just straight-up failing for lack of understanding of where the model really needed to live.
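The overfitting check mentioned above can be sketched as a simple gap test between training and validation accuracy. The 10-point threshold is an illustrative assumption, not a standard; real evaluation compares full learning curves, but the gap captures the core signal:

```python
# Minimal overfitting flag: a model that scores far better on training
# data than on held-out validation data has likely memorized rather
# than generalized. The 0.10 gap threshold is an assumed example value.
def looks_overfit(train_acc: float, val_acc: float,
                  max_gap: float = 0.10) -> bool:
    """Return True when training accuracy exceeds validation accuracy
    by more than the allowed gap."""
    return (train_acc - val_acc) > max_gap

print(looks_overfit(0.99, 0.72))  # True  - large train/validation gap
print(looks_overfit(0.85, 0.83))  # False - gap within tolerance
```

Wiring a check like this into Phase V evaluation, alongside the business KPIs, keeps a model from advancing to operationalization on training-set performance alone.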

7. Underestimating time and cost of the data component of AI projects

Organizations often underestimate the amount of time and resources it takes to really run AI projects. Far too often we see projects get started without first addressing data needs and accessibility. Once teams get to that step in the process, they are often stalled by lack of access to needed data, the need to send data out for labeling, or internal quarreling over who controls the data needed to fuel the AI project. Because AI requires a data-centric approach, not having sufficient funds or time to collect data will result in AI project failure. Companies should carefully consider whether they are able to invest the time and money to provide their projects with enough good-quality data.

8. Lack of planning for continued AI, model, data iteration and lifecycle

It is often said that “if you fail to plan, you are planning to fail”. Many companies fail to understand that model creation is never a “set it and forget it” affair. Real-world data is constantly changing, which means your model will need to be retrained to keep up with the real world. Companies need to plan for continued model and data iteration, including setting aside the necessary budget for resources such as computing power and people to perform the work, as well as putting governance policies in place to handle the different model versions that are created. Otherwise, your model will eventually stop performing at the desired level of accuracy and you won’t have the resources set aside to retrain it.
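The "real-world data is constantly changing" problem above is what drift monitoring catches. A minimal sketch, assuming a single numeric feature and an invented 25% tolerance (production systems use statistical tests over many features, not a single mean comparison):

```python
import statistics

# Minimal data-drift flag: compare the mean of a feature in live data
# against its mean in the training data. The 25% relative tolerance is
# an assumed example value, not a recommended default.
def drifted(train_values, live_values, tolerance: float = 0.25) -> bool:
    """Return True when the live-data mean moves more than `tolerance`
    (as a fraction) away from the training-data mean."""
    base = statistics.mean(train_values)
    return abs(statistics.mean(live_values) - base) / abs(base) > tolerance

train_ages = [30, 35, 40, 45]   # feature distribution at training time
live_ages = [55, 60, 65, 70]    # the population the model now sees

print(drifted(train_ages, live_ages))   # True  - population has shifted
print(drifted(train_ages, train_ages))  # False - no shift
```

A drift alert like this is what should trigger the retraining budget and governance process, rather than waiting for accuracy to visibly collapse.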

All of this should be thought through and planned out at the very beginning of each iteration to ensure the results produced by the model are accurate and reflect any changes in the data. CPMAI Phase VI: Model Operationalization makes you address areas such as implementation of model monitoring, implementation of model versioning and governance, evaluation of business performance, and determination of project success and iteration requirements once the model has been operationalized, or put into the real world.

Companies that don’t understand that models need to be monitored and retrained once operationalized often see their AI projects fail because, once the model stops performing at the level of accuracy it needs to, they aren’t prepared to retrain it. Why spend all this time and devote resources, people, and money to your AI project only to see it fail for lack of retraining resources?

9. Vendor misalignment on promise vs. reality

Companies often get caught up in vendor hype and promises about solutions. Or, sometimes companies go with a solution from one vendor only to find out that it doesn’t actually fit their needs. If this has happened to you, you’re not alone. Vendor-driven reasons are often overlooked among the top reasons why AI projects fail. A common cause is failing to ask the right questions up front, so you don’t realize that even though the product might be great, it simply doesn’t fit your needs. Furthermore, companies may skip doing their own research because they get caught up in industry hype and believe that just because AI is popular, or just because one particular vendor is getting a lot of attention, they should use it. Make sure to do your research, ask the right questions, and understand how to run AI projects so you don’t get caught up in the hype.

10. Overpromising AI capabilities and underdelivering on projects

One of the major reasons why AI projects fail rests in a common human failure: overpromising what AI can actually accomplish, and the resulting inability of AI to meet those promises, leading to underdelivered projects. This mismatch in expectations comes from a lack of understanding of what AI can and can’t do and its limitations, and from not setting realistic expectations and scope for project iterations. Do you even know what problem you’re trying to solve? Why are you tackling the hardest possible problem first? Why are you trying to tackle many AI patterns at the same time?

Overpromising and underdelivering has been a problem with AI since its inception in the 1950s. The mismatch in expectations was a significant reason for the last two AI winters (periods of decline in investment and research in AI), and it’s still a major concern today. Don’t let your AI project fail for this reason. A key mantra in the CPMAI methodology is to “Think Big. Start Small. Iterate Often.” Apply the best-practices CPMAI methodology for project success.

Avoiding the top 10 reasons why AI projects fail: Adopt proven methodology for AI Success

While the above shares the top 10 reasons why AI projects and initiatives fail, you don’t have to become one of the statistics. The complexity of AI is often underestimated, giving companies the misconception that it can be used to achieve great goals, only for them to underdeliver on projects, fail to show ROI, or apply AI to a problem that could have been solved with a different (cheaper, less risky) solution. By adopting a proven methodology, you can avoid repeating the mistakes listed above.

Don’t let the success or failure of your artificial intelligence efforts depend on team dynamics or short-term organizational goals. Learn how to do AI right by applying a best-practices methodology, including an AI project template that gives you a straightforward way to adapt the CPMAI methodology to your AI projects. Take the next step by getting a CPMAI Certification.
