
Why AI Projects Fail – What to Expect from your First AI Project

Everyone has a plan until they get punched in the mouth.

Mike Tyson

This is one of my favorite quotes because it's true. I have managed software projects for over a dozen years, and I have learned that nothing goes according to plan. Every project has a plan; even projects that don't have a formal plan have one, even if it's only informal.

Any number of variables can go wrong (or simply not go according to plan). Assumptions someone made turn out to be false. Developers often overcommit what they can accomplish in a given timeframe. Perhaps they don't have a skill and need to learn it, or something is more complex than it should be (which happens pretty frequently in software projects). Then day-to-day emergency fires pop up and eat away the day. And there are disconnects between the vision (the result) and the 'devil in the details' (the reality) that often arise as you are designing software.

It's not uncommon that an executive wants to do X, and the team doing the work wants to do something entirely different.

AI projects experience these types of problems, but they also encounter an entirely new set of issues rooted in the nature of the technology.

Variable Results

In traditional programming, the SME (subject matter expert) defines a business rule: if this, then do that. With predictive analytics, you don't know the answer; it hasn't happened yet. Most AI is calculus and linear algebra applied in new ways to new business processes. Most business processes are reactive, not proactive, and AI predicts what might happen.

This means that the answers can change dramatically based on the data provided, and how accurate those answers are changes too. With the simple math in most programming (calculate the rolling three-month average of revenue), there is a measurable result and one right answer.
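To make the contrast concrete, the deterministic case looks like this: a minimal sketch in plain Python, with hypothetical revenue figures.

```python
# The deterministic case: one input, one measurable, correct answer.
monthly_revenue = [120_000, 95_000, 130_000, 110_000, 140_000]  # hypothetical figures

def rolling_three_month_average(revenue):
    """Average each consecutive three-month window of revenue."""
    return [sum(revenue[i:i + 3]) / 3 for i in range(len(revenue) - 2)]

averages = rolling_three_month_average(monthly_revenue)
```

Run it twice, run it on another machine, and you get the same numbers. A model trained on slightly different data makes no such guarantee.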

It's hard for people who are responsible for inventing business rules to draw lines that shift and move, and then you need to plan for multiple outcomes. What if we predict an answer and we are correct? What if we predict the wrong result? Where's the line that determines a prediction is 'good enough'? If it's 80% likely a customer will cancel, is that the line? What if it's 79.4% likely? Is that the line? AI is about experimentation and moving the line to optimize the result.
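A minimal sketch of that 'where's the line' problem, using made-up churn probabilities (the names and scores are hypothetical, not output from any real model):

```python
# Hypothetical churn probabilities a model might assign to four customers.
churn_scores = {"alice": 0.92, "bob": 0.794, "carol": 0.80, "dave": 0.35}

def flag_at_risk(scores, threshold):
    """Return the customers whose predicted churn probability meets the threshold."""
    return sorted(name for name, p in scores.items() if p >= threshold)

# Moving the line changes the outcome: carol is flagged at 0.80, bob is not.
at_080 = flag_at_risk(churn_scores, 0.80)  # ['alice', 'carol']
at_079 = flag_at_risk(churn_scores, 0.79)  # ['alice', 'bob', 'carol']
```

The business rule isn't "if this, then that" anymore; it's "where do we set the threshold, and what does each choice cost us?"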

Often in traditional software development, it's hard to think through all the problems that could arise and all the ways people need to accomplish a task using the software. With AI, it's even more complicated. You not only have to plan for this, but you also have to understand and plan for wrong answers. It is VERY hard for most people to think through and plan for the 'shades of grey' that arise.

Data Problems

Data problems on a project come in three forms:
1. The data doesn't exist. You need historical data to predict an answer. Without the right data, it's impossible to use AI.
2. The data takes a long time to cleanse and process. Data stored across multiple systems is often hard to pull together into one central file, and the second part of the challenge is ensuring that it's accurate.
3. The data provided doesn't match reality. Often a file is provided that we think is accurate, only to learn once the model rolls out in production that it's not what we expected.
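A toy illustration of the second problem, assuming customer records pulled from two hypothetical systems (a CRM and a billing database):

```python
# Records from two hypothetical systems that must be pulled into one central file.
crm = {"C001": {"email": "pat@example.com"}, "C002": {"email": "sam@example.com"}}
billing = {"C001": {"email": "pat@example.com"}, "C003": {"email": "lee@example.com"}}

def find_gaps(system_a, system_b):
    """Customer IDs present in one system but missing from the other."""
    return sorted(set(system_a) ^ set(system_b))

gaps = find_gaps(crm, billing)  # records that can't be joined: ['C002', 'C003']
```

Even this four-record example leaves two customers that can't be joined; multiply that by millions of rows and inconsistent formats, and the cleansing timeline grows quickly.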

Expectations vs. Reality

AI is supposed to be the latest, greatest magic box. People selling AI services often have a severe disconnect with the data scientists performing the work; they sell 'magic', regardless of whether it's achievable.

Then the next challenge arrives when the data scientist begins the project. They find that they don't have the data, the algorithm doesn't do a good job of predicting the result, it takes longer than expected to find the right algorithm, or the data doesn't predict the answer with enough accuracy.

Compounding this, colleges and bootcamps teaching data science focus on math and techniques, not on the communication skills of their students.

It's easier to renegotiate expectations early in the project ("I can't do this, but I can do that") than to bang your head against the wall scrambling to find the missing link. If the data isn't predictive, it's not predictive. Many data scientists miss the mark on how to communicate this to executives.

Asking the Wrong Question

I've gone to dozens of AI seminars, and someone invariably asks, 'How can we begin using AI if we are new to it?' In every session I have seen, the first answer to that question is always the same: 'Ask the right question.' The second answer: 'Make sure the data exists to answer the question.'

What does asking the right question mean?
If you take this up a layer, AI can answer five questions:
Is this weird? Example: Is this fraud?
Is this A or B? Example: What's the sentiment of this tweet?
How much/how many? Example: How much revenue can I predict from this ad campaign?
How is it organized? Example: Which customer segments are my best customers?
What should I do next? Example: Where should I put this ad on a page so a user is likely to click on it?
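One way to read the list above is as a rough map from question type to model family. The family labels below are my own shorthand, not part of the original list:

```python
# The five question types above, mapped to the model families that usually answer them.
# (The family names are my shorthand, not a formal taxonomy.)
question_to_model = {
    "Is this weird?": "anomaly detection",
    "Is this A or B?": "classification",
    "How much/how many?": "regression",
    "How is it organized?": "clustering",
    "What should I do next?": "recommendation / reinforcement learning",
}

def suggest_model(question):
    """Look up a model family for a question type, or flag it for refinement."""
    return question_to_model.get(question, "no standard model family; refine the question")
```

If your question doesn't reduce to one of these five shapes, that is often the first sign it needs to be broken apart before any model can help.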

If you are asking questions like 'How can I increase my revenue?', there are AI models that can help predict the answer. You can build pricing models that predict what happens to demand if you raise prices. But for most companies, it's a multi-part answer that isn't solved by one piece of one business process. It's an entire redesign of an end-to-end business process, then experimenting with each part of the process to optimize the result.

Reducing Risk of Failure

There are a lot of ways to reduce the risk of failure on any AI project. 

First – target use cases that are already commercialized. If you detect sentiment on tweets and link that to your customer support processes, there are many solutions on the market that support these cases. Existing models are likely available for those tasks and can be leveraged and refined.

Second – stay away from controversial use cases. The worst area to target for AI is HR, because of the likelihood of lawsuits and the inherent bias in the data. If you have an app that predicts which job candidates get screened in, and it's trained on a white male workforce, be prepared for some problems. Data scientists often aren't familiar with HR law, and this type of scenario can cause massive problems for a company if it inadvertently racially profiles candidates.

Third – be flexible in the beginning. You don't know the quality of your data. (You may think you know – but you don't.) Many sins are hidden in software development, and dangers lurk in every corner of the database. Be prepared that the first project may take longer. Be prepared that you might not have the data you need. Be prepared that people are different and that you may not be able to predict their behavior.
