The Same Mistake, Different Technology: What MIT's AI Study Really Shows

Falling Dominos

The headlines surrounding MIT's latest study on AI in business have been pretty damning, highlighting that 95% of generative AI pilots are failing despite enterprises investing $30-$40 billion.

I've been asked for my opinion on this by multiple people over the last few weeks, and in short, my response has been that I'm not surprised. After 20+ years working with AI, machine learning, and data science, I've seen this exact pattern play out before: with mobile apps, big data, blockchain, and many other "revolutionary" technologies.

The high failure rates aren't surprising: they're a consequence of some well-established patterns of how organisations consistently approach (and usually struggle with) new technologies.

However, there's one critical difference with generative AI that makes these figures more pronounced than other technologies, and understanding this is key to avoiding becoming part of that 95% statistic.


A Quick Note on Those Headlines

It's worth clarifying that the 95% figure specifically measures meaningful P&L impact (direct bottom-line results).

The research found that tools like ChatGPT and Copilot are widely adopted (80% explored/piloted, 40% deployed) for individual productivity; they're just not translating into measurable profit-and-loss performance.

  • So the reality isn't quite as stark as the headlines suggest.

That said, the failure rates are still substantial and worthy of serious discussion, and for my part, I want to explore why they are likely not down to the technology itself, but part of a wider pattern.


The Universal Technology Adoption Pattern

The core issue revealed by the MIT study is a classic case of "Technology Push" vs "Market Pull", a general framework that explains why many technology initiatives fail (particularly in large enterprises). In short, technology push starts with a capability in search of a problem ("we have AI, what can we do with it?"), while market pull starts with a business need in search of a solution ("we need to cut response times, what gets us there?"):

Technology Push vs Market Pull


Of course, it's not the case that technology push always ends in failure or that market pull always ends in success, but research has consistently shown that market pull innovations have dramatically higher success rates. The MIT study's high failure rate is entirely consistent with historical technology push failure rates.

Or to put it another way: this is not an AI problem, it’s part of a repeated pattern related to technology adoption.



The GenAI Amplifier: Why This Time is Different

But aside from this general issue of technology adoption, there is another amplifying factor that is specific to generative AI: its extraordinarily low barrier to entry.

Unlike previous enterprise technologies that required significant setup (as well as technical expertise), anyone can start using ChatGPT or call an OpenAI API and get impressive results within minutes.
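To see just how low that barrier is, here's a minimal sketch of the kind of first demo many teams start from. It assumes the official openai Python package and an OPENAI_API_KEY environment variable, and the model name is purely illustrative:

```python
# A minimal "AI demo": a handful of lines can produce impressive results.
# Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarise this complaint in two sentences: "
                                    "My order arrived two weeks late and support never replied."},
    ],
)
print(response.choices[0].message.content)
```

Minutes of effort, and the output genuinely looks impressive.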

This accessibility creates an illusion where users experience the power of these tools quickly and so assume that building enterprise solutions will be equally straightforward. 

The result? More organisations than ever are drawn into "technology push" initiatives, because the initial barrier is so low and a fully fledged solution feels tantalisingly close.


The MIT study perfectly captures this phenomenon: while tools like ChatGPT are widely adopted for individual productivity (the research found over 40% of knowledge workers use them personally), these same tools consistently fail when applied to enterprise workflows.

  • Getting impressive demos is fundamentally different from solving real business problems, but the low barrier to entry obscures this critical distinction.
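To make that distinction concrete, here's a rough sketch of what the same call starts to look like once it has to survive inside a real workflow. Every name is hypothetical and this is far from a complete checklist; the point is how much engineering sits around the model call:

```python
# Sketch of the gap between a demo and a production integration.
# All names are hypothetical; a real system would also need evaluation,
# monitoring, access control, and cost tracking around this.
import logging
import time

logger = logging.getLogger("support_ai")

def summarise_complaint(client, text: str, max_retries: int = 3) -> str | None:
    """Call the model with the guardrails a real workflow needs."""
    if not text.strip():
        return None  # input validation: demos rarely handle empty input
    for attempt in range(max_retries):
        try:
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[{"role": "user",
                           "content": f"Summarise in two sentences: {text}"}],
                timeout=10,  # don't let a slow call stall the workflow
            )
            summary = response.choices[0].message.content
            logger.info("summary generated (%d chars)", len(summary))
            return summary
        except Exception:
            # A real system would distinguish retryable from fatal errors.
            logger.exception("model call failed (attempt %d)", attempt + 1)
            time.sleep(2 ** attempt)  # exponential backoff before retrying
    return None  # callers must handle the failure path explicitly
```

None of this is exotic engineering, but none of it shows up in the demo either.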


Just to be clear, I am not saying that this low barrier to entry is necessarily a bad thing; in some cases, having less-technical people with more business or product expertise using these tools can be a genuinely good thing.

In particular, having the ability to rapidly test and rule out ideas is hugely beneficial:

  • It's now much quicker and more cost-effective than previous technology cycles.

  • This agile approach can lead to better results overall, even if it creates more 'failed' experiments along the way.

What the MIT Data Actually Reveals

The study's findings provide clear evidence of these patterns in action, revealing some striking differences in how organisations approach AI implementation:

External vs Internal Success Rates:

One of the most revealing findings in the MIT research was the dramatic difference in success rates between "internally built tools" (33% success) and those built through "external partnerships" (67% success).

Now, as an AI consultant, I'd love to simply tell you that external partnerships more than double your success rates and leave it at that - it would certainly be good for business! But while the statistic is real, the story behind it is likely more complicated, and I have a good idea of what is going on here too.

External partnerships don't succeed because external providers possess some magical AI expertise that is fundamentally lacking in internal teams; instead, they succeed for two key reasons:

  • First, they bring “market pull” perspective.

    • When organisations engage external resources, the conversation typically starts with "We need to reduce customer service response times by 30%" rather than "Let's explore what we can do with AI" (I've sketched what this framing looks like just after this list).

      • The external engagement itself forces problem-first thinking from the very start.

  • Second - and this connects directly to the low barrier to entry problem - external providers filter out the novices.

    • As I mentioned earlier, GenAI's accessibility means that anyone can start building prototypes that can make for impressive demos.

    • Internal initiatives often include well-meaning but inexperienced team members who mistake early success with simple API calls for genuine implementation expertise.

    • External providers, by contrast, are selected specifically for their proven capabilities. Organisations don't engage consultants or vendors unless they can demonstrate relevant experience and successful track records.
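Coming back to that first reason: here's a hypothetical sketch of what problem-first framing looks like when you actually write it down, with the success criterion fixed and measurable before any model is chosen (all numbers are invented):

```python
# Hypothetical sketch of "market pull" framing: define the measurable
# business target first, then judge the AI pilot against it.
from dataclasses import dataclass

@dataclass
class PilotResult:
    baseline_minutes: float  # average response time before the pilot
    pilot_minutes: float     # average response time with the AI workflow

    def improvement(self) -> float:
        """Fractional reduction in response time achieved by the pilot."""
        return (self.baseline_minutes - self.pilot_minutes) / self.baseline_minutes

TARGET_REDUCTION = 0.30  # "reduce response times by 30%", fixed up front

result = PilotResult(baseline_minutes=42.0, pilot_minutes=31.5)
print(f"Improvement: {result.improvement():.0%}, "
      f"target met: {result.improvement() >= TARGET_REDUCTION}")
# -> Improvement: 25%, target met: False
```

A "technology push" version of the same project has no equivalent number to miss, which is exactly how pilots drift on without ever showing P&L impact.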


Judging from some of the recent coverage of this study, many seem to have read the 95% failure headline and jumped to the conclusion that these tools are overrated and cannot deliver value.

But if we consider that success rates can double depending on who carries out the work, then it is only logical to conclude that the key issue is not the AI models themselves, but how they are being used.


Workflow Embedding vs General Purpose:

The MIT researchers also found a clear pattern in which types of AI implementation actually work. The study revealed that the "standout performers are not those building general-purpose tools, but those embedding themselves inside workflows, adapting to context, and scaling from narrow but high-value footholds".


Again, this aligns perfectly with the "market pull" principle: successful AI projects start with specific business problems and build targeted solutions, rather than deploying broad AI capabilities and hoping they'll find useful applications.

The successful organisations don't build general-purpose tools; they solve specific friction points within existing business processes.
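To make "embedding inside workflows" concrete, here's a purely hypothetical sketch: the model call is buried inside an existing support process, fed with context from that process, and its output flows back into the same system for human review (the ticket structure and all names are invented for illustration):

```python
# Hypothetical sketch: AI embedded inside an existing support workflow,
# rather than exposed as a general-purpose chat tool. All names invented.
from dataclasses import dataclass, field

@dataclass
class Ticket:
    id: str
    customer_tier: str          # e.g. "standard" or "premium"
    history: list[str] = field(default_factory=list)  # prior messages
    body: str = ""              # the new customer message to answer

def draft_reply(client, ticket: Ticket) -> str:
    """Draft a reply using workflow context; an agent reviews before sending."""
    context = "\n".join(ticket.history[-5:])  # recent context from the workflow
    prompt = (
        f"Customer tier: {ticket.customer_tier}\n"
        f"Recent conversation:\n{context}\n"
        f"New message: {ticket.body}\n"
        "Draft a reply that follows our refund policy."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    # The draft is written back into the ticketing system for agent approval,
    # so AI output never reaches the customer unreviewed.
    return response.choices[0].message.content
```

The narrow scope (one ticket type, one policy, human review before anything is sent) is exactly the kind of "narrow but high-value foothold" the study describes.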

All of this confirms what I've observed across multiple industries: success comes from problem-first thinking, not from seeking out applications for a new technology.


The Innovator's Dilemma in Practice

For anyone looking to understand the challenges of introducing new innovations in large companies, I would highly recommend Clayton Christensen's book "The Innovator's Dilemma", which offers a number of insights into why internal teams struggle… and yes, you've guessed it… this aligns with the higher success rates the MIT study reports for external partners.

Internal teams, no matter how talented, are often too embedded in existing processes and assumptions to effectively implement disruptive technologies.

  • They know too much about why certain processes exist, making it harder to see opportunities for fundamental reimagining.

  • There are often other factors too, such as internal politics, or operationally focused teams being drawn back into day-to-day activities and losing the focus needed to develop these new technologies.

External teams aren't bogged down by many of these internal factors and are typically brought in with a specific problem-solving mandate. This, along with their technical expertise, can often lead to better outcomes, which again is consistent with the MIT findings.

What Actually Works: Breaking the 95% Failure Pattern

The reality is that the 5% of organisations succeeding with AI aren't doing anything magical: they're following consistent principles that I've seen work across multiple industries:


The Problem-First Approach to AI Success


Whether this expertise comes from internal teams or external partners matters less than ensuring these principles are used.


The Pattern Will Continue

I’ve little doubt that this cycle will repeat with the next wave of innovation: the technology will change, but the fundamental adoption patterns tend to remain constant.


The organisations that will succeed will be those that remember that technology serves business needs, not the other way around: the MIT study provides a $40 billion reminder of what happens when we forget this lesson.


But as I’ve already outlined, this pattern is predictable and therefore avoidable:

  • The 95% failure rate isn't a limitation of the technology: it's the consequence of approaching powerful new tools with the wrong mindset.

  • Those who learn from these patterns can consistently find themselves in that successful 5%.

The Real Opportunity Ahead

Despite these challenges, I'm genuinely optimistic about the potential of generative AI.

Having worked with older technologies for years, I can see the huge potential of these new GenAI models across a range of use cases:

  • They allow us to tackle problems that were simply impossible just a few years ago.

  • Even for problems that could be solved before, we can now do so more efficiently.

The takeaway message from the MIT study shouldn't be that AI doesn't work; it's that the way AI is implemented (and even framed before implementation) matters enormously.

For organisations willing to learn from these patterns and focus on genuine business problems first, the opportunities are genuinely exciting.


If you're planning an AI initiative and want to avoid joining the 95%, I'd be happy to discuss how these principles apply to your specific situation. Get in touch to explore how we can ensure your project lands in the successful 5%.
