Why 95% of AI Investments Fail
With 2025 drawing to a close, AI’s trajectory is being shaped less by advances in the technology itself than by human decisions and organizational behavior.
As global spending on artificial intelligence climbs toward $1.5 trillion, faster deployment, tighter regulation, rising leverage, and renewed bubble fears have made 2025 both a pivotal and turbulent year for the technology.
Over the past 12 months, that turbulence has also unfolded alongside an extraordinary shift in how omnipresent AI has become. It is now fully cemented in the public, corporate, and cultural consciousness. The AI debate tends to recycle a familiar set of talking points - as technology and economics writer Derek Thompson wryly observed in a recent analysis, the same 12 narratives now circulate on repeat across business, media, and everyday life.
However, what’s often missed is that the problem lies not in the technology itself, but in the way we’re using it.
Myth Busting
Arguably the biggest obstacle facing AI as we enter 2026 may still be the ridiculous mythology surrounding it. Over-hype from hyperscaling cheerleaders and fear-mongering from their doomsday counterparts have distorted how many leaders think about artificial intelligence. One camp treats AI as a cure-all, the other as an existential threat.
A recent MIT study on the "GenAI Divide" shows that confusion is widespread. Based on a review of more than 300 publicly disclosed AI initiatives and surveys of senior leaders, the report highlights five persistent misconceptions about AI:
It will trigger mass job displacement
It has already transformed business
Large firms are slow adopters
Model quality and regulation are the main bottlenecks
The best results come from building everything in-house
From a practical standpoint, both camps obscure the same operational reality: AI delivers neither automatic upside nor inevitable destruction. Outcomes depend on human choices around use case, design, development, testing, and governance.
A Problem of Expectations
For all the talk of deployment, integration, scale, and performance metrics, the gap between expectation and reality is now plainly visible in the data.
According to MIT's report, despite $40 billion in enterprise investment, 95% of organizations are seeing zero return from generative AI, while only 5% of integrated pilots are extracting measurable value.
The study also finds that "only a small fraction of organizations have moved beyond experimentation to achieve meaningful business transformation." It points to brittle workflows, weak integration into daily operations, and limited feedback loops as common barriers to ROI.
Furthermore, research published by Boston Consulting Group found that roughly 74% of companies still struggle to achieve and scale value from their AI investments, indicating that most remain trapped at the pilot or early deployment stage.
In its new “State of AI” report, McKinsey & Company found that while 88% of organizations now use AI in at least one business function, “most organizations are still in the experimentation or piloting phase,” and just 39% report any enterprise-level EBIT impact. Notably, the same report found that companies that focused on redesigning workflows, rather than simply adding AI to existing processes, were far more likely to see a return on their investment.
Complicating matters further, in a 2025 survey by Writer, 42% of C-suite leaders said generative-AI adoption is actively "tearing their company apart," pointing to internal power struggles, siloed development, and persistent friction between technology and business units as core obstacles.
Private Equity Rollout
Across private equity firms, AI deployment is concentrating on deal sourcing, due diligence, and portfolio oversight. It’s being used to screen targets, automate parts of financial and legal review, and monitor portfolio performance in real time, with the aim of speeding decisions and improving visibility.
Rollout, however, remains uneven. Many firms are still experimenting across teams without shared data standards or centralized oversight, leading to duplication, inconsistent results, and uncertainty over what should be scaled.
A recent study from Bain & Company found that while a majority of portfolio companies are testing or developing generative AI, only around 20% have put it into everyday business operations and are seeing concrete results.
Projects are being approved and rolled out quickly, but too often without a clearly defined investment case, a clear way to track performance, or sufficient safeguards against AI errors. The result is a growing volume of activity with far fewer measurable returns.
The clearest divide is now between sponsors building firm-wide AI infrastructure and governance and those still relying on disconnected, deal-by-deal experimentation. A recent survey by wealth manager Pictet Group further highlights the unusual disconnect in how private equity is approaching AI. Just over a quarter of general partners said the technology is overhyped, while roughly two-thirds reported that they are still only exploring potential uses or testing early applications.
More than 40% of firms surveyed said they now have an AI strategy of their own, yet, shockingly, the funds most concentrated in technology investing remain the least likely to have a strategy in place.
Best Laid Plans
The AI projects that produce real returns begin with a clear business problem and measurable financial targets, and not with a technology investment. Firms that move directly into pilots without setting clear goals and success metrics generally end up with scattered experiments, rising costs, and limited ability to scale or validate results.
The answer is fairly simple: leadership teams need a clear, structured roadmap that ties AI initiatives directly to business objectives and measurable returns.
Such a roadmap begins with defining the business problem, confirming data availability, and setting how results will be measured. It also requires agreeing on the business case and implementation requirements before anything is built. Handled this way, AI is treated like any other investment, with ROI quantified upfront and success defined in advance.
Saasinct’s AI Strategy Accelerator is built around this approach. Our program helps firms identify, document, and pursue AI initiatives based on clear business impact, cost, and risk—before implementation begins. The result is a practical, prioritized plan that defines what should be built, why, when, and how. Just as important, it establishes the key performance indicators that support the investment and measure success.
Strip the myths away and the challenge looks familiar. The issue is not the AI technology itself, but how effectively humans deploy it and integrate it into their organizations.
The AI winners of the next 12 months won’t necessarily be the firms with the most advanced tools or the largest investments, but those that excel at redesigning how humans actually work alongside AI.