Stop Blindly Investing in AI
Oct 1, 2025

Since LLMs went mainstream and interest in AI exploded, nearly every tech conversation ends up circling back to “What’s our AI strategy?” Teams have been restructured, budgets shifted, and pilots launched with fanfare. Yet for the vast majority, those investments are ghosts: invisible in the P&L. According to MIT’s State of AI in Business 2025, 95% of companies haven’t seen a measurable return.¹ That’s not a rounding‑error statistic. It’s a flashing red warning.
So why do we see so many AI initiatives fail?
The Allure of the Swiss Army Tool (and its Pitfall)
A favorite sales pitch goes like this: “One tool to do everything.” And it’s seductive. But it’s also a trap. Most AI vendors build broad toolkits—search, summarization, chat, analytics, dashboards—all thrown in. In demos, it’s magical. In practice, each function works “okay,” but none work exceptionally. It’s the classic “Jack of all trades, master of none.”
Imagine being the general manager of the Celtics (bear with me, I’m a fan): your team needs rebounding, corner shooting, and a floor general. You can pay a king’s ransom for one “versatile” superstar—or you can assemble three players, each excellent at their role. The latter wins more games. AI is the same. Don’t overspend on one tool that tries to do 10 things; invest in a few tools (or modules), each engineered for a use case that actually matters.
MIT’s study reinforces this. The authors argue that the 95% failure rate isn’t about poor models or lack of compute. It’s about approach. Only a narrow set of AI pilots—those that are specific, simple, and embedded in workflows—deliver actual value.¹
The “Learning Gap” — Why Even Good AI Fades
One of the sharpest insights from the MIT research is what they call the learning gap: many AI pilots fail not because they lack technical sophistication, but because they don’t learn. They are static. They don’t retain user feedback. They don’t evolve with context. Over time they become stale, janky, or worse — broken in the real world.¹
To quote Forbes’ take on the study:
“Pilots stall because most tools cannot retain feedback, adapt to context, or improve over time.” ²
In other words, deploying AI isn’t enough. You have to build in feedback loops: let the system get better, or it’ll degenerate into an expensive toy.
This isn’t just MIT talk. Broader surveys find that a leading factor in AI project failure lies in adoption, scaling, and integration—not core model quality.³ And even beyond the hype, researchers studying organizational readiness show that the way people perceive AI’s limitations strongly influences whether it will stick.
When “More Features” Kills Traction
There’s another pattern worth calling out: the “feature arms race.” Teams demand everything up front—predictive analytics, CRM hooks, automated insights, user analytics, content generation, etc. It sounds powerful. But the paradox is that adding more features often makes the tool harder to adopt, more brittle to changes in the business, and more expensive to maintain.
The MIT report suggests that the difference between the 5% of scaling pilots and the rest often comes down to restraint. The ones that succeed pick one core problem, nail it, and expand only after they’ve proved value.¹
Harvard Business Review puts it succinctly in their recent commentary:
“If 95% of the tens of billions invested in experimentation has failed to produce value, is the effort to experiment with AI a complete waste?” ⁴
Not if your experiments are run with rigor and designed with a clear path to scale.
What Actually Works: Lessons from the 5%
So if you’re not going to bet on monolithic AI, where do you put your chips? Here’s what we can learn from the 5% that win (or at least move beyond pilot purgatory):
1. Start extremely narrow.
Don’t try to solve the entire proposal, legal, marketing, or operations workflow at once. Solve one step—e.g. automated answers for RFP questionnaire modules. Be the best tool in that slice.
2. Embed the AI where people already work.
Don’t force users into new systems. Embed your AI into existing CRMs, document systems, workflows. The less cognitive friction, the better.
3. Require learning & feedback loops.
Every user correction, every accepted or rejected suggestion, should be captured and fed back in. The system should adapt and improve. If version N+1 is no better than version N, you’re doing it wrong. (A minimal sketch of what this can look like appears below.)
4. Measure hard—and early.
Set clear KPIs (reduced response time, error rate, man‑hours saved, conversion uplift). Use them to decide whether to scale or kill.
5. Don’t underestimate domain alignment.
Vertical AI (industry or task specific) often outperforms generalized models precisely because the domain rules are known, the content is structured, and the constraints are tighter.
When you do this well, you’re not just deploying AI—you’re rewiring how work gets done.
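To make the feedback-loop and KPI points concrete, here is a minimal sketch in Python. Everything in it (the FeedbackEvent fields, the KPI methods, the sample numbers) is illustrative rather than a description of any particular product or of the MIT study; it only shows the shape of the pattern: capture every accept, reject, and correction, and let the KPIs decide whether to scale.

```python
# Minimal sketch of a feedback loop for an AI pilot. All names here
# (FeedbackEvent, FeedbackLog, the KPI methods) are illustrative, not
# any particular product's API.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FeedbackEvent:
    suggestion_id: str
    accepted: bool                 # did the user keep the AI's answer as-is?
    correction: str | None = None  # the human rewrite, if the answer was edited
    seconds_saved: float = 0.0     # versus the baseline time for this task
    timestamp: datetime = field(default_factory=datetime.now)

@dataclass
class FeedbackLog:
    events: list[FeedbackEvent] = field(default_factory=list)

    def record(self, event: FeedbackEvent) -> None:
        self.events.append(event)

    def acceptance_rate(self) -> float:
        """KPI: share of suggestions users kept without edits."""
        if not self.events:
            return 0.0
        return sum(e.accepted for e in self.events) / len(self.events)

    def hours_saved(self) -> float:
        """KPI: cumulative time saved across all logged suggestions."""
        return sum(e.seconds_saved for e in self.events) / 3600

    def corrections(self) -> list[str]:
        """Rejected answers plus their human rewrites: the raw material
        for improving source content, prompts, or fine-tuning data."""
        return [e.correction for e in self.events if e.correction]

# Usage: log real interactions, review the KPIs on a schedule, and only
# scale the pilot once they clear the targets you set up front.
log = FeedbackLog()
log.record(FeedbackEvent("rfp-q-142", accepted=True, seconds_saved=480))
log.record(FeedbackEvent("rfp-q-143", accepted=False, seconds_saved=120,
                         correction="Our SOC 2 Type II report is renewed annually."))
print(f"Acceptance rate: {log.acceptance_rate():.0%}, hours saved: {log.hours_saved():.2f}")
```

The data structure itself is not the point; the point is that the pilot isn’t “done” until something like this log exists, gets reviewed on a schedule, and drives the scale-or-kill decision.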
What Goes Wrong in Many AI Programs
Here are a few of the typical failure modes that drag 95% of pilots into limbo:
Integration malpractice. A great model stuck in a silo isn’t useful. If your AI doesn’t talk to your systems, your users, your data flows—it won’t stick.
Bad data hygiene. Garbage in, garbage out. If your source content is messy, inconsistent, siloed, it drags any AI system down—even the good ones.
Feature creep. Every new “cool” idea balloons complexity and fragility.
No governance or ownership. Who steers, who owns, who iterates? Without clear accountability, the project dies.
Ignoring trust. If users don’t trust the suggestions, they ignore them. Interpretability helps, but feedback and transparency often matter more.
A Better Play: The AI You Need, Not the AI You Want
Let me be blunt: you don’t need 50 AI modules. You need one or two that solve your biggest bottleneck, tightly engineered, with feedback baked in.
That’s what we build at Settle. We don’t try to be the Swiss Army AI. We try to be the scalpel. We tackle RFP responsiveness—where speed, coherence, knowledge, and accuracy matter. We build the feedback loops, integrate into existing workflows, and let you expand only when the return is obvious.
Because if you’re going to invest in AI, don’t just do it. Do it right.
Give us a call and let’s talk about how a focused AI implementation can get your team to the championship.
¹ The GenAI Divide: State of AI in Business 2025, MIT Media Lab / Project NANDA.
² “MIT Finds 95% Of GenAI Pilots Fail Because Companies Avoid Friction,” Forbes.
³ DemandLab, commentary on the MIT report (5 takeaways).
⁴ “Beware the AI Experimentation Trap,” Harvard Business Review.