Our CEO, Eric Barroca, recently posted “The Future of AI: My Top 8 Predictions for 2026,” with some sharp predictions about where enterprise AI is headed. Eric’s closer to the front lines than most—he’s been driving our platform since 2023 and working directly with customers on the hard stuff—so he’s earned the right to forecast.
I’m going to do something different: talk about where enterprise AI is right now, the business challenges that keep showing up, and why so many organizations still struggle to turn AI into real, measurable ROI. Along the way, I’ll share a few observations that (in my experience) make the difference between experimentation and production success.
If you’ve seen the MIT NANDA “State of AI in Business 2025” study (July 2025), you know the story: plenty of activity, limited impact.
Source: “State of AI in Business 2025,” MIT NANDA, July 2025
Yeah, yeah—I hear you. “Old news.” “The data is dated.”
Here’s the sad truth: after dozens and dozens of customer and prospect conversations over the last six months, not much has changed.
My read of the study—and what I see in the market—comes down to this:
That gap—between pilot and production—is the whole game.
Almost every company starts with a chatbot. Some are experimenting with enterprise Copilot, ChatGPT, and similar tools. Some are even building lightweight “agents” on top.
And to be clear: I use the enterprise version of ChatGPT all the time. It’s excellent for research, synthesis, first drafts, and routine tasks (yes, like spicing up a blog post). But chatbots have limits that become painfully obvious the moment you try to scale beyond personal productivity.
Here are the ones I see most often:
The chat interface is “one size fits all”
Chat is great for ad hoc Q&A. But when you’re trying to automate repeatable, multi-step work, chat becomes an awkward wrapper around a complex process. And when the tool doesn’t “show its work,” users understandably question how it arrived at the result.
Prompt quality is a hidden dependency
Better prompts produce better outcomes. That’s not a feature—it’s a risk. Two users can ask “the same thing” and get wildly different results based on experience, wording, and context. In enterprise settings, that variability shows up as inconsistent quality and unpredictable value.
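One common mitigation is to standardize prompting behind a template, so users supply only the variable parts. Here is a minimal sketch in Python; the template wording and field names are illustrative assumptions, not any particular product’s API:

```python
# A fixed template constrains the variable parts of a prompt, so two users
# asking "the same thing" produce structurally identical instructions.
TEMPLATE = (
    "You are a contracts analyst. Answer using only the excerpt below.\n"
    "Task: {task}\n"
    "Excerpt:\n{excerpt}\n"
    "Respond in at most {max_sentences} sentences and cite the clause number."
)

def build_prompt(task: str, excerpt: str, max_sentences: int = 3) -> str:
    """Render a consistent prompt; users supply only the task and source text."""
    return TEMPLATE.format(
        task=task.strip(), excerpt=excerpt.strip(), max_sentences=max_sentences
    )

clause = "Clause 7: Either party may terminate with 30 days written notice."
# Two differently phrased requests still share role, constraints, and format.
p1 = build_prompt("Summarize the termination terms", clause)
p2 = build_prompt("summarize the termination terms", clause)
```

The point isn’t the template itself; it’s that the role, constraints, and output format stop being a per-user skill and become a controlled part of the application.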
Complex documents and large collections still break things
Context windows are real. And converting complex documents into plain text is often not the best way to preserve meaning. Tables, charts, embedded visuals, long threads—these are common failure points. When you can’t tell what the model used (or missed), trust erodes fast.
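To make the failure mode concrete, here is a toy illustration (my own example, not from any study) of why flattening a table to plain text loses meaning, while a structure-preserving rendering such as a Markdown table keeps the row and column associations:

```python
# A small pricing table as structured data.
header = ["Region", "Q1", "Q2"]
rows = [["EMEA", "1.2M", "1.4M"], ["APAC", "0.8M", "1.1M"]]

def flatten(header, rows):
    """Naive extraction: just the words, in order. Which figure is EMEA Q2?"""
    return " ".join(header + [cell for row in rows for cell in row])

def to_markdown(header, rows):
    """Structure-preserving rendering: row/column relationships survive."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(row) + " |" for row in rows]
    return "\n".join(lines)

flat = flatten(header, rows)  # "Region Q1 Q2 EMEA 1.2M 1.4M ..."
md = to_markdown(header, rows)
```

In the flattened version, the model has to guess which number belongs to which region and quarter; in the Markdown version, the association is explicit in every row.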
Behavior can be unpredictable
When you combine variable prompts, uncertain context, and complex inputs, you get a tool that can be brilliant one moment—and confidently wrong the next. Hallucinations don’t just create bad answers; they create mistrust, which kills adoption.
The output is usually just… text
Sometimes that’s fine. Often it isn’t. Want a properly formatted slide deck? A usable spreadsheet? An update to your CRM? Have fun cutting and pasting.
So yes—chatbots have their place. But they’re mostly “working with AI,” where AI assists a human. The bigger opportunity is “AI doing the work” in the flow of a process—using the right interfaces, orchestration, and integrations.
And here’s the biggest problem: chatbots have become so ubiquitous that they constrain how people think about AI. Too many organizations assume AI equals chat. One of the biggest “aha” moments we see is when teams realize AI can fully automate knowledge-worker tasks—not just help someone do them faster.
We’re repeating a familiar pattern: putting technology before transformation.
Twenty-five years ago, when workflow automation was the hot topic, we used to say:
“If you automate a bad business process, you end up with a faster, bad business process.”
Welcome to AI in 2026.
Many organizations treat AI as the goal, instead of a means to a business outcome. They start with the tool they know (usually a chatbot) and then go hunting for “AI-shaped nails” to match that hammer. The result is predictable: limited impact, shallow adoption, and disappointment.
If you want AI to transform your business, start like a transformation project: define the business outcome first, redesign the process around it, and only then choose the technology.
Over-simplified? Sure. But the point stands: the promise of AI is transformation, not technology.
Eric made a great point in his post: AI makes things economically viable that weren’t before.
That is a very different objective than “reducing headcount” or “cutting costs.” And yet, many companies fixate there.
Quick disclaimer: yes, AI can absolutely drive efficiency and automate labor-intensive work.
But if cost reduction is your only lens, you’ll miss the bigger opportunity. Our most successful customers are using AI to do things they simply couldn’t do before: launching new products, opening new revenue streams, increasing personalization, improving service levels, and creating real competitive differentiation.
A framing I like: imagine you suddenly have access to a virtually unlimited number of newly minted MBAs. What would you have them analyze? What decisions would you accelerate? What initiatives would you finally tackle? What new markets could you explore? What changes would you make to sales, service, and strategy?
That’s the mindset shift.
DIY is slowing innovation and value delivery
Full disclosure: this is self-serving. We’re a platform vendor, and we often end up debating DIY approaches.
But here’s the sincere version: DIY is often about ownership, not velocity—and it can be a real inhibitor to time-to-value.
If you’ve seen the “AI stack” diagrams from AWS and others, you know what I mean: you’re stitching together storage, security, orchestration, model access, evaluation, observability, governance, retrieval, document processing, integration layers… and you still haven’t built the actual AI applications that move the business.
Enterprises often justify DIY to avoid vendor lock-in. Ironically, many DIY efforts create a different lock-in: dependence on one hyperscaler’s tooling and one model ecosystem. And switching later is rarely as simple as people think.
We saw this movie with the cloud: early DIY, then standardization and platforms, and now most teams focus on building differentiated applications—not rebuilding plumbing.
Accuracy begins with your data
We keep relearning the same lesson: garbage in, garbage out.
This gets harder in AI because enterprise data is often unstructured—documents, images, PDFs, scans, tables, charts, and messy collections of “stuff.”
Many failed AI projects I see share a pattern: teams feed documents in as-is, assuming the model will read them the way a person would.
But LLMs aren’t humans. They can struggle with charts, tables, and embedded visuals. They can lose context. They can contradict themselves across long threads. And accuracy gaps—small or large—are often what separate “nice demo” from “ready for production.”
Try it yourself: upload a long, complex document with tables and charts into a chatbot and ask detailed questions. Then ask yourself: would you trust this app to run your enterprise?
The good news is there are solutions: document preprocessing, structured extraction, chunking strategies, summarization, retrieval techniques, and better context management. But the point remains:
If outputs aren’t accurate, it’s usually not an “AI problem.” It’s a data readiness and preparation problem.
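As one example of those techniques, the chunk-and-retrieve pattern can be sketched in a few lines of Python. This is deliberately simplified: production systems use embeddings, rerankers, and document-aware splitting rather than fixed character windows and keyword overlap:

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping windows so a fact that straddles one
    boundary still appears intact in the neighboring chunk."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def retrieve(chunks: list[str], query: str, k: int = 2) -> list[str]:
    """Toy keyword retrieval: score chunks by query-term overlap and keep
    the top k, so only relevant text competes for the context window."""
    terms = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(terms & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

doc = ("The renewal clause requires 90 days notice. " * 5 +
       "Payment terms are net 45 for enterprise accounts. " * 5)
context = retrieve(chunk(doc), "what are the payment terms")
```

Even this toy version shows the shape of the work: prepare the data before the model ever sees it, and decide deliberately what goes into the context.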
I’ll close with this. A lot of organizations are stuck between pilots and production. Accuracy is hard. Integration is hard. Change management is hard. And many leaders are starting to wonder if reality will ever catch up to the hype.
At Vertesia, we like to say: AI is easy; getting AI right is difficult.
Some of the lessons are business lessons: start with outcomes, not tools, and transform the process rather than just speeding it up. Some are technical: prepare your data, engineer for accuracy, and integrate with the systems where the work actually happens.
Eric predicted we’re approaching a serious divergence: the few will win big in specific domains, and the many will keep struggling to get value.
I believe that. And I also believe this:
AI success is achievable. The impact is real. And the organizations that scale AI across the enterprise will build lasting competitive advantage.