
The Future of AI: My Top 8 Predictions for 2026

AI in 2026: The year of reckoning where specific solutions will thrive, generic tools will falter, and new challenges will redefine enterprise AI adoption.


2025 was the year of AI deployment. 2026 will be the year of AI reckoning.

We're about to see a significant divergence: spectacular successes in specific domains, painful failures in generic approaches, and the emergence of infrastructure challenges that will define the next wave. Some of you will nod along. Others will forward this to your CFO with "told you so" in the subject line.

Here are my predictions.

#1. Enterprise AI adoption hits a wall (and generic chatbots face their Enterprise Search moment)

A lot of enterprise AI investment has been "use AI" without proper business case mapping. Companies deployed "enterprise ChatGPT" and Microsoft Copilot without understanding where to apply them for actual leverage. Technology first, problem second. The classic enterprise software disease, now with more GPUs.

The result? Disappointing adoption, unclear ROI, and renewal conversations that make everyone uncomfortable.

Here's the thing: transformative AI adoption isn't about an assistant chatbot. It's about reimagining processes from the ground up based on AI capabilities, defining new workflows, not just sprinkling AI on existing ones and hoping for magic.

The chatbot-everywhere approach reminds me of Enterprise Search in the 2010s. (Yes, I'm old enough to have lived through that hype cycle.) Enterprise Search failed for specific reasons: access control complexity, difficulty mapping permissions across systems, lack of context. Users would search, get results they couldn't access or didn't trust, and go back to asking Linda in accounting.

Generic AI assistants will hit the same wall. They'll get replaced by force multipliers embedded in specific workflows – not a magical search box that answers all questions, but tools that transform how particular jobs get done.

What does real transformation look like? A few examples:

Dynamic data products. Data vendors sitting on massive repositories can now generate custom analyses, reports, and products on demand based on user requests. Instead of selling static datasets, you sell answers. The economics change completely and so does the competitive moat.

Private equity due diligence and sourcing. Large PE firms have amassed enormous troves of data across decades of deals: portfolio company performance, market analyses, valuation models, sector deep dives. Most of it sits in SharePoint folders that haven't been opened since 2019. AI can mine this to find new gems, improve valuation accuracy, and identify acquisition targets that would have taken armies of analysts to surface. The firms that figure this out first gain a durable edge. The firms that don't will keep paying for armies of analysts.

The pattern: not "add AI to the workflow" but "rebuild the workflow around what AI makes possible." The teams that think this way will deliver genuine successes. Everyone else will be stuck explaining why adoption is lagging or ROI is elusive.

#2. Renewal pressure hits; valuations get more distributed
Why "off-the-shelf" AI strategies are hitting a renewal crisis

Here's what happened: enterprises "bought AI" from their current vendors or whoever had the most hype. Microsoft Copilot, Salesforce Einstein, the usual suspects. The board wanted an AI strategy, so they bought one “off the shelf.”

These expensive tools have lagging adoption. Microsoft already slashed their Copilot revenue forecast, and they won't be alone. When you deploy a tool without a clear workflow transformation attached to it, usage stays low and value stays theoretical. "We have AI" makes for a nice slide. It doesn't make for a renewal.

This leads to pressure at renewal time. Net revenue retention takes a hit. Not every vendor, but the ones who sold the dream without delivering daily value will face a reckoning in 2026. Some very impressive ARR numbers are about to get a lot less impressive.

The flip side: valuation spreads widen. Proven winners pull further ahead while the "AI-powered everything" crowd discovers that AI-powered nothing is still nothing. And it opens the door for new vendors with different, more specific approaches – even in categories that seem crowded.

The market will reward specificity over generality, and technologies that deliver real value over those trading on hype and early adoption.

#3. Code quality goes up and Software Engineers are front and center

If there's one area where we've hit undeniable product-market fit, it's coding.

Tools like Claude Code, Cursor, Codex, and Gemini fundamentally change how software gets built. But the real story isn't "developers work faster." It's that work we always wanted to do but couldn't justify is suddenly viable.

Everyone wants comprehensive test coverage. Everyone knows they should have it. But writing tests for every edge case, then maintaining and updating them as requirements change, was never economically viable. The ROI math didn't work. So we wrote the critical tests and iterated as needed.

Now the economics flip. Full test coverage becomes cheaper than debugging production issues. Code review on every commit becomes economically achievable. Documentation at creation time becomes cheaper than a “doc team” reconstructing intent after the fact.

Here's the counterintuitive take: code quality goes up, not down. Quality improves because state-of-the-art quality finally makes economic sense.

Yes, there will be examples of badly supervised AI coding producing disasters. Someone will ship an AI-generated authentication system that stores passwords in plain text, and we'll all share it on Twitter and feel superior. But overall, the quality bar rises.

One more thing: software engineers are back, front and center. This isn't citizen development finally arriving. (It's not arriving. Stop waiting.) You need expertise to wield these tools well. Senior engineers who learn them become mass force multipliers. The ones who refuse to learn... won't. AI will make mediocre developers good, and good developers great.

#4. Innovation economics change everywhere
Lowering the cost of iteration in marketing and product development

The software story is just one instance of a broader pattern. The constraint on quality and innovation has always been the cost of experimentation, iteration, and all the mechanical work surrounding creative and engineering work. AI removes that constraint.

2026 will be just the start of this shift, but the companies that figure it out early will compound their advantage for years to come.

The same economics flip is happening everywhere:

  • Creative marketing: Test fifteen copy variations in hours instead of weeks. Iterate on creative until it actually works, not until you run out of budget.
  • Market Analysis: Model 50 micro-segments and "white space" product opportunities in a single afternoon, iterating on market strategy until it’s a mathematical certainty rather than a quarterly best guess.
  • Investment analysis: Model twelve scenarios for your thesis instead of the three you had analyst time for. What should be table stakes in due diligence becomes genuinely thorough.
  • Product development: Prototype five concepts in days instead of months, then automate the downstream launch mechanics—the legal, compliance, and GTM hurdles—that usually kill time-to-market.

The companies that internalize this don't just work faster. They work differently. They try things that were never worth trying before. And some of those things will be transformative.

This is a multi-year revolution, not a 2026 event. But 2026 is when the early movers start pulling away and the gap becomes visible.

#5. Access control and Agent authorization become critical infrastructure

Here's a massive unsolved problem that's about to get very loud: you can't give an AI agent root access to all enterprise knowledge and systems. But you also can't manually configure permissions for every agent interaction.

Today's approach is too often "give the agent some permissions and hope for the best." That works until it doesn't. And when it doesn't, it'll be spectacular.

Agent authorization – dynamic, contextual, auditable, user-restricted – becomes a real infrastructure category in 2026. How do you scope what an agent can access? How do you pass user- and work-scoped permissions? How do you audit what the agent accessed? How do you revoke access when something goes wrong? These aren't theoretical questions anymore.

This is foundational for any serious enterprise AI deployment. The building blocks already exist (workload and workforce identity federation, token-bearing credentials, and so on); now it's time for widespread adoption.
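To show the shape of the pattern, here is a minimal sketch of scoped, short-lived, auditable agent credentials. Every name here (`AgentToken`, `authorize`, the scope strings) is hypothetical; a real deployment would build on standard mechanisms like OAuth token exchange and workload identity federation rather than hand-rolled checks.

```python
import time
from dataclasses import dataclass

@dataclass
class AgentToken:
    agent_id: str
    user_id: str        # the human the agent acts on behalf of
    scopes: frozenset   # e.g. {"crm:read", "docs:read"}
    expires_at: float   # short-lived by design

    def is_valid(self) -> bool:
        return time.time() < self.expires_at

audit_log: list = []

def authorize(token: AgentToken, action: str) -> bool:
    """Allow an action only if the token is unexpired and in scope,
    and record every decision so access is auditable after the fact."""
    allowed = token.is_valid() and action in token.scopes
    audit_log.append({
        "agent": token.agent_id,
        "user": token.user_id,
        "action": action,
        "allowed": allowed,
        "at": time.time(),
    })
    return allowed

# A token scoped to read-only CRM access, expiring in five minutes.
token = AgentToken("report-agent", "alice", frozenset({"crm:read"}), time.time() + 300)
assert authorize(token, "crm:read")       # in scope: allowed
assert not authorize(token, "crm:write")  # out of scope: denied, and logged
```

The key properties are the ones the questions above demand: scoping (the `scopes` set), user restriction (the token carries the delegating user), revocation (short expiry), and auditability (every decision lands in a log, allowed or not).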

#6. Dynamic tool discovery goes mainstream

We can't stuff a thousand tool descriptions into every prompt. Context windows are big, but they're not infinite, and every token spent on tool descriptions is a token not spent on actual reasoning.

Models need help discovering what's available and how to use it contextually, at runtime, based on the task at hand. This drives specialized UX patterns and agent architectures. MCP and similar protocols are early signals of where this is heading.

Significant innovation is right around the corner, as the number of available tools and integrations explodes. Expect new patterns, and new approaches.
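One way to picture runtime discovery: instead of injecting every tool description into the prompt, rank a registry of tools against the task and surface only the most relevant few. The sketch below uses naive keyword overlap purely for illustration; the tool names are invented, and a real system would likely use embedding-based retrieval or an MCP-style listing step.

```python
# Hypothetical tool registry: name -> (description, relevance keywords).
TOOLS = {
    "get_invoice": ("Fetch an invoice by ID from the billing system.",
                    {"invoice", "billing", "payment"}),
    "search_crm":  ("Search customer records in the CRM.",
                    {"customer", "account", "crm"}),
    "run_sql":     ("Run a read-only SQL query against the warehouse.",
                    {"sql", "query", "table", "warehouse"}),
    "send_email":  ("Send an email on the user's behalf.",
                    {"email", "notify", "message"}),
}

def discover_tools(task: str, limit: int = 2) -> list[str]:
    """Rank tools by keyword overlap with the task and return the top few,
    so the prompt carries a handful of descriptions instead of all of them."""
    words = set(task.lower().split())
    scored = sorted(TOOLS.items(),
                    key=lambda kv: len(kv[1][1] & words),
                    reverse=True)
    return [name for name, (_, kw) in scored[:limit] if kw & words]

# Only billing- and customer-related tools are surfaced for this task.
print(discover_tools("find the overdue invoice for this customer"))
# → ['get_invoice', 'search_crm']
```

Trivial as the matching is, the context-budget effect is the real point: two tool descriptions reach the model instead of a thousand, leaving the tokens for actual reasoning.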

Recently we’ve been innovating in this area with specialized skills for agents: the ability to dynamically load expert instructions and executable code when relevant to a task. Regardless of model, skills allow on-demand tool access, more efficient context management, code execution when appropriate, and significantly better calculations. We all know LLMs aren’t good at math, but that’s changing now. Expect more of this in 2026.

#7. Google and Anthropic extend their enterprise lead

Both Google and Anthropic have differentiated, complementary model families that are – in my view – significantly ahead for enterprise tasks.

Google excels at information understanding, analysis, and visual processing. For anything involving comprehension at scale, they're exceptional. Gemini's ability to process and reason over large documents and complex data is genuinely impressive.

Claude excels at complex reasoning, nuanced judgment, and sophisticated task execution. When you need a model to think through a hard problem or handle something that requires actual understanding, Claude is what we reach for.

For enterprise applications, I see both pulling further ahead of the pack in 2026. OpenAI remains strong in consumer and developer mindshare – ChatGPT is still the verb people use – but the enterprise gap is real and growing.

Worth watching: Grok 4.1 is impressive – really fast, strong across many dimensions, and improving quickly on complex tasks. xAI isn't going away, and I’m really curious to see the next iterations.

#8. Training junior staff in an AI-native world

Here's an emerging challenge nobody's discussing enough: AI is a force multiplier. Fantastic for experienced engineers, analysts, and operators who can use it to do 10x or even 100x more.

But what about junior staff who need to learn on the job?

If seniors are AI-augmented and juniors use the same tools, how do juniors develop the judgment and experience that makes the tools valuable in the first place? You can't learn taste by having AI do the tasting for you. You can't develop engineering intuition by accepting every suggestion Copilot makes.

We need new approaches to staff development – likely AI-assisted, but deliberately constrained. The goal is building expertise, not just output. Training wheels, not autopilot.

Expect this to become a significant conversation in 2026. The companies that figure it out will build durable talent advantages. The companies that don't will discover that their "AI-native" junior hires can't actually do anything without AI – which will be fine until it isn't.

Conclusion: closing the gap between technology and workflow

2026 won't be the year AI hype dies. The hype is too profitable to die. But it'll be the year reality catches up for better and for worse.

The winners will be specific, opinionated solutions that transform defined workflows. The losers will be generic tools waiting for users to figure out the use case. (Spoiler: They won't figure it out. They'll just stop logging in.)

We’re already hearing the battle-weary refrain from customers we talk to: "We’ve tried AI three times with three different vendors and failed." This isn't a failure of technology; it's a failure of approach. If you apply 2010s "off-the-shelf" procurement logic to 2026 AI, you are destined for a fourth failure.

An AI backlash is brewing, but don't mistake a botched implementation for a limited technology. The "backlash" is actually the market’s way of demanding maturity. If your first three attempts hit a wall, don't walk away—re-examine the workflow. 

The technology is ready. The workflows are not. That's the gap to close.
