Vertesia Blog

Simpler Agent Architecture Wins: A New Way to Think About AI Agents

Written by Mary Kaplan | May 5, 2026

As enterprises race to deploy AI agents across their operations, many teams are discovering a frustrating truth: the frameworks they rely on to build those agents are getting in the way. Instead of accelerating delivery, complex frameworks introduce sprawling abstractions, bespoke vocabularies, and mounting technical debt — before a single line of business value is shipped.

At Vertesia, we have been thinking hard about this problem. And we believe the answer is not a better taxonomy. It is a simpler one. Our CEO, Eric Barroca, recently laid out his thinking on this in detail — if you want the full technical perspective, his post is worth reading.

The cost of complexity

Every new concept a framework introduces carries a price: more onboarding time for new engineers, more surface area for mistakes, more decisions that end up being framework-shaped rather than business-shaped, and more friction between a prototype and a production deployment.

Ultimately, teams that should be focused on solving business problems spend weeks negotiating with framework abstractions instead.

The instinct to add more (more concepts, more specialization, more hierarchy) is understandable. AI systems are often complex. But complexity in a framework does not make the underlying challenge simpler. It just relocates it into your application code, where it compounds over time.

What makes Vertesia simpler?

At Vertesia, we organize our agent runtime around three foundational primitives: Interactions, Skills, and Tools.

  • Interactions are the core building block. An Interaction carries the prompt, model configuration, tool access, and optional output schema for a given capability. What makes Interactions powerful is that the same definition can run in multiple modes: as a direct structured call for fast, synchronous tasks, or as a durable, multi-turn agent run with streaming, checkpointing, and signals. Interactions don’t require separate implementations for each case. One definition. Multiple execution paths.
  • Skills address one of the most persistent pain points in agentic systems: tool overload. Research and real-world deployments confirm the same finding — agents perform worse when exposed to too many tools at once. Our Skills model addresses this through progressive disclosure. Capabilities are made available on demand, and when an agent activates a skill, it receives both the tools associated with that capability and the guidance for using them well. Domain knowledge and capability travel together. This also means that institutional knowledge (compliance requirements, data quality standards, approval workflows) can be encoded in Skills and made available consistently across every agent that needs it.
  • Tools are the action layer. But not all tool systems are created equal. The difference between a tool system that functions as infrastructure and one that is merely a collection of wrappers is significant. Vertesia's tool layer provides consistent output and error formats, context protection when results are large, artifacts as first-class outputs, and interoperability across search, document management, files, and external integrations. Custom tools built in TypeScript, Python, Go, or any language that speaks HTTP join the same runtime model, with the same observability and the same guarantees. Extensibility does not come at the cost of reliability.
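To make the three primitives concrete, here is a minimal sketch in TypeScript. Every name here (`Interaction`, `Skill`, `visibleTools`, the model id, and so on) is illustrative, not Vertesia's actual API; the point is the shape of the model — one Interaction definition, Skills that disclose tools progressively, and Tools as plain functions.

```typescript
// Illustrative types only -- not Vertesia's real API.
type Tool = {
  name: string;
  description: string;
  run: (input: Record<string, unknown>) => unknown;
};

type Skill = {
  name: string;
  guidance: string; // how to use these tools well
  tools: Tool[];    // capability and knowledge travel together
};

type Interaction = {
  name: string;
  prompt: string;
  model: string;
  skills: Skill[];       // made available on demand, not all at once
  outputSchema?: object; // optional structured output
};

// Progressive disclosure: an agent only sees the tools of the
// skills it has activated, never the full catalog.
function visibleTools(ix: Interaction, activeSkills: string[]): Tool[] {
  return ix.skills
    .filter((s) => activeSkills.includes(s.name))
    .flatMap((s) => s.tools);
}

const classify: Interaction = {
  name: "classify-document",
  prompt: "Classify the attached document by type and sensitivity.",
  model: "example-model", // hypothetical model id
  skills: [
    {
      name: "compliance",
      guidance: "Flag anything containing PII before classification.",
      tools: [{ name: "pii-scan", description: "Scan text for PII", run: () => [] }],
    },
    {
      name: "search",
      guidance: "Search only when the document references other records.",
      tools: [{ name: "doc-search", description: "Search the document store", run: () => [] }],
    },
  ],
};

// With only "compliance" activated, the agent sees one tool, not two.
console.log(visibleTools(classify, ["compliance"]).map((t) => t.name));
```

The same `classify` definition could then back either a single structured call or a durable agent run; nothing about the definition itself changes between the two execution paths.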

We’re built for the open ecosystem

A modern agent runtime cannot operate as an island. Vertesia is designed to participate in the emerging open protocol ecosystem, in both directions.

Capabilities from MCP servers become available inside Vertesia with full scoping and governance. At the same time, anything built in Vertesia (an Interaction, an agent, a tool) is automatically published as an MCP endpoint and an A2A endpoint, consumable by external systems and other agents without bespoke integration work.

In practice, this means a single Interaction definition can serve simultaneously as an internal agent, a callable sub-agent tool, an MCP capability for a third-party application, and an A2A endpoint for another agent system.
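For illustration, here is roughly what invoking a published capability over MCP could look like from a third-party client. The endpoint URL, tool name, and arguments are all hypothetical; the request envelope follows the Model Context Protocol's JSON-RPC `tools/call` method.

```typescript
// Hypothetical endpoint -- a real deployment would publish its own URL.
const endpoint = "https://example.invalid/mcp";

// An MCP client invokes a published Interaction like any other tool,
// using the protocol's standard JSON-RPC 2.0 tools/call method:
const request = {
  jsonrpc: "2.0" as const,
  id: 1,
  method: "tools/call",
  params: {
    name: "classify-document",           // the Interaction, exposed as a tool
    arguments: { documentId: "doc-42" }, // illustrative input
  },
};

// A real client would send this over HTTP or stdio, e.g.:
// await fetch(endpoint, { method: "POST", body: JSON.stringify(request) });
console.log(JSON.stringify(request, null, 2));
```

The external system needs no Vertesia-specific SDK; anything that speaks MCP can consume the capability.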

As the AI infrastructure ecosystem converges on open protocols, runtimes that speak only their own language will become bottlenecks. Runtimes that speak the open protocols in both directions become the foundation that other systems are built on.

We offer deterministic control and flexibility

There is an important architectural distinction worth naming clearly: agents are not the right tool for every job.

Business workflows that require typed state, deterministic transitions, human approval gates, and a full audit trail need a different layer. At Vertesia, that is the role of our Process Engine: a control plane that sits above the agent runtime and governs hybrid workflows where deterministic steps and AI reasoning work together.

The key insight is that the same tools available to an agent inside a reasoning loop can also run as deterministic process nodes with no model in the path at all. If a step is known and defined, execute it deterministically. If a step requires judgment, bring an agent in with access to the same tools.
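As a sketch of that idea (the names are illustrative, not the Process Engine's actual API): a process step is either a deterministic tool invocation with no model involved, or an agent step that reasons with access to the same tools.

```typescript
// Illustrative sketch -- not the actual Process Engine API.
type Tool = (input: string) => string;

type Step =
  | { kind: "deterministic"; tool: Tool }           // known and defined: no model in the path
  | { kind: "agent"; goal: string; tools: Tool[] }; // requires judgment: model in the loop

// The same tool works in both kinds of step.
const extractTotals: Tool = (doc) => `totals(${doc})`;

function runStep(step: Step, input: string): string {
  if (step.kind === "deterministic") {
    return step.tool(input); // direct execution, fully auditable
  }
  // Stand-in for an agent run: a model would decide when and how
  // to call step.tools while pursuing step.goal.
  return `agent("${step.goal}") with ${step.tools.length} tool(s) on ${input}`;
}

console.log(runStep({ kind: "deterministic", tool: extractTotals }, "invoice.pdf"));
console.log(runStep({ kind: "agent", goal: "reconcile discrepancies", tools: [extractTotals] }, "invoice.pdf"));
```

The discriminated union is the point: the choice between deterministic execution and agent reasoning is made per step, while the tool layer underneath stays identical.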

This separation, agents for open-ended reasoning and processes for deterministic control, is what makes enterprise-grade AI systems both flexible and auditable.

What this means for your teams

Vertesia is simply less complex than most agent platforms on the market. With three primitives, your teams can build a single-turn document classifier, a long-running contract analysis agent, a specialist sub-agent callable by other agents, a skill-first assistant scoped to a specific domain, and a bounded AI worker inside a deterministic business process. No new abstraction for each case, and no onboarding burden from a framework designed to cover every possible scenario.

The complexity of AI systems does not go away. But with the right architecture, it lives in the platform, not in your application code.

That is the bet we have made at Vertesia. And we think it is the right one for enterprise teams that need to move from experimentation to production without accumulating architectural debt along the way.

For a deeper technical dive into the architecture behind these ideas, read Eric’s original post: Why Agent Frameworks Have Too Many Nouns.