As enterprises race to deploy AI agents across their operations, many teams are discovering a frustrating truth: the frameworks they rely on to build those agents are getting in the way. Instead of accelerating delivery, complex frameworks introduce sprawling abstractions, bespoke vocabularies, and mounting technical debt — before a single line of business value is shipped.
At Vertesia, we have been thinking hard about this problem. And we believe the answer is not a better taxonomy. It is a simpler one. Our CEO, Eric Barroca, recently laid out his thinking on this in detail — if you want the full technical perspective, his post is worth reading.
Every new concept a framework introduces carries a price: more onboarding time for new engineers, more surface area for mistakes, more decisions that end up being framework-shaped rather than business-shaped, and more friction between a prototype and a production deployment.
Ultimately, teams that should be focused on solving business problems spend weeks negotiating with framework abstractions instead.
The instinct to add “more” (more concepts, more specialization, more hierarchy) is understandable. AI systems are often complex. But complexity in a framework does not make the underlying challenge simpler. It just relocates it into your application code, where it compounds over time.
At Vertesia, we organize our agent runtime around three foundational primitives: Interactions, Skills, and Tools.
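To make the shape of this model concrete, here is a minimal sketch of how the three primitives might relate to one another. The field names and types are illustrative assumptions, not Vertesia’s actual schema: a Tool is a concrete capability, a Skill bundles related tools around a domain, and an Interaction is the model-facing task definition that composes them.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch of the three primitives; all names and fields
# here are illustrative, not Vertesia's actual API.

@dataclass
class Tool:
    name: str
    run: Callable[..., object]  # a concrete capability the runtime can invoke

@dataclass
class Skill:
    name: str
    tools: list[Tool]           # a scoped bundle of related capabilities

@dataclass
class Interaction:
    name: str
    prompt: str                 # the model-facing task definition
    skills: list[Skill]         # the capabilities the task may draw on
```

The point of the sketch is the small surface area: everything an engineer builds is one of these three things, composed rather than specialized.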
A modern agent runtime cannot operate as an island. Vertesia is designed to participate in the emerging open protocol ecosystem, in both directions.
Capabilities from MCP servers become available inside Vertesia with full scoping and governance. At the same time, anything built in Vertesia (an Interaction, an agent, a tool) is automatically published as an MCP endpoint and an A2A endpoint, consumable by external systems and other agents without bespoke integration work.
In practice, this means a single Interaction definition can serve simultaneously as an internal agent, a callable sub-agent tool, an MCP capability for a third-party application, and an A2A endpoint for another agent system.
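The fan-out described above can be sketched in a few lines. This is a hypothetical model of the publication step, not Vertesia’s actual implementation; the `publish` function and the URL patterns are assumptions made for illustration.

```python
from dataclasses import dataclass

# Hypothetical model of dual publication; names and paths are illustrative.

@dataclass
class Interaction:
    name: str
    description: str

def publish(interaction: Interaction) -> dict[str, str]:
    """One definition fans out to every consumption surface at once."""
    return {
        "internal_agent": f"/agents/{interaction.name}",   # runs as an agent
        "sub_agent_tool": f"/tools/{interaction.name}",    # callable by other agents
        "mcp_endpoint":   f"/mcp/{interaction.name}",      # for MCP clients
        "a2a_endpoint":   f"/a2a/{interaction.name}",      # for A2A peers
    }

endpoints = publish(Interaction("contract-review", "Analyzes contracts"))
```

The design choice being illustrated is that no per-protocol integration code exists: the four surfaces are projections of one definition, so they cannot drift apart.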
As the AI infrastructure ecosystem converges on open protocols, runtimes that speak only their own language will become bottlenecks. Runtimes that speak the open protocols in both directions become the foundation that other systems are built on.
There is an important architectural distinction worth naming clearly: agents are not the right tool for every job.
Business workflows that require typed state, deterministic transitions, human approval gates, and a full audit trail need a different layer. At Vertesia, that is the role of our Process Engine: a control plane that sits above the agent runtime and governs hybrid workflows where deterministic steps and AI reasoning work together.
The key insight is that the same tools available to an agent inside a reasoning loop can also run as deterministic process nodes with no model in the path at all. If a step is known and defined, execute it deterministically. If a step requires judgment, bring an agent in with access to the same tools.
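The dual-use idea can be sketched as follows. This is an illustrative example, not Vertesia’s actual API: `classify_priority` is a made-up tool, the agent’s reasoning loop is stubbed out, and the point is only that the identical function body serves both paths.

```python
from typing import Callable

# Hypothetical tool; the function itself is ordinary deterministic code.
def classify_priority(ticket: str) -> str:
    """Same code whether a process node or an agent invokes it."""
    return "urgent" if "outage" in ticket.lower() else "routine"

# Path 1: deterministic process node. The tool runs directly;
# no model is in the execution path at all.
def process_node(ticket: str) -> str:
    return classify_priority(ticket)

# Path 2: agent reasoning loop. A real agent would let the model
# choose which tool to call; here that decision is stubbed.
def agent_step(ticket: str, tools: dict[str, Callable[[str], str]]) -> str:
    chosen = "classify_priority"  # stub for the model's tool choice
    return tools[chosen](ticket)

tools = {"classify_priority": classify_priority}
```

Because both paths share one tool implementation, promoting a step from “agent judgment” to “known procedure” (or the reverse) changes only the orchestration layer, never the capability itself.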
This separation, with agents handling open-ended reasoning and processes handling deterministic control, is what makes enterprise-grade AI systems both flexible and auditable.
Vertesia is simply simpler than most agent platforms on the market. With three primitives, your teams can build a single-turn document classifier, a long-running contract analysis agent, a specialist sub-agent callable by other agents, a skill-first assistant scoped to a specific domain, or a bounded AI worker inside a deterministic business process. There is no new abstraction to invent for each case, and none of the onboarding burden that comes with a framework designed to cover every possible scenario.
The complexity of AI systems does not go away. But with the right architecture, it lives in the platform, not in your application code.
That is the bet we have made at Vertesia. And we think it is the right one for enterprise teams that need to move from experimentation to production without accumulating architectural debt along the way.
For a deeper technical dive into the architecture behind these ideas, read Eric’s original post: Why Agent Frameworks Have Too Many Nouns.