
5 Surprising Lessons From My First Two Months at an AI Startup

Two months into his role at an AI startup, our Senior Solutions Engineer shares his insights on how AI can address real-world challenges.


When I joined an AI startup, I expected to be thrown into the deep end of model architecture, prompt engineering, and mind-blowing innovations. And while I definitely got a front-row seat to some of that, the biggest surprises weren’t what I expected at all.

This post captures five of the most eye-opening things I’ve learned in my first two months and the kind of insights you don’t really discover until you’re working with this technology all day every day.

1. Chatbots Are Just One Way to Use Generative AI

When most people think about generative AI, they picture a chatbot. Ask the average person, or even me a few months ago, how businesses can use AI beyond a chat interface, and you’d probably get a blank stare.

But here’s the truth: chatbots are just one of many ways to apply generative AI, and in many cases, they may not be the best one.

One of the biggest things I’ve learned is that LLMs can be integrated directly into workflows and systems — powering decisions, summarizing data, generating content, or extracting insights — all without the end user ever knowing an AI was involved.

Here are just a few examples:

  • An internal tool that automatically summarizes thousands of customer support tickets and flags recurring issues, a task that wasn’t even feasible before LLMs were on the scene.
  • A backend process that reads insurance claims and pulls out key fields for review or inserts that data into a customer management system, eliminating manual data entry.
  • A content pipeline that takes structured inputs and generates SEO-optimized product descriptions on the fly.

No chat interface. No human asking questions. Just AI doing useful work in the background.
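To make the second example concrete, here is a minimal sketch of a background extraction step. The `call_llm` function is a placeholder standing in for whatever model API you use (it is stubbed here so the sketch runs on its own); the field names are illustrative, not a real schema.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for your model provider's API call.
    Stubbed with a fixed response so this sketch is self-contained."""
    return json.dumps(
        {"claim_id": "C-1042", "policy_number": "P-889", "amount": 1250.0}
    )

def extract_claim_fields(claim_text: str) -> dict:
    """Ask the model to pull structured fields out of a free-text claim."""
    prompt = (
        "Extract claim_id, policy_number, and amount from this insurance "
        "claim. Reply with JSON only:\n\n" + claim_text
    )
    return json.loads(call_llm(prompt))

fields = extract_claim_fields("Claim C-1042 under policy P-889 for $1,250...")
```

The returned dictionary can then be written straight into a customer management system; no person ever chats with the model.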

It’s not that chatbots aren’t useful. They are, and I personally interact with LLMs through a chat UI every day. But the more interesting, scalable, and quietly powerful use cases are often the ones you don’t see. Once you stop thinking “AI = chatbot,” an entire world of possibilities opens up.

2. Use RAG Instead of Fine-Tuning

I had heard of RAG (Retrieval-Augmented Generation), and I understood it conceptually, but not practically. I assumed that “training” or “fine-tuning” a model was the way to go for most businesses. Boy, was I wrong.

Training a model is insanely resource-intensive — in time, data, and money. And even after all that effort, it’s still an inferior approach for most real-world use cases. Companies like OpenAI and Google spend billions to train foundational models like GPT-4o, tuning countless parameters and weights to produce intelligence. When you fine-tune, you’re essentially poking at that black box — changing things without ever really knowing what you’ve changed.

There’s no visibility, and debugging becomes a nightmare.

RAG, on the other hand, plays to the model’s strengths. It feeds these already powerful models extra context at runtime — without altering their internal structure. Think of it like this:

Instead of rewiring the brain of an expert consultant, you’re simply handing them your company documentation before they start working.

You have full transparency and control over the knowledge being used, and that knowledge is dynamic, easy to update, and able to be versioned. Need to change a pricing policy? Just swap the source content. No retraining, no guesswork.
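The consultant analogy above can be sketched in a few lines. This is a toy retriever that ranks documents by word overlap (real RAG systems use embeddings and a vector index instead), but it shows the core idea: relevant knowledge is fetched at runtime and prepended to the prompt, and updating that knowledge is just editing the `docs` list.

```python
def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query.
    Production systems use embeddings and a vector index instead."""
    query_words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(query_words & set(d.lower().split())))[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Prepend the retrieved context to the user's question."""
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Enterprise plan pricing is $99 per seat per month.",
    "Support hours are 9am to 5pm Eastern, Monday through Friday.",
    "Refunds are processed within 14 business days.",
]
prompt = build_prompt("What is the enterprise plan pricing?", docs)
```

Change the pricing line in `docs` and the very next query sees the new policy; the model itself never changes.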

Once I saw this in action, it clicked. RAG isn’t just easier — it’s smarter. For most companies, it’s not just the better option; it should be the default.

3. AI Isn’t the Hard Part - It’s Everything Around It

It’s funny — almost all the hype around generative AI focuses on the models themselves. GPT-4o, Claude, Gemini, Mistral… they dominate the headlines. But what I’ve realized is that for most companies, the model isn’t actually the hard part.

The real challenges live around the model.

I’m talking about things like:

  • Content preparation – Converting raw documents, PDFs, and media into formats the AI can understand
  • Security and compliance – Ensuring systems stay compliant and that all work done by LLMs is fully transparent and auditable
  • Infrastructure scalability – How do you manage and orchestrate hundreds, or even thousands, of AI operations, agents, and workflows?

Take content, for example, and by “content,” I don’t mean blog posts or social media assets. I mean information - documents, PDFs, spreadsheets, audio clips, videos - the stuff a model needs in order to be useful.

It turns out that information can’t just be plugged in as-is. It needs to be prepared. That means creating embeddings — numerical representations of the data that large language models can actually understand. It also means generating metadata so that documents, images, and videos can be retrieved and filtered in meaningful ways.
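Here is a rough sketch of that preparation step. The `embed` function is a deliberately toy stand-in (a hash-based word-count vector) for a real embedding model, and the metadata fields are illustrative; the point is the shape of the pipeline: every document becomes a vector plus metadata for filtering.

```python
from collections import Counter

def embed(text: str, dim: int = 64) -> list[float]:
    """Toy embedding: hash each word into a fixed-size count vector.
    Production systems call a trained embedding model instead."""
    vector = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vector[hash(word) % dim] += count
    return vector

def prepare(doc: str, source: str) -> dict:
    """Bundle the embedding with metadata used for retrieval-time filtering."""
    return {
        "embedding": embed(doc),
        "metadata": {"source": source, "length": len(doc)},
    }

record = prepare("Quarterly revenue grew 12% year over year.", source="q3-report.pdf")
```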

Then there’s scalability. Sure, it’s easy to ask ChatGPT a question about a single document. But having an AI agent review hundreds or thousands of documents? That’s a different story. You need infrastructure to handle batch processing, retries, indexing, and caching — not to mention monitoring, logging, and audit trails.

And what happens when your provider goes down, even for a few minutes? How do you handle failovers, load balancing, or fallback logic when a model times out or hits a snag during a live run in production?
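A minimal version of that fallback logic might look like the following. The provider functions here are stubs (one that always times out, one that answers), and the retry counts and backoff values are arbitrary; the pattern is what matters: retry transient failures with exponential backoff, then fall through to the next provider.

```python
import time

def call_with_fallback(prompt, providers, retries=2, backoff=0.01):
    """Try each provider in order; retry transient timeouts with
    exponential backoff before falling through to the next one."""
    last_error = None
    for provider in providers:
        for attempt in range(retries + 1):
            try:
                return provider(prompt)
            except TimeoutError as exc:
                last_error = exc
                time.sleep(backoff * (2 ** attempt))
    raise RuntimeError("all providers failed") from last_error

def flaky_primary(prompt):
    # Stub simulating a provider outage.
    raise TimeoutError("primary model timed out")

def backup(prompt):
    # Stub simulating a healthy fallback provider.
    return f"answer from backup model: {prompt}"

result = call_with_fallback("summarize this ticket", [flaky_primary, backup])
```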

I could go on, but the point is this: the activities that surround the model (information, architecture, orchestration, and operations) are just as important as the model itself. And often, they’re the difference between a cool demo and a real product that delivers consistent business value.


4. AI Agents Aren’t as Mysterious as They Seem

You can’t talk about AI these days without hearing about “AI agents.” Before I joined Vertesia, I had a vague idea that they were autonomous pieces of software doing tasks that only humans could do. But how do large language models actually fit into that? I had no clue.

Now I see that agents aren’t magic — they’re just open-ended workflows that loop through a process using an LLM’s ability to reason, plan, and make decisions. They combine that reasoning ability with tools (APIs, search functions, spreadsheet analysis, database queries, email, and so on) so they can interact with systems, take action, and respond more flexibly than a traditional workflow.

Yes, the full system is complex under the hood. But at a high level? It’s surprisingly understandable:

  1. Give the agent a goal. This could be something like “summarize these documents,” “analyze this customer feedback,” or “respond to this claim.”
  2. Let the LLM break it into steps. The model plans out how to achieve the goal, step by step.
  3. The agent follows the plan, using tools as needed. It might call APIs, run searches, pull in files, or write to a database — all orchestrated behind the scenes.
  4. When a decision needs to be made, the LLM is prompted with all relevant context. It analyzes the state of the workflow and outputs what to do next.
  5. Repeat or exit based on the outcome. The loop continues until the goal is reached or a step limit is hit.
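The five steps above boil down to a surprisingly small loop. In this sketch, `plan_step` is a stub standing in for the LLM call that decides the next action (a real agent would prompt the model with the current state), and the single `search` tool is a placeholder for real APIs and databases.

```python
def agent_loop(goal, plan_step, tools, max_steps=10):
    """Minimal agent loop: the planner (standing in for an LLM) looks at
    the state and either picks a tool to call or declares the goal done."""
    state = {"goal": goal, "history": []}
    for _ in range(max_steps):
        action = plan_step(state)              # LLM decides what to do next
        if action["tool"] == "finish":
            return action["result"]            # goal reached: exit the loop
        output = tools[action["tool"]](action["input"])  # take the action
        state["history"].append((action["tool"], output))
    raise RuntimeError("step budget exhausted")

def plan_step(state):
    """Stub planner standing in for a real LLM call."""
    if not state["history"]:
        return {"tool": "search", "input": state["goal"]}
    return {"tool": "finish", "result": state["history"][-1][1]}

tools = {"search": lambda query: f"top result for: {query}"}
result = agent_loop("find refund policy", plan_step, tools)
```

Swap the stub planner for a prompted model and the lambda for real tool integrations, and you have the skeleton of an agent.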

That’s it. Once you grasp that structure, you stop thinking of agents as “sci-fi software” and start seeing them as very smart, very flexible automation layers that can transform how work gets done in your organization.

I could go deeper, but our CEO actually wrote a fantastic piece on how AI agents work, and that article does a much better job than I can. You can find it here.

5. Think About Problems Before You Think About Software

This might sound strange coming from someone who works at a software company — but one of the biggest mistakes I see every day is companies thinking about software before they’ve clearly defined the problem they’re trying to solve.

It usually goes something like this:

“We want to use AI. What tool should we buy? Tell us what your software does.”

At first glance, that seems logical. But it’s likely to lead to a mess a few months after implementation.

The right question is more along the lines of:

“Where can generative AI add value to our business and what would that look like?”

Start by identifying specific pain points or inefficiencies. What’s costing time, money, or morale across departments? What kinds of decisions or workflows could benefit from intelligence, automation, or better content generation?

Once you’ve identified the problems and opportunities, it’s time to step back and ask some critical questions, such as:

  • Is generative AI actually the right solution here? Not every problem needs an LLM, and forcing it can lead to overcomplicated workflows.
  • What needs to happen internally to make this work? Maybe your content is scattered across 12 systems. Maybe your processes aren’t documented. Generative AI can help with this! However, it’s important to understand the problem first.
  • How broadly will this be used? Will GenAI be isolated to one department, or does it need to support multiple teams and use cases? That changes what kind of solution you need.

This approach can help you avoid chasing shiny tools and steer you toward a solution that will solve real challenges and prove ROI along the way.

Conclusion: From Hype to Impact

In writing this, my goal was to help you cut through all the AI hype, see beyond the buzzwords, and avoid costly mistakes that we see companies make every day.

If there’s one takeaway from all of this, it’s that the world of AI isn’t just about cutting-edge technology - it’s about applying that technology thoughtfully to solve real business problems.

Whether it’s choosing RAG over fine-tuning, focusing on workflows instead of chatbots, or asking better questions before buying tools, the real magic happens when AI meets real-world constraints and clear business goals.

Before you dive into your next AI initiative, ask yourself:

  • What specific problem are we trying to solve?
  • How will we measure success?
  • What infrastructure and processes do we need to support this?

At Vertesia, we’ve learned these lessons through hands-on experience building AI systems that deliver real business value. The most powerful AI implementations aren’t necessarily the most complex — they’re the ones designed with a deep understanding of both the technology and the business context in which it operates.
