Why agents (not chatbots) change the economics of support
By Marian Sanjur · April 2, 2026 · 7 min read
- agents
- architecture
- rag
The right question
Chatbots from five years ago could answer questions. Agents today can solve problems. That distinction is not marketing.
A chatbot returns text. An agent decides what to do in your systems, in what order, and stops when the problem is resolved.
Most companies evaluating AI for support ask "which chatbot should we use?" The better question is: "do we need a chatbot at all, or do we need something that can act?"
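That "something that can act" is, at its core, a loop: the agent chooses an action, observes the result, and decides whether to continue. The sketch below is a toy version of that loop; the tools and the decision rule are hypothetical stand-ins, not a real implementation.

```typescript
// Toy agent loop: decide, act, repeat until resolved.
// Tool names and the decision rule are illustrative.
type Action = { tool: string; input: string };
type StepResult = { resolved: boolean; observation: string };

// A toy "policy": look up the order first, then issue the refund.
const decide = (history: StepResult[]): Action =>
  history.length === 0
    ? { tool: "lookupOrder", input: "order-123" }
    : { tool: "issueRefund", input: "order-123" };

// A toy executor: the refund tool is what resolves the case.
const act = (action: Action): StepResult => ({
  resolved: action.tool === "issueRefund",
  observation: `${action.tool} completed`,
});

const runAgent = (maxSteps = 5): StepResult[] => {
  const history: StepResult[] = [];
  for (let i = 0; i < maxSteps; i++) {
    const result = act(decide(history));
    history.push(result);
    if (result.resolved) break; // stop when the problem is resolved
  }
  return history;
};
```

Note the two things a chatbot lacks: the loop (ordering) and the stop condition (resolution).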
Three components that make the difference
1. Working memory. The agent knows which step of the process it is on. It is not answering a single question in isolation — it is navigating a workflow with state, and it can pick up where it left off if interrupted.
2. Tools. The agent can call your ERP, your CRM, your ticketing API. It does not just describe what should happen — it executes it. A support agent that can look up an order, check inventory, trigger a refund, and send a confirmation email is not a chatbot. It is a junior employee that never sleeps.
3. Continuous evaluation. Every decision is scored against a dataset and adjusted. This is where the moat is. A well-evaluated agent improves over time. A chatbot stays exactly as wrong as the day you shipped it.
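The third component can be sketched as a golden dataset plus a scorer, run on every change so regressions are visible. The dataset, the stand-in classifier, and the metric below are all illustrative assumptions.

```typescript
// Sketch of continuous evaluation: score decisions against a labeled set.
// The golden set and classifier are illustrative stand-ins.
type EvalCase = { ticket: string; expectedCategory: string };

const goldenSet: EvalCase[] = [
  { ticket: "Where is my order?", expectedCategory: "shipping" },
  { ticket: "I was charged twice", expectedCategory: "billing" },
];

// Stand-in for the agent's real classifier.
const classifyTicket = (ticket: string): string =>
  ticket.toLowerCase().includes("charged") ? "billing" : "shipping";

// Fraction of cases where the agent's decision matched the label.
const accuracy = (cases: EvalCase[]): number =>
  cases.filter((c) => classifyTicket(c.ticket) === c.expectedCategory)
    .length / cases.length;
```

Run this in CI and the score becomes a gate: a change that makes the agent worse fails before it ships.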
// Simplified example of a node in LangGraph
// (TicketState, classify, and routeForCategory are app-specific helpers:
// the node reads state, does its work, and returns a partial state update)
const triage = async (state: TicketState) => {
  const category = await classify(state.ticket);
  return { category, next: routeForCategory(category) };
};
This is not magic. It is engineering.
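The working-memory component can be sketched the same way: the state carries the current step, so an interrupted run resumes mid-workflow instead of restarting. The `TicketState` shape and step names here are assumptions for illustration.

```typescript
// Sketch of working memory: the state records which step the workflow
// is on. TicketState and the step names are illustrative.
interface TicketState {
  ticket: string;
  category?: string;
  step: "triage" | "resolve" | "done";
}

// Each call advances the workflow by exactly one step.
const advance = (state: TicketState): TicketState => {
  switch (state.step) {
    case "triage":
      return { ...state, category: "billing", step: "resolve" };
    case "resolve":
      return { ...state, step: "done" };
    default:
      return state;
  }
};

// Resuming from a persisted state picks up mid-workflow, not at the start.
const resumed = advance({ ticket: "T-42", category: "billing", step: "resolve" });
```

Persist the state after every step and an interruption costs you one step, not the whole conversation.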
What this means for cost
A tier-1 support ticket costs between $8 and $15 for a human agent in Latin America to resolve. A well-designed AI agent resolves 60–70% of tier-1 volume at a fraction of that — not by replacing the team, but by handling the cases that never needed a human in the first place.
The remaining 30–40% — edge cases, escalations, emotionally complex interactions — go to humans who now have more time and better context because the agent already captured the relevant data before handing off.
The economics only work if the agent can actually do things. A bot that answers FAQs and then asks the user to call the support line has not reduced cost. It has added a step.
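A back-of-envelope version of that claim, using the $8–$15 and 60–70% figures above; the per-resolution agent cost is an assumed placeholder, not a quoted price.

```typescript
// Back-of-envelope support economics. The human cost and deflection
// rate come from the figures above; the agent cost is an assumption.
const monthlyTickets = 10_000;
const humanCostPerTicket = 10;   // midpoint of $8–$15
const agentCostPerTicket = 0.5;  // assumed placeholder
const deflectionRate = 0.65;     // midpoint of 60–70%

const baseline = monthlyTickets * humanCostPerTicket;
const withAgent =
  monthlyTickets * deflectionRate * agentCostPerTicket +
  monthlyTickets * (1 - deflectionRate) * humanCostPerTicket;
const savings = baseline - withAgent;
```

The sensitivity is all in the deflection rate, which is why the agent must resolve cases end to end: a deflected ticket that bounces back to the support line counts at the human price, not the agent price.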
Where most implementations fail
The failure mode is almost always the same: teams buy the model and skip the architecture.
They deploy a RAG system over their knowledge base, call it an AI agent, and wonder why resolution rates are low. RAG is retrieval, not action. It makes the model smarter about your domain. It does not make the model capable of operating in your systems.
An agent that can reason but cannot act is a very expensive FAQ page.
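One way to see the gap: the same retrieval step can feed either a text-only answer or an actual tool call, and only the second changes anything in your systems. Every function here is a hypothetical stand-in.

```typescript
// Contrast: retrieval alone produces text; an agent uses the same
// retrieved context to execute. All functions are illustrative stand-ins.
const retrieve = (query: string): string =>
  `Refund policy: refunds allowed within 30 days (matched "${query}")`;

// RAG-only path: a better-informed answer, but nothing happens.
const answerWithRag = (query: string): string =>
  `Per our docs: ${retrieve(query)}`;

// Acting path: same context, but the decision is executed.
const refundsIssued: string[] = [];
const actOnQuery = (query: string, orderId: string): string => {
  const policy = retrieve(query);
  if (policy.includes("refunds allowed")) {
    refundsIssued.push(orderId); // side effect in the order system
    return `Refund issued for ${orderId}`;
  }
  return answerWithRag(query); // fall back to text when no action applies
};
```

Both paths start from the same knowledge base. The difference is whether anything in `refundsIssued` changes.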
The build pattern that works
Every agent engagement starts with the same question: what are the three to five actions the agent needs to take to resolve the most common case?
From there, you work backwards:
- What APIs need to be exposed?
- What data does the agent need at each step?
- What does a human handoff look like when the agent hits its limit?
- How do you evaluate whether the resolution was actually correct?
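One way to make that checklist concrete is to enumerate the agent's actions as a closed, typed set, with human handoff modeled as an explicit action rather than a failure mode. The action names and the confidence threshold below are illustrative.

```typescript
// The three-to-five actions, as a closed discriminated union.
// Action names and the 0.8 threshold are illustrative assumptions.
type AgentAction =
  | { kind: "lookupOrder"; orderId: string }
  | { kind: "checkInventory"; sku: string }
  | { kind: "issueRefund"; orderId: string; amount: number }
  | { kind: "handoffToHuman"; reason: string; context: string };

// Handoff is a first-class outcome: below the confidence threshold, the
// agent hands over its context instead of guessing.
const resolveRefund = (confidence: number, orderId: string): AgentAction =>
  confidence >= 0.8
    ? { kind: "issueRefund", orderId, amount: 25 }
    : { kind: "handoffToHuman", reason: "low confidence", context: orderId };
```

Because the union is closed, every downstream system knows exactly which actions can reach it, and the handoff case answers the third question on the list by construction.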
The model choice is the last decision, not the first. The architecture is what ships.