Customer support automation that does not break trust.
Automating customer support is not a binary decision. Some things should be automated yesterday; some should never be automated. The difference between a team that loves its automation and a team that quietly turns it off is whether they drew that line correctly. Most teams over-automate the wrong tickets and under-automate the right ones.
Automate the repetitive informational tickets (shipping, returns, sizing, policy) and keep the judgment-heavy ones with humans (refunds, complaints, escalations, billing disputes). A good AI agent handles sixty to seventy percent of volume and escalates the rest cleanly with full context.
- ✓ Automate volume tickets with one-fact answers. Where is my order, what is the return window, do you ship to Spain.
- ✓ Do not automate tickets that need judgment. Refunds, complaints, edge-case billing, anything with emotion in it.
- ✓ The AI should escalate, not bluff. A handoff with summary beats a wrong answer every time.
- ✓ Measure success by trust, not just deflection. A 90% deflection rate with 30% customer complaints is a failure.
The right things to automate
The tickets to ship to AI are the ones with a single answer that exists somewhere in your content. Where is my order. What is your return window. Do you ship to Belgium. What sizes are in stock. These get the same answer every time, they are boring for humans to handle, and customers want them in seconds, not hours. A good AI agent will answer these with citations, in the customer's language, before your team has finished its coffee. That is sixty to seventy percent of typical support volume.
The wrong things to automate
Refunds. Complaints. Billing disputes. Anything where the customer is upset, anything where the policy might bend, anything involving a refund decision or a goodwill gesture. Automating these turns angry customers into furious customers. A human can read the tone, look at the order history, and decide whether to refund or stand firm. An AI cannot, and pretending it can is what gives AI customer service a bad name. Keloa is explicitly tuned to escalate on negative sentiment.
The gradient between them
Most tickets sit in between, and the right move is suggest-only mode. The AI drafts every reply, and your team approves or edits it before sending. You get the speed benefit on the easy ones (operators send the draft in two seconds) and a safety net on the hard ones (operators rewrite when the AI is off). Most teams run suggest-only for a month, then graduate the simple intents to autonomous and keep the rest under human supervision.
What to measure
Deflection rate is the famous number, but it lies on its own. A high deflection with bad answers is worse than no automation. Track three numbers together: deflection rate (how many tickets the AI closed), escalation reasons (why the others escalated, so you can fix the gaps), and CSAT on AI-handled tickets (whether customers were happy with the AI's answer). When all three are good, you have automation that works. When deflection is high and CSAT drops, the AI is bluffing and you need to lower its confidence threshold.
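The three numbers can be tracked together in a few lines. This is a minimal sketch assuming ticket records with hypothetical fields (`handled_by`, `escalation_reason`, `csat`), not a real Keloa export format:

```python
from collections import Counter

def support_metrics(tickets):
    """Compute the three numbers to watch together.
    Field names are illustrative, not an actual Keloa API."""
    ai = [t for t in tickets if t["handled_by"] == "ai"]
    escalated = [t for t in tickets if t["handled_by"] == "human"]
    deflection = len(ai) / len(tickets)            # share the AI closed
    reasons = Counter(t.get("escalation_reason")   # why the rest escalated
                      for t in escalated)
    rated = [t["csat"] for t in ai if t.get("csat") is not None]
    ai_csat = sum(rated) / len(rated) if rated else None
    return deflection, reasons, ai_csat
```

Reading all three side by side is the point: a high deflection number only means something once you know the escalation reasons are benign and the AI-handled CSAT holds up.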
How Keloa lets you draw the line
Three switches matter. Suggest-only mode (the AI drafts, humans approve), confidence threshold for handoff (default seventy percent), and sentiment-based escalation (any negative emotion gets a human). You can tune each independently, per AI agent, so the customer-service agent might autonomously answer shipping questions while the billing-dispute agent always defers to a human. The point of automation is not to replace your team. The point is to free their attention for the conversations that need it.
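The three switches combine into one routing decision per reply. This is an illustrative sketch of the logic described above, not Keloa's actual implementation; the mode and sentiment labels are assumptions:

```python
def route_reply(confidence, sentiment, mode="autonomous", threshold=0.70):
    """Decide what happens to an AI-drafted reply.
    Illustrative only; labels and defaults are assumptions."""
    if mode == "suggest_only":
        return "draft_for_approval"   # a human approves every reply
    if sentiment == "negative":
        return "handoff_to_human"     # sentiment-based escalation
    if confidence < threshold:
        return "handoff_to_human"     # below the confidence threshold
    return "send_autonomously"
```

Because each switch is independent, a shipping-questions agent can run autonomously with the default threshold while a billing-dispute agent stays in suggest-only mode.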
Frequently asked questions about customer support automation.
Where do I start with customer support automation?
Start by listing your top twenty ticket types over the last month. Score each on two axes: how often it comes in, and how often the answer is identical. The intersection (frequent, identical) is the obvious automation target. Shipping ETAs, return window, store hours, sizing questions, country availability. Those go first. Refunds, complaints and escalations stay with humans. You will probably automate the right thirty percent of intents and cover sixty percent of volume.
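The two-axis scoring works in a spreadsheet or a few lines of code. A minimal sketch with made-up ticket types and numbers, scoring each type by frequency times answer consistency:

```python
def rank_for_automation(ticket_types):
    """Rank ticket types by (monthly volume x identical-answer rate).
    ticket_types: list of (name, monthly_count, identical_answer_rate)."""
    scored = [(name, count * identical)
              for name, count, identical in ticket_types]
    return sorted(scored, key=lambda s: s[1], reverse=True)

# Hypothetical month of tickets, not real benchmarks
types = [
    ("shipping_eta",   400, 0.95),
    ("return_window",  250, 0.90),
    ("refund_request", 120, 0.30),
    ("complaint",       80, 0.10),
]
```

The top of the ranking is your automation backlog; the bottom stays with humans regardless of volume.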
What is a good deflection rate?
Sixty to seventy percent is healthy for most teams. Below that, you are probably under-utilizing the AI; the knowledge base is likely missing answers. Above eighty percent, you may be over-automating: the AI is sending replies it should be escalating. CSAT on AI-handled tickets is the cross-check: if it drops below your overall CSAT, lower the confidence threshold so the AI hands off more.
Should I automate refunds?
Not the decision, but you can automate the workflow around it. The AI can ask the customer for the order number, look up the order, verify eligibility, and present the case to your operator with a recommendation. The operator clicks approve or deny in seconds. You get speed without giving up the judgment call, which is the right balance for refund-shaped tickets. Keloa explicitly keeps destructive actions behind a human.
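The automate-the-workflow-not-the-decision pattern looks roughly like this. `lookup_order` and the field names are hypothetical, a sketch of the shape rather than a real integration:

```python
def prepare_refund_case(order_number, lookup_order):
    """Gather the facts and a recommendation for a refund request.
    The final decision is always left to a human operator."""
    order = lookup_order(order_number)
    eligible = order["days_since_delivery"] <= order["return_window_days"]
    return {
        "order": order_number,
        "eligible": eligible,
        "recommendation": "approve" if eligible else "deny",
        "decision": "pending_human",  # the judgment call stays with an operator
    }
```

The operator sees a pre-built case instead of a raw ticket, which is where the seconds-not-minutes speedup comes from.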
Can I roll back if the automation goes wrong?
Yes. Flip the AI agent to suggest-only mode in one click and every reply waits in your operator inbox for approval. You can also pause an agent entirely. The audit log lets you go back and see every reply the AI sent, with the sources it cited, so you can find the conversation that went wrong and fix the underlying knowledge gap. Rollback is a single toggle, not a migration project.
Does automation work for B2B support?
Yes, with the right scope. B2B tickets skew toward billing, contracts and account-specific questions, which need more guardrails than B2C. The AI handles informational layers well (documentation lookups, status updates, scheduling) and escalates the account-specific judgment calls. Many B2B teams run Keloa for the first-touch reply and let the account managers handle deeper threads.
How long until the automation pays back?
Most teams break even on Keloa's Growth plan (€49 a month) within the first hundred conversations. The math: if a human reply takes three minutes and the AI handles sixty percent of fifteen hundred monthly conversations, you save roughly forty-five hours of operator time a month. At any reasonable hourly cost, that pays back the plan in the first week.
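The arithmetic, spelled out with the figures from the paragraph above:

```python
minutes_per_human_reply = 3
monthly_conversations = 1500
ai_share = 0.60           # share of conversations the AI handles
plan_cost_eur = 49        # Growth plan, per month

# 1500 * 0.60 = 900 AI-handled replies; 900 * 3 min = 2700 min = 45 hours
hours_saved = monthly_conversations * ai_share * minutes_per_human_reply / 60

# Hourly operator cost at which the plan exactly pays for itself
breakeven_rate = plan_cost_eur / hours_saved
```

At roughly €1.09 per operator-hour as the break-even point, any realistic labor cost recovers the plan many times over.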
Automate the right things. Keep the rest human.
Free Starter, fifty AI replies, no credit card. Run Keloa in suggest-only mode for a month, then graduate the intents you trust to autonomous.