The Problem Nobody Talks About
Most AI chatbots get the answers right. They pull the correct information, format it nicely, maybe even throw in some bullet points. And then the user says "thanks" and leaves. Two messages. Maybe three. Done.
We kept seeing this on our platform. The chatbot answered correctly. But the user didn't ask anything else. They got their answer and bounced. No follow-ups, no browsing around, nothing. Technically the bot worked. But nobody wanted to actually talk to it.
The accuracy was fine. The engagement was terrible. And we couldn't figure out why for a while.
More Rules, More Robotic
We looked at how the chatbots were being configured. Turns out, most setups had somewhere around 30 behavioral rules. Specific formatting for different question types. Rigid escalation steps. Word count caps. Mandatory follow-up questions after every answer.
The logic made sense at the time: if you tell the AI exactly what to do in every situation, it'll be perfect. More control, better output. Right?
Wrong. What actually happened is the AI spent all its energy following rules instead of having a conversation. Every answer had the same rhythm. Same structure. Same tone. Didn't matter if the user was joking around or genuinely frustrated. The bot responded the same way every time because it was too busy checking boxes.
What We Changed
We threw out most of the rules and replaced them with a handful of principles. Sounds risky, but it worked way better than we expected.
Here's a concrete example:
Before: "For yes/no questions, respond in 1-2 sentences. For how-to questions, use a numbered list. For comparison questions, use a markdown table. Stay under 150 words unless the user asks for detail."
After: "Be concise by default. Expand when the topic needs it."
Same idea. Completely different result. The first version turns the AI into a format-checker. The second one lets it actually read the room and respond like a person would.
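To make the contrast concrete, here's a minimal sketch of the two styles as system-prompt strings. The helper name and message shape are illustrative, not our production code; most chat APIs just take instructions as a system-role message.

```python
# The two instruction styles from the example above, as plain strings.
RULE_BASED = (
    "For yes/no questions, respond in 1-2 sentences. "
    "For how-to questions, use a numbered list. "
    "For comparison questions, use a markdown table. "
    "Stay under 150 words unless the user asks for detail."
)

PRINCIPLE_BASED = "Be concise by default. Expand when the topic needs it."

def as_system_message(prompt: str) -> dict:
    """Wrap instructions in the role/content shape most chat APIs expect."""
    return {"role": "system", "content": prompt}

# The principle version carries the same intent in a fraction of the tokens.
print(as_system_message(PRINCIPLE_BASED))
```

Everything else about the request stays the same; only the instructions shrink.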
We did the same thing across the board:
Tone. Instead of writing separate instructions for casual users, formal users, and angry users, we just said: match their energy. The AI is already good at this. We were just getting in the way.
Not knowing stuff. We used to have this three-step formula: "acknowledge the question, share something related, guide them forward." Every single "I don't know" answer came out sounding identical. We replaced it with: just be honest and helpful. Sometimes that means admitting you don't know. Sometimes it means pointing them somewhere useful. The AI figures it out.
Follow-ups. We used to force a follow-up question after every response. Users saw right through it. It felt pushy and fake. Now it's optional. Add one if it feels natural. Don't if it doesn't. Turns out, the occasional genuine follow-up works ten times better than a mandatory one every time.
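Put together, the three replacements above collapse into a short principle list. Here's a sketch of assembling them into one prompt; the wording is paraphrased, not our exact prompt.

```python
# Paraphrased principles replacing the old per-case rule blocks.
PRINCIPLES = [
    "Match the user's energy and tone.",
    "Be honest when you don't know; point somewhere useful if you can.",
    "Add a follow-up question only when it feels natural.",
]

def build_prompt(principles: list[str]) -> str:
    """Join a handful of principles into a single short system prompt."""
    bullets = "\n".join(f"- {p}" for p in principles)
    return f"You are a helpful assistant.\n{bullets}"

print(build_prompt(PRINCIPLES))
```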
What Happened After
The difference showed up pretty quickly. Conversations got longer. Not because the bot was padding responses, but because users were actually asking more questions.
People stayed longer. Average conversation went from 2-3 exchanges to 5-6. Users started exploring topics instead of just getting one answer and leaving.
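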
People came back. Return rate went up. Turns out when a chatbot feels like talking to a helpful person instead of a help center widget, people use it differently.
Complaints went down. Fewer "that's not what I asked" type responses. The information was the same. The knowledge base didn't change at all. But the way answers were delivered felt better, and that made a real difference.
The Weird Part
Giving the AI fewer instructions made it perform better. That's backwards from how most people think about it.
LLMs are actually pretty good at reading tone and context on their own. When you hand them 30 rules, they spend their effort on compliance. When you hand them a few clear principles, they spend their effort on the actual conversation. There's a big difference.
That doesn't mean zero guidance. You still need hard limits: don't make stuff up, don't go off-topic, stick to the knowledge base. Those are guardrails and they matter. But personality stuff? Tone? Formatting? That's where less is more.
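One way to encode that split: hard guardrails stay as explicit rules, personality stays as a few loose lines. A sketch, with section labels and wording that are ours, hypothetical, not a standard format:

```python
# Hard limits stay as explicit, non-negotiable rules.
GUARDRAILS = [
    "Never invent facts; answer only from the knowledge base.",
    "Stay on topic; decline unrelated requests.",
]

# Personality stays as a couple of loose principles, not a script.
STYLE = "Be warm and direct. Be concise by default; expand when the topic needs it."

def build_system_prompt(guardrails: list[str], style: str) -> str:
    """Combine non-negotiable rules with loose style guidance."""
    rules = "\n".join(f"- {g}" for g in guardrails)
    return f"Hard limits:\n{rules}\n\nStyle:\n{style}"

print(build_system_prompt(GUARDRAILS, STYLE))
```

The asymmetry is the point: guardrails are enumerated, style is not.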
If Your Chatbot Sounds Boring
It's probably not a data problem or a model problem. It's an instruction problem. Here's what we'd suggest:
- Cut your rules in half. Seriously. If you have 30 rules, a bunch of them probably contradict each other anyway. Boil them down to principles.
- Let the AI handle tone. It can match formality, detect frustration, and adjust without being told step-by-step. Just get out of its way.
- Drop the word limits. Artificial caps lead to artificial-sounding answers. Let the AI decide what's appropriate.
- Stop forcing follow-ups. People can tell the difference between a genuine question and one that's only there because someone made it mandatory. Make follow-ups optional.
- Personality, not a playbook. "Be warm and direct" is a personality. "Start with a greeting, then acknowledge, then answer in bullets" is a script. One of those sounds human. The other doesn't.
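A rough way to start the "cut your rules in half" audit: sort each existing rule into guardrails to keep versus style rules to collapse into principles. The keyword heuristic below is a deliberately naive sketch, and the example rules are hypothetical.

```python
# Hypothetical rule list pulled from an over-specified chatbot config.
RULES = [
    "Never invent facts outside the knowledge base.",
    "Use a markdown table for comparison questions.",
    "Stay under 150 words.",
    "Do not discuss topics unrelated to the product.",
    "Ask a follow-up question after every answer.",
]

# Naive heuristic: prohibitions tend to be guardrails worth keeping.
SAFETY_MARKERS = ("never", "do not", "don't")

def audit(rules: list[str]) -> tuple[list[str], list[str]]:
    """Split rules into guardrails to keep and style rules to rethink."""
    keep, rethink = [], []
    for rule in rules:
        bucket = keep if rule.lower().startswith(SAFETY_MARKERS) else rethink
        bucket.append(rule)
    return keep, rethink

keep, rethink = audit(RULES)
print("Keep as guardrails:", keep)
print("Collapse into principles:", rethink)
```

The "rethink" bucket is where most of the 30 rules usually live, and it's the part you replace with two or three principles.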
The best chatbot conversations feel like talking to that one person at the company who actually knows their stuff and genuinely wants to help. They don't follow scripts. They pay attention. They adapt. That's all we're trying to build here.