Convai State of Mind: How Emotions Drive Character Behavior
By Convai Team
November 28, 2025
You might have practiced against generic training bots that feel like a script on legs. They don't get frustrated, they don't calm down when you acknowledge them, and they certainly don't teach you how to "read the room."
This is the exact problem Convai's State of Mind is built to solve. It’s a real-time emotion layer for Convai characters that listens to context (persona, conversation, objectives) and updates the character’s feelings turn-by-turn.
This emotional state then influences their tone, wording, and delivery (i.e., AI voice, lip-sync, and facial expression). Use it to design realistic soft-skills scenarios, de-escalation practice, customer service training, coaching dialogues, and more.
How it feels in practice
You don't just see the change on the visual emotion wheel in the Playground; you feel it.
When a user's angry turn makes the "Frustrated" slice spike, it's not just a color change. That shift instantly transforms the character’s delivery. You'll hear their voice get more clipped, their pace quicken, and their friendly tone disappear. Their language will adapt, becoming more direct or urgent. This is how you build practice that feels real, not robotic.
Conversation strategy: insist on policies when stressed; expand options after acknowledgment; de‑escalate with empathy when tension is high.
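The strategy above can be pictured as a simple rule over the character's current emotion vector. The sketch below is purely illustrative: the emotion names, thresholds, and `pick_strategy` function are assumptions for explanation, not part of the Convai SDK or its internal logic.

```python
# Illustrative sketch only: a toy rule mapping an emotion vector to one of
# the three conversation strategies. All names here (pick_strategy, the
# emotion keys, the 0.7 threshold) are hypothetical, not Convai internals.

def pick_strategy(emotions: dict) -> str:
    """Choose a strategy from a dict of emotion intensities in [0, 1]."""
    dominant = max(emotions, key=emotions.get)
    if emotions.get("frustrated", 0.0) > 0.7:
        return "de-escalate with empathy"   # tension is high
    if dominant in ("frustrated", "angry"):
        return "insist on policies"         # stressed, but not boiling over
    return "expand options"                 # calm, or acknowledgment landed

print(pick_strategy({"frustrated": 0.8, "calm": 0.1}))
print(pick_strategy({"frustrated": 0.5, "calm": 0.3}))
print(pick_strategy({"calm": 0.7, "hopeful": 0.2}))
```

In practice you never write this rule yourself; persona, speaking style, and objectives shape the behavior. The sketch just shows why the same user turn can produce different character responses depending on the current state.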
See it in action
Playground → Character Customization → State of Mind
Open your character in Playground.
Go to Character Customization → State of Mind to see the wheel.
Start a conversation (text or voice). After each turn, the wheel updates to reflect the current state.
Refine persona & speaking style; run another turn and observe how the emotional state changes.
Tip: Keep persona instructions tight (“how they tend to behave”) and put domain facts in the Knowledge Bank. You’ll get cleaner emotional signals and more predictable behavior.
A concrete example: “Difficult traveler” scenario
User: “Don’t tell me there’s nothing you can do. I paid good money for this ticket. Get me on another flight.”
State of Mind: Frustrated/Angry spikes (colored); Calm mutes.
Character (agent): “I’m really sorry this derailed your plans. Let me check options that still get you to Chicago tonight…”
State of Mind (next turn): Frustrated eases; Hopeful/Neutral rise.
Result: The conversation naturally de-escalates as the agent offers specific, time‑bound options.
Design patterns that work
Getting predictable, powerful emotional reactions is all about how you layer your instructions. Don't rely on a single field; instead, pay attention to all of the following:
Persona sets the baseline: If your customer tends to be direct and impatient, the system starts from that baseline and moves from there.
Speaking style shapes delivery: Add style instructions (e.g., “short sentences under stress; offer two options before asking follow‑ups”). Emotions will modulate how that style shows up.
Narrative objectives give purpose: When you design an objective like “Acknowledge, then propose two concrete options,” you’ll see emotions shift as the learner meets—or misses—those steps.
Knowledge Bank prevents “emotional improv”: Policy and exception docs keep the avatar’s emotional reactions grounded in what’s actually possible (“no bumps on weather events,” “free hotel only for overnight delays”).
Mindview + State of Mind = clarity: Convai’s Mindview shows exactly which instructions and knowledge were injected; State of Mind shows how the character felt while applying them.
Setup in minutes (step‑by‑step)
Create or open a character → set Description (current context, backstory, personality), add Speaking Style.
Attach Knowledge Bank docs (policies, FAQs, exception tables).
Script the emotional dynamics in your narrative design:
Escalation curve: Begin “frustrated,” allow a jump to “angry” if the learner stalls.
Cooling triggers: Reduce intensity when the learner acknowledges the issue and offers a concrete plan.
Policy pressure: Increase frustration when learners propose options that violate policy; soften when they recover with a correct option.
Time pressure: Add a timer; emotions spike when the agent delays.
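One way to reason about these four dynamics is as nudges on a single frustration level. The toy model below is an assumption for illustration: the event names, deltas, and `update_frustration` function are hypothetical and do not reflect how Convai computes state internally.

```python
# Toy model of the escalation/cooling dynamics described above. The update
# rule, event names, and deltas are illustrative assumptions, not Convai
# internals; real behavior comes from persona, objectives, and context.

def update_frustration(level: float, event: str) -> float:
    """Nudge a frustration level in [0, 1] based on the learner's last move."""
    deltas = {
        "learner_stalls": +0.25,             # escalation curve: stalling pushes toward "angry"
        "acknowledges_issue": -0.20,         # cooling trigger
        "offers_concrete_plan": -0.25,       # cooling trigger
        "violates_policy": +0.30,            # policy pressure
        "recovers_with_valid_option": -0.15, # policy pressure, softened
        "delays_past_timer": +0.20,          # time pressure
    }
    return min(1.0, max(0.0, level + deltas.get(event, 0.0)))

level = 0.5                                                # starts "frustrated"
level = update_frustration(level, "learner_stalls")        # 0.75, nearing "angry"
level = update_frustration(level, "acknowledges_issue")    # ~0.55
level = update_frustration(level, "offers_concrete_plan")  # ~0.30, visibly cooling
```

The point of the sketch is the shape of the curve, not the numbers: stalls and policy violations compound, while acknowledgment plus a concrete plan reliably brings the state back down.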
Troubleshooting quick wins
Emotions feel “flat”: Add or tighten Speaking Style; confirm it appears in Mindview.
Avatar reacts off‑policy: Check Knowledge Bank attachment and rerun the turn.
Too volatile: Simplify persona; reduce conflicting tone instructions.
Not de‑escalating: Add explicit goals (“acknowledge → options → confirm”) in narrative design; practice until the wheel consistently cools.
FAQ
Does State of Mind work with voice? Yes. Pair it with AI voice + lip‑sync to hear changes in tone, pace, and warmth as emotions shift.
Can I control which emotions appear? You guide emotions through personality traits, speaking style, objectives, and conversation context. The wheel reflects the current result of those inputs.
How do I verify what the model actually saw? Open Mindview to inspect the constructed prompt (persona, style, objectives, attached knowledge) for the exact turn you’re reviewing.
Does this work in 3D/XR scenes? Yes. Emotions still update turn‑by‑turn; you can also trigger face/body animations or actions to match the state for immersive practice.