A research implementation of an appraisal-based emotional response system for conversational AI, featuring a novel dual-process architecture with real-time emotion tracking and rule-based behavioral adaptation.
This project implements a dual-process cognitive architecture inspired by human emotional processing, combining:
- U-AI (Unconscious AI): Fast, automatic appraisal of user input using embedding-based emotion detection
- C-AI (Conscious AI): Deliberate response generation with rule-based emotional behavioral adaptation
The system tracks five emotional dimensions in real time and adapts conversational behavior accordingly, creating more natural and emotionally responsive AI interactions.
Key Innovation: Rather than relying on emergent emotional behavior, this system explicitly engineers emotional responses through appraisal theory and rule-based behavioral mapping, enabling controllable, predictable, and theoretically grounded emotional AI.
- ✅ Real-time emotion tracking across 5 dimensions (frustration, curiosity, confidence, confusion, boredom)
- ✅ Dual-process architecture separating appraisal from response generation
- ✅ Two analysis modes: Embedding-based UAI engine or LLM-based emotional analysis
- ✅ Rule-based behavioral adaptation with emotion-driven response constraints
- ✅ Research-ready data logging with timestamps, deltas, and conversation history
- ✅ Interactive dashboard with real-time emotion visualization
- ✅ Configurable parameters for cooling rates, thresholds, and behavioral mappings
- Fast appraisal processing (<100ms per input)
- Emotion state persistence and cooling mechanisms
- Multi-format data export (JSON, CSV)
- Participant tracking and session management
- Comprehensive logging for research analysis
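The cooling mechanism listed above could be sketched as follows. This is a minimal illustration assuming exponential decay toward per-emotion baselines; the baseline values and the rate here are made up for the example, not the project's actual configuration.

```python
# Sketch of per-turn emotion cooling: each emotion drifts a fraction
# of the way back toward its baseline on every update. Baselines and
# the rate below are illustrative, not the project's real values.

BASELINES = {"frustration": 0.2, "curiosity": 0.5, "confidence": 0.7,
             "confusion": 0.0, "boredom": 0.2}

def cool(emotions: dict, rate: float = 0.15) -> dict:
    """Move each emotion `rate` of the distance back to its baseline."""
    return {name: value + rate * (BASELINES[name] - value)
            for name, value in emotions.items()}

state = {"frustration": 0.8, "curiosity": 0.5, "confidence": 0.5,
         "confusion": 0.3, "boredom": 0.1}
state = cool(state)
# frustration: 0.8 + 0.15 * (0.2 - 0.8) = 0.71
```

Because the decay is proportional to distance from baseline, spikes fade quickly while near-baseline emotions barely move, which keeps the state stable across long sessions.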
```
┌─────────────────────────────────────────────────────────────┐
│                         User Input                          │
└──────────────────────────────┬──────────────────────────────┘
                               │
                               ▼
           ┌───────────────────────────────────────┐
           │  U-AI (Unconscious/Fast Processing)   │
           │                                       │
           │  ┌─────────────────────────────────┐  │
           │  │      UAI Appraisal Engine       │  │
           │  │  - Embedding similarity         │  │
           │  │  - Anchor-based detection       │  │
           │  │  - 5 emotion dimensions         │  │
           │  └─────────────────────────────────┘  │
           │                   │                   │
           │                   ▼                   │
           │  ┌─────────────────────────────────┐  │
           │  │     Emotional State Update      │  │
           │  │  - Deltas computed              │  │
           │  │  - Cooling applied              │  │
           │  │  - State persisted              │  │
           │  └─────────────────────────────────┘  │
           └───────────────────┬───────────────────┘
                               │
                               ▼
           ┌───────────────────────────────────────┐
           │   C-AI (Conscious/Slow Processing)    │
           │                                       │
           │  ┌─────────────────────────────────┐  │
           │  │     Behavioral Rule Mapping     │  │
           │  │  - Frustration → Terse          │  │
           │  │  - Curiosity → Questioning      │  │
           │  │  - Confidence → Assertive       │  │
           │  └─────────────────────────────────┘  │
           │                   │                   │
           │                   ▼                   │
           │  ┌─────────────────────────────────┐  │
           │  │    Response Generation (LLM)    │  │
           │  │  - Emotion-constrained prompts  │  │
           │  │  - Tone adaptation              │  │
           │  │  - Length control               │  │
           │  └─────────────────────────────────┘  │
           └───────────────────┬───────────────────┘
                               │
                               ▼
┌─────────────────────────────────────────────────────────────┐
│                     Assistant Response                      │
│               (Emotionally Adapted Behavior)                │
└─────────────────────────────────────────────────────────────┘
```
```
User: "You're completely wrong"
          ↓
UAI Appraisal Engine
  - Detects: High adversarial (0.85)
  - Detects: High corrective (0.74)
  - Computes: Δ frustration = +0.31
          ↓
Emotional State Updated
  - Frustration: 0.20 → 0.51
  - Confidence: 0.70 → 0.62
          ↓
Behavioral Rules Applied
  - Frustration > 0.5 → "Be direct, reduce elaboration"
          ↓
LLM Response Generation
  - Constrained prompt: "Keep response short, avoid over-explaining"
          ↓
Output: "I explained this already. What specifically is wrong?"
(vs unconstrained: "I understand you disagree. Let me try to explain...")
```
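The threshold rule in this walkthrough could be expressed as a simple table of (emotion, threshold, constraint) triples checked against the current state. The entries below are illustrative examples, not the project's actual rule set.

```python
# Illustrative mapping from emotional state to prompt constraints.
# Thresholds and constraint strings are examples only.
RULES = [
    ("frustration", 0.5, "Be direct, reduce elaboration"),
    ("curiosity",   0.7, "Ask a clarifying follow-up question"),
    ("confidence",  0.8, "State conclusions assertively"),
]

def behavioral_constraints(emotions: dict) -> list:
    """Return constraint strings for every emotion above its threshold."""
    return [text for name, threshold, text in RULES
            if emotions.get(name, 0.0) > threshold]

print(behavioral_constraints({"frustration": 0.51, "confidence": 0.62}))
# ['Be direct, reduce elaboration']
```

Keeping the rules in a flat table like this makes the emotion-behavior mapping auditable: every constraint the LLM receives can be traced to a single named threshold.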
The UAI engine implements appraisal theory (Scherer, 2001; Marsella & Gratch, 2009), which posits that emotions arise from cognitive evaluations of stimuli along multiple dimensions:
- Adversarial: Is this input hostile/confrontational?
- Cooperative: Is this input helpful/collaborative?
- Corrective: Does this input indicate I'm wrong?
- Confirming: Does this input validate my responses?
- Affective: What's the emotional valence?
- Coherence: How well-formed is the input?
These appraisals combine through weighted functions to update five emotional states:
- Frustration: Rises with adversarial + corrective signals
- Curiosity: Rises with cooperative + novel signals
- Confidence: Falls with corrective, rises with confirming
- Confusion: Rises with incoherence + correction
- Boredom: Rises with repetition + low engagement
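Concretely, these mappings amount to a weighted sum of appraisal scores per emotion. The sketch below is illustrative: the frustration row echoes the delta formula shown elsewhere in this README, while the remaining rows use hypothetical placeholder weights.

```python
# Illustrative weight matrix mapping appraisal scores to emotion deltas.
# The structure follows the README; apart from the frustration weights,
# the numbers are hypothetical placeholders.
WEIGHTS = {
    "frustration": {"adversarial": 0.40, "corrective": 0.40, "confirming": -0.35},
    "curiosity":   {"cooperative": 0.30, "coherence": 0.15},
    "confidence":  {"confirming": 0.30, "corrective": -0.30},
    "confusion":   {"coherence": -0.25, "corrective": 0.20},
    "boredom":     {"cooperative": -0.10, "affective": -0.10},
}

def emotion_deltas(appraisals: dict) -> dict:
    """Weighted sum of appraisal scores for each emotion dimension."""
    return {emotion: sum(w * appraisals.get(signal, 0.0)
                         for signal, w in weights.items())
            for emotion, weights in WEIGHTS.items()}

deltas = emotion_deltas({"adversarial": 0.85, "corrective": 0.74, "confirming": 0.0})
# frustration delta = 0.40*0.85 + 0.40*0.74 = 0.636 before scaling/clamping
```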
The architecture is inspired by Kahneman's System 1/System 2 distinction and dual-process theories of cognition:
- U-AI (System 1): Fast, automatic, emotion-driven appraisal
- C-AI (System 2): Slow, deliberate, verbally articulated response
This separation allows for:
- Rapid emotional responses independent of language generation
- Explicit control over emotional influences on behavior
- Emotion-behavior decoupling that prevents direct user manipulation of emotional state
- Testable predictions about emotion-behavior mappings
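Put together, one conversational turn flows through both processes in order: fast appraisal and state update first, then rule lookup and constrained generation. The sketch below uses simplified stand-in stubs (`appraise`, `update_state`, `constraints_for`, and a dummy `generate`) purely to show that ordering; none of these are the project's real functions.

```python
# Dual-process turn loop sketch. All functions are simplified stand-ins;
# only the U-AI -> C-AI ordering reflects the architecture.

def appraise(text: str) -> dict:
    # Stub: flag adversarial input by keyword only.
    return {"adversarial": 1.0 if "wrong" in text.lower() else 0.0}

def update_state(state: dict, scores: dict) -> dict:
    # Stub: adversarial input raises frustration, capped at 1.0.
    frustration = min(1.0, state["frustration"] + 0.4 * scores["adversarial"])
    return {**state, "frustration": frustration}

def constraints_for(state: dict) -> list:
    return ["Be direct, reduce elaboration"] if state["frustration"] > 0.5 else []

def turn(user_input: str, state: dict, generate) -> tuple:
    scores = appraise(user_input)          # U-AI: fast appraisal
    state = update_state(state, scores)    # U-AI: deltas applied to state
    rules = constraints_for(state)         # C-AI: behavioral rule lookup
    prompt = user_input + "\nConstraints: " + "; ".join(rules)
    return generate(prompt), state         # C-AI: response generation

reply, state = turn("You're completely wrong", {"frustration": 0.2},
                    generate=lambda p: p)  # echo stub in place of an LLM call
```

Because the constraints are computed before the LLM is called, the emotional state shapes the response without ever being exposed to (or directly editable by) the user's text.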
- Python 3.11 or higher
- Gemini API key (change the model in `config.json` to fit your preference)
```bash
# Clone repository
git clone https://github.com/ashwin549/congitive_model.git
cd congitive_model

# Install dependencies
pip install -r requirements.txt

# Set up API key
echo "GEMINI_API_KEY=your-key-here" > .env

# Run the system
streamlit run cognitive_interface.py
```

The UAI engine uses sentence embeddings to compute similarity between user input and predefined emotional anchors:
```python
# Anchor examples
anchors = {
    "adversarial": [
        "you are completely wrong",
        "this is useless",
        "you don't understand anything"
    ],
    "confirming": [
        "that's correct",
        "exactly right",
        "perfect explanation"
    ],
    # ... additional anchor categories ...
}

# For each user input:
# (embed() and similarity() come from the sentence-embedding backend)

# 1. Compute embedding
input_embedding = embed(user_input)

# 2. Compare to anchors
adversarial_score = max([
    similarity(input_embedding, embed(anchor))
    for anchor in anchors["adversarial"]
])

# 3. Combine scores → emotional updates
frustration_delta = (
    0.40 * adversarial_score +
    0.40 * corrective_score -
    0.35 * confirming_score
)
```

To add a new emotion (for example, joy):

- Update `uai_appraisal.py`:
```python
class EmotionalState:
    def __init__(self):
        self.frustration = 0.5
        self.curiosity = 0.5
        self.confidence = 0.7
        self.confusion = 0.0
        self.boredom = 0.2
        self.joy = 0.5  # NEW EMOTION
```

- Add an appraisal function:
```python
def appraise(self, input_text: str, state: EmotionalState):
    # ... existing code ...
    # Joy impulse (clamp() is assumed to bound values to [0, 1])
    joy_delta = 0.30 * affective_positive
    joy_delta -= 0.20 * adversarial
    state.joy = clamp(state.joy + joy_delta)
```

- Update behavioral rules in `cognitive_system.py`:
```python
if emotions.get("joy", 0) > 0.7:
    tone_parts.append("You're in a good mood - be upbeat and enthusiastic")
```

- Appraisal Theory: Scherer (2001), Marsella & Gratch (2009)
- Dual-Process Theory: Kahneman (2011), Evans & Stanovich (2013)
- Gemini API: Google DeepMind
- Streamlit: Open-source app framework
Q: Does the AI actually "feel" emotions?
A: No. The system simulates emotional responses through engineered rules and appraisal functions. Emotions are represented as numerical values that influence behavior, not subjective experiences.
Q: Can I use this commercially?
A: Yes, under MIT license. Attribution required.
Q: Why UAI mode vs AI mode?
A: UAI (embedding-based) is faster and free (no API calls). AI mode (LLM-based) is more context-aware but uses API quota. UAI recommended for research.
Q: Can I add my own emotions?
A: Yes! See Customization section.
Q: Does this work with other LLMs?
A: Architecture is model-agnostic. Currently uses Gemini API, but adaptable to OpenAI, Anthropic, etc. with minor modifications.
Q: Is training data included?
A: No. The system uses rule-based appraisal and prompt engineering; no ML training is involved.

