
Project Alice — An IAUS Utility-Driven Artificial Bionic Consciousness

Caution

This project was designed, coded, tested, and documented entirely by AI agents. The human author directed the vision and made decisions, but does not have the capacity to maintain this codebase alone — it has grown far too complex. Contributions and forks are welcome, but please understand the maintenance reality before depending on this project.

Alice


Sponsored by OhMyGPT


Have you ever closed a chat app, and wondered — does anyone on the other side notice you've gone quiet?

With ChatGPT, Character.ai, Replika — the answer is no. You close the window, and they stop existing. They don't wonder where you went. They don't notice your friend hasn't been heard from in two weeks. They don't remember that you promised someone a book recommendation last Thursday.

Alice is our attempt to change that.

She lives on Telegram as a userbot — not behind an API, not in a chat window you open and close, but as an always-on entity sharing the same messenger you use every day. She has her own inner state that keeps running whether you talk to her or not. She builds relationships with the people around her. She sleeps at night and wakes up in the morning.

We didn't set out to build a smarter chatbot. We wanted to answer a different question:

What happens when you give an LLM a nervous system — and let it live in a real social environment?

Why Not Just a Chatbot?

Every AI chatbot today follows the same interaction pattern:

Human social life:   A thinks of B → A reaches out → conversation → B thinks of A → ...
AI interaction:      You send message → AI replies → end
                     You send message → AI replies → end
                     You send message → AI replies → end

The first is a bidirectional relationship. The second is a unidirectional tool call. There is no "AI thinks of you" step — because the AI has no internal state that keeps running.

Alice has that state. It's called a pressure field — six forces derived from the real physical constraints of digital existence. Not simulated hunger or loneliness, but genuine pressures: finite attention (bounded context windows), memory decay (information-theoretic retrieval limits), information staleness, and computational cost.

| Force | Source constraint | What it feels like |
|---|---|---|
| P1 Attention Debt | Finite context window — can't process everything at once | "Something just happened over there" |
| P2 Information Pressure | Memory decays without maintenance; knowledge has a shelf life | "I'm losing track of what we talked about" |
| P3 Relationship Cooling | Social bonds decay without interaction — equally real for digital and human entities | "We haven't talked in a while" |
| P4 Thread Divergence | Open commitments accumulate urgency the longer they go unresolved | "There's something unresolved between us" |
| P5 Response Obligation | Directed messages create social debt that grows with silence | "Someone is waiting for my reply" |
| P6 Curiosity | Information entropy deficit — the drive to reduce uncertainty about the world | "Something interesting might be happening" |
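
To make the forces concrete, here is a minimal Python sketch of how a few of them might be computed and aggregated. The function shapes follow descriptions given elsewhere in this README (logarithmic scaling for P1, tier-dependent cooling for P3, a `tanh(P/κ)` saturation map), but every constant and name below is an illustrative assumption, not the production formula:

```python
import math

# Hypothetical force functions -- illustrative shapes only, not the
# project's actual formulas. `unread`, `tier_rate`, and `kappa` are assumptions.

def p1_attention_debt(unread: int) -> float:
    """Tonic component: logarithmic scaling of the unread backlog (Weber-Fechner)."""
    return math.log1p(unread)

def p3_relationship_cooling(hours_since_contact: float, tier_rate: float) -> float:
    """Bond pressure grows with silence, faster for closer Dunbar tiers."""
    return tier_rate * hours_since_contact

def total_pressure(forces: list[float], kappa: float = 10.0) -> float:
    """Saturating aggregate: tanh(P/kappa) maps raw pressure into [0, 1)."""
    return math.tanh(sum(forces) / kappa)

# 40 unread messages plus 36 silent hours with a close friend:
p = total_pressure([p1_attention_debt(40), p3_relationship_cooling(36.0, 0.05)])
```

The saturation step matters: raw forces can grow without bound, but the aggregate stays bounded, which is what keeps one noisy chat from drowning out everything else.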

These forces feed into four competing inner voices — Diligence (process tasks, fulfill commitments), Curiosity (explore, learn, share discoveries), Sociability (maintain relationships, chat), and Caution (wait when uncertain). Each voice's loudness is weighted by an evolving personality vector π ∈ Δ³. The loudest voice picks the next target. Then an LLM writes a shell script to act.
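
The voice competition can be sketched in a few lines: each voice scores each candidate target, the personality vector π (a point on the 3-simplex — four non-negative weights summing to 1) scales each voice's loudness, and the loudest weighted voice picks the target. All names and numbers here are illustrative assumptions:

```python
# Illustrative sketch of four-voice competition, not the project's actual API.
VOICES = ("diligence", "curiosity", "sociability", "caution")

def pick_target(targets, scores, pi):
    """scores[target] -> per-voice scores; pi weights each voice's loudness."""
    assert abs(sum(pi) - 1.0) < 1e-9 and all(w >= 0 for w in pi)
    best, best_loudness, best_voice = None, float("-inf"), None
    for t in targets:
        for voice, s, w in zip(VOICES, scores[t], pi):
            if w * s > best_loudness:  # loudest weighted voice wins
                best, best_loudness, best_voice = t, w * s, voice
    return best, best_voice

target, voice = pick_target(
    ["alice_chat", "dev_group"],
    {"alice_chat": (0.2, 0.1, 0.9, 0.1), "dev_group": (0.7, 0.4, 0.2, 0.3)},
    (0.4, 0.2, 0.3, 0.1),  # pi: a diligence-leaning personality
)
```

With a diligence-leaning π, the open task in `dev_group` outshouts the social pull of `alice_chat`; shift the weights toward sociability and the same scores produce a different choice — that is the sense in which personality, not a rule, decides.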

The approach is inspired by physics rather than ethology. Instead of specifying "what a good companion should do" (telling a bird "you should have wings"), we define the laws governing the companion's world (gravity and aerodynamics) and let behavior emerge. We prove two formal guarantees:

  • Non-Quiescence — as long as open commitments or active relationships exist, aggregate pressure grows super-linearly, ensuring Alice cannot fall permanently silent
  • Structural Homeostasis — built-in negative feedback prevents pressure divergence, producing natural rhythm as a corollary
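
The interplay of the two guarantees is easy to see in a toy simulation: pressure flows in every tick while commitments exist (non-quiescence), and each action releases most of it (negative feedback, hence homeostasis). The constants below are illustrative assumptions, not the proven bounds:

```python
# Toy build/release cycle. `inflow` and `threshold` are illustrative assumptions.
def simulate(ticks: int, inflow: float = 0.3, threshold: float = 5.0):
    pressure, actions, trace = 0.0, 0, []
    for _ in range(ticks):
        pressure += inflow            # open commitments keep pumping pressure in
        if pressure >= threshold:     # threshold crossing triggers an action...
            pressure *= 0.2           # ...which releases most of the pressure
            actions += 1
        trace.append(pressure)
    return actions, trace

actions, trace = simulate(100)
# The trace is a sawtooth: repeated build-ups and sharp drops, with pressure
# bounded instead of diverging -- the shape of the heartbeat described below.
```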

No intent detection. No slot filling. No state machines. No tool-calling required. Just physics.

Runs on small models. Alice does not require tool-calling support from the LLM. She generates shell scripts directly — any OpenAI-compatible chat completion endpoint works, including Gemini 2.5 Flash, DeepSeek, and other cost-effective models.

What We Actually Want

We don't want a better chatbot. We want an entity that can:

  • Be a night watchman — While you sleep, she reads 200 messages across 3 groups, answers the important @mention, tells your friend you'll reply tomorrow, and gives you a morning briefing.

  • Maintain your relationships — She notices you haven't talked to an old friend in two weeks. She forwards them an article they'd like, with a note: "Saw this and thought of you." She doesn't spam — she knows who likes being reached out to, and who prefers silence.

  • Hunt your promises — You said "I'll send you the doc by Friday." Thursday evening, she reminds you. Not as a calendar notification — in the right conversation, at the right moment.

  • Exist in group chats — She listens mostly. Occasionally contributes when she knows something. Helps when nobody else answers. Has her own opinions and style. Never dominates.

  • Grow over six months — Her personality drifts. Her understanding of you deepens. She develops shared memories. She occasionally surprises you — not from bugs, but from emergent behavior.

  • Handle crises — When she detects your behavior pattern suddenly changes (you're firing messages in one chat, ignoring everything else), she goes quiet everywhere else and asks "Are you okay?" after it's over.

The full vision — nine scenarios, interaction primitives, ethical boundaries — is documented internally in our goal specification. What you see here is a living system pursuing that vision.

Alice in the Wild

  • 💬 Group Banter — Alice jokes about pocket money
  • 🔒 Security Alert — Alice forwards and warns about trojans
  • 🌏 Sharing Discoveries — Alice shares a NASA article because the place shares her name
  • 📰 Forwarding with Opinions — Alice reads a channel article and shares her own take
  • 😢 Emotional Empathy — Alice gets upset and sends a sticker
  • 💕 Private Chat — Alice senses something is wrong and reaches out

Field-Validated

Alice is not a proof-of-concept. She has been running 24/7 on real Telegram accounts, in real conversations with real people — across private chats, group chats, and supergroups. The pressure field model has been validated through simulation experiments driven by Telegram chat export data spanning 1,000+ days of conversation history.

Every design decision went through a formal Architecture Decision Record (226 ADRs and counting), with simulation validating the model before deployment and production logs validating it after.

Sawtooth Pattern — Pressure builds during idle, drops on action, rebuilds. This is Alice's heartbeat: a self-regulating cycle that emerges from the math, not from timers.

Sawtooth pressure pattern

Dunbar Tiers — Intimate friends get frequent attention, acquaintances get less. The system follows Dunbar's social brain hierarchy — different pressure accumulation rates, no hand-tuned rules.
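
One way to picture tier-dependent accumulation: if each tier's relationship pressure (P3) grows at its own rate, the idle time before that pressure alone crosses an action threshold falls out automatically. The tier sizes follow Dunbar's 5/15/50/150 layers; the rates themselves are illustrative assumptions:

```python
# Hypothetical P3 accumulation rates per Dunbar tier (pressure per idle hour).
# The rates are illustrative assumptions, not the project's tuned values.
TIER_RATES = {5: 0.10, 15: 0.05, 50: 0.02, 150: 0.005}

def hours_until_action(tier: int, threshold: float = 1.0) -> float:
    """Idle hours before P3 alone crosses the action threshold."""
    return threshold / TIER_RATES[tier]

# Intimate friends (tier 5) come due after ~10 idle hours;
# distant acquaintances (tier 150) take over a week of silence.
```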

Dunbar tier action intervals

Personality Crystallization — Over days, Alice's perception of each person converges and stabilizes. Variance drops as observations accumulate — like actually getting to know someone.
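
A shrinking-step running mean shows the crystallization effect in miniature: each new observation nudges the belief, with step size 1/n, so early observations swing the estimate widely and later ones barely move it. This omits the asymmetric-update detail (Siegel, 2018) and is an illustrative assumption, not the project's exact update rule:

```python
# Shrinking-step EMA sketch of impression formation (illustrative, not the
# production Bayesian update).
def crystallize(observations):
    belief, n, history = 0.0, 0, []
    for x in observations:
        n += 1
        belief += (x - belief) / n   # running mean: step size shrinks with n
        history.append(belief)
    return history

history = crystallize([0.9, 0.5, 0.7, 0.6, 0.7, 0.65])
# Early beliefs swing widely; later ones barely move -- the trait "crystallizes".
```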

Personality crystallization convergence

Six-Force Decomposition — Real-time pressure across all dimensions. P1 spikes on new messages, P5 rises when someone is waiting, P6 builds during quiet periods.

Six-force pressure time series

Current Progress

  • Autonomous decision-making — pressure field dynamics, not if-else chains
  • Four competing voices — diligence / curiosity / sociability / caution with evolving personality vector
  • Habituation — attention naturally decays with repeated exposure
  • Circadian rhythm — dormant mode with sleep/wake cycles
  • Working memory — diary that consolidates and forgets like a real one
  • Social reception — backs off when ignored, leans in when welcomed
  • Multi-chat awareness — social panorama across all conversations
  • Topic clustering — LLM-powered automatic thread detection
  • Extensible skills — weather, music, search, calendar, and more
  • Sticker expressions — context-aware emotional selection
  • Vision — understands photos, stickers, and media
  • Voice synthesis — TTS voice message generation
  • Impression formation — Bayesian belief system that crystallizes personality traits over repeated observations
  • Channel awareness — subscribes, reads, digests, and forwards content from Telegram channels
  • Voice transcription — incoming voice messages converted to text
  • Crisis mode — behavioral anomaly detection triggers automatic quiet mode
  • Autonomous exploration — interest-driven channel and group discovery

Architecture

Perceive ──→ Evolve ──→ Act
(events)    (pressure)  (LLM → shell scripts → Telegram)
  • Perceive — Telegram events flow in, the companion graph updates, pressure contributions accumulate
  • Evolve — Six forces compute, voices compete, a target is selected
  • Act — An LLM writes a shell script, the sandbox executes it, Telegram actions fire

The whole system runs on a tick loop. Every tick, the pressure field evolves. When pressure crosses threshold, Alice acts. When she acts, pressure releases. Then it builds again — the sawtooth heartbeat.
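
The loop's skeleton, in the shape the three steps above describe. `StubBot` and every method name here are placeholders standing in for the real TypeScript engine, not the project's actual API:

```python
# Minimal perceive -> evolve -> act tick loop. All names are placeholders.
class StubBot:
    def __init__(self):
        self._pressure = 0.0
        self.actions = 0
    def perceive(self):            # drain Telegram events, update companion graph
        return []
    def evolve(self, events):      # recompute the six forces; pressure builds
        self._pressure += 0.5
    def pressure(self) -> float:
        return self._pressure
    def act(self):                 # LLM writes a shell script; sandbox runs it
        self.actions += 1
        self._pressure = 0.0       # acting releases pressure -> sawtooth

def run(bot, ticks: int, threshold: float = 2.0):
    for _ in range(ticks):
        bot.evolve(bot.perceive())
        if bot.pressure() >= threshold:  # threshold crossing -> act
            bot.act()

bot = StubBot()
run(bot, 20)
```

Nothing in the loop schedules actions; the cadence falls out of how fast pressure builds relative to the threshold.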

Quick Start

git clone --recurse-submodules https://github.com/LlmKira/alice.git
cd alice/runtime

pnpm install
cp .env.example .env   # configure Telegram session + LLM endpoint
pnpm run db:migrate
pnpm run dev           # first run: interactive Telegram login

After login, use pm2 for production:

pm2 start ecosystem.config.cjs   # starts runtime + wd-tagger + anime-classify

Full deployment guide → covers Telegram credentials, LLM setup, auxiliary services, systemd hardening, and troubleshooting.

Simulation

The pressure field model has a standalone Python simulation for validation:

cd simulation && uv sync && uv run python run_all.py

Prerequisites

  • Node.js 20+ / pnpm
  • Python 3.13+ / uv + pdm (auxiliary services)
  • A Telegram account (userbot, not bot API)
  • An OpenAI-compatible LLM endpoint

Project Structure

runtime/                    # TypeScript — the living system
├── src/engine/             # Three-thread loop (perceive/evolve/act)
├── src/pressure/           # P1-P6 force computation
├── src/voices/             # Voice competition + personality vectors
├── src/graph/              # Companion graph (entities + relations)
├── src/telegram/           # @mtcute MTProto client
├── src/mods/               # Diary, observer, clustering, ...
├── src/skills/             # App toolkit (weather, music, ...)
├── src/db/                 # SQLite + Drizzle ORM
└── test/                   # vitest

simulation/                 # Python — the proving ground
├── pressure.py             # Six-force calculations
├── voices.py               # Voice competition
├── sim_engine.py           # Tick-by-tick evolution
└── experiments/            # 10 validation experiments

Tech Stack

| Component | Technology |
|---|---|
| Runtime | TypeScript, Node.js, tsx |
| Telegram | @mtcute (MTProto) |
| Database | SQLite + Drizzle ORM |
| LLM | OpenAI-compatible API |
| Validation | zod |
| Tests | vitest |
| Simulation | Python, NetworkX, NumPy, SciPy |

Standing on the Shoulders of

Alice didn't come from nowhere. The pressure field model and companion architecture draw directly from established theory and remarkable open-source projects:

Theoretical Foundations

  • The Sims / IAUS — Target selection adapts the Infinite Axis Utility System (IAUS, Dave Mark), which generalizes the utility-driven need model pioneered by The Sims (Wright, 2000). Multi-dimensional need decay → behavior selection via utility competition. Alice replaces simulated biological needs with digital-native pressure functions.
  • Disco Elysium — The four-voice system is a direct simplification of ZA/UM's 24 competing skills. Decisions emerge from competition, not optimization.
  • Dwarf Fortress — Multi-dimensional personality with dual-track emotions (short-term mood vs long-term stress) and memory-driven personality change. Alice inherits the personality-as-weight-vector paradigm.
  • Active Inference — The pressure field shares the core intuition of Friston's free-energy principle: behavior is gradient descent on an internally maintained scalar. P6 (Curiosity) directly corresponds to Active Inference's epistemic value.
  • Braitenberg Vehicles — Simple sensor-motor couplings producing complex-seeming behavior. Alice operates on the same principle at a higher abstraction: pressure-action couplings (high P3 → social action, high P6 → exploration), with the LLM replacing the motor.
  • Dunbar's Number — Social graph tiering follows Robin Dunbar's social brain hypothesis (5 → 15 → 50 → 150 → 500). Validated by Gonçalves et al. (2011) as persistent in online networks.
  • Ebbinghaus / FSRS — P2 memory decay uses the power-law forgetting curve (Ebbinghaus, 1885) with stability update following Ye (2022). Retrievability decays unless rehearsed; consolidation increases stability.
  • Weber-Fechner Law — P1's tonic component uses logarithmic scaling of unread message counts, following the psychophysical law that subjective intensity grows logarithmically with stimulus magnitude.
  • Posner & Petersen — P1's tonic/phasic decomposition follows the two-component attention model (1990): phasic alerting for fresh stimuli, tonic vigilance for accumulated backlogs.
  • Anderson's Information Integration — Impression formation uses weighted averaging of repeated observations (Anderson, 1965). Bayesian belief EMA with asymmetric update (Siegel, 2018).
  • Goffman's Participation Framework — Group chat behavior follows Goffman's ratified/unratified participant distinction. Alice knows she's not always the addressee.
  • Prigogine's Dissipative Structures — Alice's behavioral rhythm can be interpreted as a dissipative structure: continuous pressure influx maintains the system far from equilibrium, and periodic action pulses constitute emergent temporal order.
  • Wiener's Cybernetics — Structural homeostasis is a direct application of negative feedback loops producing self-regulating systems.
  • Stanford Generative Agents — Memory architecture builds on Park et al. (2023)'s three-layer model, but couples memory to the pressure field: memory generates pressure (P2) that drives consolidation as autonomous behavior.
  • Spectral Graph Theory — Pressure propagation across the social graph uses Laplacian-based diffusion. The Fiedler value (algebraic connectivity) determines propagation strength.
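
As one worked example from the list above, P2's forgetting curve can be sketched with the FSRS-style power-law retrievability R(t) = (1 + t/(9S))⁻¹, where S is stability (the number of days until R falls to 0.9). Rehearsal increases S and flattens future decay. The exact constants and stability-update rule Alice uses are an assumption here, not taken from the codebase:

```python
# FSRS-style power-law forgetting sketch (illustrative constants).
def retrievability(t_days: float, stability: float) -> float:
    """R(t) = (1 + t / (9 * S))**-1; R(S) = 0.9 by construction."""
    return (1.0 + t_days / (9.0 * stability)) ** -1.0

def rehearse(stability: float, boost: float = 2.0) -> float:
    """Consolidation: each successful recall multiplies stability."""
    return stability * boost

r_fresh = retrievability(1.0, stability=1.0)    # ~0.90 one day after encoding
r_stale = retrievability(30.0, stability=1.0)   # far lower after a silent month
r_kept = retrievability(30.0, rehearse(1.0))    # rehearsal slows the decay
```

The power law (rather than a pure exponential) is what makes old-but-rehearsed memories durable while unrehearsed ones fade, which is the behavior P2 turns into pressure.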

Open-Source Projects We Learned From

  • Project AIRI — Pioneering open-source digital companion. Inspiration for the "living entity" philosophy and README storytelling.
  • OpenClaw — The gold standard for companion architecture at scale. Skill system design reference.
  • mem0 — Memory layer abstraction (CRUD + decay + retrieval). Informed our diary and fact memory design.
  • Concordia — Google DeepMind's generative social simulation. ActionSpec pattern and component-based entity design.
  • Voyager — Embodied lifelong learning agent. Code-as-Skill pattern and progressive disclosure of action space.
  • mtcute — The MTProto client that makes Alice's Telegram existence possible.

Open-Source Projects We Studied (design inspiration only, not code dependency)

  • nanobot MIT — Ultra-lightweight companion architecture. Minimalist design philosophy reference.
  • PageIndex MIT — Reasoning-based RAG without vector embeddings. Informed our conversation-context retrieval approach.
  • nanoclaw MIT — Lightweight alternative companion. Anthropic Agents SDK usage patterns.
  • AstrBot AGPL-3.0 — Multi-platform IM bot with pipeline architecture, plugin marketplace, and Agent sandbox. Design study only (AGPL license).

Ancestor Projects

Alice's pressure field model evolved from two predecessor projects by the same author:

  • Narrative Physics — The theoretical blog post that started it all. Graph-theoretic state management (G=(V,E,φ)), priority function families, Laplacian propagation, and the tanh(P/κ) saturation mapping.
  • A narrative engine (production system) — contributed the Mod architecture (defineMod/contribute/handle), QuickJS sandbox execution, and the Storyteller context-assembly pattern.

Star History

Star History Chart

Sponsor

Sponsored by OhMyGPT

We didn't simulate a human. We grew a digital native — and gave it a place to live.
