Clean-slate Rust firmware for the M5Stack CoreS3 Stack-chan — no_std, embassy, no cloud.
```sh
cargo install espup && espup install
source ~/export-esp.sh
just fmr   # flash + monitor over USB-Serial-JTAG
```

Needs a CoreS3 Stack-chan kit, a USB-C cable, Rust 1.88+, and `dialout` group membership for serial access. See the `justfile` for the full recipe set (host tests, CI gates, sensor bench examples).
M5Stack ships Stack-chan with an xiaozhi firmware stack: a Chinese LLM-agent pipeline with cloud dependencies, a questionable security posture, and a C++ codebase that's hard to audit. stackchan-kai rebuilds just the local desk-toy surface — animated face, head motion, local sensors — in `no_std` Rust on top of `esp-hal` and `embassy`. The engine is modeled as data and the render path is shared with a host-side simulator, so most of the firmware is testable without touching the hardware.
`stackchan-core` models the avatar as data: an `Entity` (face, motor, perception, voice, mind, events, input, tick) plus a `Director` that sorts `Modifier`s by phase and ticks them each frame.
```rust
use stackchan_core::{Director, Entity, Instant, modifiers::Blink};

let mut entity = Entity::default();
let mut blink = Blink::new();
let mut director = Director::new();
director.add_modifier(&mut blink).expect("registry has room");

// Drive ten simulated seconds at ~30 FPS.
for ms in (0..10_000).step_by(33) {
    director.run(&mut entity, Instant::from_millis(ms));
}
```

Each `Modifier` declares a phase (`Affect`, `Expression`, `Motion`, `Audio`) and a priority; the `Director` sorts once and ticks per frame.
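To make the sort-once, tick-per-frame ordering concrete, here is a minimal self-contained sketch of a phase-sorted director. All types below are defined locally for illustration — this is not the actual `stackchan-core` API, whose signatures differ.

```rust
// Phases run in declared order; within a phase, lower priority ticks first.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Phase {
    Affect,
    Expression,
    Motion,
    Audio,
}

trait Modifier {
    fn phase(&self) -> Phase;
    fn priority(&self) -> u8;
    fn tick(&mut self, log: &mut Vec<&'static str>);
}

/// Toy modifier that just records its name when ticked.
struct Named(&'static str, Phase, u8);

impl Modifier for Named {
    fn phase(&self) -> Phase { self.1 }
    fn priority(&self) -> u8 { self.2 }
    fn tick(&mut self, log: &mut Vec<&'static str>) { log.push(self.0); }
}

struct Director {
    modifiers: Vec<Box<dyn Modifier>>,
    sorted: bool,
}

impl Director {
    fn new() -> Self { Self { modifiers: Vec::new(), sorted: false } }

    fn add(&mut self, m: Box<dyn Modifier>) {
        self.modifiers.push(m);
        self.sorted = false; // re-sort lazily on the next run
    }

    /// Sort once (phase first, then priority), then tick every modifier.
    fn run(&mut self, log: &mut Vec<&'static str>) {
        if !self.sorted {
            self.modifiers.sort_by_key(|m| (m.phase(), m.priority()));
            self.sorted = true;
        }
        for m in &mut self.modifiers {
            m.tick(log);
        }
    }
}

fn main() {
    let mut d = Director::new();
    d.add(Box::new(Named("mouth", Phase::Audio, 0)));
    d.add(Box::new(Named("blink", Phase::Expression, 1)));
    d.add(Box::new(Named("mood", Phase::Affect, 0)));
    let mut log = Vec::new();
    d.run(&mut log);
    println!("{:?}", log); // ["mood", "blink", "mouth"]
}
```

Registration order does not matter: regardless of when `mouth` was added, the `Affect` modifier runs first and the `Audio` one last.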
Stock modifiers cover blinking, breathing, idle eye drift, occasional head glances, emotion transitions, touch / IR / voice / ambient / battery reactions, attention-driven head tilt, and audio-driven mouth motion. A parallel `Skill` surface carries predicate-fired capabilities. See the architecture overview and the modifier authoring guide for details.
- Animated face — eased transitions across the m5stack-avatar emotion palette, blink / breath / idle-drift at double-buffered 30 FPS
- Head motion — Feetech SCServo pan/tilt with a calibration bench (`just bench`)
- 9-axis sensing — BMI270 accel + gyro, BMM150 magnetometer (compensated µT, live bench via `just mag-bench`)
- Local inputs — FT6336U touch, Si12T body-touch strip, LTR-553 ambient + proximity, NEC IR decoder
- Timekeeping + peripherals — BM8563 RTC, PY32 co-processor, WS2812 neck LED ring (`just leds-bench`)
- Camera tracking — GC0308 capture into a block-grid motion tracker, engagement-driven gaze with microsaccades and lost-target search
- Host-side sim — runs the full modifier stack on the host with pixel-golden tests + an `egui` visualiser (`cargo run -p stackchan-sim --bin viz --features viz`); cuts behaviour iteration from ~30 s build cycles to under a second
- Safe by default — no `unwrap` in library code, typed errors throughout, `unsafe` denied workspace-wide
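To illustrate the block-grid idea behind the camera tracker, here is a toy version: split a grayscale frame into fixed-size tiles, compare each tile's mean luminance against the previous frame, and flag tiles that changed beyond a threshold. The dimensions and threshold are arbitrary and this is not the firmware's implementation.

```rust
const W: usize = 16;
const H: usize = 16;
const BLOCK: usize = 4; // 4x4 pixel tiles -> a 4x4 grid

/// Mean luminance of each BLOCK x BLOCK tile, row-major.
fn block_means(frame: &[u8; W * H]) -> Vec<u16> {
    let mut means = Vec::new();
    for by in (0..H).step_by(BLOCK) {
        for bx in (0..W).step_by(BLOCK) {
            let mut sum: u32 = 0;
            for y in by..by + BLOCK {
                for x in bx..bx + BLOCK {
                    sum += frame[y * W + x] as u32;
                }
            }
            means.push((sum / (BLOCK * BLOCK) as u32) as u16);
        }
    }
    means
}

/// Indices of grid cells whose mean changed by more than `threshold`.
fn moved_blocks(prev: &[u16], cur: &[u16], threshold: u16) -> Vec<usize> {
    (0..prev.len().min(cur.len()))
        .filter(|&i| prev[i].abs_diff(cur[i]) > threshold)
        .collect()
}

fn main() {
    let dark = [0u8; W * H];
    let mut lit = dark;
    // Light up the top-left tile to simulate motion there.
    for y in 0..BLOCK {
        for x in 0..BLOCK {
            lit[y * W + x] = 200;
        }
    }
    let moved = moved_blocks(&block_means(&dark), &block_means(&lit), 10);
    println!("{:?}", moved); // [0]
}
```

Working on per-block means rather than per-pixel differences keeps both the memory footprint and the per-frame cost tiny, which is what makes this kind of tracker practical on a microcontroller.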
A `STACKCHAN.RON` file on an SD card brings up Wi-Fi station mode, mDNS, and SNTP on link-up, and the firmware exposes a LAN-only HTTP control plane:
- `GET /` — operator dashboard, single-page HTML embedded in the binary
- `GET /state/stream` — live state via Server-Sent Events
- `POST /emotion`, `/look-at`, `/reset`, `/speak` — manual override
- `POST /volume`, `/mute` — persistent audio control (atomic SD writeback)
- `GET`/`PUT /settings` — persistent config with atomic SD writeback
- Bearer-token auth on writes; constant-time compare; LAN-scoped (no TLS)
Without an SD card the firmware boots offline and the desk-toy surface works the same. See HTTP control plane for the full reference.
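The constant-time compare used for bearer-token checks is a standard pattern: XOR-accumulate every byte pair so the running time does not depend on where the first mismatch occurs. A sketch of the idea (illustrative, not the firmware's code):

```rust
/// Compare two byte strings without an early exit on the first mismatch.
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        return false; // leaking the length is acceptable for fixed-length tokens
    }
    let mut diff: u8 = 0;
    for (x, y) in a.iter().zip(b) {
        diff |= x ^ y; // any differing bit sticks in `diff`
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret-token", b"secret-token"));
    assert!(!ct_eq(b"secret-token", b"secret-tokem"));
    assert!(!ct_eq(b"short", b"longer-token"));
    println!("ok");
}
```

A naive `a == b` slice comparison may return as soon as one byte differs, which lets an attacker probe the token byte by byte from response timing; the loop above always touches every byte.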
- No voice agent or LLM. This is not a xiaozhi replacement.
- No cloud APIs or telemetry.
- No C/C++ in the firmware binary. Drivers are written directly against datasheets.
- Not an M5Unified port. Only the desk-toy surface area is covered.
Licensed under either of
at your option.