A wall-mounted display in the hallway that shows a painting of the room you're currently in. The painting changes when you change rooms. When both inhabitants are in the same room, the painting changes again to show them both there together. When they're in different rooms, the screen splits, and you see two paintings — one of each.
The quietest piece in the studio. It does not announce itself. It is also the piece visitors notice first.
Companion to haroldathome.com/emanations. MIT (code) + CC-BY-NC 4.0 (docs/images).
- Is: a Python generation pipeline for creating per-room paintings via gpt-image-2, plus a small WebSocket-driven viewer that swaps them based on (`committed_room`, who's-home) state. Three demo paintings are bundled to verify the wiring on day one.
- Is NOT: a complete production application. You bring your own paintings (generated from your own reference photos with your own room descriptions) and your own `committed_room` sensors (from `alfiedennen/presence-paradigm` or equivalent)
- `generate/generate.py` — Python pipeline calling OpenAI gpt-image-2 with reference photos. Customisable per-room prompts (axonometric Hopper × Monument Valley default style)
- `viewer/stage.js` + `viewer/index.html` — front-end painting renderer. Three viewport states (together / apart / one-home), portrait + landscape adaptive, configurable per deployment
- `rooms/{office,library-person-b,kitchen-both}.png` — three vetted demo paintings, the same ones on haroldathome.com. Drop-in: works immediately
- `docs/` — `why-this` (BERG Immaterials lineage + the philosophical argument), `generate` (full recipe for your own painting set), `deploy-viewer` (deployment recipe), `tune` (per-knob tuning), `known-limits`
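The three viewport states can be sketched as a small decision over the two `committed_room` values. This is an illustrative Python sketch only — the real logic lives in `viewer/stage.js`, and the function and state names here are assumptions, not the viewer's actual API. The sketch assumes at least one person is home.

```python
# Hypothetical sketch of the together / apart / one-home decision.
# room_a / room_b are committed_room sensor values, or None when
# that person is away. Returns (state, paintings-to-show).
def resolve_state(room_a, room_b):
    if room_a is not None and room_b is not None:
        if room_a == room_b:
            return ("together", [room_a])        # one painting of both
        return ("apart", [room_a, room_b])       # split screen, two paintings
    home = room_a if room_a is not None else room_b
    return ("one-home", [home])                  # one full-bleed painting
```

The viewer then maps each returned room to a painting file and lays the result out for the current viewport orientation.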
- Home Assistant 2024.6+ with per-person `committed_room` sensors (see `alfiedennen/presence-paradigm`)
- Python 3.11+ (for the generation pipeline)
- An OpenAI API key with access to gpt-image-2 (~$2 for a typical 17-painting set)
- Reference photos: one per inhabitant + one per room
- A wall-mounted display capable of running a web browser (Fire HD 10, Pixel 6, Pi + monitor, anything)
git clone https://github.com/alfiedennen/emanations.git
cd emanations
# Verify the viewer renders with the bundled demos
ssh root@homeassistant.local "mkdir -p /config/www/emanations/rooms"
scp viewer/index.html viewer/stage.js root@homeassistant.local:/config/www/emanations/
scp rooms/*.png root@homeassistant.local:/config/www/emanations/rooms/
scp viewer/viewer-config.example.js root@homeassistant.local:/config/www/emanations/viewer-config.js
# Edit viewer-config.js for your HA entity names
# Visit http://homeassistant.local:8123/local/emanations/
# Verify the demo paintings render
# Then generate your own set
cd generate
pip install -r requirements.txt
echo "OPENAI_API_KEY=sk-..." > .env
# Provide your reference photos in refs/ — see refs/README.md
# Edit ROOM_DESC + PROMPTS in generate.py for YOUR rooms
python generate.py
# Deploy your generated set
scp ../rooms/*.png root@homeassistant.local:/config/www/emanations/rooms/
# Bump CACHE_BUST in stage.js, redeploy stage.js, reload viewer

Full step-by-step in docs/generate.md and docs/deploy-viewer.md.
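Before editing `ROOM_DESC` and running the pipeline, it can help to enumerate the files your set needs. The demo names (`office.png`, `library-person-b.png`, `kitchen-both.png`) suggest a per-room pattern; the helper below is a hedged sketch of that enumeration, and both the helper name and the exact suffix convention are assumptions — confirm the real convention in docs/generate.md.

```python
# Hypothetical helper: list the painting files a two-person household
# needs per room, following the pattern visible in the bundled demos
# ({room}.png, {room}-person-b.png, {room}-both.png).
def target_paintings(rooms):
    targets = []
    for room in rooms:
        targets.append(f"{room}.png")            # person A alone
        targets.append(f"{room}-person-b.png")   # person B alone
        targets.append(f"{room}-both.png")       # both together
    return targets
```

For example, `target_paintings(["office", "kitchen"])` yields six filenames, which is a quick sanity check that your room list and your generated output agree before you scp anything.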
For the philosophical argument — the BERG Immaterials lineage, the Hopper × Monument Valley aesthetic decision, the "quietest piece in the studio" framing — see docs/why-this.md.
For the technical mechanics — the four HA entities subscribed to, the three viewport states, the painting-naming convention — see viewer/README.md.
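As a rough illustration of the wire format the viewer relies on: Home Assistant's WebSocket API takes an auth frame followed by a `subscribe_entities` request. The sketch below only builds those message payloads (it does not open a socket), and every entity ID in it is a placeholder — use the names from your own `viewer-config.js`.

```python
# Sketch of the Home Assistant WebSocket API payloads a viewer like
# stage.js would send. Entity IDs below are placeholders.
import json

def auth_message(access_token):
    # First frame after the server's auth_required greeting
    return {"type": "auth", "access_token": access_token}

def subscribe_message(msg_id, entity_ids):
    # subscribe_entities streams state changes for just these entities
    return {"id": msg_id, "type": "subscribe_entities",
            "entity_ids": entity_ids}

frames = [
    auth_message("LONG_LIVED_TOKEN"),
    subscribe_message(1, [
        "sensor.person_a_committed_room",   # placeholder entity IDs
        "sensor.person_b_committed_room",
        "person.person_a",
        "person.person_b",
    ]),
]
wire = [json.dumps(f) for f in frames]  # JSON frames sent over the socket
```

A long-lived access token (created under your HA user profile) is the usual credential for a kiosk-style display like this.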
emanations/
├── README.md this file
├── LICENSE / LICENSE-CONTENT / REDACTION.md / CHANGELOG.md / .gitignore / .githooks/
├── generate/
│ ├── README.md
│ ├── generate.py the gpt-image-2 generation pipeline
│ ├── prompts/
│ │ └── room-prompt.template.txt annotated prompt template
│ ├── refs/
│ │ └── README.md what photos to provide (gitignored)
│ └── requirements.txt
├── viewer/
│ ├── README.md
│ ├── stage.js the renderer (~430 lines)
│ ├── index.html minimal host page
│ └── viewer-config.example.js per-deployment config template
├── rooms/ 3 vetted demo paintings
│ ├── README.md
│ ├── office.png
│ ├── library-person-b.png
│ └── kitchen-both.png
├── docs/
│ ├── why-this.md the lineage
│ ├── generate.md end-to-end generation recipe
│ ├── deploy-viewer.md deployment recipe
│ ├── tune.md per-knob tuning
│ └── known-limits.md
└── credits/
└── haroldathome.md
- haroldathome.com/emanations — narrative
- alfiedennen/presence-paradigm — committed_room sensors this viewer subscribes to
- alfiedennen/wall-display-kit — OLED wall display hardening (pixel-shift, brightness curve, charge cycling) — pair with this for the actual wall device
- alfiedennen/the-long-take — sibling visualisation also driven by household state
- BERG London's Immaterials series (the lineage)
| Scope | Licence |
|---|---|
| Code (`generate/`, `viewer/`, `.githooks/`) | MIT — see LICENSE |
| Documentation, prose | CC-BY-NC 4.0 — see LICENSE-CONTENT |
| Bundled demo paintings (`rooms/*.png`) | CC-BY-NC 4.0 — see LICENSE-CONTENT. Depict the haroldathome reference household, used with consent |
