emanations

Emanations — a hallway-mounted display showing an axonometric painting of one of the rooms in the house, swapped live by who's home

A wall-mounted display in the hallway that shows a painting of the room you're currently in. The painting changes when you change rooms. When both inhabitants are in the same room, the painting changes again to show them both there together. When they're in different rooms, the screen splits, and you see two paintings — one of each.

The quietest piece in the studio. It does not announce itself. It is also the piece visitors notice first.

Companion to haroldathome.com/emanations. MIT (code) + CC-BY-NC 4.0 (docs/images).


What this is, what this isn't

  • Is: a Python generation pipeline for creating per-room paintings via gpt-image-2, plus a small WebSocket-driven viewer that swaps them based on (committed_room, who's home) state, and three bundled demo paintings to verify the wiring on day one.
  • Is not: a complete production application. You bring your own paintings (generated from your own reference photos and room descriptions) and your own committed_room sensors (from alfiedennen/presence-paradigm or equivalent).
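The (committed_room, who's home) → viewport mapping can be sketched as follows. This is an illustrative Python function, not the actual logic in viewer/stage.js; the "idle" case for an empty house is an assumption on top of the three documented states:

```python
def viewport_state(room_a, room_b):
    """Decide which viewport state to render.

    room_a / room_b are the committed_room sensor values for the two
    inhabitants, or None when that person is away. Names and the "idle"
    fallback are illustrative; the real logic lives in viewer/stage.js.
    """
    home = [r for r in (room_a, room_b) if r is not None]
    if not home:
        return ("idle", [])            # nobody home: nothing to paint (assumed)
    if len(home) == 1:
        return ("one-home", home)      # single full-screen painting
    if room_a == room_b:
        return ("together", [room_a])  # one painting showing both inhabitants
    return ("apart", home)             # split screen, one painting per person
```

The three named states correspond directly to the viewer's together / apart / one-home viewport modes.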

What you get

  • generate/generate.py — Python pipeline calling OpenAI gpt-image-2 with reference photos. Customisable per-room prompts (axonometric Hopper × Monument Valley default style)
  • viewer/stage.js + viewer/index.html — front-end painting renderer. Three viewport states (together / apart / one-home), portrait + landscape adaptive, configurable per-deployment
  • rooms/{office,library-person-b,kitchen-both}.png — three vetted demo paintings, the same ones on haroldathome.com. Drop them in and the viewer works immediately
  • docs/why-this.md (the BERG Immaterials lineage and the philosophical argument), docs/generate.md (full recipe for your own painting set), docs/deploy-viewer.md (deployment recipe), docs/tune.md (per-knob tuning), docs/known-limits.md

What you need

  • Home Assistant 2024.6+ with per-person committed_room sensors (see alfiedennen/presence-paradigm)
  • Python 3.11+ (for the generation pipeline)
  • An OpenAI API key with access to gpt-image-2 (~$2 for a typical 17-painting set)
  • Reference photos: one per inhabitant + one per room
  • A wall-mounted display capable of running a web browser (Fire HD 10, Pixel 6, Pi + monitor, anything)

Quickstart

git clone https://github.com/alfiedennen/emanations.git
cd emanations

# Verify the viewer renders with the bundled demos
ssh root@homeassistant.local "mkdir -p /config/www/emanations/rooms"
scp viewer/index.html viewer/stage.js root@homeassistant.local:/config/www/emanations/
scp rooms/*.png root@homeassistant.local:/config/www/emanations/rooms/
scp viewer/viewer-config.example.js root@homeassistant.local:/config/www/emanations/viewer-config.js
# Edit viewer-config.js for your HA entity names

# Visit http://homeassistant.local:8123/local/emanations/
# Verify the demo paintings render

# Then generate your own set
cd generate
pip install -r requirements.txt
echo "OPENAI_API_KEY=sk-..." > .env
# Provide your reference photos in refs/ — see refs/README.md
# Edit ROOM_DESC + PROMPTS in generate.py for YOUR rooms
python generate.py

# Deploy your generated set
scp ../rooms/*.png root@homeassistant.local:/config/www/emanations/rooms/
# Bump CACHE_BUST in stage.js, redeploy stage.js, reload viewer

Full step-by-step in docs/generate.md and docs/deploy-viewer.md.
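The quickstart asks you to edit ROOM_DESC and PROMPTS in generate.py. A hypothetical sketch of how such a per-room prompt might be assembled (the actual variable structure, style string, and room descriptions are defined in generate/generate.py and your own config):

```python
# Illustrative only: the real ROOM_DESC / PROMPTS structure lives in
# generate/generate.py. Style wording here is a guess at the documented
# "axonometric Hopper x Monument Valley" default.
STYLE = ("axonometric interior painting, Edward Hopper palette, "
         "Monument Valley geometry")

ROOM_DESC = {
    "office": "a small home office with a standing desk and two monitors",
    "kitchen": "a galley kitchen with open shelving",
}

def build_prompt(room, occupant=None):
    """Combine the shared style with a room description and optional occupant."""
    prompt = f"{STYLE}. The room: {ROOM_DESC[room]}."
    if occupant:
        prompt += f" Include {occupant}, small in the scene."
    return prompt
```

The point is only that each painting is (shared style) + (your room description) + (optionally, who is in it); consult docs/generate.md for the real recipe.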

How it works

For the philosophical argument — the BERG Immaterials lineage, the Hopper × Monument Valley aesthetic decision, the "quietest piece in the studio" framing — see docs/why-this.md.

For the technical mechanics — the four HA entities subscribed to, the three viewport states, the painting-naming convention — see viewer/README.md.
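The painting-naming convention itself is documented in viewer/README.md; inferring only from the bundled demo names (office.png, library-person-b.png, kitchen-both.png), a resolver might look like this sketch. The "bare name means person A alone" rule is an assumption, not confirmed by the source:

```python
def painting_file(room, who):
    """Map (room, who's there) to a painting filename.

    who is "a", "b", or "both". Convention inferred from the bundled demo
    names; verify against viewer/README.md before relying on it.
    """
    if who == "both":
        return f"{room}-both.png"       # e.g. kitchen-both.png
    if who == "b":
        return f"{room}-person-b.png"   # e.g. library-person-b.png
    return f"{room}.png"                # person A alone: bare room name (assumed)
```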

Repo layout

emanations/
├── README.md                          this file
├── LICENSE / LICENSE-CONTENT / REDACTION.md / CHANGELOG.md / .gitignore / .githooks/
├── generate/
│   ├── README.md
│   ├── generate.py                    the gpt-image-2 generation pipeline
│   ├── prompts/
│   │   └── room-prompt.template.txt   annotated prompt template
│   ├── refs/
│   │   └── README.md                  what photos to provide (gitignored)
│   └── requirements.txt
├── viewer/
│   ├── README.md
│   ├── stage.js                       the renderer (~430 lines)
│   ├── index.html                     minimal host page
│   └── viewer-config.example.js       per-deployment config template
├── rooms/                             3 vetted demo paintings
│   ├── README.md
│   ├── office.png
│   ├── library-person-b.png
│   └── kitchen-both.png
├── docs/
│   ├── why-this.md                    the lineage
│   ├── generate.md                    end-to-end generation recipe
│   ├── deploy-viewer.md               deployment recipe
│   ├── tune.md                        per-knob tuning
│   └── known-limits.md
└── credits/
    └── haroldathome.md

Licences

| Scope | Licence |
| --- | --- |
| Code (generate/, viewer/, .githooks/) | MIT — see LICENSE |
| Documentation and prose | CC-BY-NC 4.0 — see LICENSE-CONTENT |
| Bundled demo paintings (rooms/*.png) | CC-BY-NC 4.0 — see LICENSE-CONTENT. Depict the haroldathome reference household, used with consent |
