Magic Wall turns a Raspberry Pi touchscreen into an ambient AI news-art frame.
It finds a current news story, turns it into intentionally outrageous AI-slop wallpaper, and shows the story context when you walk up and touch the screen.
It is built for a Raspberry Pi with a 7-inch touchscreen, but it also runs on any machine with Python 3.11+ and a browser.
- Uses OpenAI for story selection fallback and image generation.
- Builds a free public-source mesh for art story discovery before using paid web search.
- Prefers news from the last hour for the ambient art story.
- If the last-hour window is thin, uses the biggest verifiable story of the day instead.
- Generates an intentionally outrageous AI-slop news meme poster.
- Shows a touch-first story details overlay over the artwork.
- Allows one short readable meme caption inside the image.
- Uses public-figure caricatures only when they are central to the story.
- Runs locally on the device with a user-provided OpenAI API key.
- Refreshes every 4 hours by default, yielding six generated images per day.
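The last-hour preference with a day-level fallback can be sketched as follows. This is a minimal illustration, not the project's actual code; the `Candidate` type, field names, and `min_window_size` threshold are assumptions:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Candidate:
    title: str
    published: datetime  # timezone-aware UTC
    score: float         # local traction/quality score

def pick_story(candidates: list[Candidate],
               window_minutes: int = 60,
               min_window_size: int = 3) -> Candidate:
    """Prefer the best story from the recent window; if the window is
    too thin, fall back to the best story of the whole day."""
    now = datetime.now(timezone.utc)
    recent = [c for c in candidates
              if now - c.published <= timedelta(minutes=window_minutes)]
    pool = recent if len(recent) >= min_window_size else candidates
    return max(pool, key=lambda c: c.score)
```
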
Each generation follows this rough flow:
- Collect current story candidates from public feeds such as Google News RSS and Hacker News.
- Dedupe, cluster, and score candidates locally for freshness, source quality, traction, novelty, and visual potential.
- Ask the configured OpenAI text model to choose from the finalist list without web search.
- Fall back to OpenAI web search only when the public source mesh has no usable story.
- Compress the chosen story into a short meme-label title.
- Generate a chaotic landscape artwork with the story as the visual anchor.
- Atomically replace the current image on the local kiosk page.
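The final swap in the flow above can be done with a write-to-temp-then-rename pattern. This is a sketch under stated assumptions (the function name and paths are illustrative, not the project's API); `os.replace` is atomic on POSIX when both paths sit on the same filesystem:

```python
import os
import tempfile
from pathlib import Path

def atomic_write_image(data: bytes, dest: Path) -> None:
    """Write the new wallpaper next to the destination, then rename it
    into place, so the kiosk never reads a half-written file."""
    dest.parent.mkdir(parents=True, exist_ok=True)
    fd, tmp = tempfile.mkstemp(dir=dest.parent, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # ensure bytes hit disk before the rename
        os.replace(tmp, dest)     # atomic swap visible to the kiosk
    except BaseException:
        os.unlink(tmp)            # clean up the temp file on failure
        raise
```
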
The output style is intentionally absurd: glossy, overcrowded, cinematic, neon, meme-readable, and funny on inspection.
On touch, the kiosk reveals only the information used for the current wallpaper: title, summary, source, published time, generation time, next refresh, and style.
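Those overlay fields map naturally to a small record that the kiosk page can fetch and render on touch. The `WallpaperMeta` type and its field names below are assumptions for illustration, not the project's actual schema:

```python
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class WallpaperMeta:
    """Everything the touch overlay reveals about the current image."""
    title: str
    summary: str
    source: str
    published_at: str     # ISO 8601 timestamps
    generated_at: str
    next_refresh_at: str
    style: str

def build_meta(title: str, summary: str, source: str,
               published_at: datetime,
               refresh_minutes: int = 240,
               style: str = "neon meme poster") -> WallpaperMeta:
    now = datetime.now(timezone.utc)
    return WallpaperMeta(
        title=title, summary=summary, source=source,
        published_at=published_at.isoformat(),
        generated_at=now.isoformat(),
        next_refresh_at=(now + timedelta(minutes=refresh_minutes)).isoformat(),
        style=style,
    )

# Serialize for the kiosk page, e.g. json.dumps(asdict(build_meta(...)))
```
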
On the Pi:
```shell
git clone https://github.com/diptobiswas/magic-wall.git
cd magic-wall
./install.sh
```

The installer creates a Python virtual environment, installs Magic Wall, writes user-level systemd services, and launches Chromium in kiosk mode at:

```
http://127.0.0.1:8765
```
For detailed touchscreen setup, see docs/raspberry-pi.md.
Magic Wall is bring-your-own-key. Your key stays on your device.
The default config lives at:
~/.config/magic-wall/config.toml
Generated images and metadata live at:
~/.local/share/magic-wall/
You can also supply:

```shell
export OPENAI_API_KEY="sk-..."
```

Then initialize:

```shell
magic-wall init
```

Default config:
```toml
[openai]
api_key = ""
text_model = "gpt-5.4-mini"
image_model = "gpt-image-2"
image_quality = "low"
image_size = "1344x800"
output_format = "jpeg"

[refresh]
minutes = 240
news_window_minutes = 60

[server]
host = "127.0.0.1"
port = 8765
timezone = "local"
```

CLI commands:

```shell
magic-wall init
magic-wall run
magic-wall generate-now
magic-wall status
```

For development, set up a virtual environment and run the tests:

```shell
python3 -m venv .venv
.venv/bin/python -m pip install -e ".[dev]"
.venv/bin/python -m pytest
```

Run locally:
```shell
.venv/bin/magic-wall init
.venv/bin/magic-wall run
```

Open:

```
http://127.0.0.1:8765
```
Deploy the current workspace to the Raspberry Pi after SSH is configured:
```shell
scripts/deploy-pi.sh
```

- Art story discovery uses public feeds first, then OpenAI web search only as a fallback.
- The OpenAI key is stored locally in `~/.config/magic-wall/config.toml`.
- Generated images and state are stored locally in `~/.local/share/magic-wall/`.
- Local runtime folders, generated images, logs, virtual environments, caches, and archives are ignored by git.
- Do not commit `config.toml`, `.env`, generated images, or runtime logs.
Before publishing, run:

```shell
rg -n --hidden "sk-|OPENAI_API_KEY|api_key|password|secret|token" .
```

The default image quality is low because the display is small and the default cadence is only six generations per day. Art generation uses the source mesh before paid web search, so a typical refresh should mostly pay for image generation plus a small text-model finalist selection rather than a web-search tool call every cycle.
MIT