Echoscape

Echoscape is an interactive Python project that turns human motion, face presence, and sound into a live city simulation.

Using a webcam and microphone, the system detects:

  • hand gestures
  • number of visible faces / people
  • sound intensity

These signals drive the state of a pixel-art city in real time:

  • Greenery
  • Pollution
  • Density
  • Energy

The city is rendered live in Pygame.
When the user presses P, the current city state is converted into a text prompt, and Stable Diffusion generates an image reflecting the resulting urban mood, such as utopia or dystopia.
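The exact mapping from sensor readings to city metrics lives in the project's own modules (city.py, sensors.py); as a rough sketch of the idea, one simulation tick might nudge each metric from the current readings. All function names and coefficients below are hypothetical, not taken from the repository:

```python
def clamp(value: float, lo: float = 0.0, hi: float = 100.0) -> float:
    # Keep every metric inside a fixed 0..100 range.
    return max(lo, min(hi, value))

def update_city(state: dict, faces: int, open_hands: int, sound: float) -> dict:
    """One illustrative simulation tick: nudge each metric from the sensors.

    faces      -- number of detected faces/people
    open_hands -- number of open-hand gestures this frame
    sound      -- microphone intensity in 0.0..1.0
    """
    state = dict(state)  # do not mutate the caller's state
    state["density"] = clamp(state["density"] + 2 * faces - 1)        # more people -> denser city
    state["greenery"] = clamp(state["greenery"] + open_hands - 0.5)   # open hands "plant" greenery
    state["pollution"] = clamp(state["pollution"] + 5 * sound - 1)    # loud input raises pollution
    state["energy"] = clamp(state["energy"] + faces + 10 * sound - 2) # activity feeds energy
    return state
```

Calling this once per frame with the latest webcam and microphone readings yields the slowly drifting metrics shown in the interface.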


Features

  • Real-time webcam interaction
  • Hand gesture tracking with MediaPipe
  • Face / people detection
  • Microphone-based environmental input
  • Dynamic city-state simulation in Pygame
  • Stable Diffusion image generation based on live metrics
  • Utopia / dystopia visual outcomes depending on city conditions
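The microphone feature presumably reduces the audio stream to a single loudness number. A minimal sketch of that idea with sounddevice and numpy (the repository's own sensors.py may do this differently; `rms_level` and `listen` are illustrative names):

```python
import numpy as np

def rms_level(block: np.ndarray) -> float:
    # Root-mean-square amplitude of one audio block; higher means louder.
    return float(np.sqrt(np.mean(np.square(block))))

def listen(seconds: float = 2.0, samplerate: int = 16000) -> None:
    # Open the default microphone and print one level reading per block.
    import sounddevice as sd  # imported here so rms_level stays testable offline

    def callback(indata, frames, time, status):
        print(f"mic level: {rms_level(indata):.4f}")

    with sd.InputStream(channels=1, samplerate=samplerate, callback=callback):
        sd.sleep(int(seconds * 1000))
```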

Tech Stack

  • Python 3.12
  • pygame
  • opencv-python
  • mediapipe
  • numpy
  • sounddevice
  • torch (CUDA-enabled)
  • diffusers
  • transformers
  • accelerate
  • safetensors

Interface Preview

Dystopian state in the live map

Dystopian Interface

Utopian state in the live map

Utopian Interface

The interface displays the current city simulation together with live sensor metrics such as greenery, pollution, density, energy, detected faces, hand gestures, and microphone activity.


Stable Diffusion Outputs

Dystopian generation

Dystopian Example

Utopian generation

Utopian Example

The generated images reflect the environmental state of the simulated city.
Higher pollution, density imbalance, and aggressive gesture patterns can lead to dystopian outputs, while greener and calmer states can produce utopian generations.
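The repository's prompt.py handles the actual prompt construction; as a hedged illustration of how metrics could be classified into a mood and phrased for the model (thresholds and wording below are invented for the example):

```python
def build_prompt(state: dict) -> str:
    # Score the city's mood from its metrics, then phrase it as an image prompt.
    score = state["greenery"] - state["pollution"]
    if score > 20:
        mood = "lush utopian eco-city, clean air, calm streets"
    elif score < -20:
        mood = "grim dystopian megacity, smog, neon decay"
    else:
        mood = "transitional city, mixed skyline"
    return f"pixel-art inspired cityscape, {mood}, highly detailed"
```

The returned string would then be passed to the Stable Diffusion pipeline when the user presses P.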


Demo

A demonstration video is included in the repository:

  • assets/demo/Demonstration.mp4

Project Structure

ECHOSCAPE/
├── assets/
│   ├── demo/
│   │   └── Demonstration.mp4
│   ├── examples/
│   │   ├── Example_1.png
│   │   └── Example_2.png
│   └── interface/
│       ├── Interface_1.png
│       └── Interface_2.png
├── city.py
├── faces.py
├── main.py
├── prompt.py
├── render.py
├── sensors.py
├── stable_generation.py
├── utils.py
├── requirements.txt
├── .gitignore
└── README.md

Requirements

  • Python 3.12
  • Webcam
  • Microphone
  • NVIDIA GPU recommended
  • CUDA-enabled PyTorch recommended for Stable Diffusion generation

Installation

1. Clone the repository

git clone https://github.com/Ba5bit/echoscape.git
cd echoscape

2. Create and activate a virtual environment

Windows

python -m venv .venv
.venv\Scripts\activate

macOS / Linux

python3 -m venv .venv
source .venv/bin/activate

3. Install CUDA-enabled PyTorch

This project was tested with:

  • torch 2.5.1+cu121

Example installation for CUDA 12.1:

pip install torch==2.5.1+cu121 --index-url https://download.pytorch.org/whl/cu121

4. Install the remaining dependencies

pip install -r requirements.txt

Running the Project

Start the application with:

python main.py

Controls

  • P — generate an image using Stable Diffusion

Tested Environment

  • Python 3.12
  • pygame 2.6.1
  • opencv-python 4.12.0.88
  • mediapipe 0.10.14
  • numpy 2.2.6
  • sounddevice 0.5.3
  • torch 2.5.1+cu121
  • diffusers 0.37.0
  • transformers 5.3.0
  • accelerate 1.13.0
  • safetensors 0.7.0

Notes

  • On the first generation, Stable Diffusion model files may be downloaded automatically.
  • The first model download can take several gigabytes.
  • Image generation speed depends heavily on GPU support.
  • Without CUDA-enabled PyTorch, generation may be very slow or may not work as intended.

Disclaimer

This project is an experimental interactive portfolio prototype combining computer vision, simulation, and generative AI.
Full functionality depends on a working local Python environment, webcam, microphone, and GPU support.
