A Python/PyQt5 research tool for simulating how sound behaves as it travels through connected environments. Synthesises room impulse responses procedurally from parametric room descriptions, applies frequency-dependent material absorption, air absorption (ISO 9613-1), and distance attenuation — then renders your source audio along a drawn path in real time.

```
pip install PyQt5 numpy scipy
python spasa.py
```

| File | Role |
|---|---|
| `spasa.py` | PyQt5 GUI — main entry point |
| `acoustics_engine.py` | All signal-processing & simulation logic |
| `wav_export.py` | Mono 16-bit WAV writer (stdlib only) |
| `spasa.app` | macOS executable; conversion to a Windows EXE is on the to-do list |
Click + Add Room, then drag on the canvas to draw a rectangle. Each rectangle is a room. Select it and adjust its size, ceiling height, and wall material in the right-hand panel. Rooms can be moved with the ⤢ Move Room tool and deleted with ✕ Delete Selected (or press Delete).
Click ✎ Draw Path. Each click on the canvas adds a waypoint. When you're done, press Enter, double-click, or hit ✓ Finish Path. The green dot is the start; the red dot is the end. The source audio will travel along this path at the speed you set.
Click 📂 Load Source WAV to import your dry source sound (mono or stereo — it will be mixed to mono internally).
Import a measured or synthetic IR via 📂 Load IR WAV, then toggle ☑ Use External IR. The IR Mix slider blends between the tool's synthesised IR and your imported one, letting you layer real captured reverb on top of the parametric simulation.
| Parameter | Effect |
|---|---|
| Source Speed | How fast (m/s) the sound moves along the path |
| Air Temperature | Affects speed of sound & air absorption curve |
| Air Humidity | Affects high-frequency air absorption |
| Max Reflections | Image-source count — more = richer reverb tail, slower render |
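Temperature feeds into the speed of sound via the standard linear approximation for air; the sketch below illustrates that relationship (this is the textbook formula, not necessarily the exact expression the engine uses):

```python
def speed_of_sound(temp_c: float) -> float:
    """Approximate speed of sound in air (m/s) at a temperature in °C.

    Uses the linear approximation c ≈ 331.3 + 0.606·T, which is
    accurate near room temperature.
    """
    return 331.3 + 0.606 * temp_c

# Warmer air carries sound faster:
# speed_of_sound(0.0)  ≈ 331.3 m/s
# speed_of_sound(20.0) ≈ 343.4 m/s
```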
Hit ▶ Render. Processing runs on a background thread so the UI stays responsive. Once done, 💾 Export WAV saves the result as a 16-bit mono WAV at the project's sample rate.
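A 16-bit mono WAV can be written with nothing but the standard library, in the spirit of `wav_export.py`. The function name and details below are illustrative, not the module's actual API:

```python
import struct
import wave

def write_wav_mono16(path: str, samples, sample_rate: int = 44100) -> None:
    """Write floats in [-1, 1] as a 16-bit mono WAV using only the stdlib."""
    with wave.open(path, "wb") as wf:
        wf.setnchannels(1)        # mono
        wf.setsampwidth(2)        # 16-bit samples
        wf.setframerate(sample_rate)
        frames = b"".join(
            # Clamp to the signed 16-bit range before packing little-endian
            struct.pack("<h", max(-32768, min(32767, int(s * 32767))))
            for s in samples
        )
        wf.writeframes(frames)
```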
Save serialises rooms, path, all parameters, and file references to a .ssp JSON file. Open reloads everything (audio files are re-read from their original paths).
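Since a `.ssp` file is plain JSON, it can be inspected or generated by hand. The sketch below shows the *kind* of structure described above — every field name here is hypothetical, so check a saved file for the real schema:

```python
import json

# Hypothetical .ssp contents — field names are illustrative, not the tool's schema.
project = {
    "rooms": [
        {"x": 0.0, "y": 0.0, "w": 8.0, "h": 5.0,
         "ceiling": 3.0, "material": "plaster"},
    ],
    "path": [[1.0, 1.0], [6.5, 4.0]],   # waypoints in metres
    "params": {"source_speed": 1.5, "temp_c": 20.0,
               "humidity": 50.0, "max_reflections": 20},
    "files": {"source_wav": "dry_source.wav", "external_ir": None},
}

ssp_text = json.dumps(project, indent=2)   # what would be written to disk
```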
| Material | Character |
|---|---|
| concrete | Hard, reflective masonry |
| brick | Rough masonry, slight diffusion |
| wood_panel | Mid-freq absorption, warm |
| plaster | Smooth drywall / plaster |
| glass | Very reflective, bright |
| carpet | Strong high-freq absorption |
| acoustic_foam | Broadband absorption |
| curtain | Fabric, moderate absorption |
| metal_sheet | Highly reflective, metallic ring |
| tile | Hard ceramic, bright |
These have no single authoritative absorption data — the values are research-informed estimates tuned for plausible spatial character. Treat them as starting points and adjust via the parametric controls or by blending with a measured IR.
| Material | Character |
|---|---|
| grass | Ground cover, diffuse, mid-high absorption |
| foliage | Dense leaf canopy, very diffuse |
| flesh | Soft biological tissue, broad absorption |
| water_surface | Calm water, low absorption, slight scatter |
| soil_earth | Loose earth, high diffusion |
| snow | Fresh snow, very high absorption (deadening) |
| bark_wood | Rough tree bark, moderate diffusion |
| rubber | Thick rubber, broad absorption |
- Image-Source RIR Synthesis — For each room the path passes through, a Room Impulse Response is synthesised using the image-source method. The number of virtual sources is controlled by Max Reflections.
- Frequency-Dependent Absorption — Each material defines absorption coefficients across seven octave bands (125 Hz – 8 kHz). These are interpolated on a log-frequency scale and applied as a transmission filter in the FFT domain after each reflection.
- Exponential Decay Envelope — A Sabine-estimated RT60 drives an exponential decay envelope over the RIR, giving the tail its natural length.
- Air Absorption — A simplified ISO 9613-1 model attenuates high frequencies based on distance, temperature, and humidity.
- Distance Attenuation — Inverse-distance (1/r) amplitude scaling relative to a 1 m reference.
- Segment Convolution — The source audio is split into short segments. Each segment is convolved with the RIR appropriate to the room the source occupies at that moment, then overlap-added into the output buffer.
- External IR Blend — If an external impulse response is loaded and enabled, it is linearly cross-faded with the synthetic RIR before convolution, controlled by the IR Mix slider.
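The segment-convolution step can be sketched with SciPy's `fftconvolve`: split the source into hops, convolve each hop with the RIR active at that moment, and overlap-add the reverb tails. This is a simplified illustration (function names are hypothetical); the real engine also applies the decay, absorption, and distance gains described above:

```python
import numpy as np
from scipy.signal import fftconvolve

def render_along_path(source: np.ndarray, rir_for_segment, hop: int) -> np.ndarray:
    """Overlap-add convolution where each hop may use a different RIR.

    `rir_for_segment(i)` returns the impulse response for segment index i
    (in the real tool, the RIR of the room the source occupies at that time).
    """
    n_segments = int(np.ceil(len(source) / hop))
    max_ir = max(len(rir_for_segment(i)) for i in range(n_segments))
    out = np.zeros(len(source) + max_ir - 1)
    for i in range(n_segments):
        seg = source[i * hop:(i + 1) * hop]
        wet = fftconvolve(seg, rir_for_segment(i))   # segment * RIR
        out[i * hop:i * hop + len(wet)] += wet       # overlap-add the tail
    return out
```

Because each segment's reverb tail extends past the segment boundary, overlap-adding preserves continuity when the RIR changes mid-path.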
- New materials: Add an entry to the `MATERIALS` dict in `acoustics_engine.py`. Provide seven octave-band absorption values (125 Hz – 8 kHz) and a scattering coefficient.
- Richer room geometry: The current model uses a single material for all six surfaces. Extending `synthesize_rir()` to accept per-surface materials is straightforward.
- Binaural / HRTF: The pipeline outputs mono. Wrapping the per-ear convolution with measured HRTFs is a natural next step.
- Real-time preview: The segment-based architecture is already latency-friendly; hooking it into a streaming audio callback (e.g. via `sounddevice`) would enable live preview as you drag the path.
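Adding a material might look like the sketch below. The band values are illustrative, and the exact dict layout in `acoustics_engine.py` may differ — check the existing entries for the actual format:

```python
# Hypothetical entry for acoustics_engine.MATERIALS — key names are assumptions.
MATERIALS = {
    "cork_board": {
        # Absorption coefficients for the seven octave bands
        # 125, 250, 500, 1000, 2000, 4000, 8000 Hz (0 = reflective, 1 = absorbent)
        "absorption": [0.05, 0.10, 0.25, 0.40, 0.55, 0.60, 0.60],
        "scattering": 0.3,   # surface diffusion coefficient
    },
}
```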
- Python 3.8+
- PyQt5 — GUI
- NumPy — array math
- SciPy — FFT convolution (`scipy.signal.fftconvolve`)
No external audio libraries are required for file I/O — WAV reading and writing use only the Python standard library (`wave` + `struct`).