Code and material repository for the AAAI 2019 demo paper "Realtime Generation of Audible Textures Inspired by a Video Stream".
Repository contents: README.md, poster.pdf, summary.png, video.mov, video.mp4


Realtime Generation of Audible Textures Inspired by a Video Stream

Simone Mellace, Jerome Guzzi, Alessandro Giusti, Luca M. Gambardella

Dalle Molle Institute for Artificial Intelligence, USI-SUPSI, Lugano (Switzerland)

Abstract

We showcase a model that generates a soundscape from a camera stream in real time. The approach relies on a training video with an associated meaningful audio track; a granular synthesizer generates a novel sound by randomly sampling and mixing audio data from that video, favoring timestamps whose frame is similar to the current camera frame; the semantic similarity between frames is computed by a pre-trained neural network. The demo is interactive: a user points a mobile phone at different objects and hears how the generated sound changes.

Summary of the approach (see summary.png)
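The code is not yet released (see below), but the pipeline described in the abstract can be sketched in a few lines. The following is a minimal illustrative sketch in NumPy, not the authors' implementation: it assumes that per-frame feature vectors from some pre-trained CNN (`train_embeddings`), the training video's mono audio track (`audio`), and the audio sample index of each video frame (`frame_times`) have been precomputed offline; all names, the softmax temperature, and the grain parameters are hypothetical choices.

```python
import numpy as np

# Hypothetical precomputed inputs (offline, from the training video):
#   train_embeddings: (T, D) array, one feature vector per video frame,
#                     taken from any pre-trained CNN.
#   audio:            (S,) mono audio track of the training video.
#   frame_times:      (T,) sample index in `audio` where each frame occurs.

def grain_probabilities(live_embedding, train_embeddings, temperature=0.1):
    """Softmax over cosine similarities: training frames that look like
    the live camera frame receive more probability mass."""
    a = live_embedding / np.linalg.norm(live_embedding)
    b = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    logits = (b @ a) / temperature
    logits -= logits.max()                       # numerical stability
    p = np.exp(logits)
    return p / p.sum()

def synthesize_block(live_embedding, train_embeddings, audio, frame_times,
                     n_grains=8, grain_len=4096, block_len=8192, rng=None):
    """One output block of granular synthesis: draw grain timestamps with
    similarity-weighted probabilities, window them, and overlap-add."""
    if rng is None:
        rng = np.random.default_rng()
    p = grain_probabilities(live_embedding, train_embeddings)
    out = np.zeros(block_len)
    window = np.hanning(grain_len)
    for _ in range(n_grains):
        t = rng.choice(len(frame_times), p=p)    # favors similar frames
        start = int(frame_times[t])
        grain = audio[start:start + grain_len]
        if len(grain) < grain_len:               # grain runs past the end
            continue
        offset = rng.integers(0, block_len - grain_len)
        out[offset:offset + grain_len] += window * grain
    return out / n_grains
```

With this weighting, a low `temperature` concentrates sampling on the training frames most similar to the live camera frame, while a high one approaches uniform granular playback of the whole track.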

AAAI 2019 demo paper

See proceedings of AAAI 2019 (not yet online)

Poster: poster.pdf

Video: video.mp4 (also provided as video.mov)

Code release

Coming soon. Please inquire by email.
