Realtime Generation of Audible Textures Inspired by a Video Stream

Simone Mellace, Jerome Guzzi, Alessandro Giusti, Luca M. Gambardella

Dalle Molle Institute for Artificial Intelligence, USI-SUPSI, Lugano (Switzerland)

Abstract

We showcase a model that generates a soundscape from a camera stream in real time. The approach relies on a training video with a meaningful associated audio track; a granular synthesizer generates a novel sound by randomly sampling and mixing audio data from that video, favoring timestamps whose frame is similar to the current camera frame; the semantic similarity between frames is computed by a pre-trained neural network. The demo is interactive: a user points a mobile phone at different objects and hears how the generated sound changes.

Summary of the approach
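
As a rough illustration of the pipeline described in the abstract, the sketch below shows similarity-weighted granular sampling in Python/NumPy. Every name and parameter in it (the `embed` placeholder, grain length, temperature, mixing scheme) is an illustrative assumption, not the released implementation.

```python
import numpy as np

def embed(frame):
    """Placeholder frame embedding (flatten + L2-normalize).
    The paper uses a pre-trained neural network; this stand-in only keeps
    the sketch self-contained."""
    v = np.asarray(frame, dtype=np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def grain_mix(current_frame, train_embs, train_audio, sr,
              grain_ms=100, n_grains=32, temperature=0.1, rng=None):
    """Return one buffer of audible texture: grains sampled from the training
    audio, favoring timestamps whose frame resembles the current camera frame."""
    rng = rng or np.random.default_rng()
    sims = train_embs @ embed(current_frame)          # similarity per training frame
    probs = np.exp(sims / temperature)
    probs /= probs.sum()                              # softmax -> sampling distribution

    grain_len = int(sr * grain_ms / 1000)
    hop = len(train_audio) // len(train_embs)         # audio samples per video frame
    window = np.hanning(grain_len).astype(np.float32)
    out = np.zeros(grain_len, dtype=np.float32)
    for _ in range(n_grains):
        i = rng.choice(len(train_embs), p=probs)      # frame index ~ visual similarity
        start = min(i * hop, len(train_audio) - grain_len)
        out += window * train_audio[start:start + grain_len]
    return out / n_grains

# The training-frame embeddings would be precomputed once from the video:
#   train_embs = np.stack([embed(f) for f in train_frames])
```

In a real-time setting, successive grain buffers would be overlap-added into a continuous soundscape while the camera frame (and thus the sampling distribution) keeps changing.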

AAAI 2019 demo paper

See the proceedings of AAAI 2019 (not yet online)

Poster: PDF

Video: VIDEO

Code release

Coming soon. Please inquire by email.

About

Code and material repository for the AAAI 2019 demo paper "Realtime Generation of Audible Textures Inspired by a Video Stream"
