LoFiAi

AI Driven Music Composition

Based on a TensorFlow implementation of Google WaveNet

Training data is fed to the neural network, which then models the conditional
probability of the next audio sample with respect to the previous samples
and conditioning parameters.

The deep convolutional neural network (CNN) takes the data as input and synthesises an output one sample at a time.
This requires immense computational power, so training is done on Google's computers.
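The sample-at-a-time generation described above can be sketched as an autoregressive loop: at each step the trained network outputs a probability distribution over quantized amplitude values, conditioned on everything generated so far, and the next sample is drawn from it. Below is a minimal, hypothetical sketch of that loop — `toy_model` is a stand-in for the trained WaveNet, not the repository's actual code.

```python
import numpy as np

def toy_model(history, n_levels=256):
    """Hypothetical stand-in for the trained network: returns a probability
    distribution over n_levels quantized amplitudes, conditioned on the
    samples generated so far (here, a crude smoothness prior around the
    last sample)."""
    logits = -0.01 * (np.arange(n_levels) - history[-1]) ** 2
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

def generate(model, n_samples, seed=0, n_levels=256):
    """Autoregressive generation: draw each sample from
    p(x_t | x_1 .. x_{t-1}) as modelled by the network."""
    rng = np.random.default_rng(seed)
    samples = [n_levels // 2]          # start from mid-scale "silence"
    for _ in range(n_samples - 1):
        probs = model(samples)         # distribution over the next sample
        samples.append(int(rng.choice(n_levels, p=probs)))
    return samples

audio = generate(toy_model, 100)
print(len(audio))  # 100 quantized samples
```

Because every sample depends on all previous ones, the loop cannot be parallelised across time steps — which is why generation (and training) is so computationally expensive.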

A web application for listening to and sharing the CNN-generated music was also developed, built with MongoDB, Express, React, and Node.js and deployed on Google Cloud Platform.

TL;DR: Using AI trained on Google's computers to compose new music.

Currently training the model to output more lofi hip hop beats to relax/study to.
