LoFiAi

AI-Driven Music Composition

Based on a TensorFlow implementation of Google WaveNet

Training data is fed to the neural network, which then models the conditional probability of each audio sample with respect to the previous samples and parameters, and uses that distribution to generate the next sample.
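
In WaveNet terms, the joint probability of a waveform $\mathbf{x} = \{x_1, \ldots, x_T\}$ factorizes into a product of conditionals, so every sample depends on all the samples before it:

$$p(\mathbf{x}) = \prod_{t=1}^{T} p(x_t \mid x_1, \ldots, x_{t-1})$$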

The deep convolutional neural network (CNN) takes the data as input and synthesises an output one sample at a time. This requires immense computational power, so training is done on Google's computers.
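
As an illustration, a minimal sketch of this sample-by-sample generation loop might look like the following. The `model` object and its `predict_next_sample_probs` method are hypothetical stand-ins for a trained WaveNet, not this repository's actual API:

```python
# Minimal sketch of WaveNet-style autoregressive generation.
# `model` and `predict_next_sample_probs` are hypothetical placeholders
# for a trained network, not this repository's actual code.
import numpy as np

def generate(model, seed, n_samples, receptive_field=1024):
    """Generate audio one sample at a time, conditioning on prior output."""
    audio = list(seed)
    for _ in range(n_samples):
        # Condition on the last `receptive_field` samples -- the window
        # the network's dilated convolutions can actually see.
        context = np.array(audio[-receptive_field:])
        # Predict a distribution over the 256 possible mu-law sample values.
        probs = model.predict_next_sample_probs(context)
        # Draw the next sample and feed it back in as future context.
        audio.append(np.random.choice(256, p=probs))
    return np.array(audio)
```

Because every generated sample must be fed back in before the next one can be predicted, generation cannot be parallelized across time, which is what makes the process so computationally expensive.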

A web application, built with MongoDB, Express, React, and Node.js and deployed on Google Cloud Platform, lets users listen to and share the CNN-generated music.

TL;DR: Using AI trained on Google's computers to compose new music.

Currently training the network to output more lofi hip hop beats to relax/study to.