DanceNet - Dance generator using Variational Autoencoder, LSTM and Mixture Density Network. (Keras)

License: MIT · Run on FloydHub · DOI

This is an attempt to create a dance generator AI, inspired by this video by @carykh.

Main components:

  • Variational autoencoder
  • LSTM + Mixture Density Layer (a schematic sketch follows this list)
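
In this setup, the variational autoencoder compresses each video frame into a small latent vector, and the LSTM with a mixture-density output predicts a probability distribution (a Gaussian mixture) over the next latent vector given the previous ones. Below is a minimal, illustrative Keras sketch of such an LSTM + MDN head; it is not the repo's mdn.py, and SEQ_LEN, LATENT_DIM, N_MIXES, and the layer widths are assumed placeholders.

```python
# Illustrative LSTM + Mixture Density head (not the repo's mdn.py).
# SEQ_LEN, LATENT_DIM, N_MIXES and layer widths are assumed placeholders.
from keras.layers import Input, LSTM, Dense, concatenate
from keras.models import Model

SEQ_LEN, LATENT_DIM, N_MIXES = 128, 128, 3

x = Input(shape=(SEQ_LEN, LATENT_DIM))   # a window of past latent vectors
h = LSTM(512, return_sequences=True)(x)
h = LSTM(512)(h)

# The MDN parameterizes a Gaussian mixture over the next latent vector:
mu = Dense(N_MIXES * LATENT_DIM)(h)                            # component means
sigma = Dense(N_MIXES * LATENT_DIM, activation='softplus')(h)  # positive scales
pi = Dense(N_MIXES, activation='softmax')(h)                   # mixture weights

model = Model(x, concatenate([mu, sigma, pi]))
```

Training minimizes the negative log-likelihood of the observed next latent vector under this mixture; generation then samples from the predicted mixture one step at a time and feeds the result back in.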

Requirements:

  • Python 3.5.2

    Packages

    • keras==2.2.0
    • scikit-learn==0.19.1
    • numpy==1.14.3
    • opencv-python==3.4.1
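
Assuming pip, the pinned versions above can be installed in one step (scikit-learn is the package that provides the sklearn import):

```
pip install keras==2.2.0 scikit-learn==0.19.1 numpy==1.14.3 opencv-python==3.4.1
```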

Dataset

The model was trained on this video: https://www.youtube.com/watch?v=NdSqAAT28v0
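
create_data.py prepares the training frames from the video; conceptually that amounts to something like the OpenCV sketch below. The input file name, the grayscale conversion, and the 208x120 frame size are illustrative assumptions, not the script's actual settings.

```python
# Illustrative frame extraction with OpenCV; 'dance.mp4', grayscale, and
# the 208x120 size are assumptions, not the repo's actual parameters.
import cv2

cap = cv2.VideoCapture('dance.mp4')
i = 0
while True:
    ret, frame = cap.read()
    if not ret:                                  # end of video
        break
    i += 1
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    small = cv2.resize(gray, (208, 120))         # downscale for the autoencoder
    cv2.imwrite('imgs/%d.jpg' % i, small)        # 1.jpg, 2.jpg, ... as used below
cap.release()
```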

How to run locally

  • Download the trained weights from here and extract them to the dancenet directory.
  • Run dancegen.ipynb

How to run in your browser

Run on FloydHub

  • Click the button above to open this code in a FloydHub workspace (the trained weights dataset will be automatically attached to the environment)
  • Run dancegen.ipynb

Training from scratch

  • Fill the imgs/ folder with dance-sequence frames named 1.jpg, 2.jpg, ...
  • Run model.py
  • Run gen_lv.py to encode the images into latent vectors (see the sketch after this list)
  • Run video_from_lv.py to test the decoded video
  • Run jupyter notebook dancegen.ipynb to train DanceNet and generate a new video
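
For orientation, the encode/decode round trip performed by gen_lv.py and video_from_lv.py amounts to something like the sketch below. The model file names, frame size, and output paths are assumptions for illustration, not the scripts' actual interfaces.

```python
# Hypothetical encode -> decode round trip; 'encoder.h5', 'decoder.h5',
# the 208x120 frame size, and the file names are illustrative assumptions.
import cv2
import numpy as np
from keras.models import load_model

encoder = load_model('encoder.h5')   # trained VAE encoder half (assumed file)
decoder = load_model('decoder.h5')   # trained VAE decoder half (assumed file)
h, w = 120, 208                      # assumed frame size

# Encode every training frame to a latent vector (gen_lv.py's role).
latents, i = [], 1
while True:
    img = cv2.imread('imgs/%d.jpg' % i, cv2.IMREAD_GRAYSCALE)
    if img is None:
        break
    x = img.reshape(1, h, w, 1) / 255.0
    latents.append(encoder.predict(x)[0])
    i += 1
np.save('lv.npy', np.array(latents))

# Decode latent vectors back to frames and write a video (video_from_lv.py's role).
out = cv2.VideoWriter('out.mp4', cv2.VideoWriter_fourcc(*'mp4v'), 30.0, (w, h), False)
for z in np.load('lv.npy'):
    frame = decoder.predict(z.reshape(1, -1))[0]          # (h, w, 1) in [0, 1]
    out.write((np.clip(frame.reshape(h, w), 0, 1) * 255).astype(np.uint8))
out.release()
```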
