
DanceNet - Dance generator using Variational Autoencoder, LSTM and Mixture Density Network. (Keras)

License: MIT | Run on FloydHub | DOI

This is an attempt to create a dance-generator AI, inspired by this video by @carykh.

Main components:

  • Variational autoencoder
  • LSTM + Mixture Density Layer
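
The Mixture Density layer predicts the parameters of a Gaussian mixture rather than a single next pose, and new frames come from sampling that mixture. A minimal numpy sketch of the sampling step (illustrative only — the weights, means, and deviations below are made up and this is not the repo's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mdn(pi, mu, sigma):
    """Draw one sample from a 1-D Gaussian mixture given MDN outputs.

    pi    -- mixture weights, shape (K,), summing to 1
    mu    -- component means, shape (K,)
    sigma -- component standard deviations, shape (K,)
    """
    k = rng.choice(len(pi), p=pi)        # pick a component by its weight
    return rng.normal(mu[k], sigma[k])   # sample from that component's Gaussian

# Toy example: two components, one centred at 0, one at 10
pi = np.array([0.3, 0.7])
mu = np.array([0.0, 10.0])
sigma = np.array([0.5, 0.5])
samples = np.array([sample_mdn(pi, mu, sigma) for _ in range(1000)])
```

With weight 0.7 on the component at 10, most samples cluster there; in DanceNet the same idea is applied per latent dimension so the generated dance does not collapse to an average pose.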


Requirements:

  • Python version = 3.5.2
    • keras==2.2.0
    • sklearn==0.19.1
    • numpy==1.14.3
    • opencv-python==3.4.1
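
Assuming pip is used, the pinned versions above map to a requirements.txt like the following (note that sklearn is published on PyPI as scikit-learn):

```
keras==2.2.0
scikit-learn==0.19.1
numpy==1.14.3
opencv-python==3.4.1
```

installed with pip install -r requirements.txt.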

Dataset

This is the video used for training.

How to run locally

  • Download the trained weights from here and extract them to the dancenet directory.
  • Run dancegen.ipynb

How to run in your browser

Run on FloydHub

  • Click the button above to open this code in a FloydHub workspace (the trained weights dataset will be automatically attached to the environment)
  • Run dancegen.ipynb

Training from scratch

  • Fill the imgs/ folder with dance sequence frames labeled 1.jpg, 2.jpg, ...
  • Run
  • Run to encode the images
  • Run to test the decoded video
  • Run jupyter notebook dancegen.ipynb to train DanceNet and generate a new video.
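
The first step above (filling imgs/ with numbered frames) can be sketched with OpenCV. The extract_frames helper and its defaults are illustrative assumptions, not the repo's actual preprocessing script:

```python
import os

def frame_name(out_dir, index):
    # Frames are 1-indexed to match the convention above: imgs/1.jpg, imgs/2.jpg, ...
    return os.path.join(out_dir, "%d.jpg" % index)

def extract_frames(video_path, out_dir="imgs"):
    """Split a video into numbered JPEG frames for training (illustrative sketch)."""
    import cv2  # opencv-python; imported here so the helpers load without it
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    count = 0
    ok, frame = cap.read()
    while ok:
        count += 1
        cv2.imwrite(frame_name(out_dir, count), frame)
        ok, frame = cap.read()
    cap.release()
    return count
```

Resize the frames to whatever resolution the autoencoder expects before training.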

