Beyond Imitation

'I didn’t want to imitate anybody. Any movement I knew, I didn’t want to use.' – Pina Bausch

Getting started:

I like to work within Conda environments to manage package dependencies. To download Conda for your particular system (Miniconda is sufficient; no need for the full Anaconda) with Python 3, see: https://docs.conda.io/en/latest/miniconda.html

Once that's installed, clone the repository and set up the Conda environment:

git clone https://github.com/mariel-pettee/choreography.git
cd choreography
conda create -n choreo python=3 # gives the environment its own Python and pip

Type y when prompted, then:

conda activate choreo
pip install -r pip_req.txt
python -m ipykernel install --user --name choreo --display-name "choreo" # installs the Conda kernel for use in Jupyter notebooks

You can then actively develop within your environment and add packages as you see fit. If anything breaks beyond measure, you can always exit the environment with conda deactivate and can even delete the environment with conda env remove -n choreo. Then you can remake the environment by following the steps above again.

Note that when you open a Jupyter notebook, you need to select "choreo" from the list of kernels within the notebook in order to use the packages installed above.

Play with the RNN model

This model, inspired by chor-rnn (https://arxiv.org/abs/1605.06921), uses 3 LSTM layers to predict new poses given a prompt sequence of poses. The length of the prompt is called look_back. We use a Mixture Density Network (https://publications.aston.ac.uk/id/eprint/373/1/NCRG_94_004.pdf) to model potential next poses as a mixture of Gaussian distributions conditioned on the prompt sequence. The number of Gaussian components is set by n_mixes.

You can experiment with this model interactively in a Jupyter notebook using rnn.ipynb or via the command line with commands such as:

conda activate choreo
python rnn.py rnn_test --cells 64 64 64 64 --n_mixes 25 --look_back 128 --batch_size 128 --n_epochs 10 --lr 1e-4 --use_pca True
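
For orientation, here is a minimal sketch of the architecture described above. This is not the exact model in rnn.py: it assumes Keras plus the keras-mdn-layer package for the MDN layer, and the pose dimension n_dims is a hypothetical placeholder.

from keras.models import Sequential
from keras.layers import LSTM
import mdn  # from the keras-mdn-layer package

look_back = 128   # length of the prompt sequence, as in --look_back above
n_mixes = 25      # number of Gaussian components, as in --n_mixes above
n_dims = 53 * 3   # hypothetical pose dimension: 53 joints x (x, y, z)

model = Sequential([
    LSTM(64, return_sequences=True, input_shape=(look_back, n_dims)),
    LSTM(64, return_sequences=True),
    LSTM(64),                  # final LSTM layer summarizes the prompt
    mdn.MDN(n_dims, n_mixes),  # outputs mixture weights, means, and sigmas
])
model.compile(loss=mdn.get_mixture_loss_func(n_dims, n_mixes), optimizer='adam')

To generate movement, you sample a pose from the predicted mixture (keras-mdn-layer provides mdn.sample_from_output for this), append it to the prompt, and repeat.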

Play with the autoencoder for poses

This model uses an autoencoder structure to compress each pose into a lower-dimensional latent space and then reconstruct it in its original dimensions. After sufficient training, the latent space will group similar poses together, and sequences of poses can be visualized as paths through the latent space. Users can also construct their own movement sequences by drawing paths through the latent space and decoding them back into poses in the original dimensions. The interactive Jupyter notebook is pose_autoencoder.ipynb.
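
As a rough illustration (not the actual code in pose_autoencoder.ipynb), such a pose autoencoder can be sketched in Keras as follows; the layer sizes, latent dimension, and pose dimension are illustrative placeholders:

from keras.models import Model
from keras.layers import Input, Dense

n_dims = 53 * 3   # hypothetical pose dimension: 53 joints x (x, y, z)
latent_dim = 2    # a 2D latent space makes pose paths easy to draw and visualize

pose_in = Input(shape=(n_dims,))
h = Dense(64, activation='relu')(pose_in)
z = Dense(latent_dim, name='latent')(h)   # the compressed representation
h_dec = Dense(64, activation='relu')(z)
pose_out = Dense(n_dims)(h_dec)           # reconstruction in the original dimensions

autoencoder = Model(pose_in, pose_out)
encoder = Model(pose_in, z)               # maps poses to latent coordinates
autoencoder.compile(loss='mse', optimizer='adam')

A separate decoder over the latent input, built from the same trained layers, is what turns hand-drawn latent paths back into movement.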

Play with the autoencoder for sequences

This model also uses an autoencoder structure, but for fixed-length sequences of movements, or 'phrases'. It can then be used in two primary ways (sketched in code after the list below):

  1. Sample randomly from within a given standard deviation in the latent space (which, when the model is well trained, should resemble an n-dimensional Gaussian distribution) to generate a new fixed-length movement sequence.
  2. Look at the latent-space location of a given sequence from the data, add a small deviation to this location, and decode the result to observe its motion. Small deviations (~0.5 sigma or less) will usually closely resemble the original sequence, with subtle differences in timing or expressiveness. Larger deviations (~1 sigma or more) will often capture a choreographic idea similar to the original phrase, but become increasingly inventive.
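
A minimal sketch of these two modes, assuming (hypothetically) that the trained model has been split into encoder and decoder halves and that the latent space is approximately a standard n-dimensional Gaussian:

import numpy as np

latent_dim = 32   # illustrative latent dimension

# 1. Generate a new phrase by sampling near the latent origin:
z = np.random.normal(0.0, 1.0, size=(1, latent_dim))
new_phrase = decoder.predict(z)   # shape: (1, phrase_length, n_dims)

# 2. Perturb the latent location of a real phrase from the data
#    (real_phrase is a hypothetical batched training sequence):
z_orig = encoder.predict(real_phrase)
z_near = z_orig + np.random.normal(0.0, 0.5, size=z_orig.shape)  # ~0.5 sigma
variation = decoder.predict(z_near)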

Users can experiment interactively with the Jupyter notebook sequence_autoencoder.ipynb or via the command line with commands such as:

conda activate choreo
python sequence_autoencoder.py --lr 1e-4