Provides an NVIDIA GPU-enabled container with Magenta pre-installed on an Anaconda and TensorFlow container, xychelsea/tensorflow:latest-gpu.
Magenta is an open-source research project, based on TensorFlow, exploring the role of machine learning as a tool in the creative process. TensorFlow is an open-source platform for machine learning. It provides tools, libraries, and community resources for researchers and developers to build and deploy machine learning applications. Anaconda is an open data science platform based on Python 3. This container installs TensorFlow through the conda command with a lightweight version of Anaconda (Miniconda) and the conda-forge repository in the /usr/local/anaconda directory. The default user, anaconda, runs Tini (/usr/bin/tini) as the entry point and comes preloaded with the conda command in the environment $PATH. Additional tags with NVIDIA/CUDA support and Jupyter Notebooks are available.
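Once one of the tags below has been pulled, this layout can be verified from inside the container. A minimal sanity check, assuming the conda environment is named magenta as in the build variables listed at the end of this document:
# List the conda environments shipped with the image.
docker run --rm -it xychelsea/magenta:latest conda env list
# Confirm the magenta package imports inside that environment.
docker run --rm -it xychelsea/magenta:latest conda run -n magenta python -c "import magenta"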
Two flavors of the container are available, each with Magenta and TensorFlow pre-installed through Anaconda: a base (CPU) image and an NVIDIA GPU-enabled image.
The base container is built from xychelsea/tensorflow:latest of the Anaconda 3 container stack (xychelsea/anaconda3:latest) and runs through Tini. For the container with a /usr/bin/tini entry point, use:
docker pull xychelsea/magenta:latest
With Jupyter Notebooks server pre-installed, pull with:
docker pull xychelsea/magenta:latest-jupyter
The GPU flavors are modified versions of the nvidia/cuda:latest container, with support for NVIDIA/CUDA graphics processing units, also run through Tini. For the container with a /usr/bin/tini entry point:
docker pull xychelsea/magenta:latest-gpu
With Jupyter Notebooks server pre-installed, pull with:
docker pull xychelsea/magenta:latest-gpu-jupyter
To run the containers with the generic Docker application or NVIDIA-enabled Docker, use the docker run command with a bound volume directory workspace attached at the mount point /usr/local/magenta/workspace.
docker run --rm -it \
-v workspace:/usr/local/magenta/workspace \
xychelsea/magenta:latest
With Jupyter Notebooks server pre-installed, run with:
docker run --rm -it -d \
-v workspace:/usr/local/magenta/workspace \
-p 8888:8888 \
xychelsea/magenta:latest-jupyter
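Because the Jupyter container runs detached, the server's access URL and token are written to the container log. One way to retrieve them, assuming the container was started with the command above:
# Find the running container, then read the Jupyter URL (with token) from its logs.
docker ps --filter ancestor=xychelsea/magenta:latest-jupyter
docker logs <container-id>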
For the GPU-enabled container, run with:
docker run --gpus all --rm -it \
-v workspace:/usr/local/magenta/workspace \
xychelsea/magenta:latest-gpu /bin/bash
With Jupyter Notebooks server pre-installed, run with:
docker run --gpus all --rm -it -d \
-v workspace:/usr/local/magenta/workspace \
-p 8888:8888 \
xychelsea/magenta:latest-gpu-jupyter
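Before running GPU workloads, it can help to confirm that the GPU is visible inside the container. A quick check, assuming the NVIDIA Container Toolkit is installed on the host:
docker run --gpus all --rm xychelsea/magenta:latest-gpu nvidia-smi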
First, convert MIDI or other input files into a TensorFlow record (TFRecord) file of NoteSequences for processing.
#!/bin/bash
TRAINING_INPUT=$MAGENTA_WORKSPACE/[examples]
TRAINING_FILE=$MAGENTA_WORKSPACE/[examples].tfrecord
convert_dir_to_note_sequences \
--input_dir=$TRAINING_INPUT \
--output_file=$TRAINING_FILE \
--recursive
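Magenta's melody_rnn pipeline normally extracts SequenceExamples from the NoteSequences before training. A sketch of that intermediate step, assuming the file produced above and an output directory inside the workspace:
#!/bin/bash
CONFIG=lookback_rnn
NOTE_SEQUENCES=$MAGENTA_WORKSPACE/[examples].tfrecord
SEQUENCE_EXAMPLES=$MAGENTA_WORKSPACE/sequence_examples
# Split the extracted melodies into training and evaluation sets.
melody_rnn_create_dataset \
--config=$CONFIG \
--input=$NOTE_SEQUENCES \
--output_dir=$SEQUENCE_EXAMPLES \
--eval_ratio=0.10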
Next, run the training script using one of the pre-defined configurations or your own model.
#!/bin/bash
# Pre-trained CONFIG options: basic_rnn, mono_rnn, lookback_rnn, attention_rnn
CONFIG=lookback_rnn
TRAINING_STEPS=20480
TRAINING_FILE=$MAGENTA_WORKSPACE/tfrecord/example.tfrecord
TRAINING_DIR=$MAGENTA_WORKSPACE/tensorboard
melody_rnn_train \
--config=$CONFIG \
--hparams="batch_size=64,rnn_layer_sizes=[64,64]" \
--num_training_steps=$TRAINING_STEPS \
--sequence_example_file=$TRAINING_FILE \
--run_dir=$TRAINING_DIR
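Training progress can be monitored with TensorBoard, which is installed alongside TensorFlow. A minimal sketch pointing it at the run directory used above; to reach it from the host, the port also needs to be published (for example with -p 6006:6006) when starting the container:
#!/bin/bash
TRAINING_DIR=$MAGENTA_WORKSPACE/tensorboard
# Serve TensorBoard on all interfaces so it is reachable from outside the container.
tensorboard --logdir=$TRAINING_DIR --host=0.0.0.0 --port=6006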
Finally, generate MIDI files into the workspace or another output directory using one of the configurations and a primer file.
#!/bin/bash
# CONFIG options: basic_rnn, mono_rnn, lookback_rnn, attention_rnn
CONFIG=lookback_rnn
BUNDLE_PATH=$MAGENTA_MODELS/$CONFIG.mag
PRIMER_FILE=$MAGENTA_WORKSPACE/example.mid
melody_rnn_generate \
--config=$CONFIG \
--bundle_file=$BUNDLE_PATH \
--output_dir=$HOME/magenta/workspace/output \
--num_outputs=16 \
--num_steps=512 \
--primer_midi="${PRIMER_FILE}"
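A checkpoint trained in the previous step can also be packaged into a .mag bundle and reused in place of the pre-trained bundles. A sketch, assuming the run directory and hyperparameters from the training script above:
#!/bin/bash
CONFIG=lookback_rnn
TRAINING_DIR=$MAGENTA_WORKSPACE/tensorboard
# Package the latest checkpoint from the run directory as a generator bundle.
melody_rnn_generate \
--config=$CONFIG \
--run_dir=$TRAINING_DIR \
--hparams="batch_size=64,rnn_layer_sizes=[64,64]" \
--bundle_file=$MAGENTA_WORKSPACE/$CONFIG.mag \
--save_generator_bundle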
To build either the GPU-enabled or CPU-only container, use the magenta-docker GitHub repository.
git clone git://github.com/xychelsea/magenta-docker.git
For the base container, built from xychelsea/tensorflow:latest of the Anaconda 3 container stack (xychelsea/anaconda3:latest) and running through Tini, build with:
docker build -t magenta:latest -f Dockerfile .
With Jupyter Notebooks server pre-installed, build with:
docker build -t magenta:latest-jupyter -f Dockerfile.jupyter .
For the GPU-enabled container, build with:
docker build -t magenta:latest-gpu -f Dockerfile.nvidia .
With Jupyter Notebooks server pre-installed, build with:
docker build -t magenta:latest-gpu-jupyter -f Dockerfile.nvidia-jupyter .
The default environment uses the following configurable options:
ANACONDA_GID=100
ANACONDA_PATH=/usr/local/anaconda3
ANACONDA_UID=1000
ANACONDA_USER=anaconda
ANACONDA_ENV=magenta
MAGENTA_PATH=/usr/local/magenta
MAGENTA_HOME=$HOME/magenta
MAGENTA_MODELS=$MAGENTA_PATH/magenta/models/
MAGENTA_WORKSPACE=$MAGENTA_PATH/workspace
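Whether each option is a build-time argument or a runtime environment variable depends on the Dockerfiles in the repository. If they are set as environment variables in the image, their effective values can be inspected from a running container, for example:
docker run --rm xychelsea/magenta:latest env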