
Jukebox

This model is in maintenance mode only, so we won't accept any new PRs changing its code. If you run into any issues running this model, please reinstall the last version that supported it, v4.40.2. You can do so by running the following command:
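```bash
pip install -U transformers==4.40.2
```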

Overview

The Jukebox model was proposed in [Jukebox: A Generative Model for Music](https://arxiv.org/abs/2005.00341) by Prafulla Dhariwal, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford and Ilya Sutskever. It introduces a generative music model which can produce minute-long samples conditioned on an artist, genre and lyrics.

The abstract from the paper is the following:

We introduce Jukebox, a model that generates music with singing in the raw audio domain. We tackle the long context of raw audio using a multiscale VQ-VAE to compress it to discrete codes, and modeling those using autoregressive Transformers. We show that the combined model at scale can generate high-fidelity and diverse songs with coherence up to multiple minutes. We can condition on artist and genre to steer the musical and vocal style, and on unaligned lyrics to make the singing more controllable. We are releasing thousands of non cherry-picked samples, along with model weights and code.

As shown in the figure below, Jukebox is made of 3 priors which are decoder-only models. They follow the architecture described in [Generating Long Sequences with Sparse Transformers](https://arxiv.org/abs/1904.10509), modified to support a longer context length. First, an autoencoder is used to encode the text lyrics. Next, the first prior (also called the `top_prior`) attends to the last hidden states extracted from the lyrics encoder. Each subsequent prior is linked to the previous one via an `AudioConditioner` module, which upsamples the outputs of the previous prior to raw tokens at a certain audio frames-per-second resolution. Metadata such as artist, genre and timing are passed to each prior, in the form of a start token and a positional embedding for the timing data. The hidden states are mapped to the closest codebook vector of the VQ-VAE in order to convert them to raw audio.

*Figure: the JukeboxModel architecture.*
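To make this composition concrete, here is a small inspection sketch; the `priors` and `vqvae` attribute names are assumptions based on the Transformers implementation, not guaranteed by this page:

```python
from transformers import JukeboxModel

model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics")

# Three decoder-only priors; index 0 is the top prior attending to the lyrics
# (attribute name assumed from the implementation).
print(len(model.priors))
# The multiscale VQ-VAE used to map hidden states back to raw audio.
print(type(model.vqvae).__name__)  # JukeboxVQVAE
```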

This model was contributed by Arthur Zucker. The original code can be found [here](https://github.com/openai/jukebox).

Usage tips

  • This model only supports inference (a minimal sketch follows this list). This is for a few reasons, mostly because training requires a prohibitive amount of memory. Feel free to open a PR and add what's missing for a full integration with the Hugging Face trainer!
  • This model is very slow: generating one minute of audio with the 5b top prior takes around 8 hours on a V100 GPU. To automatically handle the device on which the model should execute, use accelerate.
  • Contrary to the paper, the order of the priors goes from 0 to 1 as it felt more intuitive: we sample starting from prior 0.
  • Primed sampling (conditioning the sampling on raw audio) requires more memory than ancestral sampling and should be used with `fp16` set to `True`.
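A minimal end-to-end inference sketch, assuming the `openai/jukebox-1b-lyrics` checkpoint and the sampling methods documented below (argument names may differ between versions):

```python
import torch
from transformers import JukeboxModel, JukeboxTokenizer, set_seed

# Load the smaller 1b-lyrics checkpoint; min_duration=0 allows very short samples.
model = JukeboxModel.from_pretrained("openai/jukebox-1b-lyrics", min_duration=0).eval()
tokenizer = JukeboxTokenizer.from_pretrained("openai/jukebox-1b-lyrics")

# Conditioning metadata: artist, genre and (unaligned) lyrics.
metas = tokenizer(
    artist="Zac Brown Band",
    genres="Country",
    lyrics="Hey, are you awake? Can you talk to me?",
)

set_seed(0)
# Ancestral sampling starts from prior 0 (the top prior); sample_length is
# kept tiny here so the example finishes quickly.
music_tokens = model.ancestral_sample(metas.input_ids, sample_length=400)

# Decode the sampled music tokens back to a raw audio waveform with the VQ-VAE.
with torch.no_grad():
    audio = model.decode(music_tokens[:1])
```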


JukeboxConfig

[[autodoc]] JukeboxConfig
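The usual configuration pattern applies; a short sketch with a default configuration and a randomly initialized model (no pretrained weights):

```python
from transformers import JukeboxConfig, JukeboxModel

# Initializing a default Jukebox configuration.
configuration = JukeboxConfig()

# Initializing a model (with random weights) from that configuration.
model = JukeboxModel(configuration)

# Accessing the model configuration.
configuration = model.config
```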

JukeboxPriorConfig

[[autodoc]] JukeboxPriorConfig

JukeboxVQVAEConfig

[[autodoc]] JukeboxVQVAEConfig

JukeboxTokenizer

[[autodoc]] JukeboxTokenizer
    - save_vocabulary

JukeboxModel

[[autodoc]] JukeboxModel
    - ancestral_sample
    - primed_sample
    - continue_sample
    - upsample
    - _sample

JukeboxPrior

[[autodoc]] JukeboxPrior
    - sample
    - forward

JukeboxVQVAE

[[autodoc]] JukeboxVQVAE
    - forward
    - encode
    - decode
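A minimal sketch of `decode`, assuming it accepts a list with one tensor of discrete codes per level, and assuming 2048 as the default codebook size:

```python
import torch
from transformers import JukeboxVQVAE, set_seed

model = JukeboxVQVAE.from_pretrained("openai/jukebox-1b-lyrics").eval()
set_seed(0)

# One tensor of random codebook indices for a single level,
# shape (batch, sequence); 2048 is the assumed default codebook size.
music_tokens = [torch.randint(0, 2048, (4, 1))]

with torch.no_grad():
    audio = model.decode(music_tokens)  # raw audio, shape (batch, samples, channels)
```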