SLAM-LLM is a deep learning toolkit that allows researchers and
developers to train custom multimodal large language models (MLLMs), focusing on speech, language, audio, and music processing. We provide detailed recipes for training and high-performance checkpoints for inference.
- [Update May. 21, 2024] Recipes for spatial audio understanding are now supported.
- [Update May. 20, 2024] Recipes for music caption (MC) are now supported.
- [Update May. 8, 2024] Recipes for visual speech recognition (VSR) are now supported.
- [Update May. 4, 2024] Recipes for zero-shot text-to-speech (TTS) are now supported.
- [Update Apr. 28, 2024] Recipes for automated audio captioning (AAC) are now supported.
- [Update Mar. 31, 2024] Recipes for automatic speech recognition (ASR) are now supported.
```bash
git clone https://github.com/huggingface/transformers.git
cd transformers
git checkout tags/v4.35.2
pip install -e .
cd ..
git clone https://github.com/huggingface/peft.git
cd peft
git checkout tags/v0.6.0
pip install -e .
cd ..
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
git clone git@github.com:ddlBoJack/SLAM-LLM.git
cd SLAM-LLM
pip install -e .
```
For some examples, you may need to install fairseq; the commands are as follows:
```bash
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable ./
```
We provide reference implementations of various LLM-based speech, audio, and music tasks:
- Speech tasks
- Audio tasks
- Music tasks
We provide hierarchical configuration inheritance with the following override priority (illustrated in the sketch below):
command-line (shell file) > Hydra configuration (YAML file) > dataclass configuration (Python file)
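For instance, a value set on the command line overrides the one in the Hydra YAML file, which in turn overrides the dataclass default. The sketch below uses a hypothetical entry script, config directory, and `train_config.num_epochs` option purely for illustration; the actual recipes ship their own scripts and configs:

```bash
# dataclass default (Python file):        num_epochs = 1
# Hydra configuration (conf/config.yaml): train_config.num_epochs: 3
# command line (shell file): the ++ override below has the highest priority, so num_epochs = 5
python finetune.py \
    --config-path conf \
    --config-name config.yaml \
    ++train_config.num_epochs=5
```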
- Easily extensible to new models and tasks.
- Detailed recipes for training and high-performance checkpoints for inference.
- Mixed-precision training, which runs faster with less GPU memory on NVIDIA Tensor Cores.
- Multi-GPU training with data and model parallelism, supporting DDP, FSDP, and DeepSpeed (support for the latter is still being improved); see the launch sketch after this list.
- Flexible configuration based on Hydra and dataclasses, allowing a combination of code, command-line, and file-based configuration.
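As a minimal sketch of a multi-GPU launch, the command below is hypothetical: the script path and the `train_config` option names are placeholders, and the provided recipes set these options in their own shell scripts.

```bash
# Hypothetical DDP launch on a single node with 4 GPUs; the script path and the
# train_config options are placeholders -- consult the recipe shell scripts for the real ones.
torchrun --nnodes 1 --nproc_per_node 4 \
    examples/asr_librispeech/finetune_asr.py \
    ++train_config.enable_ddp=true \
    ++train_config.use_fp16=true
```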
- We borrow code from Llama-Recipes for the training process.
- We borrow code from Fairseq for DeepSpeed configuration.
- We thank the contributors for providing diverse recipes.