# gft (general fine-tuning): A Little Language for Deepnets

1-line programs for fine-tuning and inference

## Installation

If you have docker:

```
docker pull kchurch4/gft:latest
```
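For example, to get an interactive shell in the container (a sketch; the `--gpus all` flag assumes the NVIDIA Container Toolkit is installed, so drop it on CPU-only machines):

```
# Interactive shell in the gft container; drop --gpus all on CPU-only hosts.
docker run -it --gpus all kchurch4/gft:latest bash
```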

Alternatively (if you have a GPU):

```
pip install wheel
pip install gft
```
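To confirm that the package installed, you can ask pip for its metadata:

```
# Show the version and install location of the gft package.
pip show gft
```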

You may want to set the gft environment variable so that the commands below work. To do that, you may need to clone this repo, since pip does not install these extra resources.

```
git clone https://github.com/kwchurch/gft
ls $gft/examples
ls $gft/datasets
ls $gft/doc
```
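The ls commands above assume that $gft points at the clone. One way to set it (a sketch; adjust the path to wherever you cloned the repo):

```
# Point $gft at the clone so that $gft/examples etc. resolve.
export gft=$PWD/gft
```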

Unfortunately, there are a number of incompatibilities between adapters, paddlespeech, and the latest version of HuggingFace transformers. The requirements directory contains several versions of requirements.txt. If you want to use these more advanced features, we recommend setting up several virtual environments to work around the incompatibilities.
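For instance, a separate environment for one feature set might look like this (a sketch; the venv name is illustrative, and the exact filenames are in the requirements directory):

```
# One virtual environment per feature set, to isolate conflicting pins.
python -m venv ~/venvs/gft-paddle
source ~/venvs/gft-paddle/bin/activate
pip install -r $gft/requirements/requirements.txt   # pick the appropriate file
```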

The scripts in the examples directory write their results under $gft_checkpoints. Please set that variable to a location where you have plenty of free disk space. The results are large because most fine-tuning examples copy a pre-trained model, and since there are many dozens of such examples, there will be many dozens of copies of large models.
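For example (the path is illustrative; pick any location with enough space):

```
# Results from the example scripts will land here.
export gft_checkpoints=/data/gft_checkpoints
mkdir -p $gft_checkpoints
```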

WARNING: Some of the fine-tuning scripts take a long time, and not all examples are working (yet).

NOTE: It is not possible to run everything under a single virtual environment. See here for more details.

WARNING: gft programs will download models and datasets into your cache (typically $HOME/.cache/huggingface and $HOME/.paddlenlp). If you have limited disk space in your home directory, you might want to use symbolic links to point these caches to some place with more disk space.
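For example, to relocate both caches (a sketch; the target paths are illustrative):

```
# Move each cache to a bigger disk and leave a symlink behind.
mv $HOME/.cache/huggingface /data/cache/huggingface
ln -s /data/cache/huggingface $HOME/.cache/huggingface
mv $HOME/.paddlenlp /data/cache/paddlenlp
ln -s /data/cache/paddlenlp $HOME/.paddlenlp
```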

If you want to use $gft/datasets/VAD, see instructions for reconstituting VAD.