A Rust, Python and gRPC server for text generation inference.
Forked from HuggingFace's Text Generation Inference project (prior to its re-licensing), it is commercial-friendly and licensed under Apache 2.0.
This fork was created mainly for two reasons:
- Primarily, it allows us faster iteration and more flexibility, which is essential for our research uses. It also allows more control over development and documentation, crucial for our in-house uses at CMU.
- While we understand the reasons behind the re-licensing, we don't want our (research) contributions to be locked behind a restrictive license. This fork will not sync with the upstream repository, and will be updated independently.
For contributors: If HuggingFace's upstream has a feature that you want to use, please open an issue first and discuss porting the functionality independently. Do not just copy the code over, as it will be rejected.
If you are new to this library and it is already deployed in your cluster, we recommend starting with a client-only installation and using models launched by other users.
To start, the TGI_CENTRAL_ADDRESS
environment variable needs to be set so that the client knows which servers to connect to. For example, in the LTI cluster, run
echo "export TGI_CENTRAL_ADDRESS=babel-3-36:8765" >> ~/.bashrc # if using a single machine, use `0.0.0.0:8765` instead
source ~/.bashrc
To use the python client, install it with
cd clients/python
pip install .
You can then query the API to list the models available in your cluster, and use models for inference.
from text_generation import Client
# get current models and pick the first one
models = Client.list_from_central()
model_name, model_addr = models[0]["name"], models[0]["address"]
print(f"Using model {model_name} at {model_addr}")
client = Client("http://" + model_addr)
print(client.generate("What is Deep Learning?", max_new_tokens=20).generated_text)
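Rather than always taking the first model, you may want to select one by name. The sketch below assumes the list format shown above (dicts with "name" and "address" keys); the model names and addresses are illustrative, not real entries:

```python
def pick_model(models, name_substring):
    """Return (name, address) of the first model whose name contains the substring."""
    for m in models:
        if name_substring in m["name"]:
            return m["name"], m["address"]
    raise ValueError(f"no model matching {name_substring!r}")

# Hardcoded list standing in for Client.list_from_central() (illustrative values):
models = [
    {"name": "llama2-vicuna-7b", "address": "babel-3-36:8192"},
    {"name": "falcon-7b-instruct", "address": "babel-4-12:8193"},
]
print(pick_model(models, "falcon"))  # ('falcon-7b-instruct', 'babel-4-12:8193')
```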
In general, you don't have to recreate the environment every time you want to update the library. To update just the library, run the following in the base directory (inside a previously created environment):
export DIR=`pwd`
OPENSSL_DIR=${DIR}/.openssl \
OPENSSL_LIB_DIR=${DIR}/.openssl/lib \
OPENSSL_INCLUDE_DIR=${DIR}/.openssl/include \
BUILD_EXTENSIONS=false \
make install
If you are an LTI student using one of its clusters (or you generally belong to an academic cluster that doesn't have Docker installed), you can sidestep problems with installing system dependencies by using the (mini)conda package manager.
Then, from your base environment, run the install script:
bash setup_scripts/conda_server.sh
Note: This takes a long time, up to 1.5-3 hours; sit back and relax while you wait.
Note: if you are running in a cluster with module
installed, make sure you deactivate all modules before running the script.
This will create a conda environment with all the dependencies needed to run the model servers.
You should then be able to launch models with the text-generation-launcher
command, or by using one of the predefined Makefile rules:
conda activate tgi-env
make run-llama2-vicuna-7b
If you are setting up this library for use in your group/cluster for the first time, you will need (or at least benefit from) setting up a central server. See the instructions in the package folder.
Remember to set the TGI_CENTRAL_ADDRESS
environment variable (ideally for all the users in your cluster) to the address of the central server.
It is also possible to use a simple web chat UI to interact with models running in your server/cluster. This is a simple fork of HuggingFace's Chat UI that queries the central controller for the list of models available in the cluster, then connects to the corresponding servers to generate text.
For example, in Babel, you can access a running Chat-UI web-server with port forwarding by running
ssh babel -L 8888:babel-3-36:4173
and going to localhost:8888
in your browser.
Check the README for more details.
Content below is from the original README.
- Serve the most popular Large Language Models with a simple launcher
- Tensor Parallelism for faster inference on multiple GPUs
- Token streaming using Server-Sent Events (SSE)
- Continuous batching of incoming requests for increased total throughput
- Optimized transformers code for inference using flash-attention and Paged Attention on the most popular architectures
- Quantization with bitsandbytes and GPT-Q
- Safetensors weight loading
- Watermarking with A Watermark for Large Language Models
- Logits warper (temperature scaling, top-p, top-k, repetition penalty, more details see transformers.LogitsProcessor)
- Stop sequences
- Log probabilities
- Production ready (distributed tracing with Open Telemetry, Prometheus metrics)
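As an illustration of what the logits warpers listed above do, here is a toy temperature-plus-top-k warper in plain Python. This is a simplified sketch, not the server's actual implementation (which relies on transformers' LogitsProcessor classes):

```python
import math

def warp_logits(logits, temperature=1.0, top_k=0):
    """Toy logits warper: temperature scaling followed by top-k filtering.
    Logits outside the top k are set to -inf so they get zero probability."""
    scaled = [l / temperature for l in logits]
    if top_k > 0:
        cutoff = sorted(scaled, reverse=True)[top_k - 1]
        scaled = [l if l >= cutoff else -math.inf for l in scaled]
    return scaled

# Temperature 0.5 doubles each logit; top_k=2 keeps only the two largest.
print(warp_logits([2.0, 1.0, 0.5, -1.0], temperature=0.5, top_k=2))
# [4.0, 2.0, -inf, -inf]
```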
Other architectures are supported on a best effort basis using:
AutoModelForCausalLM.from_pretrained(<model>, device_map="auto")
or
AutoModelForSeq2SeqLM.from_pretrained(<model>, device_map="auto")
The easiest way of getting started is using the official Docker container:
model=tiiuae/falcon-7b-instruct
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
docker run --gpus all --shm-size 1g -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:0.9.4 --model-id $model
Note: To use GPUs, you need to install the NVIDIA Container Toolkit. We also recommend using NVIDIA drivers with CUDA version 11.8 or higher.
To see all options to serve your models (in the code or in the CLI):
text-generation-launcher --help
You can then query the model using either the /generate
or /generate_stream
routes:
curl 127.0.0.1:8080/generate \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
curl 127.0.0.1:8080/generate_stream \
-X POST \
-d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
-H 'Content-Type: application/json'
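The streaming route returns Server-Sent Events, where each data: line carries a JSON event. As a sketch (the token.text and token.special fields are assumed from the Python client example below; the exact payload shape may differ), such a line can be parsed like this:

```python
import json

def parse_sse_line(line):
    """Parse one Server-Sent Events line from /generate_stream into a dict.
    Returns None for non-data lines (comments, keep-alives)."""
    if not line.startswith("data:"):
        return None
    return json.loads(line[len("data:"):].strip())

# Example SSE line (illustrative payload):
line = 'data: {"token": {"id": 5, "text": " deep", "special": false}}'
event = parse_sse_line(line)
print(event["token"]["text"])  # " deep"
```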
or from Python:
pip install text-generation
from text_generation import Client
client = Client("http://127.0.0.1:8080")
print(client.generate("What is Deep Learning?", max_new_tokens=20).generated_text)
text = ""
for response in client.generate_stream("What is Deep Learning?", max_new_tokens=20):
if not response.token.special:
text += response.token.text
print(text)
You can consult the OpenAPI documentation of the text-generation-inference
REST API using the /docs
route.
The Swagger UI is also available at: https://huggingface.github.io/text-generation-inference.
You can use the HUGGING_FACE_HUB_TOKEN
environment variable to configure the token used by
text-generation-inference
, giving it access to protected resources.
For example, if you want to serve the gated Llama V2 model variants:
- Go to https://huggingface.co/settings/tokens
- Copy your cli READ token
- Export
HUGGING_FACE_HUB_TOKEN=<your cli READ token>
or with Docker:
model=meta-llama/Llama-2-7b-chat-hf
volume=$PWD/data # share a volume with the Docker container to avoid downloading weights every run
token=<your cli READ token>
docker run --gpus all --shm-size 1g -e HUGGING_FACE_HUB_TOKEN=$token -p 8080:80 -v $volume:/data ghcr.io/huggingface/text-generation-inference:0.9.3 --model-id $model
NCCL
is a communication framework used by
PyTorch
for distributed training and inference. text-generation-inference
makes use of NCCL
to enable Tensor Parallelism, dramatically speeding up inference for large language models.
In order to share data between the different devices of a NCCL
group, NCCL
might fall back to using the host memory if
peer-to-peer using NVLink or PCI is not possible.
To allow the container to use 1G of shared memory and support SHM sharing, we add --shm-size 1g
to the command above.
If you are running text-generation-inference
inside Kubernetes, you can also add shared memory to the container by
creating a volume with:
- name: shm
  emptyDir:
    medium: Memory
    sizeLimit: 1Gi
and mounting it to /dev/shm
.
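For completeness, the container side of that volume might look like the following sketch (the surrounding pod spec is assumed, not part of this document):

```yaml
# Mount the "shm" volume defined above at /dev/shm inside the container.
volumeMounts:
  - name: shm
    mountPath: /dev/shm
```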
Finally, you can also disable SHM sharing by using the NCCL_SHM_DISABLE=1
environment variable. However, note that
this will impact performance.
text-generation-inference
is instrumented with distributed tracing using OpenTelemetry. You can use this feature
by setting the address to an OTLP collector with the --otlp-endpoint
argument.
You can also opt to install text-generation-inference
locally.
First install Rust and create a Python virtual environment with at least
Python 3.9, e.g. using conda
:
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
conda create -n text-generation-inference python=3.9
conda activate text-generation-inference
You may also need to install Protoc.
On Linux:
PROTOC_ZIP=protoc-21.12-linux-x86_64.zip
curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v21.12/$PROTOC_ZIP
sudo unzip -o $PROTOC_ZIP -d /usr/local bin/protoc
sudo unzip -o $PROTOC_ZIP -d /usr/local 'include/*'
rm -f $PROTOC_ZIP
On MacOS, using Homebrew:
brew install protobuf
Then run:
BUILD_EXTENSIONS=True make install # Install repository and HF/transformer fork with CUDA kernels
make run-falcon-7b-instruct
Note: on some machines, you may also need the OpenSSL libraries and gcc. On Linux machines, run:
sudo apt-get install libssl-dev gcc -y
The custom CUDA kernels are only tested on NVIDIA A100s. If you have any installation or runtime issues, you can remove
the kernels by using the DISABLE_CUSTOM_KERNELS=True
environment variable.
Be aware that the official Docker image has them enabled by default.
You can also quantize the weights with bitsandbytes to reduce the VRAM requirement:
make run-falcon-7b-instruct-quantize
make server-dev
make router-dev
# python
make python-server-tests
make python-client-tests
# or both server and client tests
make python-tests
# rust cargo tests
make rust-tests
# integration tests
make integration-tests
TGI is also supported on the following AI hardware accelerators:
- Habana first-gen Gaudi and Gaudi2: check out here how to serve models with TGI on Gaudi and Gaudi2 with Optimum Habana