Search-based Llama 3.1 8B with vLLM and BentoML

This is a BentoML example project demonstrating how to build a retrieval-based search engine using Llama 3.1 8B with vLLM, a high-throughput and memory-efficient inference engine.

See here for a full list of BentoML example projects.

💡 This example serves as a basis for advanced code customization, such as custom models, inference logic, or vLLM options. For simple LLM hosting with an OpenAI-compatible endpoint and no code to write, see OpenLLM.

Prerequisites

  • You have installed Python 3.8+ and pip. See the Python downloads page to learn more.
  • You have a basic understanding of key concepts in BentoML, such as Services. We recommend you read Quickstart first.
  • You have gained access to Llama 3.1 8B on its official website and Hugging Face.
  • If you want to test the Service locally, you need an NVIDIA GPU with at least 16 GB of VRAM.
  • (Optional) We recommend you create a virtual environment for dependency isolation in this project (a minimal example follows this list). See the Conda documentation or the Python documentation for details.
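
For example, using Python's built-in venv module on Linux or macOS (the environment name .venv is just a convention):

python -m venv .venv
source .venv/bin/activate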

Install dependencies

git clone https://github.com/bentoml/BentoSearch.git
cd BentoSearch
pip install -r requirements.txt
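
Because Llama 3.1 8B is a gated model on Hugging Face, you will likely need to authenticate locally before serving. For example, with an access token from your Hugging Face account (whether the project reads this exact variable locally is an assumption based on the deployment step below):

export HF_TOKEN=<your_huggingface_token>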

Run the BentoML Service

We have defined a BentoML Service in service.py. Run bentoml serve in your project directory to start the Service.

$ bentoml serve .

The server is now active at http://localhost:3000. You can interact with it using the Swagger UI or in other ways.

CURL
curl -N -X 'POST' \
  'http://localhost:3000/search' \
  -H 'accept: text/event-stream' \
  -H 'Content-Type: application/json' \
  -d '{
  "prompt": "Who won 2024 Olympic Track and Field?",
  "max_tokens": 8192
}'
Python client
import bentoml

with bentoml.SyncHTTPClient("http://localhost:3000") as client:
  response_generator = client.search(
    prompt="Who won 2024 Olympic Track and Field?",
    max_tokens=8192
  )
  for response in response_generator:
    print(response, end='', flush=True)

For detailed explanations of the Service code, see vLLM inference.
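
For orientation only, below is a minimal sketch of what a streaming vLLM Service could look like in BentoML. It is not the project's actual service.py: the model ID, resource spec, and parameters are assumptions, and the retrieval step that makes this a search engine is omitted.

import uuid
from typing import AsyncGenerator

import bentoml

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed model ID

@bentoml.service(resources={"gpu": 1}, traffic={"timeout": 300})
class Search:
    def __init__(self) -> None:
        from vllm import AsyncEngineArgs, AsyncLLMEngine

        # Start an async vLLM engine for streaming generation.
        self.engine = AsyncLLMEngine.from_engine_args(
            AsyncEngineArgs(model=MODEL_ID, max_model_len=8192)
        )

    @bentoml.api
    async def search(self, prompt: str, max_tokens: int = 8192) -> AsyncGenerator[str, None]:
        from vllm import SamplingParams

        # The real project augments the prompt with retrieved search results here.
        stream = self.engine.generate(
            prompt,
            SamplingParams(max_tokens=max_tokens),
            request_id=uuid.uuid4().hex,
        )
        cursor = 0
        async for output in stream:
            text = output.outputs[0].text  # cumulative text generated so far
            yield text[cursor:]
            cursor = len(text)

With a Service shaped like this, the /search endpoint streams tokens back as they are generated, which is what the curl and Python client examples above consume.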

Deploy to BentoCloud

After the Service is ready, you can deploy the application to BentoCloud for better management and scalability. Sign up if you haven't got a BentoCloud account.
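
You can log in from the terminal with the BentoML CLI, which prompts for (or accepts) a BentoCloud API token:

bentoml cloud login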

Make sure you have logged in to BentoCloud, then run the following command to deploy it.

bentoml deploy --env HF_TOKEN=<your_huggingface_token> .

Once the application is up and running on BentoCloud, you can access it via the exposed URL.

Note: For custom deployment in your own infrastructure, use BentoML to generate an OCI-compliant image.
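
For example, with Docker installed, you could build a Bento and then containerize it; bentoml build prints the resulting Bento tag, which bentoml containerize consumes (the tag below is a placeholder):

bentoml build
bentoml containerize <bento_tag>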
