The Sumen model integrates with Triton Inference Server

hoang-quoc-trung/sumen-triton


Translating Math Formula Images to LaTeX Sequences - Triton Inference Server

Setup Triton (Requires NVIDIA GPU)

  • Pull and run the Triton Inference Server container (the image is downloaded on first run)
docker run --gpus=all -it --shm-size=256m  \
  -p8000:8000 -p8001:8001 -p8002:8002 \
  -v ${PWD}:/workspace/ -v ${PWD}/model_repository:/models \
  nvcr.io/nvidia/tritonserver:22.12-py3
  • Inside the container, install the required packages
cd /models
pip install -r requirements.txt
  • Start the Triton Inference Server
cd /opt/tritonserver
tritonserver --model-repository=/models
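
Triton expects the mounted /models directory to follow its model-repository layout: one directory per model, containing a config.pbtxt and numbered version subdirectories. A plausible layout for this repo, assuming the model is named sumen and served through Triton's Python backend (both names are assumptions — check the model_repository folder in this repo for the actual ones):

```
model_repository/
└── sumen/                 # model name, as referenced by clients
    ├── config.pbtxt       # model configuration: backend, input/output tensors
    └── 1/                 # version directory
        └── model.py       # Python-backend entry point
```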

On the client side

  • Install the Triton client library (quoted so the brackets are not expanded by shells such as zsh)
pip install "tritonclient[http]"
  • Run the demo client
python client.py
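
The repo's client.py presumably uses the tritonclient package installed above. For illustration, here is a minimal sketch of the equivalent raw request against Triton's HTTP endpoint (the KServe v2 inference protocol) using only the Python standard library. The input/output tensor names ("input", "output") and the model name "sumen" are assumptions — the real names come from config.pbtxt in the model repository:

```python
import json
import urllib.request

TRITON_URL = "http://localhost:8000"  # Triton's default HTTP port, mapped by docker run above
MODEL_NAME = "sumen"                  # assumed model name; check model_repository/


def build_infer_request(pixels, shape):
    """Build a KServe-v2 inference request body for Triton's HTTP endpoint.

    `pixels` is a flat list of floats (the preprocessed formula image),
    `shape` its tensor shape, e.g. [1, 3, H, W]. Tensor names below are
    assumptions -- see config.pbtxt for the real ones.
    """
    return {
        "inputs": [
            {
                "name": "input",      # assumed input tensor name
                "shape": list(shape),
                "datatype": "FP32",
                "data": pixels,
            }
        ],
        "outputs": [{"name": "output"}],  # assumed output tensor name
    }


def infer(pixels, shape):
    """POST the request to a running Triton server and return the JSON response."""
    body = json.dumps(build_infer_request(pixels, shape)).encode()
    req = urllib.request.Request(
        f"{TRITON_URL}/v2/models/{MODEL_NAME}/infer",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Build (but do not send) a request for a dummy 1x3x2x2 tensor;
    # a real client would preprocess a formula image into this shape.
    payload = build_infer_request([0.0] * 12, [1, 3, 2, 2])
    print(payload["inputs"][0]["shape"])
```

Calling infer() requires the server from the setup section to be running; the installed tritonclient package wraps this same protocol behind InferenceServerClient.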
