jjerry-k/triton_sample

Triton Inference Server Example

Quick Setup

The quick setup runs on the CPU.
The model format is TorchScript.
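If you need to produce a TorchScript model.pt yourself rather than downloading one, a minimal sketch using torch.jit.trace looks like this. TinyNet is a hypothetical stand-in; the real model from step 1 below would be traced the same way.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network; the downloaded model is traced the same way.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(8, 10)

    def forward(self, x):
        x = self.pool(self.conv(x)).flatten(1)
        return self.fc(x)

model = TinyNet().eval()
example = torch.randn(1, 3, 224, 224)     # dummy input that fixes the traced shapes
traced = torch.jit.trace(model, example)  # record the forward pass as a TorchScript graph
traced.save("model.pt")                   # this file goes under model_repository/{model name}/1/
```

Tracing bakes in the control flow seen for the example input, which is fine for plain feed-forward models; models with data-dependent branching would need torch.jit.script instead.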

  1. Download the model.
  2. Move the model.pt file to model_repository/{model name}/1.
  3. Run Docker Compose:

docker compose up
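Triton expects each model in its own versioned directory next to a config.pbtxt. A sketch of the layout and a minimal config for a TorchScript (pytorch_libtorch) model follows; the tensor names follow Triton's input__N/output__N convention for libtorch models, but the dims and data types here are assumptions, not values from this repo.

```
# Layout (version directory "1" holds model.pt):
#   model_repository/{model name}/config.pbtxt
#   model_repository/{model name}/1/model.pt

# Minimal config.pbtxt (dims and dtypes are hypothetical):
platform: "pytorch_libtorch"
max_batch_size: 0
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 1, 3, 224, 224 ]
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1, 10 ]
  }
]
```

With max_batch_size set to 0, the dims include the batch dimension explicitly; omitting the name field makes Triton use the directory name as the model name.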

To Do List

  • GPU Mode
  • Detection router
    • Postprocessing
  • Segmentation router
  • Variable input type
  • Variable output type
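For the GPU Mode item, Docker Compose can pass a GPU through to the Triton container with a device reservation. A sketch, assuming a service named triton, the NVIDIA Container Toolkit on the host, and an image tag chosen for illustration:

```yaml
services:
  triton:
    image: nvcr.io/nvidia/tritonserver:23.10-py3   # tag is an assumption
    command: tritonserver --model-repository=/models
    volumes:
      - ./model_repository:/models
    ports:
      - "8000:8000"   # HTTP
      - "8001:8001"   # gRPC
      - "8002:8002"   # metrics
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

The config.pbtxt would also need an instance_group with kind KIND_GPU for the model to run on the reserved device.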
