InnoTIS

InnoTIS is how Innodisk and Aetina demonstrate AI models running on an Aetina Server. It builds on NVIDIA Triton Inference Server so that users can send data to our Aetina Server over gRPC, run AI inference, and get the recognition results back.
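
As a rough sketch of what that gRPC exchange looks like, the snippet below sends one request to Triton with the Python tritonclient package. The server address, the default gRPC port 8001, and the densenet_onnx tensor names (data_0 / fc6_1) are assumptions borrowed from NVIDIA's public sample, not taken from this repository, so adjust them to match your deployment.

  import numpy as np
  import tritonclient.grpc as grpcclient

  # Connect to the Triton gRPC endpoint on the Aetina Server (gRPC defaults to port 8001).
  client = grpcclient.InferenceServerClient(url="<server_ip>:8001")

  # Build one input tensor; "data_0" / "fc6_1" are the names used by NVIDIA's
  # densenet_onnx sample and may differ from the models shipped here.
  image = np.zeros((3, 224, 224), dtype=np.float32)  # replace with a preprocessed image
  inputs = [grpcclient.InferInput("data_0", [3, 224, 224], "FP32")]
  inputs[0].set_data_from_numpy(image)
  outputs = [grpcclient.InferRequestedOutput("fc6_1")]

  # Send the request and read back the raw scores.
  result = client.infer(model_name="densenet_onnx", inputs=inputs, outputs=outputs)
  print(result.as_numpy("fc6_1").shape)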


Features

  • This custom build ships with only three usable models (see the model-repository sketch after this list):
    1. DENSENET_ONNX ( NVIDIA sample )
    2. YOLOV4 ( trained on the COCO dataset )
    3. YOLOV4_WILL ( detects whether people are wearing a mask )
  • To modify the custom code (*.cpp, *.h), please visit my Notion page.
  • innotis-server uses gRPC; the HTTP service is started as well, but it is not used.
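
For reference, Triton serves whatever is placed in its model repository. Under Triton's standard conventions the three models above would sit in a layout roughly like the one below; the triton-deploy/ path, lower-case folder names, and file types are assumptions (init.sh generates the real layout), so treat this only as orientation.

  triton-deploy/
  └── models/
      ├── densenet_onnx/
      │   ├── config.pbtxt
      │   └── 1/model.onnx
      ├── yolov4/
      │   ├── config.pbtxt
      │   └── 1/model.plan
      └── yolov4_will/
          ├── config.pbtxt
          └── 1/model.plan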

How to use?

  1. Install NVIDIA Driver and Docker

    1. Build Triton on Windows ( With Docker )
    2. Build Triton on Ubuntu ( With Docker )
  2. Run innotis-server

    1. Download innotis-server
      $ git clone https://github.com/MaxChangInnodisk/innotis-server.git
      $ cd innotis-server
    2. Run init.sh ( only needed the first time )
      $ ./init.sh
      • Take note of the server IP; you can also find it in server_ip.txt.
      • The build/ and triton-deploy/ folders will be generated.
    3. Run run.sh
      $ ./run.sh
      • Make sure the gRPC and HTTP services have started (a quick readiness check is sketched after this list).
  3. Run innotis-client ( in another terminal )

    Github: innotis-client

    • DockerHub: pull the image and run a container from Docker Hub
      $ docker run --rm -p 5000:5000 -t maxchanginnodisk/innotis
    • Dockerfile: you can also build the image from the Dockerfile
    • Miniconda: a virtual environment is a good option for developers
  4. Open a browser and go to localhost:5000.

    • The Triton IP must be changed to <server_ip>; you can find <server_ip> in "server_ip.txt", which is generated when init.sh runs.
  5. Have fun.

    demo
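
As the quick readiness check referenced in step 2 above, the sketch below asks the Triton gRPC endpoint whether the server and the three models are up, using the Python tritonclient package. The port and the lower-case model names are assumptions; match them to the names actually loaded from your model repository.

  import tritonclient.grpc as grpcclient

  client = grpcclient.InferenceServerClient(url="<server_ip>:8001")
  print("server live :", client.is_server_live())
  print("server ready:", client.is_server_ready())

  # Model names are assumed; check triton-deploy/ (or the server log) for the real ones.
  for name in ("densenet_onnx", "yolov4", "yolov4_will"):
      print(name, "ready:", client.is_model_ready(name))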


Reference

Thanks to: