InnoTIS is how Innodisk and Aetina showcase AI models running on an Aetina server. We integrate NVIDIA Triton Inference Server so that users can send data to the Aetina server over gRPC, run AI inference, and get recognition results back.
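As a rough sketch of that gRPC flow, a client could look like the following, using NVIDIA's `tritonclient` Python package. The model name (`yolov4`), tensor names (`input`/`output`), 608x608 input size, and the nearest-neighbor preprocessing are illustrative assumptions, not details taken from this repository:

```python
import numpy as np

def preprocess(image: np.ndarray, size: int = 608) -> np.ndarray:
    """Hypothetical YOLOv4-style preprocessing: nearest-neighbor resize,
    scale to [0, 1], HWC -> CHW, and add a batch dimension."""
    h, w, _ = image.shape
    ys = np.arange(size) * h // size   # nearest-neighbor row indices
    xs = np.arange(size) * w // size   # nearest-neighbor column indices
    resized = image[ys][:, xs]
    x = resized.astype(np.float32) / 255.0
    x = np.transpose(x, (2, 0, 1))     # HWC -> CHW
    return x[np.newaxis, ...]          # add batch dimension

def infer(server_ip: str, image: np.ndarray) -> np.ndarray:
    """Send one image to Triton over gRPC (8001 is Triton's default gRPC port)."""
    import tritonclient.grpc as grpcclient  # pip install tritonclient[grpc]
    client = grpcclient.InferenceServerClient(url=f"{server_ip}:8001")
    batch = preprocess(image)
    inp = grpcclient.InferInput("input", batch.shape, "FP32")
    inp.set_data_from_numpy(batch)
    result = client.infer(model_name="yolov4", inputs=[inp])
    return result.as_numpy("output")   # raw detections; postprocessing not shown
```

The actual tensor names and shapes come from each model's Triton config, so check `config.pbtxt` in the model repository before wiring up a real client.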
- Custom version which provides only three usable models:
  - DENSENET_ONNX (NVIDIA sample)
  - YOLOV4 (COCO dataset)
  - YOLOV4_WILL (detects whether people are wearing masks)
- To modify your custom code (`*.cpp`, `*.h`), please visit my Notion.
- innotis-server uses gRPC; the HTTP service will be opened but cannot be used.
- Install NVIDIA Driver and Docker
- Run innotis-server
  - Download innotis-server

    ```
    $ git clone https://github.com/MaxChangInnodisk/innotis-server.git
    $ cd innotis-server
    ```
  - Run init.sh (only needed the first time)

    ```
    $ ./init.sh
    ```
  - Run run.sh

    ```
    $ ./run.sh
    ```
- Run innotis-client (in another terminal). GitHub: innotis-client
  - DockerHub: pull the image and run a container from Docker Hub

    ```
    $ docker run --rm -p 5000:5000 -t maxchanginnodisk/innotis
    ```
  - Dockerfile: you can also build from the Dockerfile; please visit innotis-client for more information.
  - Miniconda: a virtual environment might be a great idea for developers; please visit innotis-client for more information.
- Open a browser and enter the URL (localhost:5000).
  - The Triton IP must be modified to <server_ip>; you can find <server_ip> in "server_ip.txt", which is generated when init.sh is run.
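If you want to read the generated address programmatically instead of copying it by hand, a small helper could look like this (a sketch that assumes `server_ip.txt` holds a single bare address, which this README does not actually specify):

```python
from pathlib import Path

def read_server_ip(path: str = "server_ip.txt") -> str:
    """Return the Triton server address written by init.sh.
    Assumes the file contains one address, possibly with trailing whitespace."""
    return Path(path).read_text().strip()
```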
- Have fun.
Thanks to: