2. Build neural network inference library

At the time of writing, four back-ends are available for running the neural network inference:

| Back-end | GPU Support | CPU Support | Inference Speed | Effort to install |
| --- | --- | --- | --- | --- |
| TensorRT (default) | ✔️ | | 🔥🔥🔥🔥 | ⚠️⚠️ |
| OpenVino | (✔️) not tested yet | ✔️ | 🔥🔥🔥 | ⚠️ |
| MXNet | ✔️ | ✔️ | 🔥🔥 | ⚠️⚠️⚠️ (CPU only) / ⚠️⚠️⚠️⚠️ (GPU) |
| Torch | ✔️ | ✔️ | 🔥 | ⚠️ |
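
The back-end is normally selected when configuring the build with CMake. The sketch below is only an illustration of that step; the `BACKEND_*` option names are assumptions and should be checked against the engine's `CMakeLists.txt` for your checkout.

```bash
# Illustrative sketch only – the BACKEND_* option names are assumptions,
# verify the actual flags in the engine's CMakeLists.txt.
mkdir -p build && cd build

# TensorRT (default, GPU inference):
cmake -DCMAKE_BUILD_TYPE=Release -DBACKEND_TENSORRT=ON ..

# Alternatively, e.g. MXNet with CPU-only inference:
# cmake -DCMAKE_BUILD_TYPE=Release -DBACKEND_MXNET=ON ..

make -j"$(nproc)"
```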

Next part: