Attention!
Repository is now hosted at https://github.com/hocop/nnio
Please refer to the project's documentation.
A very simple Python API for running inference with neural networks. Ideal for sharing your models with colleagues who are not data scientists.
Features:
- Supports multiple formats: ONNX, PyTorch, OpenVINO, and TFLite. Loading a model takes one line of code.
- Takes numpy arrays as input and returns numpy arrays. Inference also takes one line of code.
- Provides a flexible image preprocessing class, which can also read images from files.
- Can read models and images from URLs instead of local paths.
- Includes a number of built-in models, e.g. for object detection, classification, and person re-identification.
It supports running models on CPU as well as some accelerators:
- GPU with CUDA support (ONNX and PyTorch)
- Google USB Accelerator (TFLite)
- Intel Compute Stick (OpenVINO)
- Intel integrated GPUs (OpenVINO)
Each of these devices has its own library and model format; nnio wraps them all in a single, well-defined Python package.
```python
import nnio

# Load a model and put it on a Google Coral Edge TPU device
# (model_path can also be a URL)
model = nnio.EdgeTPUModel(
    model_path='path/to/model_quant_edgetpu.tflite',
    device='TPU',
)
# Create the preprocessor
preproc = nnio.Preprocessing(
    resize=(224, 224),
    batch_dimension=True,
    imagenet_scaling=True,
)
# Load and preprocess the image
# (the argument can be a path, a URL, or a numpy array)
image = preproc('path/to/image.png')
# Make a prediction
class_scores = model(image)
```
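For reference, the `imagenet_scaling` option usually corresponds to the standard ImageNet normalization. A plain NumPy sketch of that step, assuming the common ImageNet mean/std constants (check the nnio docs for the exact behavior):

```python
import numpy as np

# Standard ImageNet channel statistics (an assumption;
# nnio's exact preprocessing may differ)
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def imagenet_scale(image_uint8):
    """Scale an HxWx3 uint8 RGB image to normalized float32."""
    x = image_uint8.astype(np.float32) / 255.0
    return (x - MEAN) / STD

image = np.full((224, 224, 3), 128, dtype=np.uint8)
scaled = imagenet_scale(image)
print(scaled.shape)  # (224, 224, 3)
```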
```python
# Load the model
model = nnio.ONNXModel(
    model_path='path/to/model.onnx',
)
# The inference code stays the same
```
For this example you will need either onnxruntime or onnxruntime-gpu installed, depending on the device you want to use. Install them using pip. See the installation docs.
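For example, one of the two runtimes can be installed from PyPI (package names as published there):

```shell
# CPU-only runtime
pip install onnxruntime

# or, for CUDA-capable GPUs:
pip install onnxruntime-gpu
```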
```python
# Load the model
model = nnio.zoo.onnx.detection.SSDMobileNetV1()
# Get the preprocessing function
preproc = model.get_preprocessing()
# Preprocess your numpy image
image = preproc(image_rgb)
# Make a prediction
boxes = model(image)
```
`boxes` is a list of `nnio.DetectionBox` objects.
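To illustrate how such results might be consumed, here is a minimal sketch using a hypothetical stand-in for `nnio.DetectionBox` (the real class's attribute names may differ; check the nnio docs):

```python
from dataclasses import dataclass

@dataclass
class Box:
    # Hypothetical stand-in for nnio.DetectionBox;
    # the real attribute names may differ
    label: str
    score: float
    x_min: float
    y_min: float
    x_max: float
    y_max: float

def keep_confident(boxes, threshold=0.5):
    """Filter detections by confidence score."""
    return [b for b in boxes if b.score >= threshold]

boxes = [
    Box('person', 0.92, 0.1, 0.1, 0.4, 0.9),
    Box('dog', 0.31, 0.5, 0.6, 0.7, 0.95),
]
print([b.label for b in keep_confident(boxes)])  # ['person']
```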
nnio was initially developed for the Fast Sense X microcomputer, which has six neural accelerators, all of them supported by nnio:
- 3 x Google Coral Edge TPU
- 2 x Intel VPU
- an integrated Intel GPU
More usage examples can be found in the documentation.