Merge pull request #6 from alpha-carinae29/move-documentation
Move documentation
mhejrati committed Feb 15, 2021
2 parents 8e1dc25 + 8f79d5c commit 41cd7d2
Showing 8 changed files with 179 additions and 4 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -7,6 +7,7 @@ __pycache__/
*.tflite
*.csv
.idea/
build/
venv/
.env
.vscode
2 changes: 2 additions & 0 deletions docs/requirements.txt
@@ -0,0 +1,2 @@
sphinx-rtd-theme

87 changes: 87 additions & 0 deletions docs/source/adaptive_learning.rst
@@ -1,8 +1,95 @@
Adaptive Learning
=================

Adaptive learning is the process of customizing object detection models with user-provided data and environments. For more information, please visit our `blog post <https://neuralet.com/article/adaptive-learning/>`_.

Client
^^^^^^

The Neuralet adaptive learning service consists of a client side and a server side. You start an adaptive learning job in the cloud and retrieve the trained model on the client side.

Run the Docker container for your device and run the commands below inside the container: ::

cd services/adaptive-learning/client

**#Step 1:**

Create an :code:`input.zip` file from the video file you want to feed to Adaptive Learning. ::

zip -j input.zip PATH_TO_VIDEO_FILE

**#Step 2:**

Upload the zip file and get a unique id: ::

python3 client.py upload_file --file_path FILE_PATH

**#Step 3:**

Add the previous step's unique ID to the :code:`UploadUUID` field and the video file name to the :code:`VideoFile` field of the config file. You can find a more comprehensive explanation of the config file and its fields in the next section. Note: you can start from the sample config file in :code:`configs/sample_config.ini`.

**#Step 4:**

Initiate a new job and get your job's ID: ::

python3 client.py train --config_path CONFIGPATH

**#Step 5:**

Get the job status (enter your job ID as :code:`JOBID`): ::

python3 client.py get_status --job_id JOBID


The expected status messages are as follows:

.. csv-table:: Job status messages
:header: "Parameter", "Comments"
:widths: 10, 20

"Allocating Resource", "Allocating compute machine to your job"
"Building", "Building an environment to start a job"
"Training", "Running a Adaptive Learning Job"
"Wrapping Up", "Saving data and finishing the job"
"Finished", "The job has been finished. Note that it doesn't mean that the job has been finished successfully. it may finished with error"
"Failed", "There was a problem in Neuralet infrastructure"
"Not Reached Yet", "The job's workflow have not been reached to this stage yet"
"Unexpected Error", "An internal error has occurred"

**#Step 6:**

Download the trained model once the job has finished. ::

python3 client.py download_file --job_id JOBID
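
Putting the client steps together, a complete session might look like the following sketch. The video path is a hypothetical example, :code:`configs/sample_config.ini` is the sample config, and :code:`JOBID` stands for the ID returned when the job is created: ::

    # Steps 1-2: package and upload the input video, noting the returned unique ID
    zip -j input.zip ~/videos/store_entrance.mp4
    python3 client.py upload_file --file_path input.zip

    # Step 3: set UploadUUID and VideoFile in the config file before training
    # Step 4: start the job and note the returned job ID
    python3 client.py train --config_path configs/sample_config.ini

    # Steps 5-6: poll the status, then download output.zip once the job has finished
    python3 client.py get_status --job_id JOBID
    python3 client.py download_file --job_id JOBID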

**What is inside the :code:`output.zip` file?**

:code:`train_outputs` : Contains all of the Adaptive Learning files.

:code:`train_outputs/frozen_graph` : Contains all of the files required for inference and for exporting to edge devices. Pass this directory to :code:`inference.py` on :code:`x86` devices to run inference with the trained model.

:code:`train_outputs/frozen_graph/frozen_inference_graph.pb` : Present inside the :code:`frozen_graph` directory when :code:`QuantizedModel` is :code:`false` in the config file. You can pass this file to the Jetson exporter to create a TensorRT engine.

:code:`train_outputs/frozen_graph/detect.tflite` : Present inside the :code:`frozen_graph` directory when :code:`QuantizedModel` is :code:`true` in the config file. This is the quantized :code:`tflite` file. You can pass it to the Edge TPU exporter to create an Edge TPU-compiled tflite file.

:code:`event.out.tfevents` : This is the Adaptive Learning training log file. You can open it with :code:`tensorboard` to monitor training progress.
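
For example, after extracting :code:`output.zip`, you can point a standard TensorBoard installation (assumed to be available in your environment) at the extracted directory and open the URL it prints (by default http://localhost:6006): ::

    tensorboard --logdir train_outputs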



Adaptive Learning Config File
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To customize the Adaptive Learning framework for your needs, you must configure the sample config file in the :code:`configs/` directory. The following table briefly explains each config file parameter:

.. csv-table:: Config file parameters
:header: "Parameter", "Options", "Comments"
:widths: 10, 20, 20


"Teacher/UploadUUID", "a UUID", "Unique id of uploaded input.zip file."
"Teacher/VideoFile", "string", "Name of the video you zipped and uploaded."
"Teacher/Classes", "comma-seperated string without space", "A list of classes names that you want to train on. these classes should be a subset of COCO classes. For all COCO classes just put :code:`coco`"
"Teacher/PostProcessing", "One of :code:`'background_filter'` or :code:`' '` ", "Background filter will apply a background subtraction algorithm on video frames and discards the bounding boxes in which their background pixels rate is higher than a defined threshold."
"Teacher/ImageFeature", "One of the :code:`'foreground_mask'`, :code:`'optical_flow_magnitude'`, :code:`'foreground_mask && optical_flow_magnitude'` or :code:`' '`", "This parameter specifies the type of input feature engineering that will perform for training. :code:`'foreground_mask'` replaces one of the RGB channels with the foreground mask. :code:`'optical_flow_magnitude'` replaces one of the RGB channels with the magnitude of optical flow vectors and, :code:`'foreground_mask && optical_flow_magnitude'` performs two feature engineering technique at the same time as well as changing the remaining RGB channel with the grayscale transformation of the frame. For more information about feature engineering and its impact on the model's accuracy, visit `our blog <https://neuralet.com/article/adaptive-learning/>`_ ."
"Student/QuantizedModel", "true or false", "whether to train the student model with quantization aware strategy or not. This is especially useful when you want to deploy the final model on an edge device that only supports :code:`Int8` precision like Edge TPU. By applying quantization aware training the App will export a :code:`tflite` too."

6 changes: 3 additions & 3 deletions docs/source/conf.py
@@ -17,9 +17,9 @@

# -- Project information -----------------------------------------------------

project = 'neuralet_object_detection'
copyright = '2021, alpha-carinae29'
author = 'alpha-carinae29'
project = 'Neuralet Edge Vision'
copyright = '2021 - Neuralet'
author = 'Neuralet'

# The full version, including alpha/beta/rc tags
release = '0.0.1'
19 changes: 19 additions & 0 deletions docs/source/export_models.rst
@@ -1,8 +1,27 @@
Export Models to Edge Devices
=============================

With the Neuralet Edge Object Detection module, you can easily export your trained model to Nvidia's Jetson Devices and Google Edge TPUs.

Compile tflite Models to Edge TPU Models
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Inside the amd64 Docker container, with a Coral USB Accelerator connected, run: ::

python3 exporters/edgetpu_exporter.py --tflite_file TFLITE_FILE --out_dir OUT_DIR

Where :code:`TFLITE_FILE` should be a quantized model. You can use our Adaptive Learning API to train a quantized object detection model.
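
For example, a hypothetical invocation using the quantized model from an Adaptive Learning :code:`output.zip` (the output directory name is arbitrary): ::

    python3 exporters/edgetpu_exporter.py --tflite_file train_outputs/frozen_graph/detect.tflite --out_dir exported_models/edgetpu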

For more information about quantization techniques of deep neural networks, you can read our `blog <https://neuralet.com/article/quantization-of-tensorflow-object-detection-api-models/>`_.

Export TensorFlow protobuf models to TRT engines
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Inside the Jetson Docker container, run: ::

python3 exporters/trt_exporter.py --pb_file PB_FILE --out_dir OUT_DIR [ --num_classes NUM_CLASSES]

Where :code:`PB_FILE` is a TensorFlow protobuf frozen graph model.
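
For example, a hypothetical invocation using the frozen graph from an Adaptive Learning :code:`output.zip` (the output directory name is arbitrary): ::

    python3 exporters/trt_exporter.py --pb_file train_outputs/frozen_graph/frozen_inference_graph.pb --out_dir exported_models/jetson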



44 changes: 44 additions & 0 deletions docs/source/getting_started.rst
@@ -1,17 +1,61 @@
Getting Started
===============
You can run the Object Detection module on various platforms.

X86
^^^

You should have `Docker <https://docs.docker.com/get-docker/>`_ on your system. ::

# 1) Build Docker image
docker build -f x86.Dockerfile -t "neuralet/object-detection:latest-x86_64_cpu" .

# 2) Run Docker container:
docker run -it -v "$PWD":/repo neuralet/object-detection:latest-x86_64_cpu

X86 nodes with GPU
^^^^^^^^^^^^^^^^^^

You should have `Docker <https://docs.docker.com/get-docker/>`_ and `Nvidia Docker Toolkit <https://github.com/NVIDIA/nvidia-docker>`_ on your system. ::

# 1) Build Docker image
docker build -f x86-gpu.Dockerfile -t "neuralet/object-detection:latest-x86_64_gpu" .

# 2) Run Docker container:
# Notice: you must have Docker >= 19.03 to run the container with the --gpus flag.
docker run -it --gpus all -v "$PWD":/repo neuralet/object-detection:latest-x86_64_gpu


Nvidia Jetson Devices
^^^^^^^^^^^^^^^^^^^^^

You need to have JetPack 4.3 installed on your Jetson device. ::

# 1) Build Docker image
docker build -f jetson-nano.Dockerfile -t "neuralet/object-detection:latest-jetson_nano" .

# 2) Run Docker container:
# Notice: you must have Docker >= 19.03 and the NVIDIA container runtime to run this container with --runtime nvidia.
docker run -it --runtime nvidia --privileged -v "$PWD":/repo neuralet/object-detection:latest-jetson_nano

Coral Dev Board
^^^^^^^^^^^^^^^

::

# 1) Build Docker image
docker build -f coral-dev-board.Dockerfile -t "neuralet/object-detection:latest-coral-dev-board" .

# 2) Run Docker container:
docker run -it --privileged -v "$PWD":/repo neuralet/object-detection:latest-coral-dev-board

AMD64 node with a connected Coral USB Accelerator
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

::

# 1) Build Docker image
docker build -f amd64-usbtpu.Dockerfile -t "neuralet/object-detection:latest-amd64" .

# 2) Run Docker container:
docker run -it --privileged -v "$PWD":/repo neuralet/object-detection:latest-amd64
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -5,7 +5,7 @@
Welcome to Neuralet Object Detection's Documentation!
=====================================================
This is the neuralet's object detection repository. With this module you can run object detection models on various edge devices and create an adaptive learning session to customize object detection models to your specific environment. for more information please visit our `website <https://neuralet.com/>`_.or reach out to hello@neuralet.com.
This is the Neuralet edge vision module. With this module, you can run object detection models on various edge devices and create an adaptive learning session to customize object detection models to your specific environment. For more information, please visit our `website <https://neuralet.com/>`_ or reach out to hello@neuralet.com.

.. toctree::
:maxdepth: 2
22 changes: 22 additions & 0 deletions docs/source/inference.rst
@@ -1,4 +1,26 @@
Run Inference
=============

In any of the Docker containers, you can run a sample inference to produce an output video: ::

python3 inference.py --device DEVICE --input_video INPUT_VIDEO --out_dir OUT_DIR \
[--model_path MODEL_PATH] [--threshold THRESHOLD] [--input_width INPUT_WIDTH]\
[--input_height INPUT_HEIGHT] [--out_width OUT_WIDTH] [--out_height OUT_HEIGHT]

Where:

:code:`DEVICE` should be one of :code:`x86`, :code:`edgetpu`, or :code:`jetson`.

:code:`INPUT_VIDEO` is the path to the input video file.

:code:`OUT_DIR` is a directory in which the script will save the output video file.

:code:`MODEL_PATH` is the path to the model file or directory. For :code:`x86` devices, it should be a directory that contains the :code:`saved_model` directory. For :code:`edgetpu` it should be a compiled :code:`tflite` file, and for :code:`jetson` devices, it should be a :code:`TRT Engine` file.

:code:`THRESHOLD` is the detector's confidence threshold for reporting objects.

:code:`INPUT_WIDTH` and :code:`INPUT_HEIGHT` are the width and height of the input of the model.

:code:`OUT_WIDTH` and :code:`OUT_HEIGHT` are the width and height of the output video.
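
For example, a hypothetical run in the :code:`x86` container, using the frozen graph directory from an Adaptive Learning :code:`output.zip` (the video path, output directory, and threshold are placeholders): ::

    python3 inference.py --device x86 --input_video /repo/data/sample_video.mp4 --out_dir /repo/output \
        --model_path train_outputs/frozen_graph --threshold 0.5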

