
marwan1023/Intel-Edge-AI-for-IoT-Developers-Nanodegree-program


Intel® Edge AI for IoT Developers Nanodegree program


N.B.: Please don't copy the assignment and quiz solutions. Try to solve the problems yourself.


Leverage the Intel® Distribution of OpenVINO™ Toolkit to fast-track development of high-performance computer vision and deep learning inference applications, and run pre-trained deep learning models for computer vision on-premise. You will identify key hardware specifications of various hardware types (CPU, VPU, FPGA, and Integrated GPU), and utilize the Intel® DevCloud for the Edge to test model performance on the various hardware types. Finally, you will use software tools to optimize deep learning models to improve performance of Edge AI systems. - Source

Intel® Edge AI for IoT Developers Nanodegree program Projects

The project aims to create a people-counting smart camera that detects people using an optimized AI model at the edge and extracts relevant statistics such as:

  • Number of people on the captured video stream
  • The duration they spend on screen
  • Total people counted

These statistics are sent to a server as JSON over MQTT, which saves bandwidth and enables the use of a low-speed link. If needed, it is always possible to watch the video stream remotely to see what is currently happening.
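A minimal sketch of how those statistics might be serialized as JSON for publishing over MQTT. The field names and the `person` topic are illustrative assumptions, not the project's exact schema:

```python
import json

def make_stats_payload(current_count, total_count, duration_s):
    """Build the JSON payload the app publishes over MQTT.

    Field names here are illustrative, not the project's exact schema.
    """
    return json.dumps({
        "count": current_count,   # people currently in frame
        "total": total_count,     # total people counted so far
        "duration": duration_s,   # average time spent on screen (seconds)
    })

payload = make_stats_payload(2, 15, 4.2)
print(payload)

# With the paho-mqtt package installed, publishing would look like:
#   client = mqtt.Client()
#   client.connect("localhost", 1883, keepalive=60)
#   client.publish("person", payload)   # "person" topic is an assumption
```

Serializing to JSON before publishing keeps the payload small and language-neutral, which is what makes the low-speed link usable.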

The challenges in this project are: selecting the right pre-trained model for object detection, optimizing the model to allow inference on low-performance devices, and properly adjusting the input video stream with OpenCV to maximize model accuracy.
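As a sketch of the input-stream adjustment: OpenCV frames arrive in HxWxC BGR layout, while OpenVINO networks expect an NCHW blob. The resize step itself (e.g. `cv2.resize`) is omitted here, and the frame is assumed to already match the model's input size:

```python
import numpy as np

def to_model_input(frame):
    """Convert an HxWxC BGR frame (already resized to the model's expected
    width and height, e.g. with cv2.resize) into the NCHW blob layout that
    OpenVINO networks expect."""
    blob = frame.transpose((2, 0, 1))   # HWC -> CHW
    return blob[np.newaxis, ...]        # add batch dimension -> NCHW

# person-detection-retail-0013 expects a 1x3x320x544 input blob
frame = np.zeros((320, 544, 3), dtype=np.uint8)
print(to_model_input(frame).shape)  # (1, 3, 320, 544)
```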

Details
Programming Language: Python 3.5 or 3.6

The goal of this project is to build an application that reduces congestion in queuing systems.

The Scenarios

  1. Manufacturing Sector
  2. Retail Sector
  3. Transportation Sector

Instructions

  • Propose a possible hardware solution for each scenario
  • Build out your application and test its performance on the DevCloud using multiple hardware types
  • Compare the performance to see which hardware performed best
  • Revise your proposal based on the test results

Requirements

Hardware

Software

Model

  • Download the person detection model from the Open Model Zoo
    sudo /opt/intel/openvino/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name person-detection-retail-0013
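Once downloaded, the model's raw output can be post-processed to count people. person-detection-retail-0013 produces a (1, 1, N, 7) tensor where each row is [image_id, label, confidence, x_min, y_min, x_max, y_max]; a minimal counting helper (the 0.6 confidence threshold is an assumption) might look like:

```python
import numpy as np

def count_people(detections, conf_threshold=0.6):
    """Count detections above a confidence threshold.

    `detections` is the raw output of person-detection-retail-0013,
    shaped (1, 1, N, 7) where each row is
    [image_id, label, confidence, x_min, y_min, x_max, y_max].
    """
    boxes = detections.reshape(-1, 7)
    return int((boxes[:, 2] > conf_threshold).sum())

# Two confident detections and one below the threshold:
fake = np.zeros((1, 1, 3, 7), dtype=np.float32)
fake[0, 0, :, 2] = [0.9, 0.8, 0.3]
print(count_people(fake))  # 2
```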
    

Results

  1. Manufacturing Sector: results compared across CPU, FPGA, GPU, and VPU
  2. Retail Sector: results compared across CPU, FPGA, GPU, and VPU
  3. Transportation Sector: results compared across CPU, FPGA, GPU, and VPU

In this project, you will use a Gaze Estimation model to control the mouse pointer of your computer. You will use the model to estimate the gaze of the user's eyes and change the mouse pointer position accordingly. This project will demonstrate your ability to run multiple models on the same machine and coordinate the flow of data between those models.
The gaze estimation model requires three inputs:

  • The head pose
  • The left eye image
  • The right eye image

Project Requirements and Installation

  • Install the Intel® Distribution of OpenVINO™ Toolkit for Windows 10 here

To get these inputs, you will have to use three other OpenVINO models:

The Pipeline:

You will have to coordinate the flow of data from the input, then among the different models, and finally to the mouse controller. The flow of data will look like this:
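That coordination can be sketched as a single function that threads the frame through each model in turn. All names here are hypothetical stand-ins for the real OpenVINO model wrappers, so the sketch runs with plain callables:

```python
def run_pipeline(frame, face_model, head_pose_model, landmarks_model,
                 gaze_model, move_mouse):
    """Coordinate the data flow: input -> models -> mouse controller."""
    face = face_model(frame)                        # crop the face
    pose = head_pose_model(face)                    # yaw/pitch/roll angles
    left_eye, right_eye = landmarks_model(face)     # eye crops via landmarks
    dx, dy = gaze_model(left_eye, right_eye, pose)  # gaze vector (x, y)
    move_mouse(dx, dy)                              # update pointer position
    return dx, dy

# Stub callables so the sketch runs without OpenVINO installed:
result = run_pipeline(
    "frame",
    face_model=lambda f: "face",
    head_pose_model=lambda f: (0.0, 0.0, 0.0),
    landmarks_model=lambda f: ("left_eye", "right_eye"),
    gaze_model=lambda l, r, p: (0.1, -0.2),
    move_mouse=lambda dx, dy: None,
)
print(result)  # (0.1, -0.2)
```

Keeping the coordination in one function makes it easy to swap any stage (e.g. a different face detector) without touching the rest of the pipeline.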

Benchmarks

  • I ran model inference on CPU and GPU devices on my local machine, using the same input video and the same virtual environment for each run.

    Model precisions tested: FP32, FP16, FP32-INT8

    Hardware tested: CPU (2.3 GHz Intel Core i5), GPU (Intel(R) UHD Graphics 630)

I checked inference time, model loading time, and frames per second for the FP16, FP32, and FP32-INT8 precisions.

Benchmark results of the model on CPU (FP32-INT8, FP16, FP32) with asynchronous inference


Benchmark results of the model on GPU (FP32-INT8, FP16, FP32) with asynchronous inference


  • Because FPGA and VPU devices were not available on my local machine, I did not run inference on those device types.

  • FP32

    | Hardware | Total inference time | Total load time | FPS |
    |----------|----------------------|-----------------|-----|
    | CPU | 31.6 s | 0.930308 s | 1.867089 |
    | GPU | 32.8 s | 33.834617 s | 1.798780 |

  • FP16

    | Hardware | Total inference time | Total load time | FPS |
    |----------|----------------------|-----------------|-----|
    | CPU | 31.8 s | 1.165073 s | 1.855346 |
    | GPU | 32.6 s | 34.921903 s | 1.809816 |

  • FP32-INT8

    | Hardware | Total inference time | Total load time | FPS |
    |----------|----------------------|-----------------|-----|
    | CPU | 32.0 s | 2.662999 s | 1.843750 |
    | GPU | 34.1 s | 47.700375 s | 1.730205 |
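Timings like those above can be collected with a small helper built on `time.perf_counter()`. This is a sketch, not the exact script used in the project; `load_fn` and `infer_fn` are hypothetical stand-ins for loading an OpenVINO network and running inference on one frame:

```python
import time

def benchmark(load_fn, infer_fn, frames):
    """Measure model load time, total inference time, and FPS."""
    t0 = time.perf_counter()
    model = load_fn()                       # e.g. read and compile the network
    load_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    for frame in frames:
        infer_fn(model, frame)              # one inference per frame
    infer_time = time.perf_counter() - t0

    fps = len(frames) / infer_time if infer_time > 0 else 0.0
    return load_time, infer_time, fps

load_t, infer_t, fps = benchmark(
    load_fn=lambda: "model",
    infer_fn=lambda m, f: time.sleep(0.001),
    frames=range(10),
)
print(f"load={load_t:.3f}s infer={infer_t:.3f}s fps={fps:.1f}")
```

Measuring load time separately from inference time matters here: the tables above show GPU load times dominated by OpenCL kernel compilation, while per-frame throughput stays close to the CPU's.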

Requirements

For more sources, see this link. The free foundation course from Udacity and Intel: Intel® Edge AI Fundamentals with OpenVINO™

Program Certification

Intel Scholarship Winner Badge
