Yolo classifier + AI-stick (Movidius, Laceli) #1505

Open
MrJBSwe opened this Issue May 1, 2018 · 45 comments

MrJBSwe commented May 1, 2018

I think combining motion detection with classification (e.g. YOLO), using AI sticks as the workhorse, could fit very nicely into the core purpose of motionEyeOS.

The trend is that a USB dongle drawing less than 1 watt can do the classification job, for example the Movidius or Laceli sticks.

ccrisan added the motioneye label May 1, 2018

jasaw (Collaborator) commented May 2, 2018

This is very interesting. I've been looking for ways to add some sort of AI to enhance motion detection. I found someone using the Pi GPU.
Do you know roughly what kind of performance we can expect running Yolo classifier on a RPi 3?
Also, do you know whether there's a pre-trained model that we can use, or whether there's readily available training data? The performance of the AI depends heavily on the training data, and that seems like the most difficult part to get right if we were to train it ourselves.

MrJBSwe (Author) commented May 2, 2018

Pi GPU
Based on JeVois, my guess is the Pi GPU & Tiny YOLO will run at 0.5-2 fps.

Pre-trained
Based on COCO:
wget https://pjreddie.com/media/files/yolov2-tiny-voc.weights

Movement => classify
Tiny YOLO v2 uses 7.1 GFLOPs, which makes it a good starting point for the Pi GPU & Movidius. It can also be run on the CPU. It seems quite "easy" to train skynet for squirrels, fish, etc.
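
As a rough illustration of the CPU path, here is a minimal sketch using OpenCV's dnn module (assumes OpenCV 3.4+ and the matching yolov2-tiny-voc.cfg from the darknet repo alongside the weights above):

    # Minimal sketch: Tiny YOLO v2 on the CPU via OpenCV's dnn module.
    import cv2
    import numpy as np

    # cfg from the darknet repo, weights from the wget above
    net = cv2.dnn.readNetFromDarknet('yolov2-tiny-voc.cfg',
                                     'yolov2-tiny-voc.weights')

    img = cv2.imread('frame.jpg')
    # Darknet models expect RGB input scaled to [0,1] at the network size.
    blob = cv2.dnn.blobFromImage(img, 1.0 / 255, (416, 416),
                                 swapRB=True, crop=False)
    net.setInput(blob)
    out = net.forward()  # one row per box: cx, cy, w, h, obj, class scores

    for det in out:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        if scores[class_id] > 0.5:
            print('class %d (%.2f) at (%.2f, %.2f)'
                  % (class_id, scores[class_id], det[0], det[1]))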

Examples: Movidius

Continuous classification
With Laceli, I think movement detection might be obsolete, since YOLO is very robust to light changes and background noise. YOLO v3 uses 140 GFLOPs, and my guess is it should run at >10 fps on Laceli!?

wb666greene commented May 20, 2018

The sample code from here:
https://www.pyimagesearch.com/2018/02/19/real-time-object-detection-on-the-raspberry-pi-with-the-movidius-ncs/

uses MobileNet SSD (I believe from https://github.com/chuanqi305/MobileNet-SSD) and a PiCamera module to do about 5 fps on a Pi 3 with a Movidius NCS.

I've modified the PyImageSearch sample code to get images via MQTT instead of the PiCamera video stream, and then run object detection on them. If a "person" is detected, I write out the detected image, which will ultimately get pushed to my cell phone in a way yet TBD.
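
A rough sketch of that MQTT-fed detector (topic, threshold, and file names are illustrative, and the OpenCV dnn CPU path is shown instead of the NCS API for brevity):

    # Sketch: receive JPEGs over MQTT, run MobileNet-SSD, keep "person" hits.
    import cv2
    import numpy as np
    import paho.mqtt.client as mqtt

    net = cv2.dnn.readNetFromCaffe('MobileNetSSD_deploy.prototxt',
                                   'MobileNetSSD_deploy.caffemodel')
    PERSON = 15  # index of "person" in the MobileNet-SSD VOC class list

    def on_message(client, userdata, msg):
        # Each MQTT payload is assumed to be one JPEG image.
        img = cv2.imdecode(np.frombuffer(msg.payload, np.uint8),
                           cv2.IMREAD_COLOR)
        blob = cv2.dnn.blobFromImage(cv2.resize(img, (300, 300)),
                                     0.007843, (300, 300), 127.5)
        net.setInput(blob)
        det = net.forward()
        for i in range(det.shape[2]):
            if int(det[0, 0, i, 1]) == PERSON and det[0, 0, i, 2] > 0.6:
                cv2.imwrite('detected.jpg', img)  # later pushed to the phone
                break

    client = mqtt.Client()
    client.on_message = on_message
    client.connect('localhost', 1883)
    client.subscribe('cameras/snapshots')
    client.loop_forever()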

I've written a simple node-red flow also running on the Pi3 with the NCS that presents an ftp server and sends the image files to the NCS detection script. The Pi3 also runs the MQTT server and node-red.

I then configured motioneyeOS on a PiZeroW to ftp its motion images to the Pi3 node-red ftp server.

It's working great; it's been running all afternoon. Since virtually all security DVRs and netcams can FTP their detected images, I think this system has great generality and could produce a system worthy of a high-priority push notification, since the false positive rate will be near zero.

I plan to put it up on github soon, but it will be my first github project attempt so it might take me longer than I'd like.

Running the "Fast Netcam" or v4l2 MJPEG streams into the neural network instead of "snapshots" might be even better, but the FLIR Lorex security DVR I have uses proprietary protocols, so FTP'd snapshots are what I used. There is a lot of ugly code to support the lameness of my DVR, so after I got it working (it's been running for three days now) I simplified things into this simple test system, which I plan to share as a starting-point project to integrate AI with video motion detection to greatly lower the false alarm rate.

To suggest an enhancement to motioneye, I'd like to see an option for it to push jpegs directly to an MQTT server instead of FTP.
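
In the meantime, a small hook script can approximate this, since motion can run an external command when a picture is saved and pass the file name via %f (script name, topic, and broker are illustrative):

    # Hypothetical on_picture_save hook: publish a saved jpeg to MQTT.
    import sys
    import paho.mqtt.publish as publish

    with open(sys.argv[1], 'rb') as f:
        publish.single('cameras/snapshots', f.read(), hostname='localhost')

Wiring it up would then be something like on_picture_save python /home/pi/mqtt_push.py %f in the camera's motion config.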

I don't think video motion detection like motioneye is obsolete; it makes a great front-end to reduce the load on the network and AI subsystem, letting more cameras be handled with less hardware.

Edit: It's been running for over 8 hours now. I also have the Pi3 configured as a wifi AP with motioneyeOS connected to it, so I have a stand-alone system with only two parts. There have been 869 frames detected as "motion frames"; only 352 had a "person" detected by the AI. Looking at a slide show of the detected images I saw no false positives; over 50% of the motion frames would have been false alarms, and very annoying if emailed. I was testing the system, so the number of real events was a lot higher than would normally be the case. So far, complete immunity from shadows, reflections, etc.

I think this has great potential!

debsahu commented May 28, 2018

Here is my attempt at something similar:

https://github.com/debsahu/PiCamMovidius



debsahu commented May 29, 2018

@wb666greene using ocv3 causes the fps to drop below 1 (~0.88). Using yolov2 with the native PiCamera library is a struggle; I tried. Getting started with GitHub is not straightforward, it was a struggle initially. My suggestion is to use GitHub Desktop.

wb666greene commented May 29, 2018

Thanks, I'll look into GitHub Desktop. But I've put the code up anyway, as my frustration with GitHub is at the point where I'm just giving up for now. Getting my system sending notifications is my next order of business.

Here is a link to my github crude as it is:
https://github.com/wb666greene/SecurityDVR_AI_addon/blob/master/README.md

I'd like to try other models, but if they don't run on the Movidius the frame rate is not likely to be high enough for my use.

DarkNet YOLO was really impressive, but took ~18 seconds to run an image on my i7 desktop without CUDA.

wb666greene commented Jun 5, 2018

Thanks for the extra background info. I'll add some of these links to my github; I think they are very helpful and more enlightening than anything I could write up.

My main point is that I've made an "add-on" for an existing video security DVR instead of making a security camera with AI. I expect the latter to flood the market soon, and it'll be a good thing! But until then, I wanted AI in my existing system without a lot of expense or rewiring.

MotioneyeOS is a perfectly good way to get a simple video security DVR going, and in fact it has far superior video motion detection to my FLIR/Lorex system, but you are on your own for "weatherproofing" and adding IR illumination to your MotioneyeOS setup -- not a small job!

I used it so I could give a simple example instead of over-complicating things with all the ugly code needed to deal with my FLIR/Lorex system's lameness.



jasaw (Collaborator) commented Oct 18, 2018

I just had a play with Movidius on a RPi 3B+ recently, with version 2 of the NCSDK (still in beta). Here's what I've found:

  • NCSDK v2 works after fixing a few installation scripts. It pulls in a lot of dependencies and wants very specific versions of libraries, and it will most likely break other existing programs on your host machine by messing with those libraries. I recommend installing on a sacrificial machine.
  • It supports several neural net models, but the models are pulled in from external repositories, so things are sometimes left in a broken state when those external repositories change.
  • I tried various neural net models, and only found one that reliably detects people:
    • YOLO: unable to test, does not seem to be supported.
    • TinyYOLO: a fast neural net model, but very low accuracy, completely useless for our application.
    • GoogleNet: I can't find pretrained weights that are geared towards detecting people.
    • SSD MobileNet with default pretrained weights: does not seem to detect people.
    • SSD MobileNet with chuanqi's pretrained weights: does a good job at detecting people; dogs and cats are OK, but it struggles with the rest. Still very good for our application. https://github.com/chuanqi305/MobileNet-SSD
  • Frame rates: I'm getting around 6.5 fps, far from real-time.
  • Resolution: Before pushing an image into the Movidius NCS, the image needs to be scaled down to the specific resolution that was used to train the neural net model, which is 300x300 for chuanqi's model.
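
To make the resolution point concrete, a small preprocessing sketch (the normalization constants follow the common MobileNet-SSD examples rather than anything stated in this thread):

    import cv2
    import numpy as np

    def preprocess(frame):
        # Scale to the model's training resolution (300x300 for chuanqi's
        # MobileNet-SSD) and map [0,255] to roughly [-1,1].
        img = cv2.resize(frame, (300, 300)).astype(np.float32)
        img = (img - 127.5) * 0.007843
        return img.astype(np.float16)  # the NCS (v1 API) takes fp16 tensors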

With all that said, Movidius can still be used for our application as a 2nd-pass system that combs through all the recorded videos to detect person in non-real-time. This may be useful for various use cases, for example:

  1. Send notification and/or trigger alarm when a person is detected. No need to manually look through recorded videos anymore.
  2. Remove videos and/or images that do not contain a person. This reduces storage requirements.

On second thought, Movidius can still be used for real-time person detection: integrate it into the motion software and feed every 3rd or 4th frame into the Movidius. The NCSDKv2 C API is documented here for anyone who wishes to try: https://movidius.github.io/ncsdk/ncapi/ncapi2/c_api/readme.html
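
For anyone exploring the Python side instead, a sketch of the every-Nth-frame idea using what I understand to be the NCSDK v2 Python API (names per the v2 docs; capture_frame, preprocess, and handle_detections are placeholders):

    # Sketch: run classic motion detection on every frame, but only push
    # every 4th frame through the NCS (NCSDK v2 Python API).
    from mvnc import mvncapi

    device = mvncapi.Device(mvncapi.enumerate_devices()[0])
    device.open()
    with open('MobileNetSSD.graph', 'rb') as f:
        graph_buf = f.read()
    graph = mvncapi.Graph('ssd')
    fifo_in, fifo_out = graph.allocate_with_fifos(device, graph_buf)

    n = 0
    while True:
        frame = capture_frame()          # placeholder: motion's frame source
        n += 1
        if n % 4:
            continue                     # classic motion detection only
        graph.queue_inference_with_fifo_elem(fifo_in, fifo_out,
                                             preprocess(frame), None)
        output, _ = fifo_out.read_elem()
        handle_detections(output)        # placeholder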

wb666greene commented Oct 18, 2018

Thanks for posting this. I was about to try to install ncsdk V2 on an old machine to give TinyYolo a try, so you've saved me a lot of time wasted! I was hoping the increased resolution of the TinyYolo model over MobileNetSSD would help. Any possibility you could upload or send me your compiled graph? I'd still like to play with the model, but you've removed my motivation for getting set up to compile it with the SDK.
Edit:
I was able to compile the TinyYolo graph with my V1 SDK, so I can play with it a bit using the V1 API. Have you compared V1 vs V2 results on any of the models?

Have you tried any of the multi-threaded or multi-processing examples? Running on my i7 desktop I've found using USB3 only improved the frame rate by less than half a frame/sec over USB2.

I've been running Chuanqi's MobileNetSSD since July on a Pi3B, handling D1 "snapshots" from 9 cameras with overlapping fields of view from my pre-existing Lorex security system. I use the activation of PIR motion sensors to filter (or "gate") the images sent to the AI to reduce the load. It works great: I get the snapshots via FTP and filter what goes to the AI. My only real complaint is that the latency to detection can be as high as 3 or 4 seconds, although usually it's about 2 seconds; other than the latency it seems real-time enough for me -- effectively the same as motioneye 1 frame/second snapshots.

Your use case (1) was my goal. I never looked at the video anyway, as the Lorex system's "scrubbing" is so poor. With the emailed AI snapshots I now have a timestamp to use should I ever need to go back and look at the 24/7 video record (what the Lorex is really good at, but everything built on top of it is just plain pitiful).

I have three system modes: Idle, Audio, and Notify. Idle means we are home and going in and out, and don't want to be nagged by the AI. Audio means we are home but want audio notification of a person in the monitored areas -- fantastic for mail and package deliveries. Notify sends email images to our cell phones. The key is that the Audio AI mode has never woken us up in the middle of the night with a false alarm, and the only emails have all been valid: mailman, political canvasser, package delivery, etc.

Much as I like Motioneye and MotioneyeOS, I'm finding the PiCamera modules are not really suitable for 24/7 use, as after a period of several days to a couple of weeks the module "crashes" and only returns a static image from before the crash. Everything else seems to work fine (SSH, node-red dashboard, cron, etc.) but the AI is effectively blind until a reboot. I have a software-only MobileNetSSD AI running on a Pi2B and Pi3B with Pi NoIR camera modules; while it only gets one AI frame about every 2 seconds, it can still be surprisingly useful for monitoring key entry areas, but the "soft" camera failures are a serious issue. I've never run MotioneyeOS 24/7 long enough to know if it suffers the issue or not. I should probably set up my PiZeroW and try.

With this experience, I'm starting to swap out some Lorex cameras for 720p Onvif "netcams" (USAVision, ~$20 on Amazon). Since I don't really care about the video, it's a step up in snapshot resolution (1280x720), and one Pi3B+ and Movidius can handle about four cameras with ~1 second worst-case detection latency.

In 24/7 testing I am getting Movidius "TIMEOUT" errors every three or four days. It seems I can recover with a try block around the NCS API function calls, having the except handler deallocate the graph and close the device, followed by a repeat device scan, open, and graph load. That's a tolerable amount of blind time once every few days. I plan to rewrite for the V2 API to see if it fixes the issue; the V2 multistick example doesn't seem to have had any errors yet in over a week of running.
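
The recovery logic is roughly the following (NCSDK v1 Python API; a structural sketch, with next_frame and preprocess as placeholders, not the actual code):

    # Structural sketch of the TIMEOUT recovery loop (NCSDK v1 Python API).
    from mvnc import mvncapi as mvnc

    def open_ncs(blob):
        device = mvnc.Device(mvnc.EnumerateDevices()[0])
        device.OpenDevice()
        return device, device.AllocateGraph(blob)

    with open('MobileNetSSD.graph', 'rb') as f:
        blob = f.read()
    device, graph = open_ncs(blob)
    while True:
        frame = next_frame()             # placeholder frame source
        try:
            graph.LoadTensor(preprocess(frame), None)
            output, _ = graph.GetResult()
        except Exception:                # e.g. the intermittent TIMEOUT
            graph.DeallocateGraph()      # tear down and re-enumerate
            device.CloseDevice()
            device, graph = open_ncs(blob)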

wb666greene commented Oct 19, 2018

@jasaw
I think I can confirm your comments about TinyYolo. I got the sample code to run and modified it to input some D1 security camera images, and detection performance is terrible: missing full-frontal, full-length people in the center of the frame, detecting shadows at the sides as "people", and all manner of wrong calls (two chairs as a bicycle, etc.).

Looks like MobileNetSSD is the only practical AI for security camera use at present on resource constrained systems.

While MobileNetSSD also makes a lot of wrong calls, if you only care about detecting "people", which seems fine for security camera systems, it performs very well in my experience.


wb666greene commented Oct 21, 2018

@MrJBSwe
I've seen it, but it looks like about $500 for the AI board and computer to plug it into (unless you already have one with a suitable interface, I don't).

Also, so far the development environment looks to be C/C++ only. If Python bindings become available for it, I'll get a whole lot more interested. Not that I'm any great Python guru, but I find it really hard to beat for "rapid prototyping".

At this point I think the AI network is more of a limitation for security system purposes than the hardware to run the network. MobileNetSSD, CPU only, can get 7+ fps on my i7 desktop with the OpenCV 3.4.2 dnn module and simple non-threaded Python code.

jasaw (Collaborator) commented Oct 25, 2018

I have implemented Movidius support in motion and used ChuanQi's MobileNetSSD person detector to work alongside the classic motion detection algorithm. If "Show Frame Changes" is enabled in motionEye, it will also draw a red box around the detected person, with the confidence percentage at the top right corner.

I have only tested on a Raspberry Pi with Pi camera with single camera stream. If you have multiple camera streams, the code expects multiple Movidius NC sticks, one stick per mvnc-enabled camera stream. Camera streams with mvnc disabled will use the classic motion detection algorithm.

Code is here:
https://github.com/jasaw/motion/tree/movidius

How to use:

  1. Install Movidius NCSDKv2. Follow the installation manual. Note that the NCSDKv2 may screw up your existing libraries, so I recommend trying this on a sacrificial machine. Alternatively, you could try installing just the API by running sudo make api (I have not tested this).
  2. Git clone the movidius branch into any directory you like.
    • git clone -b movidius https://github.com/jasaw/motion.git
  3. Go into the directory and run:
    • autoreconf -fiv
    • ./configure
    • make
    • sudo make install
  4. Download the MobileNet SSD graph file or compile your own graph file by following the instructions here.
  5. Add MVNC related configuration items to thread-1.conf file.
    • mvnc_enable on : This will bypass the original motion detection algorithm and use MVNC instead.
    • mvnc_graph_path /home/pi/MobileNetSSD.graph : Path to MobileNetSSD graph. Other neural net models are not supported.
    • mvnc_classification person,cat,dog,car : A comma separated classes of objects to detect.
    • mvnc_threshold 75 : This is confidence threshold in percentage, which takes a range from 0 to 100 as integer. A detected person is only considered valid if the neural net confidence level is above this threshold. 75 seems like a good starting point.
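
Putting the options together, a complete thread-1.conf fragment looks like this (the graph path is illustrative):

    # thread-1.conf MVNC fragment
    mvnc_enable on
    mvnc_graph_path /home/pi/MobileNetSSD.graph
    mvnc_classification person,cat,dog,car
    mvnc_threshold 75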

Note: There seems to be some issue getting the motionEye front-end to work reliably with this movidius motion. Quite often motionEye is not able to get the mjpg stream from motion, but accessing the stream directly from a web browser via port 8081 works fine. Restarting motionEye multiple times works around the problem for me. Maybe someone can help me look into it?

wb666greene commented Oct 25, 2018

@jasaw
This is very nice work. Can I ask what kind of frame rate you are getting? If you are getting significantly better fps than I am, I'd be motivated to re-write in C/C++.

Using the V1 ncsapi and ChuanQi's MobileNetSSD on a Pi3B, I'm getting about 5.6 fps with the Pi camera module (1280x720), using simple Python code and openCV 3.4.2 for image drawing, boxes, and labels (the Python PiCamera library I use creates a capture thread).

With simple threaded Python code I'm also getting about 5.7 fps from a 1280x720 Onvif netcam (the ~$20 one I mentioned in an earlier reply). The same code and camera running on my i7 desktop (heavily loaded) gets about 8 fps; on a lightly loaded AMD quad core it gets about 9 fps.

Have you seen any performance improvements of V1 vs. V2 of the ncsapi?

I now have one Pi3B set up with the V2 ncsapi and have run some of the examples (using a USB camera at 640x480). I was most interested in the multi-stick examples, but the two Python examples from the app zoo that I've tried are in pretty bad shape -- not exiting and cleaning up threads properly. I pretty much duplicated their 3-stick results, but I don't think they are measuring the frame rate correctly: the frame rate seems camera limited, as dropping the light level drops the frame rate, and their detection overlays show incredible "lag".

jasaw (Collaborator) commented Oct 25, 2018

@wb666greene I don't know exactly how many frames I'm getting from the Movidius stick, and I'm not even following the threaded example. I think it doesn't matter anyway, as long as it's running at roughly 5 fps. With my implementation, everything still runs at whatever frame rate you set, say 30 fps, but inference is only done at 5 fps. A person usually doesn't move in and out of camera view within 200 ms (5 fps), so it's pretty safe to assume that we'll get at least a few frames of the person, which is more than enough for inference.

I'm going to refactor my code so that I can merge it into upstream motion, and have multi-stick support as well.

jasaw (Collaborator) commented Oct 26, 2018

I have implemented proper MVNC support in the motion software. See my earlier post for usage instructions: #1505 (comment)

jasaw (Collaborator) commented Nov 5, 2018

@wb666greene I've finally measured the frame rate from my implementation.
Currently, my code is starving the NC stick, only feeding it one frame when its FIFO is empty. This gives me 5.5 fps throughput, with minimal heat generated from the device. I've been running this setup for more than a week, with no issues at all.
I've just tested keeping at least one frame in the FIFO to ensure no starvation, and managed to get 11.0 fps throughput. I've read that the hardware may overheat when pushed hard continuously, but I have not verified the thermal issue yet. There is thermal throttling built into the hardware, so it would be good to see what happens when it's thermally throttled.
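
The difference between the two feeding strategies, sketched with the same (assumed) v2 API names as earlier in the thread, where preprocess is a placeholder:

    # "Starved" = queue one frame, wait, read. "Pipelined" = always keep one
    # frame queued so the stick never idles (~5.5 vs ~11 fps here).
    def run_pipelined(graph, fifo_in, fifo_out, frames):
        frames = iter(frames)
        # Prime the input FIFO with one frame before entering the loop.
        graph.queue_inference_with_fifo_elem(fifo_in, fifo_out,
                                             preprocess(next(frames)), None)
        for frame in frames:
            # Queue the next frame *before* reading the previous result,
            # so there is always work in the FIFO.
            graph.queue_inference_with_fifo_elem(fifo_in, fifo_out,
                                                 preprocess(frame), None)
            output, _ = fifo_out.read_elem()
            yield output
        yield fifo_out.read_elem()[0]    # drain the last queued frame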

jasaw (Collaborator) commented Nov 5, 2018

I did some temperature testing.

I ran a short test pushing 11 fps and managed to get the NC stick to thermal throttle within 10 minutes, at an ambient temperature of 24 degrees Celsius. The stick starts to throttle when it reaches 70 degrees Celsius, and the frame rate dropped to 8 fps. I believe this is just the first-level throttling (there are 2 stages).

According to Intel's documentation, these are the throttle states:

0: No limit reached.
1: Lower guard temperature threshold of chip sensor reached; short throttling time is in action between inferences to protect the device.
2: Upper guard temperature of chip sensor reached; long throttling time is in action between inferences to protect the device.

The stick temperature seems to plateau at 55 degrees Celsius when pushing 5.5 fps, again at an ambient temperature of 24 degrees Celsius.
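
For reference, the throttle state can reportedly be read back through the v2 Python API (option name per the NCSDK v2 docs; unverified):

    from mvnc import mvncapi

    device = mvncapi.Device(mvncapi.enumerate_devices()[0])
    device.open()
    level = device.get_option(mvncapi.DeviceOption.RO_THERMAL_THROTTLING_LEVEL)
    print('throttle state: %d' % level)  # 0, 1, or 2 as listed above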

MrJBSwe (Author) commented Nov 6, 2018

I have recently tried the Nvidia Xavier => I get about 5 fps with yolov2.
https://devtalk.nvidia.com/default/topic/1042534/jetson-agx-xavier/yolo/
( I have also tried its different power modes, and my feeling is that at 10 W the GPU can't offer any more; the remaining 10-30 W just puts power into the CPU cores )

Since it is quite expensive, I'm still putting my hope in the direction of AI sticks like the Movidius X.

The RK3399Pro is an interesting addition ( but I prefer to buy the AI stick separately, with a mature API ;-)
https://www.indiegogo.com/projects/khadas-edge-rk3399pro-hackable-expandable-sbc#/

wb666greene commented Nov 7, 2018

@jasaw
Interesting results on the thermal test; I'm not seeing much fps difference between short duration tests (~10-30 seconds) and long test runs (overnight or longer).

I have given up on the v2 SDK for now, sticking with the V1 SDK, and I've made some code variations to see what frame rates I can get with the same Python code (auto-configuring for Python 3.5 vs 2.7) on three different systems, comparing Thread and Multiprocessing versions against the baseline single-main-loop code, which gave 3.2 fps for the Onvif cameras and 5.3 fps for a USB camera with openCV capture. The Onvif cameras are 1280x720 and the USB camera was also set to 1280x720.

These tests suggested that using three Python threads -- one to round-robin sample the Onvif cameras, one to process the AI on the NCS, and the main thread to do everything else (MQTT for state and result reporting, saving images with detections, displaying the live images, etc.) -- would be the way to go.
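
A structural sketch of that three-thread arrangement (Python 3 names; cameras, infer_on_ncs, and handle are placeholders):

    import threading
    import queue

    frame_q = queue.Queue(maxsize=2)     # sampler -> AI thread
    result_q = queue.Queue()             # AI thread -> main thread

    def sampler(cameras):
        while True:
            for cam in cameras:          # round-robin the Onvif cameras
                frame_q.put(cam.snapshot())    # placeholder camera object

    def ai_worker():
        while True:
            frame = frame_q.get()
            result_q.put((frame, infer_on_ncs(frame)))  # placeholder

    threading.Thread(target=sampler, args=(cameras,), daemon=True).start()
    threading.Thread(target=ai_worker, daemon=True).start()

    while True:                          # main thread: MQTT, saving, display
        frame, detections = result_q.get()
        handle(frame, detections)        # placeholder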

I got 10.7 fps on my i7 desktop with NCS on USB3 running Python 3.5 on an overnight run.

Running the same code on a Pi3B+ with Python 2.7 I'm getting 7.1 fps, but it's only been running all morning.

My work area is ~27C and I'm not seeing any evidence of thermal throttling (or it hits so fast my test runs haven't been short enough to see it). I don't think the v1 SDK has the temperature-reporting features, and I haven't checked the "throttling state", as I really only care about the "equilibrium" frame rate I can obtain. For my purposes 6 fps will support 4 cameras. I'm going to try adding a fourth thread to service a second NCS stick.

@MrJBSwe
The Movidius NCS only costs ~$75 -- come on in, the water is fine :)
Running off a Pi3B+ it uses maybe 8 W of power, and your total entry fee is <$150 if you have the basics like a spare keyboard, mouse, and monitor for installation and development. My target environment is a stand-alone headless networked "IOT" device talking to the outside world via MQTT, Email, Telegram, etc.

MrJBSwe (Author) commented Nov 7, 2018

@wb666greene

come on in the water is fine

I have 2 Movidius sticks and like them as an "appetizer" while waiting for the Movidius X ( or something similar ).
I have tested both v1 & v2 of the NCSDK. I'm currently playing around on an Nvidia 1070 to see what's possible when HW is less of a constraint. Yolov3 seems to be a bit of overkill and drains even a 1070 of its juice.

I want to run yolov2 ( or something similar ) at >= 4 fps ( yolo tiny gives too random results ). I plan to check out your code, wb666greene & jasaw -- interesting work!

This is a bit interesting ( the price is right ;-)
https://youtu.be/bBuHOHPYY7k?t=69

jasaw (Collaborator) commented Nov 7, 2018

@wb666greene From what I've read, thermal reporting is only available in the v2 SDK. In my test, I'm pushing 11 fps consistently through the stick until it starts to cycle in and out of the thermal throttle state: 8 fps (throttled) for 1 second, 11 fps (normal) for 3 seconds. Taking the average, (8×1 + 11×3)/4 ≈ 10.3 fps, so it's still pushing roughly 10 fps, which may explain the 10.7 fps that you're seeing on your i7 desktop. I imagine at a higher ambient temperature, like 45 degrees Celsius in summer, it's going to stay throttled for much longer, possibly even go into the 2nd stage throttle.

MrJBSwe (Author) commented Nov 10, 2018

Maybe something...
https://www.96boards.ai/products/rock960/

Similar to the Khadas Edge, RK3399Pro, and the upcoming Rock Pi 4 & RockPro64-AI. I guess the trend is the RK3399Pro, for multiple reasons ( but I still hope for the Movidius X and/or Laceli ).

BM1880
https://www.sophon.ai/post/36.html

List
https://github.com/basicmi/AI-Chip-List

Movidius X has been released !
https://www.cnx-software.com/2018/11/14/intel-neural-compute-stick-2-myriad-x-vpu/

jasaw (Collaborator) commented Nov 15, 2018

@MrJBSwe Yes, Neural Compute Stick 2 has finally been released. Let's see if I can get one to play with.

I see a few obstacles in supporting NCS 2.

  • NCS 2 only works with Intel's new toolkit called OpenVINO.
  • OpenVINO does not run on ARM machines, so will not run on Raspberry Pi.
  • OpenVINO only provides C++ & Python APIs, but the Motion software is written in C. I have a feeling that the Motion developers are reluctant to switch to C++.
  • Even if ARM is supported, cross compiling OpenVINO API looks like a giant pain.

Excited and disappointed at the same time...

wb666greene commented Nov 15, 2018

@MrJBSwe @jasaw

I found the same issues jasaw lists above with the NCS2 when going through the docs initially. I've ordered one from Mouser for $99 + tax and shipping; they had 2000+ in stock when I ordered. There are other vendors listed on the main page (https://software.intel.com/en-us/neural-compute-stick) after pressing the "buy now" button.

OpenVINO won't install on Ubuntu 18.04. I tried on a fresh install I'd used with the NCS for testing/developing multistick code, and it didn't get past installing the initial dependencies, as at least one is unmet and the process stops. So add fragile development environment to the list.

They say only 16.04 is supported (they also support Windows 10, which they didn't for the original stick). I've made a fresh test install of 16.04, but the old drive I used seems to have errors, so I didn't get very far: it appeared to get past the dependencies installation, but starting the installation GUI died with errors writing to /tmp, and there were lots of disk I/O errors in dmesg :(

I may have to re-install and start over with a different drive as the badblocks scan is taking forever and finding lots of errors :(

You can call C++ libraries from C if the compilers match. I've done it in a previous life, using a proprietary C++ library compiled for Linux from the device vendor. It required a very specific (old) version of gcc/g++, but I did get it to work.

Here is a brief overview of what you are up against:
https://www.teddy.ch/c++_library_in_c/

I have another potential application, to run on a 12V DC i3 "NUC-like" computer, where the ~30 fps I get with three NCS sticks is not enough. Intel claims the NCS2 is "up to 8X faster", hence my desire to take a look at it.

MrJBSwe (Author) commented Nov 15, 2018

@wb666greene Thanks,
I couldn't for the life of me have guessed they would stop supporting ARM... the number 1 platform for this type of device ( and shit, I ordered 2 NCS2 sticks before noticing... maybe they could still be of interest with the upcoming Odroid ). BIG disappointment -- I thought Intel had a better understanding than this ( evil inside ;-)

Guess I'll put my focus at this while waiting for other stuff
https://www.indiegogo.com/projects/sipeed-maix-the-world-first-risc-v-64-ai-module?utm_source=affiliate&utm_medium=cpc&utm_campaign=sasdeep&utm_content=link&sscid=b1k2_fcluh#/

jasaw (Collaborator) commented Nov 15, 2018

@MrJBSwe The IoT market is dominated by ARM, and Intel has been giving developers financial incentives to switch over to the Intel platform, but without much success. By locking the NCS2 down to the Intel platform, I guess Intel hopes to see significant growth in its IoT platform.

MrJBSwe (Author) commented Nov 16, 2018

@jasaw
I'll try to return them to Mouser; I really dislike what Intel has done here...
I guess they want to lose the "AI-stick" market to China...

I'll take a look at this
https://www.sophon.ai/post/36.html

wb666greene commented Nov 16, 2018

@MrJBSwe
It'd be a shame and a real National Security threat if the USA loses the AI chip market to China, especially after they have been caught red-handed putting "backdoors" into networking infrastructure they export!

MrJBSwe (Author) commented Nov 16, 2018

@wb666greene "backdoors"
https://www.cs.vu.nl/~ast/intel/
( and on top of that Win10 ...like a ship full of holes )

Fair play and open source... I guess we will have to wait for some kind of "FPGA" where we can flash our own hardware...

MrJBSwe (Author) commented Nov 18, 2018

Laceli: 2.8 TOPS at 0.3 W ( in the form of the Orange Pi AI stick; the SDK is a bit hard to get )

The $69 Orange Pi AI Stick 2801 is based on the same Lightspeeur 2801S ASIC found in the PLAI Plug.

With sdk
https://www.cnx-software.com/2018/11/22/orange-pi-ai-stick-2801-neural-compute-stick-sdk/


wb666greene commented Nov 20, 2018

I've got the openVINO SDK installed on an old i3 running Ubuntu-Mate 16.04. There were some path issues between the install doc on the website and where things ended up, but once I figured this out, all the tests passed and the demos I've tried all appear to run fine. GPU support looks to be broken for this old i915 motherboard, but the good news is it auto-detects the Movidius vs the Movidius 2.

I haven't played with it much yet, but with the interactive_facedetection_demo sample code and a USB WebCam the performance looks to be significantly better with the NCS2:

Device   Face detection   Face analysis
NCS      ~16 fps          ~3 fps
NCS2     ~42 fps          ~10 fps
CPU      ~17 fps          ~5.4 fps

Note that the CPU needed an FP32 model where the NCS used FP16. As with my MobileNet-SSD Python code, the CPU on the i3 is about the same as the Movidius NCS.

But I have a project using multiple NCS sticks and MobileNet-SSD where these old i3 systems would be the target (since I have them), so my first exercise will be to attempt to modify it to use the openVINO SDK and see: 1) whether the original NCS does a bit better (library improvements), and 2) how much improvement I get with the NCS2 vs the original NCS. Next I'll investigate whether this "face detection" works better than the "person detector" for my security system purposes -- an image with a face would be better when emailed than an image of perhaps a person's back, or an image with the head cut off :)

IMHO the Intel software support will blow away anything you will be able to get from those Chinese companies

To download openVINO I had to "register", and they sent me an activation code that I was supposed to need. But the installation never asked me for it; maybe I missed it, and that's why the GPU code doesn't work? I've no real feel for the "power" of this old i3-i915 integrated graphics as a co-processor, so it's not a priority for me to pursue this.

FYI, Intel says they only support Ubuntu 16.04 and Windows 10. I initially tried installing it on Ubuntu 18.04, which I had already installed on this old i3, and it didn't get very far (library version conflict).

MrJBSwe (Author) commented Nov 20, 2018

@wb666greene
Does the NCS2 work with NCSDK 2.x? ( meaning, could I run it on an ARM board? )

FYI, Intel says they only support Ubuntu 16.04
For the NCSDK 2.x they had put a version check in the install script; it could actually run on 18.04
( maybe it's the same issue with openVINO )

Have you tried Yolov2 ? ( not tiny )

What type of low-energy board do you see as a realistic alternative to ARM-based ones?
Like the upcoming odroid-h2-intel-celeron
https://www.cnx-software.com/2018/10/19/odroid-h2-intel-celeron-j4150-development-board/

https://cpu.userbenchmark.com/Compare/Intel-Core-i7-7500U-vs-Intel-Celeron-J4105/m171274vsm444211


jasaw (Collaborator) commented Dec 4, 2018

@MrJBSwe The Laceli-based Orange Pi AI stick does come with an SDK. If you click through to the AliExpress page, you'll see that they'll supply the SDK as well. I haven't seen any feedback on how good the SDK is, but at least it runs on ARM! The cost of the AI stick + SDK is prohibitive though, at $218.

MrJBSwe (Author) commented Dec 6, 2018

@jasaw
I have ordered one + SDK, and I have also decided to keep my 2x NCS2 ( Intel seems to have a plan to support ARM ); if not, I have bought an Odroid H2.

wb666greene commented Dec 20, 2018

News flash: OpenVINO release 5 is to support the Raspberry Pi!
https://software.intel.com/en-us/articles/OpenVINO-RelNotes

Just got a Email notification of its release. Downloading it now.

Fine print, looks like we'll need Raspbian 9 for the Movidius host.

Chiny91 commented Dec 21, 2018

Santa visited Chiny Towers early and delivered a sacrificial R Pi 3B and a Movidius. I have had the @jasaw recipe working with no problems for 12 hours and all appears well, although it is far too early to comment on reliability. I'm sure the postman delivering today could not imagine the excitement he generated 😄

Like many others, I'm plagued with unwanted motion triggers -- cats, foxes, spiders, dawn, dusk, trees in the wind -- all of which have been eliminated by this AI system, so far. Over the holiday period, I'll be looking at how motionEye can deliver remote alerts, something I could not contemplate before for fear of being overwhelmed.

I'd like to be able to contribute, but I'm just an experienced user, occasionally dabbling in scripts, not a developer. I did notice that at 1024x576 resolution motionEye needed multiple restarts, but at 704x576 a single start is OK. Whether that is relevant or general, I don't know how to ascertain.

MrJBSwe (Author) commented Dec 30, 2018

I've got yolov3 running on Myriad ( NCS2 ) at 1.7 fps ( 3 fps in CPU mode ):
https://software.intel.com/en-us/articles/OpenVINO-Using-TensorFlow#inpage-nav-8
https://github.com/mystic123/tensorflow-yolo-v3

--data_type FP16 & the latest openVINO => will work in MYRIAD mode ( using Ubuntu 18.04 )
https://richardstechnotes.com/category/techniques/
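
For anyone repeating this, a sketch of loading the converted IR on the NCS2 with the OpenVINO Python API of that era (IENetwork/IEPlugin; file names illustrative):

    import cv2
    from openvino.inference_engine import IENetwork, IEPlugin

    net = IENetwork(model='frozen_yolov3.xml', weights='frozen_yolov3.bin')
    plugin = IEPlugin(device='MYRIAD')   # device='CPU' for the CPU mode
    exec_net = plugin.load(network=net)

    input_blob = next(iter(net.inputs))
    n, c, h, w = net.inputs[input_blob].shape

    img = cv2.imread('frame.jpg')
    # HWC -> CHW, then add the batch dimension expected by the IR.
    blob = cv2.resize(img, (w, h)).transpose((2, 0, 1)).reshape((n, c, h, w))
    outputs = exec_net.infer({input_blob: blob})  # dict of the 3 YOLO outputs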

More ways to do it
https://github.com/PINTO0309/OpenVINO-YoloV3

Beer seeker =)
https://github.com/leswright1977/RPi3_NCS2/
https://www.youtube.com/watch?v=YYW4xARiQ7U


I have also converted YOLOv2 608x608 with
https://github.com/thtrieu/darkflow

but I can't run yolov2 with object_detection_demo_yolov3_async
=> it throws std::logic_error("This demo only accepts networks with three layers")

Any ideas on how to make yolov2 run?


New stuff ( the perfect board for MotionEyeOS? )
http://www.lindeni.org/lindenis-v5.html

Toybrick RK3399Pro: its on-chip NPU offers up to 3.0 TOPS
https://www.96rocks.com/blog/2018/12/11/toybrick-rk3399pro-board-is-pre-order-now/

CPU: RISC-V Dual Core 64bit, with FPU
https://kendryte.com/
https://www.seeedstudio.com/Sipeed-MAix-BiT-for-RISC-V-AI-IoT-1.html

Laceli / Orange Pi / Lightspeeur SPR2801S
https://www.seeedstudio.com/USB-Neural-Network-Computing-Card.html

Chiny91 commented Feb 19, 2019

I've had my sacrificial R Pi 3B and Movidius running for 2 months now, most successfully. The @jasaw recipe has been 100% reliable -- no mystery crashes or hangups -- including running unattended for the 4 weeks I was off site. On all but one day there were zero false positives, with 3 false positives triggered in one hour on a single day by snowfall (a rare event round here). There were zero missed events, based on reviewing a non-Movidius motionEyeOS cam covering a similar area. To my Movidius Pi I added some Python that triggers a Pushover notification to my phone, giving me near-real-time notification of someone approaching my front door.

For my purposes (domestic security cams), this is a game-changing development, with false triggers practically eliminated. It does come at the cost of a Movidius, but the benefits are well worth it for my use. I do hope this development continues in the motion project and, in due course, reaches motionEye and motionEyeOS.

jasaw (Collaborator) commented Feb 20, 2019

@Chiny91 Thank you for doing extensive testing on it. I'm glad the Movidius detection system has been working well for you. I would love to see motion officially support Movidius as well, but unfortunately Intel has decided to deprecate the NCSDK in favour of its own OpenVINO framework. I haven't had time to look into getting motion to support OpenVINO yet, so there's still more work to be done.
