Open traffic cam (with YOLO)

This project is an offline, lightweight, DIY solution to monitor the urban landscape. After installing this software on the specified hardware (an Nvidia Jetson board + a Logitech webcam), you will be able to count cars, pedestrians, and motorbikes from your webcam live stream.

Behind the scenes, it feeds the webcam stream to a neural network (YOLO darknet) and makes sense of the generated detections.
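To give an intuition of what "making sense of the detections" involves: YOLO emits per-frame detections with no identity, so a tracker has to match detections across frames to count objects. The sketch below is a hedged illustration of that idea (a naive nearest-neighbour matcher, not the app's actual tracking algorithm; `track` is a hypothetical helper):

```javascript
// Hedged sketch, NOT the app's actual tracker: match each detection to the
// nearest box from the previous frame; unmatched detections get a fresh id.
let nextId = 1;
function track(prevFrame, detections, maxDist = 50) {
  return detections.map((d) => {
    let best = null;
    let bestDist = maxDist;
    for (const p of prevFrame) {
      const dist = Math.hypot(d.x - p.x, d.y - p.y);
      if (dist < bestDist) {
        bestDist = dist;
        best = p;
      }
    }
    // Reuse the matched object's id, or assign a new one
    return { ...d, id: best ? best.id : nextId++ };
  });
}

// A car moves slightly between two frames and keeps its id:
const frame1 = track([], [{ x: 100, y: 200, name: "car" }]);
const frame2 = track(frame1, [{ x: 110, y: 205, name: "car" }]);
console.log(frame1[0].id === frame2[0].id); // true
```

A real tracker also has to handle missed detections, class switches, and objects leaving the frame; this sketch only shows the core matching step.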

It is very alpha and we do not provide any guarantee that it will work for your use case, but we conceived it as a starting point from which you can build on and improve.


💻 Hardware pre-requisite

  • Nvidia Jetson TX2
  • Webcam Logitech C222 (or any USB webcam compatible with Ubuntu 16.04)
  • A smartphone / tablet / laptop that you will use to operate the system

💾 Exports documentation

Counter data export

This export gives you the counter results along with the unique id of each object counted.
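The exact counter schema is not reproduced in this section. Purely as a hypothetical illustration of the idea (not the app's confirmed format), such an export could pair the totals with the unique ids that were counted:

```json
{
  "car": 2,
  "countedItems": [
    { "id": 13417, "name": "car" },
    { "id": 13418, "name": "car" }
  ]
}
```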


Tracker data export

This export gives you the raw data of all objects tracked with frame timestamps and positioning.

    // 1 Frame
    {
      "timestamp": "2018-08-23T08:46:59.677Z", // Time of the frame
      // Objects in this frame
      "objects": [{
        "id": 13417, // unique id of this object
        "x": 257, // position and size on a 1280x720 canvas
        "y": 242,
        "w": 55,
        "h": 44,
        "bearing": 230,
        "name": "car"
      }, {
        "id": 13418,
        "x": 312,
        "y": 354,
        "w": 99,
        "h": 101,
        "bearing": 230,
        "name": "car"
      }]
    }
    // Other frames ...
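Because every tracked object carries a stable `id` and a `name`, the export can be post-processed directly. As a small hedged example (`countUniqueObjects` is a hypothetical helper, not part of the app), here is how you could count the unique objects per class from the frames above:

```javascript
// Hypothetical helper (not part of the app): summarize a tracker export
// by counting the unique object ids seen for each class name.
function countUniqueObjects(frames) {
  const idsByClass = new Map(); // class name -> Set of unique object ids
  for (const frame of frames) {
    for (const obj of frame.objects) {
      if (!idsByClass.has(obj.name)) idsByClass.set(obj.name, new Set());
      idsByClass.get(obj.name).add(obj.id);
    }
  }
  const counts = {};
  for (const [name, ids] of idsByClass) counts[name] = ids.size;
  return counts;
}

// The frame from the export format documented above:
const frames = [
  {
    timestamp: "2018-08-23T08:46:59.677Z",
    objects: [
      { id: 13417, x: 257, y: 242, w: 55, h: 44, bearing: 230, name: "car" },
      { id: 13418, x: 312, y: 354, w: 99, h: 101, bearing: 230, name: "car" }
    ]
  }
];
console.log(countUniqueObjects(frames)); // { car: 2 }
```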

⚙ System overview

See technical architecture for a more detailed overview

open traffic cam architecture


🛠 Step by Step install guide

NOTE: many of these steps need to be automated by integrating them into a Docker image or something similar; for now, you need to follow the full procedure.

⚡️Flash Jetson Board:

  • Download JetPack to Flash your Jetson board with the linux base image and needed dependencies
  • Follow the install guide provided by NVIDIA

NOTE: You can also find a detailed video tutorial for flashing the Jetson board here.

NOTE 2: This tutorial might also give additional information.

🛩Prepare Jetson Board

  • Update packages

    sudo apt-get update
  • Install cURL

    sudo apt-get install curl
  • Install git-core

    sudo apt-get install git-core
  • Install nodejs (v8):

    curl -sL | sudo -E bash -
    sudo apt-get install -y nodejs
    sudo apt-get install -y build-essential
  • Install ffmpeg (v3)

    sudo add-apt-repository ppa:jonathonf/ffmpeg-3
    # sudo add-apt-repository ppa:jonathonf/tesseract (ubuntu 14.04 only!!)
    sudo apt update && sudo apt upgrade
    sudo apt-get install ffmpeg
  • Optional: Install nano

    sudo apt-get install nano

📡Configure Ubuntu to turn the jetson into a wifi access point

  • Enable SSID broadcast

    add the following line to /etc/modprobe.d/bcmdhd.conf

    options bcmdhd op_mode=2

    further info: here

  • Configure hotspot via UI

    follow this guide:

  • Define Address range for the hotspot network

    • Go to the file named after your Hotspot SSID in /etc/NetworkManager/system-connections

      cd /etc/NetworkManager/system-connections
      sudo nano <YOUR-HOTSPOT-SSID-NAME>
    • Add the following line to this file:

      address1=, <--- this line
    • Restart the network-manager

      sudo service network-manager restart

🚀Configure jetson to start in overclocking mode:

  • Add the following line to /etc/rc.local before exit 0:

    #Maximize performances
    ( sleep 60 && /home/ubuntu/ )&
  • Enable rc.local.service

    chmod 755 /etc/init.d/rc.local
    sudo systemctl enable rc-local.service

👁Install Darknet-net:

IMPORTANT: Make sure that OpenCV (v2) and CUDA will be installed via JetPack (post-installation step). If not, fallbacks: OpenCV 2: install script; CUDA: no easy way yet.

  • Install libwsclient:

    git clone
    cd libwsclient
    ./configure && make && sudo make install
  • Install liblo:

    wget --no-check-certificate
    tar xvfz liblo-0.29.tar.gz
    cd liblo-0.29
    ./configure && make && sudo make install
  • Install json-c:

    git clone
    cd json-c
    ./configure && make && make check && sudo make install
  • Install darknet-net:

    git clone
  • Download weight files:

    link: yolo.weight-files

    Copy yolo-voc.weights to the darknet-net repository path (root level)


      |# ... other files
      |yolo-voc.weights <--- Weight file should be in the root directory
    wget --no-check-certificate
  • Make darknet-net

    cd darknet-net
    make

🎥Install the open-data-cam node app

  • Install pm2 and next globally

    sudo npm i -g pm2
    sudo npm i -g next
  • Clone open_data_cam repo:

    git clone
  • Specify ABSOLUTE PATH_TO_YOLO_DARKNET path in lab-open-data-cam/config.json (open data cam repo)


    	"PATH_TO_YOLO_DARKNET" : "/home/nvidia/darknet-net"
  • Install open data cam

    cd <path/to/open-data-cam>
    npm install
    npm run build
  • Run open data cam on boot

    cd <path/to/open-data-cam>
    # launch pm2 at startup
    # this command gives you instructions to configure pm2 to
    # start at ubuntu startup, follow them
    sudo pm2 startup
    # Once pm2 is configured to start at startup
    # Configure pm2 to start the Open Traffic Cam app
    sudo pm2 start npm --name "open-data-cam" -- start
    sudo pm2 save

🏁 Restart the jetson board and open http://IP-OF-THE-JETSON-BOARD:8080/

Connect your device to the jetson

💡 We should maybe set up a "captive portal" to avoid people needing to enter the IP of the jetson; we haven't tried this yet 💡

When the jetson has started, you should see a wifi network "YOUR-HOTSPOT-NAME" available.

You are done 👌

🚨 This December alpha version is really alpha, and you might need to restart Ubuntu a lot, as it doesn't clean up processes well when you switch between the counting and the webcam view 🚨

You should be able to monitor the jetson from the UI we've built and count 🚗 🏍 🚚 !

‼️Automatic installation (experimental)

An install script is provided for automatic installation.

Setting up the access point is not automated yet! Follow this guide to set up the hotspot.

  • run the script directly from GitHub

    wget -O - | bash


To debug the app, log onto the jetson board and inspect the logs from pm2, or stop the pm2 service (sudo pm2 stop <pid>) and start the app with sudo npm start to see the console output directly.

  • Error: please specify the path to the raw detections file

    Make sure that ffmpeg is installed and is above version 2.8.11

  • Error: Could *not* find a valid build in the '.next' directory! Try building your app with '*next* build' before starting the server

    Run npm run build before starting the app

  • Could not find darknet. Be sure to make darknet without sudo, otherwise it will abort mid-installation.

  • Error: cannot open shared object file: No such file or directory

    Try reinstalling the liblo package.

  • Error: Error: Cannot stop process that is not running.

    It is possible that a process using port 8090 is causing the error. Try killing the process and restarting the board:

    sudo netstat -nlp | grep :8090
    sudo kill <pid>

🗃 Run open data cam on a video file instead of the webcam feed:

It is possible to run Open Data Cam on a video file instead of the webcam feed.

Before doing this, you should be aware that the neural network (YOLO) will run on all the frames of the video file at ~7-8 FPS (best jetson speed) and will not play the file in real time. If you want to simulate a real video feed, you should drop the framerate of your video down to 7 FPS (for example with ffmpeg -i input.mp4 -r 7 output.mp4), or whatever frame rate your jetson board can run YOLO at.

To switch the Open Data Cam to "video file reading" mode, you should go to the open-data-cam folder on the jetson.

  1. cd <path/to/open-data-cam>

  2. Then open YOLO.js, and uncomment those lines:

YOLO.process = new forever.Monitor(
  // ... command line with the YOUR_FILE_PATH_RELATIVE_TO_DARK_NET_FOLDER.mp4 placeholder ...
  {
    max: 1,
    cwd: config.PATH_TO_YOLO_DARKNET,
    killTree: true
  }
);
  3. Copy the video file you want to run open data cam on into the darknet-net folder on the Jetson (if you did the auto-install, the path is ~/darknet-net). For example, if your file is video-street-moovellab.mp4, you will end up with the following in the darknet-net folder:

  |# ... other files
  |video-street-moovellab.mp4 <--- Video file
  4. Then replace the YOUR_FILE_PATH_RELATIVE_TO_DARK_NET_FOLDER.mp4 placeholder in YOLO.js with your file name, in this case video-street-moovellab.mp4. In our example you should end up with the following:

YOLO.process = new forever.Monitor(
  // ... command line with video-street-moovellab.mp4 ...
  {
    max: 1,
    cwd: config.PATH_TO_YOLO_DARKNET,
    killTree: true
  }
);
  5. After doing this, you should re-build the Open Data Cam node app:

     npm run build

You should be able to use any video file that is readable by OpenCV, which is what the YOLO implementation uses under the hood to decode the video stream.

🎛 Advanced settings

Track only specific classes

By default, the opendatacam will track all the classes that the neural network is trained to detect. In our case, YOLO is trained on the VOC dataset; here is the complete list of classes.

You can restrict the opendatacam to specific classes with the VALID_CLASSES option in the config.json file.

For example, here is how to track only buses and persons:

  "VALID_CLASSES": ["bus","person"]

If you change this config option, you will need to re-build the project by running npm run build.

In order to track all the classes (default value), you need to set it to:

  "VALID_CLASSES": ["*"]
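The semantics of the option can be sketched as follows (`isTracked` is a hypothetical helper illustrating the behaviour, not the app's actual implementation): "*" accepts every class, otherwise only the listed class names pass.

```javascript
// Hedged sketch of the VALID_CLASSES filtering behaviour,
// not the app's actual implementation.
function isTracked(className, validClasses) {
  return validClasses.includes("*") || validClasses.includes(className);
}

console.log(isTracked("bus", ["bus", "person"])); // true
console.log(isTracked("car", ["bus", "person"])); // false
console.log(isTracked("car", ["*"])); // true
```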

Extra note: the tracking algorithm might work better when all classes are allowed. In our tests we saw that for some classes, like bike/motorbike, YOLO had a hard time distinguishing them and switched between classes across frames for the same object. By keeping all detections and ignoring class switches while tracking, we found we could avoid losing some objects; this is discussed here.

🛠 Development notes

Technical architecture

technical architecture open traffic cam


Miscellaneous dev tips

Mount the jetson filesystem as a local filesystem on a mac for dev

sshfs -o allow_other,defer_permissions nvidia@ /Users/tdurand/Documents/ProjetFreelance/Moovel/remote-lab-traffic-cam/

SSH jetson

ssh nvidia@

Install it and run:

yarn install
yarn run dev