

Analytics Server Docker


To demonstrate the full end-to-end capabilities and to help developers jump-start their development, DeepStream 3.0 comes with a complete reference implementation of a smart parking solution. This reference application can be deployed on edge servers or in the cloud. Developers can leverage this implementation and adapt it to their specific use cases. Docker containers have been provided to further simplify deployment, adaptability, and manageability.

The architecture of the application looks as follows:


Note: This application creates Docker containers for the Analytics Server only.

The application can be run in two modes:

  • Playback: This mode is used to play back events from a point in time
  • Live: This mode is used to see events and the scene as and when they are detected

Getting Started


The application requires recent versions of Docker and Docker Compose to be installed on the machine.

Environment Variables

Export the following environment variables:

  • IP_ADDRESS - IP address of the host machine
  • GOOGLE_MAP_API_KEY - API key for Google Maps

Follow Google's instructions to get an API key for Google Maps.
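For example, the two variables can be exported as follows (the IP address and key shown are illustrative placeholders, not real values):

```shell
# Illustrative values only: substitute your host machine's IP address
# and your own Google Maps API key.
export IP_ADDRESS=192.168.1.10
export GOOGLE_MAP_API_KEY=AIzaExampleKey123
```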


Playback is the default mode of the application.

To use live mode:

  1. Go to node-apis/config/config.json and change the following config:

     garage.isLive: true
  2. Send the data generated by DeepStream 3.0 to the Kafka topic metromind-raw
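After the change in step 1, the relevant part of node-apis/config/config.json would look like the fragment below. Only the garage.isLive flag comes from the step above; the surrounding structure of the file is assumed for illustration:

```json
{
  "garage": {
    "isLive": true
  }
}
```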


Running the Application

  1. Install Docker and Docker Compose.

  2. Export the environment variables:

    a) IP address of the host machine:

     export IP_ADDRESS=xxx.xxx.xx.xx

    b) Google Map API Key:

     export GOOGLE_MAP_API_KEY=<YOUR GOOGLE_API_KEY>

  3. Clone the application from this repository

     git clone https://github.com/NVIDIA-AI-IOT/deepstream_360_d_smart_parking_application.git

    and use the following command to change the current directory:

     cd ./analytics_server_docker
  4. Change Configurations (Optional)

  5. Run the Docker containers using the following docker-compose command:

     sudo -E docker-compose up -d

    This will start all the containers defined in docker-compose.yml.

  6. Start the Spark streaming job. This job does the following:

    a) manages the state of the parking garage

    b) detects the car "understay" anomaly

    c) computes the flow rate

    Run the following command to log in to the Spark master:

     sudo docker exec -it spark-master /bin/bash

    The Docker container picks up the JAR file from spark/data. Submit the streaming job:

     ./bin/spark-submit  --class com.nvidia.ds.stream.StreamProcessor --master spark://master:7077 --executor-memory 8G --total-executor-cores 4 /tmp/data/stream-360-1.0-jar-with-dependencies.jar

    Note that you can go to the stream directory and compile the source code using Maven to create stream-360-1.0-jar-with-dependencies.jar:

     mvn clean install -Pjar-with-dependencies
  7. Start the Spark batch job, which detects the "overstay" anomaly.

    In a second shell, run the following command to log in to the Spark master:

     sudo docker exec -it spark-master /bin/bash

    then run the batch job:

     ./bin/spark-submit  --class com.nvidia.ds.batch.BatchAnomaly --master local[8]  /tmp/data/stream-360-1.0-jar-with-dependencies.jar
  8. Generate Data (Optional). This is for test purposes only; normally the DeepStream Smart Parking application reads from cameras and sends metadata to the Analytics Server.

    a) sudo apt-get update
    b) sudo apt-get install default-jdk
    c) sudo apt-get install maven 
    d) cd ../stream
    e) sudo mvn clean install exec:java -Dexec.mainClass=com.nvidia.ds.util.Playback -Dexec.args="<KAFKA_BROKER_IP_ADDRESS>:<PORT> --input-file <path to input file>"


    • Replace KAFKA_BROKER_IP_ADDRESS and PORT with the host IP_ADDRESS and the port used by Kafka, respectively.
    • Set the path to the input file to data/playbackData.json to view the demo data.
    • The following additional options can be added to args in step e:
      • topic-name - Name of the Kafka topic to which the data has to be sent. Set it to metromind-raw if the input data is not tracked; if the input data has already gone through the tracking module, send it to metromind-start. The default value used in step e is metromind-start.
        With this additional option, step e will look as follows:

          sudo mvn clean install exec:java -Dexec.mainClass=com.nvidia.ds.util.Playback -Dexec.args="<KAFKA_BROKER_IP_ADDRESS>:<PORT> --input-file <path to input file> --topic-name <kafka topic name>"
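As a sketch, the broker address portion of the arguments in step e can be assembled from the exported IP_ADDRESS. Port 9092 is assumed here as Kafka's conventional broker port; check docker-compose.yml for the port actually exposed in this setup:

```shell
# Assemble the Playback arguments; 9092 is an assumed Kafka broker port.
IP_ADDRESS=192.168.1.10   # illustrative value; use your host's real IP
PLAYBACK_ARGS="${IP_ADDRESS}:9092 --input-file data/playbackData.json"
echo "$PLAYBACK_ARGS"
```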
  9. Create Elasticsearch start-Index (Optional)

    Browse to the Kibana URL http://IP_ADDRESS:5601 and create the start index.

  10. Create Elasticsearch anomaly-Index (Optional)

    Create the anomaly index in the same way from the Kibana URL.

  11. Automated Script (Optional)

    The entire process of starting and stopping the Docker containers can be automated using start.sh and stop.sh.

    If start.sh is going to be used, make sure that xxx.xxx.xx.xx is replaced by the IP address of the host machine. Also replace <YOUR GOOGLE_API_KEY> with your own API key.

    stop.sh should be used only when the containers need to be stopped and the Docker images have to be removed from the system.

    Otherwise, use sudo docker-compose down to stop the containers while keeping the images; this significantly reduces the time taken by the containers to start again when start.sh is executed.


    • The DeepStream application should be started only after the Analytics Server is up and running.
    • Remember to shut down the Analytics Server's Docker containers once DeepStream is shut down.
  12. Test



    Note: The number of events shown in the UI is lower than the number of real events. If an object generates many events within the refresh interval, the events of other objects may be obscured; to avoid this, only a few events per object are displayed.