Analytics Server Docker
To demonstrate the full end-to-end capabilities and to help developers jump-start their development, DeepStream 3.0 comes with a complete reference implementation of a smart parking solution. This reference application can be deployed on edge servers or in the cloud. Developers can leverage it and adapt it to their specific use cases. Docker containers are provided to further simplify deployment, adaptability, and manageability.
The architecture of the application looks as follows:
Note: This application creates Docker containers only for the Analytics Server.
The application can be run in two modes:
- Playback: This mode is used to play back events from a point in time
- Live: This mode is used for viewing events and the scene as and when they are detected
Export the following environment variables:
- IP_ADDRESS - IP address of the host machine
- GOOGLE_MAP_API_KEY - API key for Google Maps
Follow the instructions in this link to get an API key for Google Maps.
Playback is the default mode of the application.
If live mode is to be used, open node-apis/config/config.json and change the following config:
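The relevant contents of node-apis/config/config.json are not reproduced here. Purely as a hypothetical sketch, a playback/live switch in such a file might look like the following; the actual key names in the repository's config may differ:

```json
{
  "note": "hypothetical example only; actual key names may differ",
  "mode": "live"
}
```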
Send the data generated by DeepStream 3.0 to the Kafka topic
Install Docker and Docker Compose.
Export the environment variables:
a) IP address of the host machine:
export IP_ADDRESS=<IP_ADDRESS_OF_HOST_MACHINE>
b) Google Maps API key:
export GOOGLE_MAP_API_KEY=<YOUR GOOGLE_API_KEY>
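As a sanity check before bringing up the containers, both variables can be exported and verified in one shot. The IP and key values below are placeholders, not real credentials:

```shell
# Placeholder values - substitute your host IP and real API key.
export IP_ADDRESS=192.168.1.10
export GOOGLE_MAP_API_KEY=my-google-api-key

# Fail fast if either variable is empty; `sudo -E docker-compose up -d`
# inherits these from the current shell.
[ -n "$IP_ADDRESS" ] && [ -n "$GOOGLE_MAP_API_KEY" ] && echo "env OK"
```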
Assuming that the application has been cloned from this repository:
git clone https://github.com/NVIDIA-AI-IOT/deepstream_360_d_smart_parking_application.git
use the following command to change into the directory created by the clone:
cd deepstream_360_d_smart_parking_application
Change Configurations (Optional)
Run the docker containers using the following:
sudo -E docker-compose up -d
This will start the following containers:
cassandra, kafka, zookeeper, spark-master, spark-worker, elasticsearch, kibana, logstash, api, ui, python-module
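Running docker-compose with sudo -E preserves the exported environment so the compose file can substitute the variables into the services. As an illustration of that pattern only (not the repository's actual docker-compose.yml), a service might reference the variables like this:

```yaml
# Hypothetical fragment: shows ${VAR} substitution, which is why the
# variables must be exported and `-E` passed to sudo. The real
# docker-compose.yml in the repository will differ.
services:
  ui:
    environment:
      - IP_ADDRESS=${IP_ADDRESS}
      - GOOGLE_MAP_API_KEY=${GOOGLE_MAP_API_KEY}
```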
Start the Spark streaming job. This job does the following:
a) manages the state of the parking garage
b) detects the car "understay" anomaly
c) computes flow rate
Run the following command to log in to the Spark master:
sudo docker exec -it spark-master /bin/bash
The docker container picks up the jar file from spark/data:
./bin/spark-submit --class com.nvidia.ds.stream.StreamProcessor --master spark://master:7077 --executor-memory 8G --total-executor-cores 4 /tmp/data/stream-360-1.0-jar-with-dependencies.jar
Note that one can go to the stream directory and compile the source code using Maven to create stream-360-1.0-jar-with-dependencies.jar:
mvn clean install -Pjar-with-dependencies
Start the Spark batch job; this detects the "overstay" anomaly.
Use a second shell and run the following command to log in to the Spark master:
sudo docker exec -it spark-master /bin/bash
Run the batch job:
./bin/spark-submit --class com.nvidia.ds.batch.BatchAnomaly --master local /tmp/data/stream-360-1.0-jar-with-dependencies.jar
Generate Data (Optional): for test purposes ONLY. Normally, the DeepStream Smart Parking application reads from cameras and sends metadata to the Analytics Server.
a) sudo apt-get update
b) sudo apt-get install default-jdk
c) sudo apt-get install maven
d) cd ../stream
e) sudo mvn clean install exec:java -Dexec.mainClass=com.nvidia.ds.util.Playback -Dexec.args="<KAFKA_BROKER_IP_ADDRESS>:<PORT> --input-file <path to input file>"
- Replace KAFKA_BROKER_IP_ADDRESS and PORT with the host IP_ADDRESS and the port used by Kafka, respectively.
- Set the path to the input file as data/playbackData.json for viewing the demo data.
- The following additional option can be added to the args in step e:
topic-name - Name of the Kafka topic to which data has to be sent. Set it to metromind-raw if the input data is not tracked; if the input data has already gone through the tracking module, send it to metromind-start. Step e uses a default topic when this option is omitted.
With this additional option, step e will look as follows:
sudo mvn clean install exec:java -Dexec.mainClass=com.nvidia.ds.util.Playback -Dexec.args="<KAFKA_BROKER_IP_ADDRESS>:<PORT> --input-file <path to input file> --topic-name <kafka topic name>"
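Since the broker address reuses the IP_ADDRESS exported earlier, the argument string for step e can be composed in the shell first. This is only a convenience sketch: 9092 is Kafka's default listener port (adjust if your setup maps another), and the topic shown is illustrative.

```shell
# Compose the -Dexec.args value for the Playback command.
# 9092 is Kafka's default listener port; change it if needed.
KAFKA_BROKER="${IP_ADDRESS:-127.0.0.1}:9092"
INPUT_FILE="data/playbackData.json"
TOPIC="metromind-start"   # illustrative; choose per the tracking rules above

ARGS="$KAFKA_BROKER --input-file $INPUT_FILE --topic-name $TOPIC"
echo "$ARGS"
```

The printed string can then be passed as -Dexec.args="$ARGS" in step e.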
Create Elasticsearch start-Index (Optional)
Browse to the Kibana URL http://IP_ADDRESS:5601
Create Elasticsearch anomaly-Index (Optional)
Automated Script (Optional)
The entire process to start and stop the dockers can be automated using the start.sh and stop.sh scripts.
If start.sh is going to be used, make sure that xxx.xxx.xx.xx is replaced by the IP address of the host machine. Also replace <YOUR GOOGLE_API_KEY> with your own API key.
stop.sh should only be used when the containers need to be stopped and the docker images have to be removed from the system. If the images are to be kept, use sudo docker-compose down to stop the containers instead; this significantly reduces the time taken by the docker containers to start again, since the images do not have to be fetched anew.
- The DeepStream application should be started only after the Analytics Server is up and running.
- Remember to shut down the Docker containers of the Analytics Server once the DeepStream application is shut down.
Note: The number of events that show up in the UI is smaller than the number of real events. This is because, if one object has many events within the refresh interval, the events of other objects may get obscured. To avoid this situation, only a few events per object are displayed.