Developers can deploy the application on the Atlas 200 DK or the AI acceleration cloud server to decode the local MP4 file or RTSP video streams, detect vehicles in video frames, predict their attributes, generate structured information, and send the structured information to the server for storage and display.
Before using this open-source application, ensure that:
- Mind Studio has been installed.
- The Atlas 200 DK developer board has been connected to Mind Studio, the cross compiler has been installed, the SD card has been prepared, and basic information has been configured.
Before running the application, obtain the source code package and configure the environment as follows.
Obtain the source code package.
Download all the code in the sample-videoanalysiscar repository at https://github.com/Ascend/sample-videoanalysiscar to any directory on the Ubuntu server where Mind Studio is located (for example, /home/ascend/sample-videoanalysiscar) as the Mind Studio installation user.
Obtain the source network model and its weight file used in the application by referring to Table 1, and save them to any directory on the Ubuntu server where Mind Studio is located (for example, $HOME/ascend/models/videoanalysiscar).
Table 1 Models used in the vehicle detection application
Download the source network model file and its weight file by referring to README.md in https://github.com/Ascend/models/tree/master/computer_vision/classification/car_color.
Download the source network model file and its weight file by referring to README.md in https://github.com/Ascend/models/tree/master/computer_vision/classification/car_type.
Download the source network model file and its weight file by referring to README.md in https://github.com/Ascend/models/tree/master/computer_vision/object_detect/car_plate_detection.
Download the source network model file and its weight file by referring to README.md in https://github.com/Ascend/models/tree/master/computer_vision/classification/car_plate_recognition.
Download the source network model file and its weight file by referring to README.md in https://github.com/Ascend/models/tree/master/computer_vision/object_detect/vgg_ssd.
Convert the source network model to a Da Vinci model.
Choose Tool > Convert Model from the main menu of Mind Studio. The Convert Model page is displayed.
On the Convert Model page, set Model File and Weight File to the model file and weight file downloaded in 2, respectively.
Set Model Name to the model name in Table 1.
For the car_color model, car_color_inference processes 10 images at a time. Therefore, N of Input Shape must be set to 10 during conversion.
For the car_type model, car_type_inference processes 10 images at a time. Therefore, N of Input Shape must be set to 10 during conversion.
Retain default values for other parameters.
Click OK to start model conversion.
During the conversion of the car_plate_detection and vgg_ssd models, the following error will be reported.
Select SSDDetectionOutput from the Suggestion drop-down list box at the DetectionOutput layer and click Retry.
After successful conversion, a .om Da Vinci model is generated in the $HOME/tools/che/model-zoo/my-model/xxx directory.
Upload the converted .om model file to the sample-videoanalysiscar/script directory.
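Assuming the default model output path from the previous step and a project located at /home/ascend/sample-videoanalysiscar, the copy step can be sketched as follows (both paths are examples; adjust them to your environment):

```shell
# Example paths; adjust MODEL_DIR and PROJECT_SCRIPT to your environment.
MODEL_DIR="$HOME/tools/che/model-zoo/my-model"
PROJECT_SCRIPT="/home/ascend/sample-videoanalysiscar/script"
# Copy every generated Da Vinci (.om) model into the script directory.
find "$MODEL_DIR" -name '*.om' -exec cp {} "$PROJECT_SCRIPT/" \;
```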
Log in to the Ubuntu server where Mind Studio is located as the Mind Studio installation user and set the environment variable DDK_HOME.
Run the following commands to add the environment variables DDK_HOME and LD_LIBRARY_PATH to the last line:
- XXX indicates the Mind Studio installation user, and /home/XXX/tools indicates the default installation path of the DDK.
- If the environment variables have been added, skip this step.
Enter :wq! to save and exit.
Run the following command for the environment variable to take effect:
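Assuming the default DDK installation path (/home/XXX/tools, where XXX is the Mind Studio installation user), the two appended lines would look like the following sketch; the exact DDK subdirectory may differ in your DDK version:

```shell
# Example environment variables; /home/XXX/tools is the default DDK install path.
export DDK_HOME=/home/XXX/tools/che/ddk/ddk
export LD_LIBRARY_PATH=$DDK_HOME/uihost/lib
```

If the variables were added to ~/.bashrc, running source ~/.bashrc makes them take effect in the current shell.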
Access the root directory where the vehicle detection application code is located as the Mind Studio installation user, for example, /home/ascend/sample-videoanalysiscar.
Run the deployment script to prepare the project environment, including compiling and deploying the ascenddk public library and configuring Presenter Server. The Presenter Server is used to receive the data sent by the application and display the result through the browser.
bash deploy.sh host_ip model_mode
host_ip: For the Atlas 200 DK developer board, this parameter indicates the IP address of the developer board. For the AI acceleration cloud server, this parameter indicates the IP address of the host.
model_mode indicates the deployment mode of the model file. The value can be local or internet. The default setting is internet.
- local: If the Ubuntu system where Mind Studio is located is not connected to the network, use the local mode. In this case, you need to have downloaded the dependent common code library to the sample-videoanalysiscar/script directory by referring to Downloading Dependent Code Library.
- internet: If the Ubuntu system where Mind Studio is located is connected to the network, use the Internet mode. In this case, download dependent code library online.
bash deploy.sh 192.168.1.2 internet
- When the message Please choose one to show the presenter in browser(default: 127.0.0.1): is displayed, enter the IP address used for accessing the Presenter Server service in the browser. Generally, the IP address is the IP address for accessing the Mind Studio service.
- When the message Please input an absolute path to storage video analysis data: is displayed, enter the absolute path for storing video analysis data in Mind Studio. The Mind Studio user must have the read and write permissions. If the path does not exist, the script creates it automatically.
Select the IP address used by the browser to access the Presenter Server service in Current environment valid ip list and enter the path for storing video analysis data, as shown in Figure 3.
Run the following command to start the Presenter Server program of the video analysis application in the background:
python3 presenterserver/presenter_server.py --app video_analysis_car &
presenter_server.py is located in the presenterserver directory. You can run the python3 presenter_server.py -h or python3 presenter_server.py --help command in this directory to view usage information for presenter_server.py.
Figure 4 shows that the presenter_server service is started successfully.
Use the URL shown in the preceding figure to log in to Presenter Server (only the Chrome browser is supported). The IP address is that entered in 2 and the default port number is 7005. The following figure indicates that Presenter Server is started successfully.
The following figure shows the IP address used by the Presenter Server and Mind Studio to communicate with the Atlas 200 DK.
- The IP address of the Atlas 200 DK developer board is 192.168.1.2 (connected in USB mode).
- The IP address used by the Presenter Server to communicate with the Atlas 200 DK is in the same network segment as the IP address of the Atlas 200 DK on the UI Host server. For example: 192.168.1.223.
- A browser accesses the Presenter Server at, for example, 10.10.0.1. Because the Presenter Server and Mind Studio are deployed on the same server, this is also the IP address for accessing Mind Studio through the browser.
The video structured application can parse local videos and RTSP video streams.
To parse a local video, upload the video file to the Host.
For example, upload the video file car.mp4 to the /home/HwHiAiUser/sample directory on the host.
If only RTSP video streams need to be parsed, skip this step.
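One way to upload the file is scp, assuming the developer board IP 192.168.1.2 and the default Host user HwHiAiUser (the local file path is only an example):

```shell
# Create the target directory on the Host, then copy the video over SSH.
ssh HwHiAiUser@192.168.1.2 "mkdir -p /home/HwHiAiUser/sample"
scp car.mp4 HwHiAiUser@192.168.1.2:/home/HwHiAiUser/sample/
```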
Run the video analysis application.
Run the following command in the /home/ascend/sample-videoanalysiscar directory to start the video analysis application:
bash run_videoanalysiscarapp.sh host_ip presenter_view_appname channel1 [channel2] &
- presenter_view_appname: indicates the View Name displayed on the Presenter Server page. The value is user-defined and must be unique on the Presenter Server page.
- channel1: absolute path of a video file on the Host, enclosed in double quotation marks. If only a video file is used, channel2 can be omitted.
- channel2: URL of an RTSP video stream, enclosed in double quotation marks. If only an RTSP video stream is used, pass " " as a placeholder for channel1.
Example command of video file:
bash run_videoanalysiscarapp.sh 192.168.1.2 video "/home/HwHiAiUser/sample/car.mp4" &
Example command of RTSP video stream:
bash run_videoanalysiscarapp.sh 192.168.1.2 video " " "rtsp://192.168.2.37:554/cam/realmonitor?channel=1&subtype=0" &
Use the URL that is displayed when you start the Presenter Server service to log in to the Presenter Server website (only the Chrome browser is supported). For details, see 3.
The navigation tree on the left displays the app name and the channel name of the video. The middle pane shows the extracted video frame and thumbnails of the detected targets. Click a thumbnail to display the detailed inference result and score on the right.
Vehicle attribute detection supports the identification of vehicle brands, vehicle colors, and license plates.
The license plate recognition model was trained on license plate images generated automatically by a program rather than on real license plate images, so its accuracy in recognizing real license plate numbers is low. If a high-accuracy model is required, collect real license plate images as the training set and retrain the model.
Stopping the Video Structured Analysis Application
To stop the video analysis application, perform the following operations:
Run the following command in the sample-videoanalysiscar directory as the Mind Studio installation user:
bash stop_videoanalysiscarapp.sh host_ip
bash stop_videoanalysiscarapp.sh 192.168.1.2
Stopping the Presenter Server Service
The Presenter Server service is always in the running state after being started. To stop the Presenter Server service of the video structured analysis application, perform the following operations:
Run the following command to check the process of the Presenter Server service corresponding to the video structured analysis application as the Mind Studio installation user:
ps -ef | grep presenter | grep video_analysis_car
ascend@ascend-HP-ProDesk-600-G4-PCI-MT:~/sample-videoanalysiscar$ ps -ef | grep presenter | grep video_analysis_car
ascend     3655  20313  0 15:10 pts/24   00:00:00 python3 presenterserver/presenter_server.py --app video_analysis_car
In the preceding information, 3655 indicates the process ID of the Presenter Server service corresponding to the video structured analysis application.
To stop the service, run the following command:
kill -9 3655
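Alternatively, the lookup and kill can be combined; this sketch uses pkill full-command-line matching (confirm the match with pgrep before killing):

```shell
# Show matching PIDs first to confirm the target process.
pgrep -f "presenter_server.py --app video_analysis_car"
# Then terminate it.
pkill -9 -f "presenter_server.py --app video_analysis_car"
```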
Download the dependent software libraries to the sample-videoanalysiscar/script directory.
Table 2 Download the dependent software library
The URL for downloading the FFmpeg 4.0 code is https://github.com/FFmpeg/FFmpeg/tree/release/4.0.
You can search for related packages on the Python official website (https://pypi.org/) for installation. To download a specific version online with pip3, specify the version and the installation source, for example: pip3 install tornado==5.1.0 -i <installation source of the specified library> --trusted-host <host name of the installation source>
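For example, using the Tsinghua PyPI mirror (the mirror URL is only an illustration; substitute the installation source you actually use):

```shell
# Install a pinned tornado version from an explicit PyPI mirror.
pip3 install tornado==5.1.0 -i https://pypi.tuna.tsinghua.edu.cn/simple --trusted-host pypi.tuna.tsinghua.edu.cn
```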