Updated Object Identification Sample #19
@varunjain3 Thanks for the questions!
This transforms the output, or rather retains the output, as a feature embedding that can be used to identify a person against previously detected persons. You can find the corresponding pipeline description at:
Note: this pipeline uses a custom python function via
More information on
Hey @nnshah1! Thanks for the answer, it really helped. For example, I tried the emotion recognition pipeline available in the video-analytics-serving repository in the traffic scenario of smart-city-sample, and I was successful in printing the bounding box as well as the emotion with minor changes in analysis.js. I could see the .json output using the inspect element feature on the webpage, where the .json file stored the frame-wise structured output of the model.

I tried running the entrance pipeline which contained the gvapython flag, and as I can see, the output stream is changed by the gvapython code; as per what I understood, the pipeline calls the process_frame method. I understood how all the counting was happening in the python code, yet I couldn't understand how the message stream was being changed by the python code. I also tried running the same person re-identification models (face + reidentification) without the gvapython code, hoping that I would get the .json file with frame-wise output, but that was not the case: even though the analytics counted the number of people correctly, I wasn't satisfied with the output stream of data, and there were no bounding boxes either.

Coming to the gvaidentify part, thanks for sharing the related repos. I wanted to ask: will the pipeline that is present in ad-insertion-sample work without any changes directly in the traffic scenario? I was confused because the url you shared, https://github.com/opencv/gst-video-analytics/tree/v1.0/samples/gst_launch/reidentification, says that I have to compile the custom gvaidentify. Say I want to make a custom library with face encodings: can I directly run the https://github.com/opencv/gst-video-analytics/tree/v1.0/samples/gst_launch/reidentification python code on my library with the correct models downloaded, or do I need to compile some C code first?
> I understood how all the counting was happening in the python code, yet I couldn't understand how the message stream was being changed by the python code.

The message is being updated on lines 22-25. There the original message is loaded from the frame and modified, and then we remove the original message and add the modified one.

> Coming to the gvaidentify part, will the pipeline that is present in ad-insertion-sample work without any changes directly in the traffic scenario?

No, unfortunately some modifications are needed to support rtsp vs files.

> Say I want to make a custom library with face encodings, can I directly run the https://github.com/opencv/gst-video-analytics/tree/v1.0/samples/gst_launch/reidentification python code on my library with the correct models downloaded, or do I need to compile some C code first?

If you are using similar code / logic to that in the people counting.py you will not need any additional C code. The C element provides similar functionality. Depending on the underlying libraries and algorithms, the performance of cosine distance and matching in python may need to be optimized / translated to C.
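The load-modify-replace pattern described above can be sketched in plain Python. Note that `MockFrame` below is a hypothetical stand-in for the frame object gvapython hands to `process_frame` (the real one comes from the gstgva module); it exists only to make the sketch self-contained:

```python
import json

class MockFrame:
    """Hypothetical stand-in for the gvapython frame and its message API."""
    def __init__(self, messages):
        self._messages = list(messages)

    def messages(self):
        return list(self._messages)

    def remove_message(self, message):
        self._messages.remove(message)

    def add_message(self, message):
        self._messages.append(message)

def process_frame(frame, person_count):
    # Load the original JSON message attached to the frame
    original = frame.messages()[0]
    data = json.loads(original)
    # Modify it, e.g. attach a running person count
    data["count"] = person_count
    # Remove the original message and add the modified one back
    frame.remove_message(original)
    frame.add_message(json.dumps(data))
    return True

frame = MockFrame([json.dumps({"objects": []})])
process_frame(frame, 3)
print(frame.messages()[0])
```

The real counting code follows the same three steps; only the frame type differs.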
@varunjain3 How is your project going? Please let us know if there is anything else you need from us. We've also just released 0.3.1 with a set of improved documentation. It doesn't cover all the topics you've brought up here, but we would like to get your feedback. If this topic is resolved, let us know and we can close the issue.
Hi @nnshah1, the command that I run is: Please help me track the issue. I am also attaching some screenshots.
@nnshah1 Apart from the above issue, when I am running the smart-city with the changed pipeline.json (screenshot in the above comment) with the face-detection-adas, landmark-regression, and face-reidentification-retail models, I am getting an error in model loading for landmark-regression and face-reidentification (screenshot attached). I thought it could be a model_proc issue, but when I load them individually, they load fine.

```
Traceback (most recent call last):
  File "/home/runva.py", line 50, in loop
    'max_running_pipelines': 1,
  File "/home/vaserving/vaserving.py", line 129, in start
    self.options.ignore_init_errors)
  File "/home/vaserving/model_manager.py", line 58, in __init__
    raise Exception("Error Initializing Models")
Exception: Error Initializing Models

Traceback (most recent call last):
  File "/home/detect-object.py", line 32, in connect
    raise Exception("VA exited. This should not happen.")
Exception: VA exited. This should not happen.
```
@divdaisymuffin Can you provide a directory tree for the models directory? It looks like the model manager is loading a file with extension .ipync_checkpoints and getting confused. Probably this error should be ignored and not treated as fatal. In the meantime you can also use the environment variable IGNORE_INIT_ERRORS=True to continue execution, and check that the models have been registered correctly using the models REST endpoint.
On the issue of gallery generation, can you increase the GST_DEBUG level to 3 and see if there are indeed faces detected within the images? It looks as if the faces may not be getting detected in the source files.
Thank you @nnshah1 for your quick response. |
But now I am getting an issue related to gvaidentify:

```
{"levelname": "INFO", "asctime": "2020-08-18 12:34:12,820", "message": "Creating Instance of Pipeline object_detection/2", "module": "pipeline_manager"}
{"levelname": "ERROR", "asctime": "2020-08-18 12:34:13,035", "message": "Error on Pipeline 1: gst-resource-error-quark: gallery file failed to open (3): /home/gst-video-analytics/samples/gst_launch/reidentification/gvaidentify/reid_gallery.cpp(73): EmbeddingsGallery (): /GstPipeline:pipeline17/GstGvaIdentify:identify:\nCannot open gallery file: /home/gallery/face_gallery_FP32/gallery.json.", "module": "gstreamer_pipeline"}
PipelineStatus(avg_fps=0, avg_pipeline_latency=None, elapsed_time=3.457069396972656e-05, id=1, start_time=1597754053.039853, state=<State.ERROR: 4>)
Pipeline object_detection Version 2 Instance 1 Ended with ERROR
Traceback (most recent call last):
  File "/home/detect-object.py", line 32, in connect
    raise Exception("VA exited. This should not happen.")
Exception: VA exited. This should not happen.
```

I am also attaching a screenshot for the same, and the hierarchy of the gallery folder along with gallery.json (as I mentioned earlier, I am not able to convert images to tensors, so I have used .jpgs in the gallery.json).
@divdaisymuffin The identify element will not support jpg and will require the tensors to be generated. Please increase the GST_DEBUG level before running the gallery generator. We will need to understand that issue before proceeding to the VA serving pipeline.
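For reference, the gallery.json in the reidentification sample maps identities to pre-extracted feature tensor files rather than images; the following is a representative sketch only, and the exact schema and paths should be checked against the sample:

```json
[
  { "name": "person_0", "features": ["features/person_0_0.tensor"] },
  { "name": "person_1", "features": ["features/person_1_0.tensor"] }
]
```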
@nnshah1 Thanks, I have increased the GST_DEBUG level to 6 and got detailed error info. Pasting it here, please have a look. The ERROR is here:
Just to confirm, which version of gst-video-analytics is this?
@nnshah1 I have used this git link to install gst-video-analytics
Hi @nnshah1, I changed to version 2020.2, but I am still getting the same error.
@divdaisymuffin Thank you for your patience. I'm in the process of lining up the right resources to look into your issue. Can you give me more details about the project you and @varunjain3 are part of? If it helps facilitate, we can set up a time to discuss in more detail.
@nnshah1 Divya & Varun are team members of the company AIVIDTECHVISION LLP. We are building a SaaS-based Video Analytics Platform. We are excited to use the Open Visual Cloud and OpenVINO code base in our offering and also to contribute to enhancing it further. We are all currently facing the technical issues mentioned by @varunjain3 and @divdaisymuffin. Your help on this will accelerate our understanding of Open Visual Cloud and will also help us contribute better. I am open to a conference call with you for any queries in this regard. Thanks, Dhaval Vora, Co-Founder & CEO, AIVIDTECHVISION.
Please find the updated sample here demonstrating how to enable object identification via a public face recognition model and the latest video analytics serving.
Thanks @nnshah1! Do give us some time to understand this and get back to you shortly.
Hi @nnshah1, thank you so much for the help.
Rerunning the same gives me detailed errors. Please help me with this.
@divdaisymuffin I think you will need to correctly pull the side branch of va-serving as specified by @nnshah1 in the readme; it works fine for me.
@nnshah1 Following the instructions further, I got another issue while downloading the model.
I got the following issue:
I got permission denied even though I am running with root privileges.
@varunjain3 I am following the Readme.pdf only. Please suggest what I am doing wrong.
@divdaisymuffin The warnings on git apply are to be expected; apologies for the confusion there. The patch can only be applied once, so rerunning it will generate errors (the first time it will only display warnings). Please start with a fresh va serving clone, and run the download script without sudo. I believe the issue has to do with the permissions of the openvino user. If you continue to run into issues, please modify the download shell script to pass a user flag to the docker commands:
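As a representative sketch only (the actual download script, mount points, and image name may differ), a user flag added to a docker invocation looks like:

```shell
# Run the downloader as the host user so downloaded files
# are owned by you rather than root (image name is illustrative)
docker run --rm \
  --user "$(id -u):$(id -g)" \
  -v "$PWD/models:/output" \
  video-analytics-serving-model-downloader
```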
Thanks @nnshah1 that solved my issue 👍 |
@nnshah1 But I am getting one error while running object_identification.py with head-pose-face-detection-male.mp4. Error: and I am not getting the expected result for head-pose-face-detection-male.mp4.
Looks like a typo in the instructions, apologies. The correct url has:
It worked, thanks @nnshah1!
Hi @nnshah1, I got an error at step 2 of Deploying Object Identification as a Microservice. Command: Error: I am running it on an Azure VM with Ubuntu 18.04.
It looks like you are running on a system without a graphics device exposed. In the docker run script you can comment out the line which adds this device to the container; we will add detection for this scenario in the future. Please comment out this line:
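The line in question typically resembles the following (the exact wording in the script may differ); commenting it out stops docker from mapping the GPU device into the container:

```shell
# --device /dev/dri \    # comment this out on hosts with no graphics device
```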
Thanks @nnshah1 for providing this object-reidentification patch in VA serving and also for your guidance. I am able to get the output in the file object_identification_results.txt, which is stored in the /tmp location. I have also checked the "/home/user/video-analytics-serving/samples/object_identification/results/" directory, and it contains video-specific frames and a results.jsonl file. But I haven't found the post-processed video as an output file or stream. Please suggest how I can see that. Do I need to make changes to pipeline.json at /video-analytics-serving/pipelines/gstreamer/object_identification/1/, especially to gvametapublish?
@divdaisymuffin That's great to hear! To save the watermarked results as a video file or as a stream you would need to modify
to use splitmuxsink or an rtspsink instead of jpegenc. To integrate the results into smart cities, however, you can modify
to add recording based on splitmuxsink without watermarks.
Thanks @nnshah1. I tried the gvawatermark option for now, so I made changes to object_identification/2/pipeline.json, replacing "jpegenc" with "rtspsink" and "splitmuxsink" in turn, and also provided a location to save the output.mp4. But I am getting the same error in both the "rtspsink" and "splitmuxsink" cases (errors shown in the attached screenshot). Please suggest how to implement this to get the output as video. Pipeline.json:
Please see the attached screenshots for details.
Please try something like:
This site has some nice instructions: https://github.com/DamZiobro/gstreamerCheatsheet Including: https://github.com/DamZiobro/gstreamerCheatsheet#splitmuxsink-test-overwritting-max-2-files |
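Based on the cheatsheet linked above, a minimal splitmuxsink pipeline can be sketched like this (element properties should be verified against your installed GStreamer version; the source and paths are illustrative, not the sample's actual pipeline):

```shell
gst-launch-1.0 videotestsrc num-buffers=300 ! x264enc \
  ! h264parse \
  ! splitmuxsink location=/tmp/output_%02d.mp4 max-size-time=10000000000
```

Here max-size-time is in nanoseconds, so each output segment is capped at 10 seconds.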
Thank you @nnshah1. I tried the gvawatermark one; the error is resolved, but I am not able to find the output video file. I didn't try the smart city one yet; I will try the smart city implementation and update you on that.
Looking at this path: it looks like it is a path from your host machine, so it won't be accessible directly in the container. In dev mode we mount
Hey @nnshah1, I need your help again. I was trying to integrate the results into smart-city; the following template worked for me to get the output. So I am getting the streaming and everything, but I have 2 issues:
For details I am attaching screenshots.
Please help me solve this.
Two things I think may be at play:
That will then add the "face" label to the bounding box, and I believe it will be rendered by smart cities.
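One way such a label can be supplied, assuming it comes from the detection model's model-proc file, is a hypothetical sketch like the one below; field names and converter values vary across gst-video-analytics versions and must be checked against the installed release:

```json
{
  "json_schema_version": "1.0.0",
  "output_postproc": [
    {
      "converter": "tensor_to_bbox_ssd",
      "labels": ["background", "face"]
    }
  ]
}
```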
@divdaisymuffin Does this unblock your progress? |
Hi @nnshah1, I tried your suggestions to get the gallery directory, but that didn't solve my issue :( Although your suggestion to get the bounding box worked very well. I think that to get the gallery directory created in the smartcity docker swarm environment, changes need to be added to the docker file of the analytics container of the Smartcity, so I am trying that right now. I will let you know if it works.
@divdaisymuffin Is there a gallery directory created in the analytics container? If the gallery.json is what is missing, we need to call the "save gallery" method from the sample code either on startup or shutdown. If the "enroll" parameter is set to true, you should see the gallery directory with tensor files created.
@nnshah1, no, the gallery directory is not getting created by the pipeline. Yes, the enroll parameter is True, but still no gallery directory is created.
Hi @nnshah1, I am able to enroll faces now. What worked for me was to put vas_identify.py at /home/Smart-city-sample/analytics/object/; earlier I had put it at /home/Smart-city-sample/analytics/object/extensions.
The simplest thing will be to add additional print statements to vas_identify to print where the tensors are getting stored. The gallery.json file itself needs to be created via the code in the sample (vas_identify does not create the gallery.json itself, only the tensors). The stats are also printed by the sample app and not by vas_identify, but you can add print statements to vas_identify to turn on additional logging when objects are matched or not matched, or to keep summary stats.
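The matching of a new embedding against enrolled gallery tensors can be sketched in plain Python with cosine similarity; the names, toy vectors, and threshold below are illustrative, not the sample's actual values:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def match(embedding, gallery, threshold=0.7):
    """Return the gallery name whose embedding is most similar, if any
    similarity exceeds the threshold; otherwise None (an unknown face)."""
    best_name, best_score = None, threshold
    for name, reference in gallery.items():
        score = cosine_similarity(embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy 3-dimensional "embeddings"; real models emit much longer vectors
gallery = {"person_a": [1.0, 0.0, 0.0], "person_b": [0.0, 1.0, 0.0]}
print(match([0.9, 0.1, 0.0], gallery))   # similar to person_a's embedding
print(match([0.0, 0.0, 1.0], gallery))   # exceeds no threshold, unknown face
```

Logging from inside `match` (matched name, best score) is exactly the kind of print statement suggested above.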
Hi @nnshah1, I am in a bit of trouble extending this video analytics serving module to a new set of problem statements. We are extending it to our use case and making the required modifications accordingly. We replaced the face detection model with a person detection model and the reidentification with person reidentification. We were able to enroll the tensors and dump the frames. Now we find that there are many false/incorrect detections that have to be suppressed. Many libraries have a confidence/detection threshold which can effectively eliminate the incorrect and extra bounding boxes generated at inference time. Is there any such parameter to target and tweak when using GStreamer pipelines? I have attached images showing why I need these parameters.
@Shirish-Ranoji, @divdaisymuffin Could we close this issue, as the basics have been demonstrated? We can open new issues to discuss further enhancements. Does that seem reasonable?
Hi @nnshah1, thank you so much for your quick response. I will look into custom_transforms from the smart city project and try the pipeline with the detection threshold. Also, it makes sense to close this issue and start a new one, as it has grown too long.
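For reference, the detection threshold discussed in the exchange above is exposed as a property on the gvadetect element; an illustrative pipeline follows (the model path, input file, and threshold value are placeholders):

```shell
gst-launch-1.0 filesrc location=input.mp4 ! decodebin \
  ! gvadetect model=person-detection-retail-0013.xml threshold=0.6 \
  ! gvawatermark ! videoconvert ! fakesink
```

Raising threshold suppresses low-confidence boxes; the same property can be set in a VA serving pipeline template.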
@nnshah1 hey!
Working on the Smart-City-Cloud, I realised that it would be beneficial if I could understand the making of the platforms separately. I see that building the pipeline environment inside the smartcitysample doesn't take that much time, while this gstreamer container takes more than 5 hours to build. Why is that, and is there any way to avoid this?
Also, for the re-identification models, I observe that we are using !gvaclassify, but can you help me understand how one should make the .json file for the model? For example, I see that in the emotion-recognition model you have used this input preproc:
https://github.com/intel/video-analytics-serving/blob/59fdcba3e7b631f391cf5654b30f78d56585411b/models/emotion_recognition/1/emotions-recognition-retail-0003.json#L3-L8
Can you tell me which layer_name we are referencing here? (I believe to the previous model.)
Also, in the output_postproc:
https://github.com/intel/video-analytics-serving/blob/59fdcba3e7b631f391cf5654b30f78d56585411b/models/emotion_recognition/1/emotions-recognition-retail-0003.json#L9-L23
we have used some sort of converter. What is this?
My aim is to use the pipeline to run the person reidentification models. Can you help me understand how I can do that?
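For context on the model-proc pieces asked about above: layer_name names an output layer of this model itself (not the previous model in the pipeline), and the converter tells gvaclassify how to turn that output tensor into metadata. A hypothetical output_postproc sketch modeled on the emotion recognition example (labels, converter names, and layer names must match the actual network and gst-video-analytics version):

```json
{
  "output_postproc": [
    {
      "layer_name": "prob_emotion",
      "converter": "tensor_to_label",
      "attribute_name": "emotion",
      "labels": ["neutral", "happy", "sad", "surprise", "anger"]
    }
  ]
}
```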