Detection boxes are carried over when inferencing multiple videos #417
Comments
Hi, would adding the line `interpreter.reset_all_variables()` in the while loop, after the post-processing call, solve your issue?
Hi, sadly this doesn't resolve the problem. Detections are still carried over across multiple frames.
@hjonnala
Hi, I also see similar results, but I don't think it is due to hardware. It looks more like an OpenCV issue. For example, you can refer to this OpenCV Q&A: https://answers.opencv.org/question/173100/imshow-sometimes-dont-clear-previous-image/
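The linked OpenCV thread is about stale content persisting in displayed images. One common culprit on the display side, sketched below with a NumPy array standing in for a video frame, is drawing boxes onto a shared frame buffer instead of a copy (the `draw_box` helper is purely illustrative, not part of the original detect.py):

```python
import numpy as np

# A shared frame buffer; drawing directly onto it would make old boxes
# persist into later frames that reuse the same buffer.
frame = np.zeros((4, 4), dtype=np.uint8)

def draw_box(img, value):
    out = img.copy()      # draw on a copy, never on the shared buffer
    out[1:3, 1:3] = value  # stand-in for cv2.rectangle(...)
    return out

shown = draw_box(frame, 255)
print(frame.max(), shown.max())  # → 0 255 (original buffer stays clean)
```

This only rules out one display-side cause; it does not explain detections leaking between different videos inside the interpreter.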
@hjonnala
Hmm… I have also tried `cv2.destroyWindow`, and it does not solve the issue.
Solved this issue by rewriting the complete project in C++.
Awesome. What do you think is causing the issue with Python? Is it tflite_runtime, OpenCV, or something else?
I don't think it was OpenCV; the images looked good before feeding them into the interpreter, and the drawn boxes matched the ones the interpreter returned.
OS: Ubuntu 20.04
Edge TPU Device: Coral USB Accelerator
Python: 3.8.5
tflite runtime: 2.5.0
Hey,
I encountered some rather strange behaviour when trying to run inference on several videos/video streams. A detection box detected in a frame of one video is carried over to a frame of another video and displayed there as well. When processing only one video, everything is fine.
Example of my error:
error_example.zip
Here is a minimal working code example of what I'm doing:
detect.py: the code, ready to use with some example videos, is available at https://drive.google.com/file/d/1tlWngd_hWXsWBpjk3Q6ST-3EIJHSOe0i/view?usp=sharing. Just download it and execute detect.py.
The code is meant to process multiple RTSP streams, always using the newest frame (not implemented in the error example, to reduce confusion), which means I can't just process the videos one after another.
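Interleaving the streams rather than processing them sequentially can be sketched with plain Python iterators. The nested lists below stand in for per-stream frame grabbers (e.g. one `cv2.VideoCapture` per RTSP stream), and `round_robin` is a hypothetical helper, not part of the original detect.py:

```python
# Take one frame from each open stream in turn, so no stream has to wait
# for another to finish. Exhausted streams are dropped.
def round_robin(sources):
    iters = [iter(s) for s in sources]
    while iters:
        still_open = []
        for it in iters:
            try:
                yield next(it)       # one frame from this stream
                still_open.append(it)
            except StopIteration:
                pass                  # stream exhausted; drop it
        iters = still_open

frames = list(round_robin([["a1", "a2", "a3"], ["b1", "b2"]]))
print(frames)  # → ['a1', 'b1', 'a2', 'b2', 'a3']
```

With real streams, each item yielded would be a frame passed to the interpreter before moving on to the next stream.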
Is there a way to clear the old detections before moving on to a new frame, without needing to reload the model onto the USB Accelerator?
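One way stale boxes can appear without any interpreter state being involved: SSD-style detection models return fixed-size output tensors, and slots beyond the reported detection count may still hold values from a previous invocation. Slicing by the count, and copying so the interpreter cannot overwrite the data in place, avoids reading those leftovers. A minimal sketch under that assumption, with NumPy arrays standing in for the interpreter's output tensors and purely illustrative names:

```python
import numpy as np

def extract_detections(boxes, scores, count):
    """Keep only the `count` valid detections; copy so later invocations
    of the interpreter can't mutate the returned arrays."""
    n = int(count)
    return np.array(boxes[:n], copy=True), np.array(scores[:n], copy=True)

# Example: a fixed-size output buffer still holding 3 boxes, but the model
# reports only 1 valid detection for the current frame.
boxes = np.array([[0.1, 0.1, 0.2, 0.2],
                  [0.5, 0.5, 0.6, 0.6],   # stale entry from a previous frame
                  [0.7, 0.7, 0.8, 0.8]])  # stale entry from a previous frame
scores = np.array([0.9, 0.8, 0.7])

valid_boxes, valid_scores = extract_detections(boxes, scores, count=1)
print(len(valid_boxes))  # → 1
```

Whether this is the actual cause here is unconfirmed, but checking how detect.py reads and slices the output tensors would be a cheap first step before reloading the model.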