Update on the Result #5
It seems that there is an overall offset between the bounding boxes and the people's positions. When I used tiny_yolo I ran into this problem; it was caused by different input image resolutions. But with yolov3_keras we don't need to worry about that. I have tested different image resolutions and they all worked well. Did you change the code? Could you send your video to me so I can test it? |
Or you can download a video file from https://motchallenge.net/ to test with. In README.md I uploaded a test result video made with the MOT challenge test video. The bad tracking result you got is because the boxes are not on the people. You can check whether the boxes' positions are right just after the YOLO detection, between line 58 and line 60 of demo.py. |
Lastly, you can delete lines 108 to 113 of yolo.py. I added them because I found some negative coordinates, but the people's positions in your image do not seem to be caused by that. |
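As an illustration of the two points above (checking box positions right after detection, and the negative-coordinate clamping that lines 108–113 of yolo.py guard against), here is a minimal, self-contained sketch. It is not the repo's actual code; the function name and box format are assumptions:

```python
def clamp_box(box, img_w, img_h):
    """Clip a (x1, y1, x2, y2) detection box to the image bounds.

    YOLO post-processing can emit slightly negative coordinates for
    people near the frame edge; clamping like this is what the
    deleted lines in yolo.py were protecting against.
    """
    x1, y1, x2, y2 = box
    x1 = max(0, min(int(x1), img_w - 1))
    y1 = max(0, min(int(y1), img_h - 1))
    x2 = max(0, min(int(x2), img_w - 1))
    y2 = max(0, min(int(y2), img_h - 1))
    return (x1, y1, x2, y2)

# A box poking out of a 640x480 frame gets pulled back inside:
print(clamp_box((-5, 10, 700, 480), 640, 480))  # → (0, 10, 639, 479)
```

To eyeball whether the detections line up, you could draw each clamped box on the frame (e.g. with `cv2.rectangle`) right after the detect call in demo.py, before any tracking runs.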
Can you give your email id? |
sent |
Access denied, I need permission to download. |
Try again. I updated the permissions |
Did you notice the message "[mov,mp4,m4a,3gp,3g2,mj2 @ 0x7fe21f9ed080] Timecode frame rate 59/1 not supported" when using your video file? If you google it you should find the problem, and if you test another video file you will get a better result. Every time I use your video file it can only run 52 frames, but after I converted the video encoding with HandBrake it ran well, and there is no overall offset of the boxes. So I think the bad result was caused by your video's timecode. |
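A simple way to drop the problematic timecode track is to re-encode the clip. A small Python sketch of building such an ffmpeg command (the exact flags are one reasonable choice, not necessarily what was used in this thread; HandBrake achieves the same thing through its GUI):

```python
def reencode_cmd(src, dst):
    """Build an ffmpeg command that re-encodes a clip with libx264 and
    strips global container metadata, which drops oddities such as an
    unsupported 59/1 timecode track."""
    return ["ffmpeg", "-y", "-i", src,
            "-c:v", "libx264",        # re-encode video with H.264
            "-pix_fmt", "yuv420p",    # widely compatible pixel format
            "-map_metadata", "-1",    # strip global metadata
            dst]

cmd = reencode_cmd("GOPR4091.MP4", "GOPR4091_new.mp4")
print(" ".join(cmd))
# run it with subprocess.run(cmd, check=True) when ffmpeg is installed
```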
I sent you the original video. I used the ffmpeg transcoded video. Will send that video too. |
Can it be a shorter video file? It was too big to download. And it is too late here, I may reply tomorrow. |
Sorry for taking longer to respond. I was traveling. I have sent you a new one. I encoded it using ffmpeg. Can you check that video? |
https://drive.google.com/open?id=1NZZtrSZQiclCE388QknIWigcqgYP_ywm , a test video for yolov3's performance. It does not perform well.
|
Did you change the code? Because I am running the demo on the video and the boxes are not lining up. |
no |
ok |
Did you re-encode this video also? |
no |
I ran it again. It is still not aligned. Yolov3 on Keras works perfectly fine. |
I am getting this:
OpenCV(3.4.1) Error: Assertion failed (pos < (1u<<31)) in patchInt, file /home/saurabhh/opencv/modules/videoio/src/container_avi.cpp, line 737
OpenCV(3.4.1) Error: Assertion failed (pos < (1u<<31)) in patchInt, file /home/saurabhh/opencv/modules/videoio/src/container_avi.cpp, line 737
[1] 17692 abort (core dumped) python demo.py GOPR4091_new.MP4
And it is still not aligned. |
Finally the boxes are aligned. But the core is still dumped. |
From the paper: "For each track k we count the number of frames since the last successful measurement association, a_k. This counter is incremented during Kalman filter prediction and reset to 0 when the track has been associated with a measurement. Tracks that exceed a predefined maximum age A_max are considered to have left the scene and are deleted from the track set. New track hypotheses are initiated for each detection that cannot be associated to an existing track. These new tracks are classified as tentative during their first three frames. During this time, we expect a successful measurement association at each time step. Tracks that are not successfully associated to a measurement within their first three frames are deleted." In the code, A_max is 30, at line 40 of tracker.py. |
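The track-lifecycle rules quoted above can be condensed into a short sketch. This is an illustration of the paper's logic, not the actual code in tracker.py (where the real logic is split across tracker.py and track.py); the class and method names here are assumptions:

```python
# Track lifecycle from the Deep SORT paper: a_k is incremented on each
# Kalman prediction, reset on a successful association; tentative tracks
# must be matched in each of their first n_init frames, and confirmed
# tracks are deleted once a_k exceeds max_age.
TENTATIVE, CONFIRMED, DELETED = range(3)

class Track:
    def __init__(self, max_age=30, n_init=3):  # max_age=30 matches line 40 of tracker.py
        self.time_since_update = 0  # the counter a_k from the paper
        self.hits = 0
        self.state = TENTATIVE
        self.max_age = max_age
        self.n_init = n_init

    def predict(self):
        # Kalman prediction step: the age counter is incremented here
        self.time_since_update += 1

    def update(self):
        # Successful measurement association resets a_k to 0
        self.time_since_update = 0
        self.hits += 1
        if self.state == TENTATIVE and self.hits >= self.n_init:
            self.state = CONFIRMED

    def mark_missed(self):
        # Unmatched tentative tracks die immediately; confirmed tracks
        # survive up to max_age missed frames before deletion.
        if self.state == TENTATIVE or self.time_since_update > self.max_age:
            self.state = DELETED

t = Track()
for _ in range(3):
    t.predict(); t.update()
print(t.state == CONFIRMED)  # → True: matched in all of its first 3 frames
```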
Any idea why I am getting the error? |
Set writeVideo_flag = False at line 37 of demo.py. Does it work well then? |
I need the output in a video file. I am doing it on a remote server. |
I use OpenCV 3.2.0, you can have a try. |
I figured it out. You are using MJPG but saving it as AVI. AVI uses XVID, and MJPG is for MP4. |
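For reference on the codec/container pairing discussed above: the fourcc that OpenCV's `VideoWriter` takes is just four ASCII characters packed into a 32-bit integer, and the `pos < (1u<<31)` assertion in container_avi.cpp likely means the AVI writer hit its 32-bit (2 GiB) file-offset limit, which is worth keeping in mind for long clips. A pure-Python sketch of the fourcc packing (the commented cv2 line is illustrative usage, not a line from demo.py):

```python
def fourcc(code):
    """Pack a four-character codec code ('MJPG', 'XVID', 'mp4v', ...)
    into the little-endian 32-bit integer that
    cv2.VideoWriter_fourcc(*code) would return."""
    assert len(code) == 4
    return sum(ord(c) << (8 * i) for i, c in enumerate(code))

# Hypothetical usage, matching the codec to the container:
# out = cv2.VideoWriter('output.avi', fourcc('XVID'), 30.0, (w, h))
print(hex(fourcc('XVID')))  # → '0x44495658', i.e. 'DIVX' read byte-reversed
```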
Hi
I tested the video. The results are very bad.
It's only 6 seconds after the start of the video.
There are only 6 people but the track number is 17.
Even the boxes are not on the people.
The second image is from yolov3 on Keras.