
VideoCapture::set (CAP_PROP_POS_FRAMES, frameNumber) not exact in opencv 3.2 with ffmpeg #9053

Open
nji9nji9 opened this issue Jun 30, 2017 · 75 comments


@nji9nji9

nji9nji9 commented Jun 30, 2017

Example and confirmed effect:

http://answers.opencv.org/question/162781/videocaptureset-cap_prop_pos_frames-framenumber-not-exact-in-opencv-32-with-ffmpeg/

@saskatchewancatch
Contributor

@nji9nji9 The "category:highgui-video" label captures the video input/output functionality, as such functionality has come to be associated with highgui ... since at least 2.4. See here.

"category:video" is for video processing functionality (not input/output).

I believe the maintainer has chosen the appropriate label.

Hope this clarifies things.

@nji9nji9
Author

nji9nji9 commented Jul 17, 2017

Some more info:
Depending on the format type (codec), the mis-positioning differs considerably.
MP4/AVC1: from 8 to -3 frames. (Negative values = seeks BEHIND the desired position.)
WMV3/VC-1 (VBR): from 33 (!) to 0 (all positive).
WMV3/VC-1 (CBR): always 0.
The last two use the same codec.
I found several questions about this problem in different forums, all without a solution.
As it stands, positioning in a movie file is completely useless - worse, it is "dangerous" when you rely on getting the frame you asked for.
Maybe this issue could be labelled as "important"?

@nji9nji9
Author

Found another issue with that bug. Solved or not solved? What is happening here?

@mshabunin mshabunin added the bug label Jul 24, 2017
chcao052 added a commit to chcao052/opencv that referenced this issue Aug 25, 2017
…_PROP_POS_FRAMES,) with ffmpeg

av_seek_frame() seeks by DTS, hence the initial time should be first_dts instead of start_time.

Why: with my limited understanding of ffmpeg, start_time is a positive offset from first_dts.
For most video files, start_time and first_dts are the same, hence the code seems to work.
When they differ, using start_time to seek lands on the next key frame (e.g. frame 30 for me).
@nji9nji9
Author

nji9nji9 commented Sep 5, 2017

What can be done to prevent this issue from being closed after the above commit is merged? It doesn't solve the problem.
I still don't understand why nobody seems to care about this bug. It is confirmed by several posters and in my opinion it's quite serious: in a common situation (reading media at given positions) you get the wrong frames WITHOUT ANY NOTICE. OpenCV has very sophisticated algorithms to process data, but when reading stored data it gets the wrong data.
Why does nobody care? Or is there really nobody who has the knowledge?
(BTW, I don't have the knowledge myself. But I built a terrible index-fighting workaround for the bug.)

@Britefury

Hi,
I have encountered this issue also. What is your work-around?

@nji9nji9
Author

nji9nji9 commented Sep 22, 2017

Hi.
Well it is literally a "work-around" - and not nice at all.

If I need only very few frames, I start at frame 0 and read forward to
the desired position. This is quite slow, but produces the correct frames.
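A minimal sketch of that first approach (plain cv2; path and target index are placeholders):

import cv2

def read_frame_sequentially(path, target_index):
    """Decode from frame 0 and return the frame at target_index (0-based)."""
    cap = cv2.VideoCapture(path)
    frame = None
    for _ in range(target_index + 1):
        ok, frame = cap.read()
        if not ok:
            raise IOError("reached end of stream before the target frame")
    cap.release()
    return frame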

If I need more or all frames, I process the movie multithreaded
in (overlapping) parts, computing a cheap feature for each frame.
Then I stitch the parts together based on the feature values (shifting +/-).
To get a specific frame exactly, I request an interval around it
and match the feature (shifting +/-).
This works well - with one exception (already mentioned in the OpenCV
forum): the multithreaded reading misses the last few frames
if the OpenCV mispositioning is too early, because the processing
by OpenCV then stops as it "thinks" it is already at the end.
...
And all this because nobody cares about the buggy code.
I wonder in how much software this mispositioning goes UNRECOGNIZED
and silently leads to wrong results.
Seems the professionals are retired, and the youngsters are
"developing" apps. ;-)

@Sharan123

OpenCV 3.3 built with TBB, LIBV4L and FFMPEG has the same bug.
It seems that webm even fails to produce the FOURCC value, as well as failing to give the same frame.

AVI FOURCC MJPEG is OK.
AVI FOURCC MP42 is not OK and failing; I can provide the source code I used for testing if it helps.

I'm puzzled that this error isn't a higher priority. It forces users to iterate and count through the whole video just to be sure they pulled out a valid frame; otherwise results are corrupted.

@nji9nji9
Author

In my opinion it would be best to remove the whole code from OpenCV.
Better no code than buggy code whose wrong results will be overlooked by most.
Removing it might also wake someone up.
The way it is now (and has been for some time, btw) is quite irresponsible.

@alalek
Member

alalek commented Nov 22, 2017

OpenCV relies heavily on FFmpeg (in the case of the FFmpeg backend).
But FFmpeg itself doesn't handle seeking to non-key frames well. There are many workarounds, but they are not very reliable: sometimes the seeking code works, sometimes it doesn't.
If you have working FFmpeg code with accurate seeking, then we could try to integrate it into OpenCV.

Consider extracting frames from the video (into .png files) and using those.
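For illustration, a minimal sketch of such an extraction using sequential cv2 reads (the filename pattern is just an example):

import cv2

def dump_frames(video_path, out_pattern="frame_%06d.png"):
    """Sequentially decode the video and write each frame to a numbered PNG."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(out_pattern % index, frame)
        index += 1
    cap.release()
    return index  # number of frames written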

The main purpose of the OpenCV library is not strongly aligned with the goals of a media I/O library. Video I/O and image I/O were initially added to OpenCV for demo purposes.
We require only these things from Video I/O backends:

  • open a video stream
  • read the first frame in BGR/gray format / read next frame / ... next ... / ... until end of stream
  • all other things / properties / hacks are optional and depend on the backend (and backend version) and on the video stream in use (container type / video codec). These additional things are hardly tested in OpenCV - their quality is not tracked at all.

@nji9nji9
Author

OK, let's take it as fact that FFmpeg is the cause of the faulty behaviour.
(And not play ping-pong about it ;-)
Would you agree that it is (or might be) quite dangerous to have a function
that (sometimes) gives erroneous results?
Because you are not aware that you get the wrong frames (or FOURCC, see the comment above).
Just think about medical applications (!!).
Considering the requirements that OpenCV has of its backends,
I definitely suggest disabling/removing the calling code we know is buggy.
(That is VideoCapture::set() with parameter CAP_PROP_POS_FRAMES,
and maybe also the get method in the bug Sharan123 described above.)
That way the users of OpenCV would be notified that a method they have used
so far is not reliable.
And only add it again once FFmpeg has done its homework.
I really think this should be done. For the best.
Actually I don't know enough about OpenCV's internals to change its code myself.

@alalek
Member

alalek commented Nov 22, 2017

be notified

I believe we could add some notification message for users about the use of optional (and effectively untested) features.

medical applications

Tests; many tests; tests for anything including tests for tests.

@nji9nji9
Author

nji9nji9 commented Nov 22, 2017

I think an unspecific notice somewhere in the OpenCV docs
would not be ... ahem ... noticed ... by anyone.

Tests. Who would deny that they are essential ;-)

But the FFmpeg code used here is not untested - it has been tested and shown to be buggy.
Therefore I think that, since we know about the bug, we have the responsibility
to force a notice specifically when the buggy code is used
(maybe with a kind of #pragma ... ==> compiler error message?).

If there is no reaction from the OpenCV users - good, nobody affected.
If there is a shit storm:
99% of them will be baffled when they get the explanation,
and will then code their workaround.
And maybe one or two will provide correct FFmpeg code.

@nji9nji9
Author

nji9nji9 commented Feb 7, 2018

To state it explicitly:

  1. OpenCV cannot be used for offline analysis of movies.

  2. Not only does it deliver wrong frames, it does so without any notice.

What makes it even worse:

  1. The responsible maintainers rate this as a low-priority bug.

Can't imagine a more neglected (sad) situation.

@Britefury

Britefury commented Feb 7, 2018

I am very much in favour of warning developers about these problems. Perhaps print a warning to the console when using VideoCapture::set with CAP_PROP_POS_FRAMES. Or have it fail with an error message.

It's quite broken. I have used VideoCapture::set with CAP_PROP_POS_FRAMES to attempt to seek to the start of the video (frame 0) and had it seek to 15 or 30 frames in instead. I have had to go through my code and replace all references to CAP_PROP_POS_FRAMES with CAP_PROP_POS_MSEC just to seek to the start of a video. Frame seeking is not really usable IMHO.

An upshot of making these problems more obvious is that it could add some motivation for someone with the relevant video codec skills to develop a fix. :)

@cesarandreslopez

cesarandreslopez commented Jul 11, 2018

So this bit me too and after several hours of trying I concluded, just as mentioned in this thread, that both VideoCapture::set with CAP_PROP_POS_FRAMES and VideoCapture::set with CAP_PROP_POS_MSEC are completely unreliable.

I can also state that FFmpeg does not have this problem. Frames sought and grabbed via FFmpeg always return the correct one.

I ended up having to rewrite all frame captures to grab from FFmpeg and then import the still image into cv2, so I could send it to the cv2 video writer object as needed.

As pointed out by @nji9nji9 this issue only happens when VideoCapture::set is used. This issue does not happen when all frames are iterated from the beginning of the video file.

It seems to me that the problem is that VideoCapture ignores all duplicate frames, while FFmpeg returns duplicates without issues. In consequence, when calling VideoCapture::set with CAP_PROP_POS_FRAMES you get a frame offset by something approximately (or likely exactly) equal to the number of duplicate frames in the video file, or rather the number of duplicate frames up to the frame you've tried to set to.

If this is right, resolving this bug should just be a matter of having VideoCapture::set not ignore duplicate frames in the video.

Just a couple of thoughts, if useful at all.
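A rough diagnostic sketch for the duplicate-frame hypothesis above (a hypothetical helper using plain cv2/numpy; decoded duplicates may not be byte-identical, so treat the count only as an estimate):

import cv2
import numpy as np

def count_duplicate_frames(path, up_to_index):
    """Count frames identical to their predecessor among the first up_to_index frames."""
    cap = cv2.VideoCapture(path)
    duplicates = 0
    prev = None
    for _ in range(up_to_index):
        ok, frame = cap.read()
        if not ok:
            break
        if prev is not None and frame.shape == prev.shape and np.array_equal(frame, prev):
            duplicates += 1
        prev = frame
    cap.release()
    return duplicates

The returned count could then be compared with the offset observed after a set() to the same index.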

@alex-liuziming

To state it explicitly:

  1. OpenCV cannot be used for offline analysis of movies.
  2. Not only does it deliver wrong frames, it does so without any notice.

What makes it even worse:

  1. The responsible maintainers rate this as a low-priority bug.

Can't imagine a more neglected (sad) situation.

My gosh, totally right. While debugging, I reviewed my code several times until I found this terrible bug here. 😢 How could such a bug have been left in the master branch??? And marked as low priority???

@mhtrinh

mhtrinh commented Sep 24, 2018

I have the same problem!
This is the worst type of bug: random and completely silent!!

@fidelechevarria

This bug should definitely not have low priority.

@nji9nji9
Author

nji9nji9 commented Apr 5, 2019

That's what I keep saying.
It exhibits the incompetence of @alalek on this (not meant as an offence).

The first - and simplest - thing to do would be to disable
VideoCapture::set (CAP_PROP_POS_FRAMES, frameNumber)
in the main trunk.
Probably at least a few dozen (newly compiled) applications would notice that they have had a big bug - for years.
BTW:
Not mine, I haven't used OpenCV since all this.

@apatsekin

apatsekin commented Apr 15, 2019

A two-year-old bug and still no attention, or at least a warning?

@Robird

Robird commented May 30, 2019

av_seek_frame(... , AVSEEK_FLAG_BACKWARD)

@Robird

Robird commented May 30, 2019

@nji9nji9
Author

In the name of the dozens (?) of apps where this bug
silently delivers wrong frames:

Thank you Robird!

Unfortunately I don't have the ability to assess
whether your workaround (it is a workaround, isn't it?)
fits into the OpenCV environment.
(Actually I got frustrated myself and turned back to M$/DirectShow
as the bug wasn't acknowledged.)
If your workaround works in all cases (does it?),
in my opinion it should go into the main trunk,
with some explanation
(why seek isn't reliable,
where the bug is probably located (ffmpeg?),
why av_seek_frame/grabFrame does the job, etc.).

@mhtrinh

mhtrinh commented May 31, 2019

I believe it is not just an OpenCV or ffmpeg problem: it's also a video codec issue. Video codecs (e.g. x264) do not store the frame number in each frame, or at least it is not mandatory. This is a problem for videos that do not have a fixed frame rate: how do you know which frame number corresponds to 5.36 s when the frame rate could have changed in the meantime? The only reliable way to get the frame number is then to run from the beginning!

That is why I moved away from referencing by frame number and now use DTS.
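A minimal sketch of timestamp-based seeking along those lines, using PyAV (assuming PyAV is installed; it uses PTS here, seeks to a keyframe at or before the target, then decodes forward to the requested time):

import av

def frame_at_time(path, t_seconds):
    """Return the first video frame at or after t_seconds as a BGR array."""
    with av.open(path) as container:
        stream = container.streams.video[0]
        target_pts = int(round(t_seconds / stream.time_base))
        # Seek to the nearest keyframe at or before the target timestamp...
        container.seek(target_pts, stream=stream, backward=True, any_frame=False)
        # ...then decode forward until the requested timestamp is reached.
        for frame in container.decode(stream):
            if frame.pts is not None and frame.pts >= target_pts:
                return frame.to_ndarray(format="bgr24")
    raise ValueError("requested time is past the end of the stream")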

But it would still be really professional to have some sort of warning from OpenCV, at compile time or at run time, when someone tries to seek by frame number!

@nji9nji9
Author

But if it is a codec issue, the effect should show up in DirectShow too...
See the question referenced in my opening post
(my last comment there, from July 24, 2017).

If you're right (and going by some results from the mentioned thread),
then Robird's workaround wouldn't do the job.
@knowblesse

The link to a recap

Everyone who enters this conversation should read this clear summary by roninpawn (link to the summary). Thanks a lot. It really saved me.

The conclusion of the recap?

I guess the conclusion of this thread is: never use VideoCapture::set (CAP_PROP_POS_FRAMES, frameNumber), due to the reliability issue. I also saw a situation where, right after calling set(CAP_PROP_POS_FRAMES, frameNumber), get(CAP_PROP_POS_FRAMES) returned a wrong value, and sometimes even a negative value. Many of you say that sequentially calling the read function does not mess up the seeking behaviour (quote1, quote2, quote3), but I have lost all faith in the mighty OpenCV video I/O after this thread.
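For anyone who wants to reproduce that set/get mismatch, a tiny illustrative check:

import cv2

def check_set_get(video_path, frame_number):
    """Set CAP_PROP_POS_FRAMES and report what get() claims the position is afterwards."""
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, frame_number)
    reported = cap.get(cv2.CAP_PROP_POS_FRAMES)
    cap.release()
    return frame_number, reported  # these should match, but often do not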

Alternative video I/O

My goal was to extract every 5th frame of a constant-framerate video and process it with other Python functions. I encountered this issue when some videos started to return the same frame even though I changed the play head. Although I could read all five frames, keep one, and toss the others away, many of you recommended ffmpeg, the more reliable and faster package for video I/O, and found workarounds using it (deffcode by abhitronix, the simple videostream script by roninpawn).
Some mentioned FFMS for variable-framerate videos, and I guess it might work, as it says "It is the only source filter that has proper variable framerate (VFR) support." However, the Python wrapper for this library appears to no longer exist.

Speed issue

A stupid but more reliable way of frame seeking (if sequential reading does not mess things up, as quoted above) is reading through the whole video and stopping when the target frame is found. I tried this with OpenCV's VideoCapture::read() and found it quite fast. So I tested its speed against other implementations based on ffmpeg, and here is the result.

Video size: 640 x 480 (1.5 GB)
Codec: MPEG-4 (DIVX)
Pixel format: yuv420p
Framerate: 30
Video length: 1844.67 seconds (55340 frames)

  • Option 1 : OpenCV calling all frames : 17.67 sec
  • Option 2-1 : ffmpeg : Directly reading bgr image from yuv420p video (conversion inside the ffmpeg) : 43.19 sec
  • Option 2-2 : ffmpeg : Read from ffmpeg, and convert from the opencv (code snippet of videostream script by roninpawn) : 41 sec
  • Option 2-3 : ffmpeg : FFdecoder : 42.99 sec

Source code for the test (gist)

I don't know what magic is done inside the VideoCapture::read() method, but the stupid method (reading through the whole video and stopping when the target frame is found) might not be a bad idea if you need multiple frames from the video.
(If you need, say, just one or two frames from the video, then ffmpeg's select filter will be much faster.)
I'm desperate for a fast and reliable frame-reading function, so I would be really grateful if you could show me any better workarounds.
And thank you to @roninpawn and @abhiTronix for sharing the package!

@bml1g12

bml1g12 commented May 3, 2022

I thought that you concluded here that other libraries like imutils and camgears are faster when doing RGB conversion ("w/decode") but your library is the fastest out there when you only need YUV.

I've never developed on camgears, nor do I recall ever using imutils as a primary video-access solution. So: no. Those are not my conclusions you're citing. Those are YOUR conclusions from our collective benchmarking efforts.

From which in earnest, I wasn't able to conclude much because I never felt resolved in the discrepancies of each of our local results. Multiple tests had to be disabled on my machine to run the benchmarking suite without either a crash or inexplicably slow results across failing libraries. I also never managed to come to terms with the actual implementation of the simulated blocking methods meant to be at work, or in what way they were being evidenced in the results. Which was all confounded further in apparent disparities between the measurable real-time of each process and the frames per second reported by the results.

Simply: There were too many moving parts, with too many impenetrable caveats, multiple failures of function during testing, and what seemed a platform bias. Though maybe the platform bias was just my inexperience deploying that kind of application.

All that said: To date, I have yet to myself conduct the simple (total # of frames / time to completion) benchmarks of the libraries you recommended. Which says nothing - speaking to the topic of this thread - of confirming the accuracy and speed of those libraries' seek methods.

Where both abhiTronix's deFFcode library, as well as my little 300-line FFmpeg_VideoStream script, employ FFmpeg to acquire a decoded stream of video: 'Seek' accuracy is assured, HMS positioning is lightning fast, while core access speed is inherited from the most trusted and complete open-source multimedia suite alive today.

I think these are all, of value. And worthy of sharing.

I definitely think your work is "worthy of sharing" and an excellent contribution. We're both looking to get the fastest and most accurate library out there.

Sorry if you've had difficulty reproducing that benchmarking repo; I've put a minimal and fully reproducible Google Colab here which shows that, when working with RGB, libraries like imutils can be faster than ffmpeg_videostream. I didn't make the imutils library, so I have no vested interest in it.

That's not to say this library is always the fastest; it really depends on the amount of CPU available, so an application that is I/O-bound will show different results than one that is CPU-bound. The video size itself affects whether it's CPU-bound, so if we use a smaller video file in a minimal benchmark, we'll get almost the same optimal video reading speed no matter which library we use.

Working with FFMPEG directly also gives a lot more flexibility - coming back to the topic of this thread, if we want better control
over accurate frame seeking or e.g. want to use non-RGB colour spaces, then a library like imutils or camgears would not be suitable unless we are happy to read the whole file, because both are just Python wrappers over the OpenCV code.

@abhiTronix

abhiTronix commented May 3, 2022

Sorry if you've had difficulty reproducing that benchmarking repo; I've put a minimal and fully reproducible Google Colab here which shows that, when working with RGB, libraries like imutils can be faster than ffmpeg_videostream. I didn't make the imutils library, so I have no vested interest in it.

@bml1g12 Sorry to break it to you, but imutils is never as fast as it appears, as it has a duplicate-frame problem and your benchmarks lack a sanity check. If you include a check that every frame is distinct and not empty, I'm sure your benchmarks will turn out to be flawed. I can't go into details, but we all know opencv-python itself is slower in raw performance than FFMPEG wrappers like ffmpeg-python, and there's no way it outputs more frames per second in real time. The multi-threading in imutils appears to be processing new frames, but it is actually cycling the same frame again and again. I have worked on both imutils and CamGear as a contributor, and I know why deffcode, as an FFMPEG wrapper, is better than CamGear or imutils for any video-processing purpose.

@mhtrinhLIC

mhtrinhLIC commented May 3, 2022

Please stop hijacking this issue. This issue is about OpenCV seeking to the wrong frame. Please use a different forum or issue to discuss your speed benchmarking; this issue is not about speed. People (like me) would not mind the speed as long as it seeks to the right frame.
You provided an alternative library that solves this issue, thank you. But stick to the issue subject.

@bml1g12

bml1g12 commented May 3, 2022

I agree, let's keep on topic here, but I'd love to discuss further with @abhiTronix elsewhere. (To address your concerns, I've updated the Colab to show that no duplicate frames occur with the imutils code.)

With regard to accurate frame seeking, I would argue benchmarking is still relevant: there's no point in accurate frame seeking if it is slower than reading the file serially (we know we can read the file in a serial, frame-by-frame manner without the set operation and get accurate frames).

Any Python ffmpeg wrapper that offers us options, like deffcode etc., seems like a great contribution to the field, allowing more control over how we read video in Python.

@brosenberg42

I tried this with OpenCV's VideoCapture::read() and found it quite fast.

When you need to seek, I have found calling VideoCapture::grab() to skip frames to be faster.
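A minimal sketch of that grab()-based skipping (the capture is assumed to be open and positioned; grab() advances the decoder without the colour conversion and copy that read() performs):

import cv2

def skip_then_read(cap, frames_to_skip):
    """Skip frames_to_skip frames with grab(), then decode and return the next frame."""
    for _ in range(frames_to_skip):
        if not cap.grab():
            return False, None  # hit end of stream while skipping
    return cap.read()

For example, skip_then_read(cap, 100) returns the frame 100 positions ahead of the current one.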

@nji9nji9
Author

nji9nji9 commented May 3, 2022

As the thread starter... may I add my ceterum censeo?

The main problem of the situation still isn't addressed by all these posts:

There is a (probably widely used) method in OpenCV that returns
incorrect results.

This should be addressed first of all in OpenCV.

1st option
Remove the method from OpenCV.
This would be better than the status quo, as it would alert all users of the method
that something is/was terribly wrong.

2nd option
Replace the method's code with the workaround we luckily know of
(reading the file serially;
maybe also add some words to the docs that the method is terribly slow now).

AFTER this first step has been done it is time to check alternatives.
(I suspect the error is in OpenCV's I/O interface, but where?)

So my kind question:
would someone PLEASE make this change to OpenCV's main development branch?
(I quit developing some years ago.)

@petered

petered commented Dec 21, 2022

It seems reliably seeking through video with OpenCV is a lost cause. Building on @Eugeny's excellent tool posted above, I have made an alternative that uses pyav.

Gist: https://gist.github.com/petered/db8e334c7aefdf367af1b11e6eefe733

Usage:

reader = VideoReader(path="path/to/video.mp4")
# Iterate in order
for frame in reader.iter_frames(frame_interval=(10, 20)):
    cv2.imshow('frame', frame.image)
    cv2.waitKey(1)
# Request individually
frame = reader.request_frame(20)  # Ask for frame 20 (0-indexed)
cv2.imshow('frame', frame.image)
cv2.waitKey(1)

@samsont

samsont commented Jan 23, 2023

This is a very ugly bug that has been open for more than 5 years. Any way to make it a higher priority? Thanks.

@staeff777

It seems that in my case, a conversion into the mkv container has helped:

ffmpeg -i video.mp4 -codec copy video.mkv

@den-run-ai

CAP_PROP_POS_MSEC is not working correctly when cap.grab() is used to skip frames instead of cap.read().

@aliases-deepest

As a temporary solution to this long-standing positioning error with cap.set(cv2.CAP_PROP_POS_FRAMES, fr_no), I found the following option for myself, which may be useful for someone else too.

Inside the read loop, every ~28 frames (depending on the FPS of the selected video) I set the position again. This decreases performance slightly, but gives exceptionally accurate frames on request, even on a video that is about 30 minutes long.
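A minimal sketch of that pattern (the re-sync interval of 28 is the value from the comment above and would need tuning per video):

import cv2

RESYNC_INTERVAL = 28  # depends on the FPS/GOP of the selected video

def read_with_resync(path):
    """Yield (frame_number, frame), re-stating the position every RESYNC_INTERVAL frames."""
    cap = cv2.VideoCapture(path)
    frame_no = 0
    while True:
        if frame_no % RESYNC_INTERVAL == 0:
            cap.set(cv2.CAP_PROP_POS_FRAMES, frame_no)
        ok, frame = cap.read()
        if not ok:
            break
        yield frame_no, frame
        frame_no += 1
    cap.release()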

@brosenberg42

It seems that in my case, a conversion into the mkv container has helped:

ffmpeg -i video.mp4 -codec copy video.mkv

The issue isn't the container. The default setting for ffmpeg is to output videos with a constant frame rate. Both mp4 and mkv files can have a variable frame rate. VideoCapture::set (CAP_PROP_POS_FRAMES, frameNumber) converts the frame number to a time using the video's average frame rate. For videos with a constant frame rate, the calculation works out. It doesn't work out when a video uses a variable frame rate.

Re-encoding a video can be slow. Depending on the use case, you are probably better off calling VideoCapture::grab in a loop to get to the frame you want.
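Roughly, the conversion behaves like the sketch below (a simplification for illustration, not the backend's actual code), which is why an average frame rate only works for constant-frame-rate files:

import cv2

def approx_seek_target_msec(path, frame_number):
    """Approximate timestamp (ms) a frame-number seek resolves to via the average FPS."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS)  # average frame rate reported by the container
    cap.release()
    return frame_number * 1000.0 / fps  # only matches reality if the frame rate is constant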

@mattans

mattans commented May 10, 2023

@alalek is there an ETA for a bug fix? Thanks.

@ChernyshovYuriy

Any links to examples of video with a constant frame rate to actually test that OpenCV works well in this case?

@Valenciano118

The issue isn't the container. The default setting for ffmpeg is to output videos with a constant frame rate. Both mp4 and mkv files can have a variable frame rate. VideoCapture::set (CAP_PROP_POS_FRAMES, frameNumber) converts the frame number to a time using the video's average frame rate. For videos with a constant frame rate, the calculation works out. It doesn't work out when a video uses a variable frame rate.

Re-encoding a video can be slow. Depending on the use case, you are probably better off calling VideoCapture::grab in a loop to get to the frame you want.

I've tried both mp4 and mkv, and I've seen mkv working decently while mp4 isn't.

Also, they're not re-encoding the video; they are copying the streams and just changing the container, so it's a fast operation that is usually limited by I/O.

@Eyshika

Eyshika commented Mar 21, 2024

Any fix for this yet? I see a very weird issue: VideoCapture::set (CAP_PROP_POS_FRAMES, frameNumber) gives me the right frame when testing locally, but it doesn't on my VM. It's always a few seconds or milliseconds behind.

@petered

petered commented Mar 21, 2024

As far as I know, no - and I wouldn't bet on it happening. The best bets as far as I know are pyav (works (mostly) but is slow) or decord (fast but uses too much memory for longer videos). If anyone comes up with a good solution that reliably and quickly seeks to specific video frames ... we're all ears.

@zzhuolun

zzhuolun commented May 23, 2024

I encountered the same problem. Randomly accessing a frame with VideoCapture::set(CAP_PROP_POS_FRAMES, frameNumber) is not reliable after a certain frame (in my case after frame 136): VideoCapture::set(CAP_PROP_POS_FRAMES, 136) actually accesses frame 137.

import cv2
import numpy as np
from tqdm import tqdm


def seq_acc(video_path: str, frame_ids: list) -> list:
    # Sequential access: read forward (grab + read) to each requested frame index.
    video = cv2.VideoCapture(video_path)
    frames_seq = []
    frame_id_curr = 0
    for frame_id in tqdm(frame_ids):
        while frame_id != frame_id_curr:
            video.grab()
            frame_id_curr += 1
        success, img = video.read()
        assert success
        frames_seq.append(img)
        frame_id_curr = frame_id + 1
    return frames_seq


def rand_acc_cv(video_path: str, frame_ids: list) -> list:
    # Random access: jump to each frame with set(CAP_PROP_POS_FRAMES), then read it.
    video = cv2.VideoCapture(video_path)
    frames_rand = []
    for frame_id in tqdm(frame_ids):
        video.set(cv2.CAP_PROP_POS_FRAMES, frame_id)
        success, img = video.read()
        assert success
        frames_rand.append(img)
    return frames_rand


video_path = 'DJI_0001.MP4'
frame_ids = range(130, 140)
frames_seq = seq_acc(video_path, frame_ids)
frames_rand = rand_acc_cv(video_path, frame_ids)

for frame_id, frame_seq, frame_rand in zip(frame_ids, frames_seq, frames_rand):
    print(frame_id, np.allclose(frame_seq, frame_rand))

frame_ids = [135, 136, 137]
frame_seq = seq_acc(video_path, frame_ids)
frame_rand = rand_acc_cv(video_path, frame_ids)

print('seq 135 == rand 135', np.allclose(frame_seq[0], frame_rand[0]))
print('seq 136 == rand 136',np.allclose(frame_seq[1], frame_rand[1]))
print('seq 137 == rand 136', np.allclose(frame_seq[2], frame_rand[1]))

Output:

130 True
131 True
132 True
133 True
134 True
135 True
136 False
137 False
138 False
139 False

seq 135 == rand 135 True
seq 136 == rand 136 False
seq 137 == rand 136 True

imageio.v3 doesn't have this problem.
