Iterating 4K GoPro video using python API, results in huge memory leaks and process killed #208
Same here. If my app does not consume frames, the memory usage increases until a segfault or EOF. The problem appears with ctx=cpu(0) but not with ctx=gpu(0). Steps to reproduce: on a 64GB system
Having the same issue here with the PyTorch bridge.
Is that with the latest version? I've been using an older version (0.4.1) for a long time with no issues, but my videos are typically rather short (<30 seconds).
On Windows, version 0.6.0 from PyPI (no crash, but it eats up to 64GB of memory; it would surely eat more if it could!). How does your memory usage increase if you do not consume the decoded frames, or consume them too slowly?
Ah, that's a big video :)
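One way to actually measure the growth that the question above asks about is to watch the process's peak resident set size while frames accumulate. This is a stdlib-only sketch: the `bytearray` allocations are a stand-in for decord's internally buffered frames, not decord's real buffer.

```python
import resource

def peak_rss_kib() -> int:
    """Peak resident set size of this process (KiB on Linux, bytes on macOS)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

before = peak_rss_kib()
# Stand-in for ~64 decoded frames being buffered and never consumed.
buffered = [bytearray(1024 * 1024) for _ in range(64)]
after = peak_rss_kib()
print(f"peak RSS grew by roughly {after - before} KiB while frames were buffered")
```

Sampling `ru_maxrss` before and after the decode loop (or periodically inside it) makes the "consume too slowly" case visible without any external tooling.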
It seems that the problem can be solved by setting the environment variable
I have the same issue. If I leave a VideoReader instance idle, it just continues to consume memory; I assume it is pre-reading and caching. If I seek elsewhere in the video (in particular, backwards), the memory drops quickly. I tried setting the above flag and it didn't change anything: it just consumes memory until a segfault.
Building on @ashwhall's idea, I'm getting consistent results by always doing

```python
import decord

class VideoReaderWrapper(decord.VideoReader):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.seek(0)

    def __getitem__(self, key):
        frames = super().__getitem__(key)
        self.seek(0)
        return frames

    # and similarly for the other methods I need
```

The only thing I still have to be careful with is when running time-consuming methods like
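The reset-after-access pattern in that wrapper can be demonstrated without decord installed. `FakeReader` below is a hypothetical stand-in for `decord.VideoReader` whose `buffered` counter models frames held by the internal decoder buffer; the wrapper shows why calling `seek(0)` after every access keeps that buffer empty.

```python
class FakeReader:
    """Stand-in for decord.VideoReader: pretends to buffer frames until seek(0)."""
    def __init__(self):
        self.buffered = 0

    def __getitem__(self, key):
        self.buffered += 1        # decoding leaves a frame in the internal buffer
        return f"frame-{key}"

    def seek(self, pos):
        self.buffered = 0         # seeking releases the internal buffer

class ReaderWrapper(FakeReader):
    """Same shape as VideoReaderWrapper above: reset after every access."""
    def __getitem__(self, key):
        frame = super().__getitem__(key)
        self.seek(0)
        return frame

r = ReaderWrapper()
frames = [r[i] for i in range(100)]
print(r.buffered)  # → 0, because every read is followed by seek(0)
```

Without the `seek(0)` in `__getitem__`, `buffered` would reach 100 after the loop, which mirrors the unbounded growth reported in this thread.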
I have the same issue of memory filling up when debugging in PyCharm. Furthermore, the issue also persists when pausing via However, doing
I warn everyone here that the I confirmed this with a script to dump out individual frames from a video like:

```python
from decord import VideoReader
from PIL import Image

video_reader = VideoReader(video_file)
for f in range(len(video_reader)):
    frame = video_reader[f].asnumpy()
    video_reader.seek(0)
    image = Image.fromarray(frame)
    image.save(f'{f:04d}.jpg')
```

Then I put the image sequence back together with ffmpeg for visual inspection. Unfortunately, the memory issue and the side effect of the
I was just bitten by this bug. I see it's been over a year since it was originally reported. Is anyone looking at this? It's easy to reproduce, but if more information is needed, please let me know. Happy to help.
What's your decord version? I tried this code on decord
I am doing basic usage: instantiate the reader, read frames, skip frames. Five minutes into the video, the process memory allocation is already at 5GB; by minute 10 it is up to 14GB. ImageIO needs less than 100MB for the same file (with 30% slower performance).

```python
from decord import VideoReader, cpu

class DecordVideoIterator:
    def __init__(self, videoPathName):
        self.videoReader = VideoReader(videoPathName, ctx=cpu(0))
        self.currentIndex = 0
```

Substituting this iterator with the ImageIO Python library, everything works fine. An object tracker and tracemalloc show no Python-level leaks that I could identify, so the leaks must be in the native code.
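The reason tracemalloc comes up empty here is that it only sees allocations made through Python's memory allocator; memory malloc'd inside a C extension such as decord's decoder is invisible to it. A minimal sketch of the kind of check described above:

```python
import tracemalloc

tracemalloc.start()
snap_before = tracemalloc.take_snapshot()

# A Python-level allocation: tracemalloc sees this...
leak = [bytes(1024) for _ in range(1000)]
# ...but a leak inside a C extension (e.g. a native decoder) would NOT appear.

snap_after = tracemalloc.take_snapshot()
top = snap_after.compare_to(snap_before, 'lineno')
grew = sum(stat.size_diff for stat in top)
print(f"Python-level growth tracemalloc can see: {grew} bytes")
tracemalloc.stop()
```

So a process whose RSS climbs by gigabytes while tracemalloc reports almost nothing is strong evidence of a native-side leak, which matches the conclusion above.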