OSError: [Errno 12] Cannot allocate memory when processing Taichi Dataset #7

Closed
kenmbkr opened this issue May 11, 2020 · 2 comments

@kenmbkr

kenmbkr commented May 11, 2020

I encountered the following memory error while processing the Taichi dataset. The same script works for VoxCeleb, and the error does not occur on the first iteration but arbitrarily after 200+ iterations. Any idea what causes the problem? For example, a particular video, the number of workers, ... Would wrapping mimsave in a try-except block (see the sketch after the traceback below) or using only 1 worker fix the problem?

python load_videos.py --metadata taichi-metadata.csv --format .mp4 --out_folder taichi --workers 8 --youtube youtube-dl

/path/anaconda3/envs/first-order-model/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
/path/anaconda3/envs/first-order-model/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.dtype size changed, may indicate binary incompatibility. Expected 96, got 88
  return f(*args, **kwds)
0it [00:00, ?it/s]Can not load video 3M5VGsUtw_Q, broken link
2it [00:29, 10.95s/it]Can not load video xmwGBXYofEE, broken link
...
267it [1:23:47, 21.82s/it]Can not load video vNfhp02w9s0, broken link
279it [1:29:16, 15.30s/it]multiprocessing.pool.RemoteTraceback: 
"""
Traceback (most recent call last):
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/multiprocessing/pool.py", line 121, in worker
    result = (True, func(*args, **kwds))
  File "load_videos.py", line 75, in run
    save(os.path.join(args.out_folder, partition, path), entry['frames'], args.format)
  File "/path/video-preprocessing/util.py", line 118, in save
    imageio.mimsave(path, frames)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio/core/functions.py", line 357, in mimwrite
    writer.append_data(im)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio/core/format.py", line 492, in append_data
    return self._append_data(im, total_meta)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio/plugins/ffmpeg.py", line 558, in _append_data
    self._initialize()
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio/plugins/ffmpeg.py", line 616, in _initialize
    self._write_gen.send(None)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/imageio_ffmpeg/_io.py", line 379, in write_frames
    cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=None, shell=ISWIN
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/subprocess.py", line 1482, in _execute_child
    restore_signals, start_new_session, preexec_fn)
OSError: [Errno 12] Cannot allocate memory
"""

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "load_videos.py", line 103, in <module>
    for chunks_data in tqdm(pool.imap_unordered(run, zip(video_ids, args_list))):
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/site-packages/tqdm/std.py", line 1108, in __iter__
    for obj in iterable:
  File "/path/anaconda3/envs/first-order-model/lib/python3.7/multiprocessing/pool.py", line 748, in next
    raise value
OSError: [Errno 12] Cannot allocate memory
279it [1:41:36, 21.85s/it]
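
The try-except guard I have in mind would be roughly the sketch below, wrapping the save helper from util.py that appears in the traceback (signature taken from the traceback; the real function may of course differ):

import warnings
import imageio

def save(path, frames, format):
    # sketch only: all decoded frames are already buffered in RAM at this point
    try:
        imageio.mimsave(path, frames)
    except OSError as e:
        # skip the offending clip instead of killing the whole worker pool
        warnings.warn("Skipping %s: %s" % (path, e))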
@AliaksandrSiarohin
Owner

All the frames for a particular block are saved to RAM first, so maybe the videos have a high resolution and the amount of RAM on your server is low. If you decrease the number of workers, the amount of RAM used will also decrease.
So you could try using 4 workers.
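
As a rough illustration (not the actual load_videos.py code), the pool setup visible in the traceback above looks something like this; lowering the pool size directly lowers how many videos are buffered in RAM at once:

from multiprocessing import Pool
from tqdm import tqdm

# `run`, `video_ids` and `args_list` are the objects from load_videos.py
# shown in the traceback above; only the pool size matters here.
num_workers = 4  # each worker buffers every frame of its current video in RAM,
                 # so peak memory grows roughly with the number of workers

pool = Pool(processes=num_workers)
for chunks_data in tqdm(pool.imap_unordered(run, zip(video_ids, args_list))):
    pass
pool.close()
pool.join()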

@kenmbkr
Author

kenmbkr commented May 11, 2020

I used 4 workers and it works for me. Thank you.
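
For reference, this is the original command with only the worker count lowered:

python load_videos.py --metadata taichi-metadata.csv --format .mp4 --out_folder taichi --workers 4 --youtube youtube-dl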

@kenmbkr kenmbkr closed this as completed May 11, 2020