OSError: [Errno 12] Cannot allocate memory #12

Closed
axiezai opened this issue Apr 30, 2020 · 5 comments
axiezai commented Apr 30, 2020

Hi, this looks awesome btw!

I tried running the sample.py command with my own song, but received the following error:

Level:2, Cond downsample:None, Raw to tokens:128, Sample length:1048576
0: Converting to fp16 params
Downloading from gce
Traceback (most recent call last):
  File "jukebox/sample.py", line 237, in <module>
    fire.Fire(run)
  File "/home/axiezai/miniconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 127, in Fire
    component_trace = _Fire(component, args, context, name)
  File "/home/axiezai/miniconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 366, in _Fire
    component, remaining_args)
  File "/home/axiezai/miniconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 542, in _CallCallable
    result = fn(*varargs, **kwargs)
  File "jukebox/sample.py", line 234, in run
    save_samples(model, device, hps, sample_hps)
  File "jukebox/sample.py", line 157, in save_samples
    vqvae, priors = make_model(model, device, hps)
  File "/media/rajlab/sachin_data_2/userdata/xihe/jukebox/jukebox/make_models.py", line 185, in make_model
    priors = [make_prior(setup_hparams(priors[level], dict()), vqvae, 'cpu') for level in levels]
  File "/media/rajlab/sachin_data_2/userdata/xihe/jukebox/jukebox/make_models.py", line 185, in <listcomp>
    priors = [make_prior(setup_hparams(priors[level], dict()), vqvae, 'cpu') for level in levels]
  File "/media/rajlab/sachin_data_2/userdata/xihe/jukebox/jukebox/make_models.py", line 169, in make_prior
    restore(hps, prior, hps.restore_prior)
  File "/media/rajlab/sachin_data_2/userdata/xihe/jukebox/jukebox/make_models.py", line 54, in restore
    checkpoint = load_checkpoint(checkpoint_path)
  File "/media/rajlab/sachin_data_2/userdata/xihe/jukebox/jukebox/make_models.py", line 34, in load_checkpoint
    download(gs_path, local_path)
  File "/media/rajlab/sachin_data_2/userdata/xihe/jukebox/jukebox/utils/gcs_utils.py", line 36, in download
    subprocess.call(args)
  File "/home/axiezai/miniconda3/envs/jukebox/lib/python3.7/subprocess.py", line 339, in call
    with Popen(*popenargs, **kwargs) as p:
  File "/home/axiezai/miniconda3/envs/jukebox/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/home/axiezai/miniconda3/envs/jukebox/lib/python3.7/subprocess.py", line 1482, in _execute_child
    restore_signals, start_new_session, preexec_fn)
OSError: [Errno 12] Cannot allocate memory

I did some googling, and this seems like a swap space issue? I checked and confirmed I had free swap space:

# free -h
              total        used        free      shared  buff/cache   available
Mem:            31G        615M         29G         16M        780M         29G
Swap:          236M         42M        194M
Thu Apr 30 14:50:26 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.78       Driver Version: 410.78       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN Xp            Off  | 00000000:42:00.0  On |                  N/A |
| 23%   34C    P8    18W / 250W |     76MiB / 12194MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1451      G   /usr/lib/xorg/Xorg                            73MiB |
+-----------------------------------------------------------------------------+

Is 194M not enough? Is there a minimum swap-space requirement that I'm not meeting, or is this memory error caused by something else?
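For what it's worth, Errno 12 here is raised while spawning a subprocess, not while loading tensors: gcs_utils.py shells out to download the checkpoint, and on Linux spawning a child fork()s the parent first. Depending on the kernel's overcommit settings, forking a process that already has several gigabytes mapped can fail with ENOMEM even when free shows plenty of available memory, and a small swap partition makes that more likely. A minimal sketch of a fork-free download that could stand in for the subprocess call, with a hypothetical URL and path in place of the real ones make_models.py passes in:

import os
import urllib.request

def download_in_process(url, local_path):
    # Stream the checkpoint to disk inside the current process,
    # so no fork (and no extra memory commitment) is needed.
    os.makedirs(os.path.dirname(local_path), exist_ok=True)
    with urllib.request.urlopen(url) as resp, open(local_path, "wb") as f:
        while True:
            chunk = resp.read(1 << 20)  # 1 MiB at a time
            if not chunk:
                break
            f.write(chunk)

# Hypothetical usage; the real GCS path comes from make_models.py.
# download_in_process("https://storage.googleapis.com/...",
#     os.path.expanduser("~/.cache/jukebox-assets/models/5b/prior_level_2.pth.tar"))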


sdtblck commented Apr 30, 2020

I'm also getting OOM errors when running it on Colab.

@kylemcdonald

Same here, with a 2080 Ti (11 GB). I made sure to select the GPU that isn't being used for the GUI.

(jukebox) $ nvidia-smi
Thu Apr 30 14:23:24 2020
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 435.21       Driver Version: 435.21       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce RTX 208...  Off  | 00000000:05:00.0  On |                  N/A |
| 30%   38C    P8     4W / 260W |    198MiB / 11016MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce RTX 208...  Off  | 00000000:09:00.0 Off |                  N/A |
| 30%   39C    P8    20W / 260W |      1MiB / 11019MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1129      G   /usr/lib/xorg/Xorg                            78MiB |
|    0      1305      G   /usr/bin/gnome-shell                         117MiB |
+-----------------------------------------------------------------------------+
(jukebox) $ CUDA_VISIBLE_DEVICES=1 python jukebox/sample.py --model=5b_lyrics --name=sample_5b --levels=3 --sample_length_in_seconds=20 --total_sample_length_in_seconds=180 --sr=44100 --n_samples=6 --hop_fraction=0.5,0.5,0.125
Using cuda True
{'name': 'sample_5b', 'levels': 3, 'sample_length_in_seconds': 20, 'total_sample_length_in_seconds': 180, 'sr': 44100, 'n_samples': 6, 'hop_fraction': (0.5, 0.5, 0.125)}
Setting sample length to 881920 (i.e. 19.998185941043083 seconds) to be multiple of 128
Downloading from gce
Restored from /home/kyle/.cache/jukebox-assets/models/5b/vqvae.pth.tar
0: Loading vqvae in eval mode
Conditioning on 1 above level(s)
Checkpointing convs
Checkpointing convs
Loading artist IDs from /home/kyle/Documents/jukebox/jukebox/jukebox/data/ids/v2_artist_ids.txt
Loading artist IDs from /home/kyle/Documents/jukebox/jukebox/jukebox/data/ids/v2_genre_ids.txt
Level:0, Cond downsample:4, Raw to tokens:8, Sample length:65536
Downloading from gce
Restored from /home/kyle/.cache/jukebox-assets/models/5b/prior_level_0.pth.tar
0: Loading prior in eval mode
Conditioning on 1 above level(s)
Checkpointing convs
Checkpointing convs
Loading artist IDs from /home/kyle/Documents/jukebox/jukebox/jukebox/data/ids/v2_artist_ids.txt
Loading artist IDs from /home/kyle/Documents/jukebox/jukebox/jukebox/data/ids/v2_genre_ids.txt
Level:1, Cond downsample:4, Raw to tokens:32, Sample length:262144
Downloading from gce
Restored from /home/kyle/.cache/jukebox-assets/models/5b/prior_level_1.pth.tar
0: Loading prior in eval mode
Loading artist IDs from /home/kyle/Documents/jukebox/jukebox/jukebox/data/ids/v2_artist_ids.txt
Loading artist IDs from /home/kyle/Documents/jukebox/jukebox/jukebox/data/ids/v2_genre_ids.txt
Level:2, Cond downsample:None, Raw to tokens:128, Sample length:1048576
0: Converting to fp16 params
Downloading from gce
Restored from /home/kyle/.cache/jukebox-assets/models/5b_lyrics/prior_level_2.pth.tar
0: Loading prior in eval mode
Traceback (most recent call last):
  File "jukebox/sample.py", line 237, in <module>
    fire.Fire(run)
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 127, in Fire
    component_trace = _Fire(component, args, context, name)
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 366, in _Fire
    component, remaining_args)
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/fire/core.py", line 542, in _CallCallable
    result = fn(*varargs, **kwargs)
  File "jukebox/sample.py", line 234, in run
    save_samples(model, device, hps, sample_hps)
  File "jukebox/sample.py", line 215, in save_samples
    ancestral_sample(labels, sampling_kwargs, priors, hps)
  File "jukebox/sample.py", line 123, in ancestral_sample
    zs = _sample(zs, labels, sampling_kwargs, priors, sample_levels, hps)
  File "jukebox/sample.py", line 94, in _sample
    prior.cuda()
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 304, in cuda
    return self._apply(lambda t: t.cuda(device))
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 201, in _apply
    module._apply(fn)
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 201, in _apply
    module._apply(fn)
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 201, in _apply
    module._apply(fn)
  [Previous line repeated 3 more times]
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 223, in _apply
    param_applied = fn(param)
  File "/home/kyle/anaconda3/envs/jukebox/lib/python3.7/site-packages/torch/nn/modules/module.py", line 304, in <lambda>
    return self._apply(lambda t: t.cuda(device))
RuntimeError: CUDA out of memory. Tried to allocate 34.00 MiB (GPU 0; 10.76 GiB total capacity; 9.87 GiB already allocated; 2.62 MiB free; 10.03 GiB reserved in total by PyTorch)

@prafullasd
Collaborator

For low-GPU-memory environments, try passing a lower max_batch_size / n_samples to sample.py, or use the 1b_lyrics model instead of 5b_lyrics.
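For example, adapting the command above (the values here are illustrative, not tuned recommendations):

(jukebox) $ python jukebox/sample.py --model=1b_lyrics --name=sample_1b --levels=3 --sample_length_in_seconds=20 --total_sample_length_in_seconds=180 --sr=44100 --n_samples=2 --max_batch_size=2 --hop_fraction=0.5,0.5,0.125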

@prafullasd
Collaborator

I'm not sure about the swap, but it looks like it's failing while downloading the model. You'll need about 2 GB each for the upsamplers and the 1b_lyrics model, and 11 GB for the 5b_lyrics model. Maybe try changing the path it downloads to? In gcs_utils.py we currently download to .cache, but you could try another location.
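If the default location is the problem (say, a small home partition), one possible workaround is to keep the ~/.cache/jukebox-assets path that appears in the logs above but point it at a larger disk via a symlink; the target path below is just an example:

import os

cache = os.path.expanduser("~/.cache/jukebox-assets")
target = "/mnt/bigdisk/jukebox-assets"  # hypothetical larger-disk location

os.makedirs(target, exist_ok=True)
# Leave a symlink at the default path so the downloader still finds it.
if not os.path.exists(cache) and not os.path.islink(cache):
    os.symlink(target, cache)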

@axiezai
Copy link
Author

axiezai commented Apr 30, 2020

@prafullasd thank you for the response. My swap seems fine; the problem was low GPU memory. Switching to the 1b_lyrics model worked, so it looks like I need to find a balance between n_samples and model size. Cheers!

@axiezai axiezai closed this as completed Apr 30, 2020
ndettmer added a commit to drduda/jukebox that referenced this issue Feb 16, 2022