I have installed DyNet with GPU support as described in the docs, and --dynet-mem is set in the train_single-source.sh file. Even so, I get the error below. The full traceback follows:
[dynet] Device Number: 2
[dynet] Device name: GeForce GTX 1080 Ti
[dynet] Memory Clock Rate (KHz): 5505000
[dynet] Memory Bus Width (bits): 352
[dynet] Peak Memory Bandwidth (GB/s): 484.44
[dynet] Memory Free (GB): 11.5464/11.7215
[dynet] Device(s) selected: 2
[dynet] random seed: 2652333402
[dynet] using autobatching
[dynet] allocating memory: 6000MB
[dynet] memory allocation done.
Param, load_model: None
Traceback (most recent call last):
File "/mnt/data/souvik/sanskrit/ocr-post-correction/postcorrection/multisource_wrapper.py", line 65, in <module>
pretrainer = PretrainHandler(
File "/mnt/data/souvik/sanskrit/ocr-post-correction/postcorrection/pretrain_handler.py", line 81, in __init__
self.pretrain_model(pretrain_src1, pretrain_src2, pretrain_tgt, epochs)
File "/mnt/data/souvik/sanskrit/ocr-post-correction/postcorrection/pretrain_handler.py", line 88, in pretrain_model
self.seq2seq_trainer.train(
File "/mnt/data/souvik/sanskrit/ocr-post-correction/postcorrection/seq2seq_trainer.py", line 55, in train
batch_loss.backward()
File "_dynet.pyx", line 823, in _dynet.Expression.backward
File "_dynet.pyx", line 842, in _dynet.Expression.backward
ValueError: Dynet does not support both dynamic increasing of memory pool size, and automatic batching or memory checkpointing. If you want to use automatic batching or checkpointing, please pre-allocate enough memory using the --dynet-mem command line option (details http://dynet.readthedocs.io/en/latest/commandline.html).
Hi! It looks like you need to allocate more memory in --dynet-mem since your dataset is probably larger than our sample dataset. You can change it in the train_single-source.sh file to a larger value (e.g., 12000 MB).
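A minimal sketch of that change, assuming the script invokes the wrapper roughly as below (the exact invocation and other arguments in your copy of train_single-source.sh may differ; --dynet-mem and --dynet-autobatch are the standard DyNet command-line options, everything else here is illustrative):

```shell
# train_single-source.sh: raise the pre-allocated DyNet memory pool so that
# autobatching never needs to grow it dynamically (which triggers the error).
python postcorrection/multisource_wrapper.py \
    --dynet-mem 12000 \
    --dynet-autobatch 1 \
    "$@"  # remaining script arguments unchanged
```

Note that the value is in megabytes and must fit within the free GPU memory reported at startup (11.5 GB in the log above).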
Thanks. But I don't have 12 GB of GPU memory; the most I can afford is ~10 GB. Is there a way to change the batch size or another hyperparameter so the script can run? I have 25k samples in the pretraining dataset.
I ran all my experiments on CPU and it wasn't too slow -- you could try that. You can also remove the "--dynet-autobatching" flag and try it without autobatching.
You can also adjust the model size hyperparameters to make a smaller model.
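The suggestions above can be sketched as command-line variants. Only --dynet-devices and --dynet-autobatch are standard DyNet flags; the script path is taken from the traceback, and the model-size options at the end are hypothetical placeholders, since the actual hyperparameter names accepted by this repository may differ:

```shell
# Option A: run on CPU, where the memory pool is limited by host RAM
# rather than GPU memory.
python postcorrection/multisource_wrapper.py --dynet-devices CPU "$@"

# Option B: keep the GPU but disable autobatching, which removes the
# requirement that the whole pool be pre-allocated up front.
python postcorrection/multisource_wrapper.py --dynet-mem 6000 --dynet-autobatch 0 "$@"

# Option C: shrink the model so it fits in a smaller pool
# (flag names are illustrative, not the repository's actual options).
python postcorrection/multisource_wrapper.py --dynet-mem 8000 \
    --hidden-dim 256 --embedding-dim 64 "$@"
```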