Thanks for making the code available for such an interesting work.
I tried to train the Relation Prediction model on GPUs with 32 GB of memory, but it led to a CUDA out-of-memory error. I also tried training with vgg16(pretrained=True) but ran into the same problem. So I wonder what kind of GPU you used for your experiments and how you managed memory during training.
The problem was solved by adding the following to run.py:

import tensorflow as tf

# Allocate GPU memory on demand instead of reserving it all up front.
gpus = tf.config.experimental.list_physical_devices('GPU')
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
We used an NVIDIA RTX 3080, or sometimes an NVIDIA A10, for training. Reducing train_batch_size usually resolves the out-of-memory error. I am glad you found a solution that worked for you; since the memory-growth setting fixed it, your error might be related to a different version of TensorFlow.
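If lowering train_batch_size hurts convergence, gradient accumulation can keep the effective batch size while cutting peak memory. This is a framework-agnostic sketch of the idea, not code from this repository: the names (accumulate_gradients, grad_fn) are hypothetical, and grad_fn stands in for whatever computes a micro-batch gradient in the actual training loop.

```python
def accumulate_gradients(samples, grad_fn, micro_batch_size):
    """Average micro-batch gradients so the result matches a single
    large-batch gradient, while only one micro-batch is in memory."""
    total = 0.0
    for start in range(0, len(samples), micro_batch_size):
        micro = samples[start:start + micro_batch_size]
        # Weight each micro-batch gradient by its share of the full batch.
        total += grad_fn(micro) * (len(micro) / len(samples))
    return total

# Toy gradient: the mean of the batch (stands in for an autograd call).
grad_fn = lambda batch: sum(batch) / len(batch)

full = grad_fn([1.0, 2.0, 3.0, 4.0])                       # one big batch
accum = accumulate_gradients([1.0, 2.0, 3.0, 4.0], grad_fn, 2)  # two micro-batches
```

Here full and accum come out equal, which is the point: the update is unchanged, only the peak memory per step shrinks.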