RuntimeError: CUDA error: out of memory #19
Closed
Hi @albertchristian92, how did you solve this problem? My solution was to decrease the
Hi, thank you for your work. I am interested in this work, but when I tried to start training your code using Docker, I ran into `RuntimeError: CUDA error: out of memory`, as shown here:
I am using multiple GeForce GTX 1080 GPUs, as follows:
Here is how I run your code:

```
python3 main.py \
    ddd \
    --exp_id centerfusion \
    --shuffle_train \
    --train_split mini_train \
    --val_split mini_val \
    --val_intervals 1 \
    --run_dataset_eval \
    --nuscenes_att \
    --velocity \
    --batch_size 4 \
    --lr 2.5e-4 \
    --num_epochs 60 \
    --lr_step 50 \
    --save_point 20,40,50 \
    --gpus 0,2,3 \
    --not_rand_crop \
    --flip 0.5 \
    --shift 0.1 \
    --pointcloud \
    --radar_sweeps 6 \
    --pc_z_offset 0.0 \
    --pillar_dims 1.0,0.2,0.2 \
    --max_pc_dist 60.0 \
    --num_workers 0 \
    --load_model ../models/centerfusion_e60.pth
```
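Editor's note: a common way to reduce GPU memory pressure with a command like the one above is to lower `--batch_size` and `--radar_sweeps`, and to restrict the process to GPUs that actually have free memory via `CUDA_VISIBLE_DEVICES`. The following is a hypothetical lower-memory variant, not verified against this repository; the specific flag values are illustrative assumptions.

```shell
# Hypothetical lower-memory variant (illustration only):
# smaller batch, fewer radar sweeps, and only free GPUs visible.
# With CUDA_VISIBLE_DEVICES=0,2,3 the visible devices are
# renumbered 0,1,2 inside the process.
CUDA_VISIBLE_DEVICES=0,2,3 python3 main.py \
    ddd \
    --exp_id centerfusion \
    --batch_size 2 \
    --radar_sweeps 3 \
    --gpus 0,1,2 \
    --load_model ../models/centerfusion_e60.pth
```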
Please give any suggestions regarding this issue. Thank you very much.
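Editor's note: one generic way to work around CUDA OOM errors in PyTorch-style training is to halve the batch size whenever an out-of-memory `RuntimeError` is raised, until a run succeeds. The sketch below is not from this repository; `train_step` and `fake_train_step` are hypothetical stand-ins for one training iteration at a given batch size.

```python
def find_workable_batch_size(train_step, start=4):
    """Halve the batch size on CUDA OOM until train_step succeeds.

    train_step is a hypothetical callable standing in for one
    training iteration; it should raise
    RuntimeError("CUDA error: out of memory") when the batch
    does not fit on the GPU.
    """
    batch_size = start
    while batch_size >= 1:
        try:
            train_step(batch_size)
            return batch_size  # this batch size fits in GPU memory
        except RuntimeError as err:
            if "out of memory" not in str(err):
                raise  # unrelated error: re-raise it
            batch_size //= 2  # OOM: retry with half the batch
    return 0  # even batch size 1 does not fit


# Simulated check: pretend only batches of 2 or fewer fit on the GPU.
def fake_train_step(batch_size):
    if batch_size > 2:
        raise RuntimeError("CUDA error: out of memory")

print(find_workable_batch_size(fake_train_step, start=4))  # prints 2
```

In a real training script the same pattern can wrap the forward/backward pass, optionally calling `torch.cuda.empty_cache()` between retries.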