I think the default `batch_size_val` should be the same as `ngpus_per_node`; otherwise you get an error:
`ValueError: batch_size should be a positive integeral value, but got batch_size=0`
Thanks for the issue. I have updated the default value of `batch_size_val` in the config files.
We follow the previous DeepLab in Caffe, where the crop size needs to be 8*n+1 (this is due to the implementation of the `interp` layer, which needs to align corners). 8*n is also fine now with PyTorch's `interpolate` function.
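The 8*n+1 constraint can be sketched with the corner-aligned resize arithmetic (a sketch assuming a stride-8 backbone, as in DeepLab; `feat_size` and `upsampled_size` are illustrative helpers, not code from this repo):

```python
# Caffe's interp layer aligns corners: a stride-8 backbone maps a crop of
# size c to a feature map of size (c - 1) // 8 + 1, and corner-aligned
# upsampling restores (f - 1) * 8 + 1 pixels. The round trip is lossless
# only when c = 8*n + 1, e.g. c = 473 = 8*59 + 1.
def feat_size(c, stride=8):
    return (c - 1) // stride + 1

def upsampled_size(f, stride=8):
    return (f - 1) * stride + 1

for c in (225, 473, 713):  # all of the form 8*n + 1
    assert upsampled_size(feat_size(c)) == c

print(feat_size(473), upsampled_size(feat_size(473)))  # 60 473
```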
At the time this repo was developed, sync BN was not included in official PyTorch. You can use PyTorch 1.1 or newer, which incorporates sync BN.
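For reference, on PyTorch 1.1+ an existing model's BN layers can be swapped in place before wrapping it in `DistributedDataParallel` (a minimal sketch; the toy `Sequential` model below is a stand-in, not this repo's network):

```python
import torch.nn as nn

# Minimal sketch (PyTorch >= 1.1): replace every BatchNorm layer with
# SyncBatchNorm. The converted model would then normally be wrapped in
# DistributedDataParallel inside an initialized process group.
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
model = nn.SyncBatchNorm.convert_sync_batchnorm(model)
print(type(model[1]).__name__)  # SyncBatchNorm
```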
Hi, hs.
In your code,
```python
if args.distributed:
    torch.cuda.set_device(gpu)
    args.batch_size = int(args.batch_size / ngpus_per_node)
    args.batch_size_val = int(args.batch_size_val / ngpus_per_node)
    args.workers = int(args.workers / ngpus_per_node)
```
I think the default `batch_size_val` should be the same as `ngpus_per_node`; otherwise you get an error:
`ValueError: batch_size should be a positive integeral value, but got batch_size=0`
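The integer division above yields 0 whenever the per-node total is smaller than the GPU count, which is exactly what triggers the `DataLoader` error. One defensive fix (a hypothetical `per_gpu` helper, not this repo's code) is to floor the per-GPU value at 1:

```python
def per_gpu(total, ngpus_per_node):
    # Floor at 1 so a per-node total smaller than the GPU count
    # still gives each GPU a valid (positive) batch size.
    return max(1, total // ngpus_per_node)

print(per_gpu(1, 4))   # 1 (int(1 / 4) gives 0 -> ValueError in DataLoader)
print(per_gpu(16, 4))  # 4
```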