I am training a ResNet-50 with a large iter_size, where each batch has 10 images. This configuration uses almost all of my GPU memory in Caffe, so I cannot set a larger batch_size. Although a large iter_size lets me train ResNet-50, a batch of only 10 images hurts batch normalization. Are there any ideas or examples for reducing memory cost in Caffe, so that I can use batch normalization in a large deep network like ResNet-50? Thanks!
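To make the problem concrete, here is a small numpy sketch (not Caffe code; the sizes are hypothetical) of why iter_size does not help batch normalization: iter_size accumulates *gradients* across micro-batches, but the BN layer still computes its mean and variance over each micro-batch of 10 images, so the statistics stay as noisy as a batch of 10.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: iter_size = 4 micro-batches of 10 images, 3 channels.
iter_size, micro_batch, channels = 4, 10, 3
data = rng.normal(loc=2.0, scale=5.0, size=(iter_size * micro_batch, channels))

# Statistics the network would see with a true batch of 40:
full_mean = data.mean(axis=0)

# Statistics batch norm actually uses under iter_size: one estimate per
# micro-batch of 10, each noisier than the full-batch estimate.
micro_means = data.reshape(iter_size, micro_batch, channels).mean(axis=1)

print("full-batch mean:", full_mean)
print("per-micro-batch means:", micro_means)   # scattered around full_mean
print("spread of micro-batch means:", micro_means.std(axis=0))
```

The spread printed at the end is the extra noise each forward pass sees; averaging gradients over iter_size iterations cannot remove it, because each micro-batch was already normalized with its own noisy statistics.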
This question is better suited for the Caffe mailing list, since this is a fundamental issue with batch normalization rather than with Caffe's implementation. You could use multiple GPUs to increase the batch size. The Caffe team gets bombarded with usage-related issues, which makes it hard for them to address actual bugs.
Please do not post usage, installation, or modeling questions, or other requests for help to Issues.
Use the caffe-users list instead. This helps developers maintain a clear, uncluttered, and efficient view of the state of Caffe.
In my opinion, this may not be a bug but rather a limitation of iter_size combined with batch_norm_layer. If you use multiple GPUs to increase the batch size, say 10 images on each of GPUs 0 and 1, then in the batch normalization forward pass GPU 0 computes its batch mean and must wait for GPU 1 to compute its batch mean; the two then communicate to obtain the total batch mean (perhaps computed on the CPU and transferred back to the GPUs). If this is the case, I think the synchronization overhead is a bit heavy.
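The cross-GPU synchronization described above can be sketched as follows (a minimal host-side simulation, not Caffe's actual implementation; the device count and shapes are assumptions). Note that only two small per-channel vectors need to cross devices, not the full activations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated activations: 10 images of 3 channels on each of 2 "GPUs".
per_gpu = [rng.normal(loc=1.0, scale=2.0, size=(10, 3)) for _ in range(2)]

# Each device computes cheap local partial sums (one vector per channel)...
counts = [x.shape[0] for x in per_gpu]
sums = [x.sum(axis=0) for x in per_gpu]
sq_sums = [(x ** 2).sum(axis=0) for x in per_gpu]

# ...then an all-reduce (simulated here on the host) combines them into
# statistics of the full batch of 20.
n = sum(counts)
global_mean = sum(sums) / n
global_var = sum(sq_sums) / n - global_mean ** 2

# Every device normalizes with the same global statistics.
eps = 1e-5
normalized = [(x - global_mean) / np.sqrt(global_var + eps) for x in per_gpu]

# Sanity check: the combined output has ~zero mean and ~unit variance.
allx = np.concatenate(normalized)
print(allx.mean(axis=0), allx.var(axis=0))
```

So the communication volume is tiny (sums and squared sums per channel); the real cost is the barrier itself, since every BN layer's forward pass must wait for the slowest device before it can proceed.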