I used to use a small batch size because of limited GPU memory.
With the help of iter_size, I could increase the number of images used to compute the gradient.
But it is slower than the original setup without iter_size.
What causes this?
Thanks.
I assume you mean that it is slower for the same total number of images? A larger iter_size (with a correspondingly smaller batch_size) has more overhead from synchronizing CUDA threads and launching CUDA kernels: rather than processing all the images in one batch, the solver makes several passes through the data and accumulates the gradients.
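To illustrate why the result is the same even though the work is split up, here is a minimal numpy sketch of the accumulation that iter_size performs. The linear model, the MSE loss, and all array shapes are illustrative assumptions, not Caffe code: the point is that averaging the gradients of several micro-batches equals the gradient of the full batch, while requiring several separate passes (and hence more kernel launches on a GPU).

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))   # hypothetical features for 32 images
y = rng.normal(size=32)
w = np.zeros(4)

def grad(Xb, yb, w):
    # dL/dw for L = mean((Xb @ w - yb)^2)
    return 2.0 * Xb.T @ (Xb @ w - yb) / len(yb)

# One pass over the full batch (batch_size=32, iter_size=1).
g_full = grad(X, y, w)

# Four passes over micro-batches of 8 (batch_size=8, iter_size=4),
# accumulating the per-pass gradients and averaging at the end --
# this mirrors what iter_size does in the solver.
iter_size = 4
g_accum = np.zeros_like(w)
for Xb, yb in zip(np.split(X, iter_size), np.split(y, iter_size)):
    g_accum += grad(Xb, yb, w)
g_accum /= iter_size

# Identical gradient, but computed in 4 passes instead of 1.
assert np.allclose(g_full, g_accum)
```

The mathematics is unchanged; only the scheduling differs, which is where the extra per-pass launch and synchronization overhead comes from.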
Please do not post usage, installation, or modeling questions, or other requests for help to Issues.
Use the caffe-users list instead. This helps developers maintain a clear, uncluttered, and efficient view of the state of Caffe.