One of the low-hanging fruits in memory optimization is in-place operators. If you look at the proto message of ResNet, it contains many in-place operations: conv's output is modified in place by batch_norm, scale, and Relu.
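For reference, this is roughly what the in-place pattern looks like in Caffe-style prototxt (layer names here are illustrative, not taken from the actual ResNet definition): the BatchNorm, Scale, and ReLU layers declare the same blob for `bottom` and `top`, so they overwrite the convolution output instead of allocating new blobs.

```
# Illustrative sketch; layer names are hypothetical.
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
}
# The next three layers reuse "conv1" as both bottom and top,
# so each one modifies the conv output in place.
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "scale1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "conv1"
  top: "conv1"
}
```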
With a modification of only a few lines of code, my experiments showed that this can save up to 40% of the memory:
|                             | VGG  | ResNet50 |
|-----------------------------|------|----------|
| Original                    | 3.3G | 1.5G     |
| In-place BatchNorm          | 2.9G | 1.2G     |
| In-place BN and activation  | 2.5G | 0.9G     |
| memopt                      | 2.4G | 0.94G    |
| memopt + in-place           | 2.0G | 0.6G     |