Hi, first of all thanks for a very interesting paper.
I would like to know how long it took you to train the models. I'm trying to train ConvMixer-768/32 on 2x V100s, and one epoch takes ~3 hours, so a full run would be roughly 2 GPUs × 3 h/epoch × 300 epochs ≈ 1800 GPU-hours, which is insane. Even with 10 GPUs, a single experiment would take about a week to finish. Are my calculations correct?
I think you're correct that it takes approximately a week to train a ConvMixer on ImageNet-1k with 10 GPUs (we used RTX 8000s). ConvMixer-1536/20 took ~9 days, and ConvMixer-768/32 took ~8 days despite being trained for twice as many epochs (300 vs. 150). The model is indeed quite slow, but we are optimistic that low-level optimizations of large-kernel depthwise convolution could improve this; we are currently looking into it.
Another option is to use a larger patch size (e.g., patch_size=14), which is significantly faster but less accurate. We suspect the accuracy gap could be narrowed by spending some time on hyperparameter tuning.
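For reference, here is a minimal sketch of where patch_size enters the model, following the ConvMixer definition from the paper (the specific kernel_size and the 768/32 instantiation below are illustrative assumptions, not the exact training configuration). For 224×224 inputs, going from patch_size=7 to patch_size=14 shrinks the internal feature map from 32×32 to 16×16, so every depthwise and pointwise convolution processes roughly a quarter as many positions:

```python
import torch.nn as nn

class Residual(nn.Module):
    """Adds a skip connection around an arbitrary module."""
    def __init__(self, fn):
        super().__init__()
        self.fn = fn
    def forward(self, x):
        return self.fn(x) + x

def ConvMixer(dim, depth, kernel_size=9, patch_size=7, n_classes=1000):
    return nn.Sequential(
        # Patch embedding: larger patch_size -> smaller internal resolution -> faster
        nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size),
        nn.GELU(),
        nn.BatchNorm2d(dim),
        *[nn.Sequential(
            # Large-kernel depthwise convolution (the slow part being discussed above)
            Residual(nn.Sequential(
                nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
                nn.GELU(),
                nn.BatchNorm2d(dim),
            )),
            # Pointwise (1x1) convolution
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        ) for _ in range(depth)],
        nn.AdaptiveAvgPool2d((1, 1)),
        nn.Flatten(),
        nn.Linear(dim, n_classes),
    )

# Hypothetical faster variant of ConvMixer-768/32 with a larger patch size:
model = ConvMixer(dim=768, depth=32, patch_size=14)
```

The trade-off is that a coarser patch embedding discards spatial detail up front, which is why the larger patch size trains faster but loses some accuracy.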
I'm going to close this issue for now, but feel free to reopen it or open a new issue if you have more questions or comments.