Quantizers are not DDP/AMP compliant #10
Hi Lucidrains,

Thanks for the amazing work you do by implementing all those papers!

Is there a plan to make the quantizers compliant with DistributedDataParallel (DDP) and Automatic Mixed Precision (AMP)?

If you want, I can have a go at it.

Comments
@danieltudosiu Hi Daniel! No, that would be great! Always welcoming contributors :)
@danieltudosiu do you want to see if https://github.com/lucidrains/vector-quantize-pytorch/releases/tag/0.4.8 fixes the AMP issue?
as for DDP, i'm guessing we just need an allreduce at these two lines? https://github.com/lucidrains/vector-quantize-pytorch/blob/master/vector_quantize_pytorch/vector_quantize_pytorch.py#L153-L155
One reduction should happen there, but only for the summation of the one-hot encodings (embed_onehot.sum(0)), and another for the summation of the embeddings (embed_sum).
Regarding the AMP part, I am not actively using this codebase since we are close to finishing the project and have a more barebones implementation of our own; I was just flagging the issues so that after the project I can move to this library ;). But from a quick look, I would say it should work. In our case, we just used the decorator to disable AMP entirely (see the sketch below), and given my experience with the VQ logic, I would say that is a good default (maybe even without giving the option to enable AMP).
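For illustration, a minimal sketch of that pattern with a hypothetical toy quantizer (not this library's actual module): autocast is disabled for the quantizer's forward, and inputs are upcast to fp32 before the codebook lookup.

```python
import torch
import torch.nn as nn

class ToyQuantizer(nn.Module):
    """Illustrative stand-in for a VQ layer; the class and shapes are hypothetical."""

    def __init__(self, num_codes=512, dim=64):
        super().__init__()
        self.codebook = nn.Parameter(torch.randn(num_codes, dim))

    @torch.cuda.amp.autocast(enabled=False)
    def forward(self, x):
        # Upcast inputs so distances (and, in a real implementation, any EMA
        # statistics) are computed in full precision even when the surrounding
        # model runs under autocast.
        x = x.float()
        distances = torch.cdist(x, self.codebook)   # (batch, num_codes)
        indices = distances.argmin(dim=-1)          # nearest code per input vector
        quantized = self.codebook[indices]
        # straight-through estimator: gradients pass through to x unchanged
        return x + (quantized - x).detach()
```

The rest of the model can still run under autocast; only the quantizer is forced into full precision.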
@danieltudosiu got it! thanks for your input :)
@lucidrains just to be clear, the all-reduce should be something like the sketch below, where encodings_sum is your embed_onehot.sum(0) and dw is your embed_sum.
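A minimal sketch of that reduction, assuming torch.distributed has already been initialized by the DDP launcher (the helper name is hypothetical; encodings_sum plays the role of embed_onehot.sum(0) and dw the role of embed_sum):

```python
import torch.distributed as dist

def all_reduce_codebook_stats(encodings_sum, dw):
    # Sum the per-rank codebook statistics across all DDP workers before the
    # EMA update, so every replica ends up with identical cluster counts and
    # embedding sums.
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(encodings_sum, op=dist.ReduceOp.SUM)
        dist.all_reduce(dw, op=dist.ReduceOp.SUM)
    return encodings_sum, dw
```

Note that all_reduce modifies the tensors in place; returning them is just for convenience.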
@danieltudosiu hey yup! i think SUM is the default reduce op anyways :)