I came across your work, and you did a fantastic job improving the performance of SAM.
It seems that the current implementation supports only a single loss function.
Also, while the code example does cover the fp16 case, there is no mention of gradient scaling, which is commonly used together with fp16 in AMP.
Are there any plans to support multiple loss functions and GradScaler?
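For context on why GradScaler matters in fp16: small gradients can underflow to zero in half precision, and loss scaling works around this by multiplying the loss (and hence the gradients) by a large factor before the backward pass, then unscaling in fp32 before the optimizer step. A toy NumPy illustration of the underflow and its fix (the scale factor 4096 is arbitrary, just for demonstration):

```python
import numpy as np

# Unscaled: a tiny fp16 gradient underflows to zero, since 1e-8 is
# below fp16's smallest representable subnormal (~6e-8).
tiny = np.float16(1e-4) * np.float16(1e-4)
assert tiny == 0.0

# Scaled: multiply one factor (standing in for the loss) by a large
# scale first, so the product stays representable in fp16 ...
scale = np.float16(4096.0)
scaled = (np.float16(1e-4) * scale) * np.float16(1e-4)  # ~4.1e-5, representable

# ... then unscale in fp32, recovering roughly the true value 1e-8.
unscaled = np.float32(scaled) / np.float32(scale)
```

This is exactly the failure mode `torch.cuda.amp.GradScaler` is designed to prevent.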
Thank you so much!
Yes, currently we have only tested it with CE loss and smoothed CE loss, and the results are good. Sharpness-aware learning should also be effective with multiple loss functions.
We have tested FP16 on the ImageNet dataset, but the accuracy drops by about 0.5%. We will look into FP16 and then multiple loss functions, now that the NeurIPS deadline has passed :D
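On the multiple-loss question: since SAM only needs a single scalar loss per step, several losses can simply be summed into one scalar before the first backward pass. A minimal NumPy sketch of the two-step SAM update on a toy combined loss (the `rho` and `lr` values here are illustrative, not tuned):

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One sharpness-aware update: perturb toward the local worst case,
    then descend using the gradient taken at the perturbed point."""
    g = grad_fn(w)
    # Step 1: ascend to the worst-case point within an L2 ball of radius rho.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Step 2: gradient at the perturbed weights, applied to the original w.
    g_adv = grad_fn(w + eps)
    return w - lr * g_adv

# Two losses combined into one scalar objective:
#   L(w) = (w - 1)^2 + 0.5 * (w + 2)^2, so dL/dw = 2(w - 1) + (w + 2)
def grad_fn(w):
    return 2 * (w - 1) + (w + 2)

w = np.array([5.0])
for _ in range(100):
    w = sam_step(w, grad_fn)
# w settles near 0, the minimizer of the combined loss.
```

The same idea carries over to a PyTorch training loop: add the losses, backprop once for the perturbation step, then backprop the same combined loss again at the perturbed weights.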