Thank you for your great work and open-source code, which inspires me a lot.
During replication, there was a slight disparity between my results and yours (my ASR and AUC on the Pokémon dataset are even higher than in your paper), so I want to know which differing settings led to my higher results.
My settings:
Pokémon train/test (member/non-member) split: 416 / 417.
Training steps: 15,000; batch size: 1; gradient_accumulation_steps: 4; LR: 1e-5.
No crop or flip augmentation. (Did you use crop and flip during training?)
My results:
ASR 0.90, AUC 0.9391 with Prompt (higher than yours: 0.821, 0.891).
I tried to keep the settings consistent with the paper, but I still obtained different results. Looking forward to your response!
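For reference, this is roughly how I compute ASR and AUC from the per-sample attack scores (a minimal sketch, not the paper's code; the score convention and the threshold sweep used for ASR are my own assumptions):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def asr_auc(member_scores, nonmember_scores):
    # Assumed convention: higher score = more likely to be a training member.
    labels = np.concatenate([np.ones(len(member_scores)),
                             np.zeros(len(nonmember_scores))])
    scores = np.concatenate([member_scores, nonmember_scores])

    auc = roc_auc_score(labels, scores)

    # ASR taken here as the best balanced accuracy over all thresholds;
    # this is one common convention and may differ from the paper's definition.
    fpr, tpr, _ = roc_curve(labels, scores)
    asr = np.max((tpr + (1 - fpr)) / 2)
    return asr, auc
```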
We didn't use crop and flip during fine-tuning. For the batch size, we used a batch size of 1 per GPU across 8 GPUs (effective batch size 8). Did you use the same member/non-member splits as we did? Since there are only around 400 training samples, a different split may introduce some variance.
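In case it helps anyone else replicating this, fixing the random seed when building the member/non-member split removes one source of that variance. A minimal sketch, assuming a flat list of image ids and the 416/417 split mentioned above (not the authors' actual split code):

```python
import random

def make_split(image_ids, n_members=416, seed=0):
    """Deterministically split the Pokémon image ids into member/non-member sets."""
    ids = sorted(image_ids)       # fix ordering before shuffling
    rng = random.Random(seed)     # fixed seed -> reproducible split
    rng.shuffle(ids)
    members = ids[:n_members]     # used for fine-tuning (members)
    nonmembers = ids[n_members:]  # held out (non-members)
    return members, nonmembers
```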