It would be great if you could provide the individual outputs from running the models on the test sets. Additionally, is it possible to provide links to all the model adapters used? (Currently the README only includes llama-13b.)
Perhaps a GDrive or Zenodo link would work well.
This would enable quicker turnaround times when comparing different adapters. Thanks a lot for the work so far!
Thanks for your interest in our project! We have uploaded the outputs of LLaMA-7B and LLaMA-13B with different adapters on both the math reasoning and commonsense reasoning tasks. You can find the outputs here: https://drive.google.com/drive/folders/1weL4Cq1h6M5lOhNL9Hran167D1dqtOZk?usp=sharing. The results are consistent with those reported. However, we still need some time to collate the adapter weights.
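If it helps, the shared folder can also be fetched programmatically. Here is a minimal sketch using the third-party `gdown` package (`pip install gdown`), assuming the folder at the link above stays publicly accessible; the local output directory name is just illustrative:

```python
# Sketch: download the shared Google Drive folder of model outputs.
# Assumes the folder remains public; "adapter_outputs" is an example path.
import gdown

url = "https://drive.google.com/drive/folders/1weL4Cq1h6M5lOhNL9Hran167D1dqtOZk?usp=sharing"
gdown.download_folder(url, output="adapter_outputs", quiet=False)
```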