As mentioned here, cached memory is not used at finetuning time.
It is also mentioned that one can increase the maximum length at finetuning time, since relative position embeddings are used. However, increasing the length makes the model slower (right?).
How can one change the existing examples (run_squad) to make the model use cached memory at finetuning time?
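For reference, here is a minimal sketch of the segment-level recurrence being asked about, assuming the Hugging Face `TransfoXLModel` API (which accepts a `mems` argument and returns updated memories in its output when `return_dict` is enabled). It is not a drop-in change to `run_squad`; the segment length, example text, and the idea of a downstream task head are hypothetical placeholders, and argument names may differ across library versions.

```python
# Sketch: reuse cached memory across segments during finetuning instead of
# enlarging the attention window. Assumes a recent `transformers` version
# where TransfoXLModel returns a dataclass output with `.mems`.
import torch
from transformers import TransfoXLModel, TransfoXLTokenizer

tokenizer = TransfoXLTokenizer.from_pretrained("transfo-xl-wt103")
model = TransfoXLModel.from_pretrained("transfo-xl-wt103")
model.train()

text = "A long training document that exceeds the per-segment length ..."
input_ids = tokenizer(text, return_tensors="pt")["input_ids"]

segment_len = 128          # hypothetical tokens per forward pass
mems = None                # recurrent memory, starts empty
hidden_states = []

# Process the document segment by segment, feeding the memory returned by
# the previous forward pass into the next one.
for start in range(0, input_ids.size(1), segment_len):
    segment = input_ids[:, start:start + segment_len]
    outputs = model(input_ids=segment, mems=mems)
    hidden_states.append(outputs.last_hidden_state)
    mems = outputs.mems    # carry the cached memory into the next segment

# A task head (e.g. SQuAD start/end logits) would be applied to the
# concatenated hidden states here.
full_hidden = torch.cat(hidden_states, dim=1)
```

Note that the model updates its memory under `no_grad`, so gradients flow only through the current segment; the cached memory extends the effective context without increasing the cost of each forward pass the way a larger maximum length would.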