The model gets loaded every time fairseq-generate is called to get a summary. Is there any way to avoid loading the model every time I want to run inference? Is there a way to pre-load the model once and then run inference on it?
Thanks.
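One way to avoid the per-call load is to skip the `fairseq-generate` CLI and drive the model from Python instead, where fairseq's hub interface (e.g. `BARTModel.from_pretrained` plus `.sample()` for BART-style summarization checkpoints) lets you load the checkpoint once and reuse it. The sketch below shows the load-once pattern with a placeholder standing in for the model; the commented lines indicate where the fairseq calls would go (checkpoint paths and model class are assumptions about your setup):

```python
from functools import lru_cache

# With fairseq installed, the real load would look roughly like:
#   from fairseq.models.bart import BARTModel
#   model = BARTModel.from_pretrained('checkpoints/', checkpoint_file='model.pt')
#   model.eval()
# and each summary would come from model.sample([document]).

@lru_cache(maxsize=1)  # cache the loaded model across calls
def get_model():
    # Placeholder for BARTModel.from_pretrained(...); this body runs only on
    # the first call, and later calls reuse the cached object.
    return object()

def summarize(text):
    model = get_model()        # cheap after the first call
    # Real call would be: return model.sample([text])[0]
    return text[:40]           # placeholder "summary"

summarize("first document")
summarize("second document")
assert get_model() is get_model()  # model constructed exactly once
```

If you need to keep the CLI workflow, `fairseq-interactive` similarly keeps one loaded model alive and reads inputs from stdin, so the load cost is paid once per session rather than once per document.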