The second command in run.sh, `main.py --do-eval --quantify --model-type roberta --prefix 0524 --filename final --task ast`, will overwrite the trained model checkpoint. Also, the code does not seem to be able to load a saved model.
I see, yeah, the saving of checkpoints happens during evaluation. You can comment this out, add a --do-save flag, or change the name of the checkpoint file. I would recommend the third option, which was in the original code before I cleaned things up. For example, something like:
```python
import os
import torch

def save_pretrained(self, args, filepath=None):
    # Default to a per-task, per-seed filename so repeated runs
    # don't clobber each other's checkpoints
    if filepath is None:
        filepath = os.path.join(args.output_dir, f'{args.task}_model_{args.seed}.pt')
    torch.save(self.state_dict(), filepath)
    print(f"Model weights saved in {filepath}")
```
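If you go with the second option instead, a minimal sketch of a hypothetical --do-save flag (the repo does not currently define one) could look like:

```python
import argparse

# Hypothetical --do-save flag: checkpoint writing only happens
# when the flag is passed explicitly, so --do-eval runs leave
# existing checkpoint files intact.
parser = argparse.ArgumentParser()
parser.add_argument('--do-save', action='store_true',
                    help='save the model checkpoint after evaluation')

# e.g. `main.py --do-eval --do-save ...` would set args.do_save = True
args = parser.parse_args(['--do-save'])
```

The save call in main.py would then be guarded by something like `if args.do_save: model.save_pretrained(args)`.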
In terms of loading a checkpoint, you can use any typical loading function. An example:
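A minimal sketch, assuming the same path convention as the save function above (`load_pretrained` and its signature are illustrative, not part of the repo):

```python
import os
import torch

def load_pretrained(model, args, filepath=None):
    # Mirror of save_pretrained: reconstruct the default checkpoint path
    if filepath is None:
        filepath = os.path.join(args.output_dir, f'{args.task}_model_{args.seed}.pt')
    # map_location='cpu' lets GPU-trained checkpoints load on any machine
    state_dict = torch.load(filepath, map_location='cpu')
    model.load_state_dict(state_dict)
    print(f"Model weights loaded from {filepath}")
    return model
```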