Hi,

Starting from a pretrained GPT-2 hf_model, I fine-tuned a custom LM on an unlabelled dataset using the model_cls = AutoModelForCausalLM approach, and I saved/exported the resulting Learner object (call it finetuned_LM_learn).

What is the best way to then load that fine-tuned LM and use it for a downstream task (e.g. sequence classification) on a labelled dataset? Should I just go through the same steps here and switch out the base_model before starting to train? Something like below? Or is that not the correct/best way? Thanks.
OK, I think I should instead use finetuned_LM_learn.hf_model.save_pretrained(path), and then for the downstream classification task load it with AutoModelForSequenceClassification.from_pretrained(path). Please correct me if I am wrong. Thank you.
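That is the usual route. Below is a minimal, self-contained sketch of the round trip, using a tiny randomly initialized GPT-2 as a stand-in for the fine-tuned LM (the tiny config, num_labels=2, and the temporary directory are all illustrative assumptions, not part of the original workflow):

```python
import tempfile

from transformers import AutoModelForSequenceClassification, GPT2Config, GPT2LMHeadModel

# Tiny random GPT-2 standing in for finetuned_LM_learn.hf_model (illustration only)
config = GPT2Config(n_layer=2, n_head=2, n_embd=64, vocab_size=100)
lm = GPT2LMHeadModel(config)

with tempfile.TemporaryDirectory() as path:
    # In the real workflow: finetuned_LM_learn.hf_model.save_pretrained(path)
    lm.save_pretrained(path)
    # Reload the same transformer body under a sequence-classification head.
    # The new `score` head is randomly initialized (transformers warns about
    # this), and gets trained on the labelled dataset.
    clf = AutoModelForSequenceClassification.from_pretrained(path, num_labels=2)
```

Note that GPT-2 classification models classify from the last non-padding token, so you will also want to set config.pad_token_id (e.g. to the EOS token id) before batched training.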
Just use the transformer model's save_pretrained to write your HF artifacts to the path you want (do the same with the tokenizer), and then load them as you do above.
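For from_pretrained(path) to restore everything needed downstream, the tokenizer should be saved alongside the model. A self-contained sketch of the tokenizer round trip, using a throwaway word-level tokenizer as a stand-in for the real GPT-2 one (the vocab and tokenizer type here are purely illustrative):

```python
import tempfile

from tokenizers import Tokenizer, models, pre_tokenizers
from transformers import AutoTokenizer, PreTrainedTokenizerFast

# Throwaway tokenizer standing in for the GPT-2 tokenizer used in training.
# In practice you would call tokenizer.save_pretrained(path) on the tokenizer
# you fine-tuned with, into the same directory as the model.
vocab = {"hello": 0, "world": 1, "<unk>": 2}
tok = Tokenizer(models.WordLevel(vocab, unk_token="<unk>"))
tok.pre_tokenizer = pre_tokenizers.Whitespace()
fast = PreTrainedTokenizerFast(tokenizer_object=tok, unk_token="<unk>")

with tempfile.TemporaryDirectory() as path:
    fast.save_pretrained(path)                      # writes tokenizer.json etc.
    reloaded = AutoTokenizer.from_pretrained(path)  # restores it for the downstream task
```

After this, reloaded.tokenize behaves exactly like the tokenizer that was saved, so the downstream classification pipeline sees the same token ids as language-model fine-tuning did.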