save finetuned model #18
That's about right.
Hey there, I am trying to save the fine-tuned model using `save_weights`:

```python
model_generator.save_weights('fine_tuned_model.h5')
```

but I am getting the following error:

```
AttributeError: 'FinetuningModelGenerator' object has no attribute 'save_weights'
```

Any idea why this is happening? Thanks.
Sorry, my bad.
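For context, the `AttributeError` arises because the *generator* object builds Keras models but is not itself a Keras model, so it has no `save_weights`; the model returned by `create_model` does. Below is a minimal stand-in sketch of that pattern (the `TinyModel`/`TinyModelGenerator` classes are hypothetical illustrations, not ProteinBERT code):

```python
# Hypothetical stand-in illustrating the pattern: the generator creates
# models but is not a model itself, so calling save_weights on it fails.
class TinyModel:
    def __init__(self, weights):
        self.weights = weights

    def save_weights(self, path):
        # Persist the weights as a comma-separated line (toy format).
        with open(path, 'w') as f:
            f.write(','.join(map(str, self.weights)))

class TinyModelGenerator:
    def __init__(self, weights):
        self.model_weights = weights

    def create_model(self, seq_len):
        # Materialize an actual model object from the stored weights.
        return TinyModel(self.model_weights)

gen = TinyModelGenerator([1.0, 2.0])
assert not hasattr(gen, 'save_weights')  # the source of the AttributeError

model = gen.create_model(600)
model.save_weights('tiny_weights.txt')   # works on the created model
```

The same shape appears later in this thread: `model_generator.create_model(600)` first, then `save_weights` on the returned model.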
Another question, please. Running the following line while loading the saved fine-tuned model raises an error (`Cannot download into an already existing file: epoch_92400_sample_23500000.pkl`):

```python
pretrained_model_generator, input_encoder = load_pretrained_model(local_model_dump_dir='')
```

How did you deal with that problem? If `pretrained_model_generator` and `input_encoder` are actually still in memory, can I use them, or do I have to run this line again? If I have to run it again, what would be a solution here? Thanks a lot.
I used it with the `validate_downloading` flag, assuming that `epoch_92400_sample_23500000.pkl` is in `local_model_dump_dir`.
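For the exact behavior of `load_pretrained_model` and its flags, see the ProteinBERT source; the sketch below only illustrates the generic check-before-download pattern such a flag enables, using a hypothetical `load_weights_dump` helper (not the library's API):

```python
import os
import pickle
import tempfile

# Hypothetical sketch (not the actual ProteinBERT signature) of the
# check-before-download pattern: reuse an existing dump file instead of
# re-downloading into it, which would raise an "already existing file" error.
def load_weights_dump(dump_dir, file_name, downloader=None):
    path = os.path.join(dump_dir, file_name)
    if not os.path.exists(path):
        if downloader is None:
            raise FileNotFoundError(path)
        downloader(path)  # only download when the file is missing
    with open(path, 'rb') as f:
        return pickle.load(f)

# Demo: an existing dump is loaded without triggering any download.
dump_dir = tempfile.mkdtemp()
with open(os.path.join(dump_dir, 'dump.pkl'), 'wb') as f:
    pickle.dump({'epoch': 92400}, f)

state = load_weights_dump(dump_dir, 'dump.pkl')
assert state == {'epoch': 92400}
```

The key point is that the already-downloaded pickle is treated as the source of truth, so re-running the loading line becomes safe.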
There seems to be a problem in saving the fine-tuned model. For some reason, loading the saved fine-tuned model and using it to make predictions leads to a change in the results every time the following snippet is executed:

```python
# Evaluating the performance on the test-set
results, confusion_matrix = evaluate_by_len(model_generator, input_encoder, OUTPUT_SPEC, X_test['seq_short'], y_test, \
        start_seq_len = 600, start_batch_size = 8)
print('Test-set performance:')
display(results)
print('Confusion matrix:')
display(confusion_matrix)
```

Here is how I save the fine-tuned model after fine-tuning it:

```python
finetune(model_generator, input_encoder, OUTPUT_SPEC, X_train['seq_short'], y_train, X_val['seq_short'], y_val, \
        seq_len = 600, batch_size = 8, max_epochs_per_stage = 1, lr = 1e-04, begin_with_frozen_pretrained_layers = True, \
        lr_with_frozen_pretrained_layers = 1e-02, n_final_epochs = 0, final_seq_len = 600, final_lr = 1e-05, callbacks = training_callbacks)
fine_tuned_model = model_generator.create_model(600)
fine_tuned_model.save_weights('fine_tuned_model.h5')
```

Here is how I load the saved fine-tuned model, which produces different predictions every time I run it. I think I am not loading it correctly:

```python
pretrained_model_generator, input_encoder = load_pretrained_model(local_model_dump_dir='',
        local_model_dump_file_name = "protBert_model_name.pkl")
model_generator = FinetuningModelGenerator(pretrained_model_generator, OUTPUT_SPEC,
        pretraining_model_manipulation_function = get_model_with_hidden_layers_as_outputs,
        dropout_rate = 0.5)
fine_tuned_model = model_generator.create_model(600)
fine_tuned_model.load_weights('fine_tuned_model.h5')
```

Could you help me with this, please? @nadavbra Thank you.
@asimokby Since
Wonderful. Thanks a lot. Here is how I save and load a fine-tuned model now:

Saving:

```python
# fine-tune the model
finetune(model_generator, input_encoder, OUTPUT_SPEC, X_train['seq_short'], y_train, X_val['seq_short'], y_val, \
        seq_len = 512, batch_size = 8, max_epochs_per_stage = 1, lr = 1e-04, begin_with_frozen_pretrained_layers = True, \
        lr_with_frozen_pretrained_layers = 1e-02, n_final_epochs = 1, final_seq_len = 1024, final_lr = 1e-05, callbacks = training_callbacks)
# pickle the weights
with open('model_weights.pkl', 'wb') as f:
    pickle.dump(model_generator.model_weights, f)
```

Loading:

```python
# unpickling the weights
with open('model_weights.pkl', 'rb') as f:
    saved_model_weights = pickle.load(f)
saved_pretrained_model_generator, saved_input_encoder = load_pretrained_model(local_model_dump_dir='',
        local_model_dump_file_name = "protBert_model_name.pkl")
saved_model_generator = FinetuningModelGenerator(saved_pretrained_model_generator, OUTPUT_SPEC,
        pretraining_model_manipulation_function = get_model_with_hidden_layers_as_outputs,
        dropout_rate = 0.5,
        model_weights = saved_model_weights)
```
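The reason this approach is reproducible is that pickling is an exact round-trip: the weights that come back are identical to the weights that went in. A dependency-free sketch (plain lists stand in for the list of weight arrays held in `model_generator.model_weights`):

```python
import pickle

# Stand-in for model_generator.model_weights (in ProteinBERT this is a
# list of weight arrays); plain lists keep the sketch dependency-free.
model_weights = [[0.1, 0.2, 0.3], [0.4]]

# Saving: serialize the weights to disk.
with open('model_weights.pkl', 'wb') as f:
    pickle.dump(model_weights, f)

# Loading: deserialize and compare.
with open('model_weights.pkl', 'rb') as f:
    restored = pickle.load(f)

assert restored == model_weights  # exact round-trip: loading is deterministic
```

With the restored weights passed via `model_weights`, calling `saved_model_generator.create_model(seq_len)` (as in the earlier comments) rebuilds the fine-tuned model; since Keras dropout layers are inactive at inference time by default, predictions should now be stable across runs.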
Hi, I have a question about saving a fine-tuned model, please.
After fine-tuning, I'm using `save_weights` such as:
and if I want to use the model later:
where `OUTPUT_SPEC` is the same one that I used to fine-tune the model.
Is this ok?