fix saving and loading model including weights #420
Conversation
Summary of Changes
Hello @spirousschuh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request addresses a critical bug in the model serialization process, specifically preventing the loss of model weights when saving and loading TabPFN models. The changes ensure that the model configuration handling is robust across different input types during deserialization, and a new comprehensive test validates the end-to-end persistence functionality.
Highlights
- Model Configuration Compatibility: The `ModelConfig.upgrade_config` method has been enhanced to accept both dictionary and `ModelConfig` object inputs. This ensures that when a `ModelConfig` instance is passed during model loading, it is correctly converted to a dictionary before the upgrade logic is applied, resolving an incompatibility issue.
- Robust Model Persistence Testing: A new unit test, `test_saving_and_loading_model_with_weights`, has been added to `tests/test_regressor_interface.py`. This test rigorously verifies that a `TabPFNRegressor` model, including its modified weights, can be successfully saved to disk and subsequently loaded, with all parameters accurately preserved. This directly addresses the core problem identified by the author.
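The normalization described in the first highlight can be sketched roughly as follows. Note this is an illustrative stand-in: the field names and upgrade logic are placeholders, not tabpfn's real `ModelConfig` schema.

```python
from dataclasses import asdict, dataclass
from typing import Union

@dataclass
class ModelConfig:
    # Illustrative fields only; the real tabpfn ModelConfig differs.
    emsize: int = 192
    nhead: int = 6

    @classmethod
    def upgrade_config(cls, config: Union["ModelConfig", dict]) -> dict:
        # Accept either checkpoint format: convert a ModelConfig instance
        # to a plain dict before any upgrade logic runs on it.
        if isinstance(config, cls):
            config = asdict(config)
        # ... version-upgrade logic would operate on the dict here ...
        return dict(config)

# Both input types now yield the same upgraded dict.
print(ModelConfig.upgrade_config(ModelConfig())
      == ModelConfig.upgrade_config({"emsize": 192, "nhead": 6}))
```

The key design point is that the conversion happens once, at the top of `upgrade_config`, so all downstream migration code can assume a plain dictionary.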
Code Review
This pull request addresses an issue with saving and loading models by ensuring the model configuration is correctly handled as a dictionary during the loading process. The fix is sound and is accompanied by a solid unit test that verifies the corrected behavior. I've added one suggestion for a minor refactoring to improve efficiency in the configuration upgrade logic. Overall, this is a good contribution that improves the robustness of model serialization.
…ng_weights (merge commit; conflicts in src/tabpfn/model/config.py)
@spirousschuh thanks for fixing this! I just merged with main, which was quite a change; would you like to double-check?
Sorry, I have by now deviated a lot from your initial PR. Is it right that `test_load_fitted_model` is already testing your case?
Hi, thanks for looking at my PR. I assume you are talking about the test `test_save_load_happy_path`. It does test the case of loading a model, i.e. a checkpoint with the keys `state_dict` and `config`. In this test, however, what is loaded from the `config` key and ends up in the `ModelConfig.upgrade_config` method is of type `dict`. The problem that happened to me was that the `save_tabpfn_model` function does not save a dict, but saves a `ModelConfig` in the checkpoint: https://github.com/PriorLabs/TabPFN/blob/main/src/tabpfn/model_loading.py#L615
@spirousschuh okay, great. Maybe the test file for saving and loading the model is the most appropriate place for the test. Would you be able to add it there?
@noahho I added the test as you suggested, and I added a small twist to the PR. Now, the
Thanks a lot, also for adding that test. The fix now seems simpler, so removing the previous fix seems like a good idea.
Ok. I undid the changes in the
For ruffing the code I ran
noahho left a comment:
Amazing, thanks a lot for this!
/gemini review
Pull Request Overview
This PR fixes a compatibility issue between the model saving and loading functionality where saved models could not be properly loaded due to a type mismatch between ModelConfig objects and dictionaries.
- Updates `save_tabpfn_model` to save the model config as a dictionary instead of a `ModelConfig` object
- Removes an unnecessary blank line in the config upgrade method
- Adds a comprehensive unit test to verify the save/load functionality works correctly
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| `src/tabpfn/model_loading.py` | Converts `ModelConfig` to a dictionary using `asdict()` before saving and imports the required function |
| `src/tabpfn/architectures/base/config.py` | Removes an extraneous blank line for code cleanup |
| `tests/test_save_load_fitted_model.py` | Adds a comprehensive test for save/load model functionality with weight preservation verification |
Comments suppressed due to low confidence (1)
tests/test_save_load_fitted_model.py:96
- The test only covers the regressor case, but the fix applies to both `TabPFNRegressor` and `TabPFNClassifier`. Consider adding a similar test for `TabPFNClassifier` to ensure the fix works for both model types.
`regressor = TabPFNRegressor(model_path="auto", device="cpu", random_state=42)`
Code Review
This pull request addresses an issue with saving and loading model weights and configurations. The core change involves converting the model configuration to a dictionary before saving, which ensures compatibility with the loading process that expects a dictionary. A new test case has been added to verify this save/load cycle.
My main feedback is regarding backward compatibility. The current change will prevent models saved with the older version of the code from being loaded. I've suggested a modification to the loading logic to handle both old and new model formats, ensuring a smooth transition for users with existing saved models.
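The suggested transition logic could look something like the following sketch; the function name and `ModelConfig` fields are assumptions for illustration, not taken from the actual patch.

```python
from dataclasses import asdict, dataclass, is_dataclass
from typing import Any

@dataclass
class ModelConfig:  # illustrative stand-in for tabpfn's ModelConfig
    emsize: int = 192

def config_from_checkpoint(raw: Any) -> dict:
    """Accept both the old checkpoint format (a ModelConfig instance)
    and the new one (a plain dict), so existing saved models keep loading."""
    if isinstance(raw, dict):
        return raw                      # new-style checkpoint
    if is_dataclass(raw) and not isinstance(raw, type):
        return asdict(raw)              # old-style checkpoint
    raise TypeError(f"unexpected config type: {type(raw).__name__}")

print(config_from_checkpoint(ModelConfig()))    # old format
print(config_from_checkpoint({"emsize": 192}))  # new format
```

Raising a `TypeError` for anything else keeps the failure mode explicit instead of letting an unexpected object propagate into the upgrade logic.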
* Record copied public PR 420
* fix saving and loading model including weights (#420)

Co-authored-by: noahho <Noah.homa@gmail.com>
Co-authored-by: mirror-bot <mirror-bot@users.noreply.github.com>
Co-authored-by: spirousschuh <hasensilvester@gmail.com>
Co-authored-by: Noah Hollmann <noah@priorlabs.ai>
Motivation and Context
I encountered that saving a model including its weights and loading it again did not work the way I would expect. Therefore, I wrote a unit test that fails on the current version of the repo, and I propose a code fix.
Detailed Problem Description
From my perspective, the underlying problem is that the function `tabpfn.model.loading.save_tabpfn_model` saves a `tabpfn.model.config.ModelConfig` together with the torch model. However, the model loading routine uses the `ModelConfig.upgrade_config` method, which only works with dictionaries.

The fix

During the loading routine, I decided to convert a `ModelConfig` to a `dict` in the `ModelConfig.upgrade_config` function. But of course one could instead fix the saving routine to save a dict rather than a `ModelConfig`.

Public API Changes
How Has This Been Tested?
Yes. I provide a unit test for the regressor interface.
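The overall shape of such a roundtrip test is sketched below, with plain dicts standing in for real torch weights and the `TabPFNRegressor`; all names and values here are illustrative, not the actual test code.

```python
import os
import pickle
import tempfile

def test_saving_and_loading_model_with_weights():
    # Modify "weights", save a checkpoint, load it back, and check that
    # both the weights and every config parameter survive intact.
    weights = {"encoder.weight": [0.1, 0.2], "decoder.bias": [0.3]}
    checkpoint = {"state_dict": weights, "config": {"emsize": 192}}
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "regressor.ckpt")
        with open(path, "wb") as fh:
            pickle.dump(checkpoint, fh)
        with open(path, "rb") as fh:
            restored = pickle.load(fh)
    assert restored["state_dict"] == weights
    assert restored["config"] == checkpoint["config"]

test_saving_and_loading_model_with_weights()
```

In the real test the checkpoint would come from `save_tabpfn_model` and the assertions would compare the fitted estimator's parameters before and after loading.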
Checklist
- `CHANGELOG.md` (if relevant for users).