Add ability to save and load param store #22
Comments
We should also think about an interface and implementation for saving and loading models themselves. With cloudpickle we can serialize pretty much anything in Python except running/suspended generators and coroutines.
For the parameter store, I envision something to the effect of `pyro.save(model_params, "model_params.pr")` and `pyro.load("model_params.pr")`, which would save and load the namespace and its affiliated parameters. We may also need a way to select which names to load, and to map other names onto these parameters — for instance, when partially initializing another model with pre-trained parameters or some such hacky business.
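A minimal sketch of what that interface could look like, using the standard-library `pickle` (cloudpickle could be swapped in the same way). The `save`/`load` functions and the `names`/`name_map` options are hypothetical illustrations of the idea above, not an actual Pyro API:

```python
import pickle


def save(params, filename):
    # Serialize the name -> parameter mapping to disk.
    with open(filename, "wb") as f:
        pickle.dump(params, f)


def load(filename, names=None, name_map=None):
    # Load the saved namespace, optionally restricting to `names` and
    # renaming entries via `name_map` -- e.g. for partially initializing
    # a new model from pre-trained parameters.
    with open(filename, "rb") as f:
        params = pickle.load(f)
    if names is not None:
        params = {k: v for k, v in params.items() if k in names}
    if name_map is not None:
        params = {name_map.get(k, k): v for k, v in params.items()}
    return params
```

With this shape, `load(path, names={"encoder.w"}, name_map={"encoder.w": "new_encoder.w"})` would pull a single pre-trained parameter into a differently named slot.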
This seems like the right first pass. I think (from conversations) the main tricky thing is going to be inserting params into neural net modules when we re-load. Presumably PyTorch provides (or can be coerced to provide) some way to set the parameters of a module. We may need to pass `pyro.load` a handle to the modules, or something like that. The alternative is pickling the entire module and re-loading it, which seems like a less good idea to me.
See here, should be pretty straightforward: serializing models in PyTorch
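That is PyTorch's `state_dict` / `load_state_dict` mechanism, which directly answers the question above about setting a module's parameters on re-load: save only the parameter tensors, then re-insert them into a freshly constructed module rather than pickling the whole module object. A small sketch (the single linear layer is just an illustration):

```python
import torch
import torch.nn as nn

# Any module will do; a single linear layer keeps the sketch small.
net = nn.Linear(4, 2)

# Save just the parameters (an OrderedDict of name -> tensor),
# not the module object itself.
torch.save(net.state_dict(), "net_params.pt")

# Re-loading: construct the module again, then set its parameters.
net2 = nn.Linear(4, 2)
net2.load_state_dict(torch.load("net_params.pt"))
```

So as long as the param store can get a handle to the module (or its `state_dict`), no pickling of module objects is needed.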
Ah, cool. So as long as the param store keeps around a handle to the models from …
Addressed by #47
We need the ability to easily (and reasonably quickly) save the param store to disk and reload it.
This is useful in a long training run, both for saving state for future analyses and for checkpointing in case of a crash.
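For the crash-recovery use case, one common pattern (a sketch, not Pyro code; all names here are hypothetical) is to checkpoint every few steps and write atomically, so a crash mid-write never corrupts the last good checkpoint:

```python
import os
import pickle


def checkpoint(params, path):
    # Write to a temp file, then atomically rename over the old
    # checkpoint; a crash during the write leaves the previous
    # checkpoint intact.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        pickle.dump(params, f)
    os.replace(tmp, path)


# Hypothetical training loop: checkpoint every `save_every` steps.
params = {"step": 0}
save_every = 100
for step in range(1, 301):
    params["step"] = step          # stand-in for a real parameter update
    if step % save_every == 0:
        checkpoint(params, "run.ckpt")
```

On restart, loading `run.ckpt` recovers the state from the most recent completed checkpoint.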