Greykite Forecaster Model is Unpickle-able #73
Hi @kurtejung, some of the functions/classes are not directly picklable. We have built-in functions to iteratively save or load the model. Once you have run the forecast, you can do
For loading, you can load a dumped directory with
Change "dir" to your desired directory.
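The actual snippets from this reply were not captured on this page. As a hedged sketch only: recent Greykite versions expose `dump_forecast_result` and `load_forecast_result` on the `Forecaster` class for exactly this purpose; the keyword arguments below are taken from those versions and may differ in yours.

```python
def save_and_reload(forecaster, dest_dir):
    """Round-trip a trained Forecaster through Greykite's dump/load API.

    Assumes `forecaster.run_forecast_config(...)` has already been called.
    Sketch only; argument names may vary by Greykite version.
    """
    # Serializes the forecast result piece by piece, handling the
    # components that plain pickle cannot.
    forecaster.dump_forecast_result(
        dest_dir,
        object_name="object",        # default name Greykite uses for the dump
        dump_design_info=True,
        overwrite_exist_dir=True,
    )
    # Load the dump back into a fresh Forecaster instance.
    from greykite.framework.templates.forecaster import Forecaster
    loaded = Forecaster()
    loaded.load_forecast_result(dest_dir, load_design_info=True)
    return loaded
```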
Thanks! Not sure how I missed this in the documentation. I'm trying to implement a deepcopy function for this as well: I can use the save/load functionality, but the I/O is time-intensive. Is there an in-memory version of dump/load_forecast_result? If not, would such a function be a welcome addition to the codebase?
Hi @kurtejung, yeah, you are very welcome to help add the deepcopy version of the save/load functionality! Please feel free to open a PR if you would like to, thanks!
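Until an in-memory variant exists, one workaround (a sketch only, assuming Greykite's `dump_forecast_result`/`load_forecast_result` directory API) is to emulate `copy.deepcopy` by round-tripping through a temporary directory. This still pays the disk I/O cost, which is exactly what the proposed PR would avoid.

```python
import tempfile

def deepcopy_forecaster(forecaster):
    """Emulate copy.deepcopy for a trained Forecaster via dump/load.

    Still does disk I/O; a true in-memory dump/load would require
    changes inside Greykite itself.
    """
    from greykite.framework.templates.forecaster import Forecaster
    # TemporaryDirectory is cleaned up automatically on exit.
    with tempfile.TemporaryDirectory() as tmp:
        forecaster.dump_forecast_result(tmp, overwrite_exist_dir=True)
        copy = Forecaster()
        copy.load_forecast_result(tmp)
    return copy
```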
I'm using Greykite in Miniconda on Windows. When I tried to dump a Forecaster object, I got the following error. I'm wondering what might be the cause of this problem.
Hi @vincetran96, I am not sure that this issue is due to the
Feel free to post your code snippets; that helps us assist you.
@sayanpatra Thank you for your suggestions. I have reused the exact code from @kurtejung above, except for the pickling part at the end:
Below is the traceback:
A seemingly noteworthy exception occurred before the
@vincetran96 Hi, wondering if you ever came across a solution to this. I'm encountering the same problem. Thanks!
Even a basic implementation of Greykite (see below) does not pickle properly, due to some of the design choices within Greykite (e.g., nested functions and namedtuple definitions inside function and class bodies).
Was this a purposeful design choice? Is there another method to save a trained model state and reuse the model to create inferences downstream? Integrations with deployment tools become much more challenging if we need to retrain the model every time and can't save the model state. Looking for guidance here on best practice - thanks!
Here's code to reproduce the issue:
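The reproduction snippet itself was lost from this page. The underlying Python limitation it hits can, however, be shown with the standard library alone (nothing below is Greykite code): classes and functions defined inside another function cannot be pickled, because pickle serializes them by module-level name.

```python
import pickle
from collections import namedtuple

def build_model():
    # A namedtuple type created inside a function body: pickle tries to
    # look it up by name at module level and fails.
    State = namedtuple("State", ["coef"])

    def predict(x):
        # A nested function is a "local object" and is likewise unpicklable.
        return x * State(coef=2.0).coef

    return State(coef=2.0), predict

state, predict = build_model()

results = []
for obj in (state, predict):
    try:
        pickle.dumps(obj)
        results.append("picklable")
    except (pickle.PicklingError, AttributeError):
        results.append("unpicklable")

print(results)  # both objects fail to pickle
```

This is why libraries that define classes or closures inside function scope need a custom serialization path (such as Greykite's dump/load functions) rather than plain `pickle.dump`.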