Ensure Access to Model Parameters upon Early Termination #2
@rasbt, I don't think there's much we can do here w.r.t. early termination from running out of time on a PBS job. AFAIK those terminations immediately kill the job, and there's no way to gracefully exit the program at that point. However, we can certainly catch a keyboard interrupt and store the best discovered pipeline so far.
During the optimization process, if the user interrupts execution, the best pipeline discovered so far is stored for further analysis. This functionality allows the user to prematurely end the optimization process without losing their progress.
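The interrupt handling described above could be sketched as follows. This is a minimal illustration, not TPOT's actual implementation; the names `optimize`, `evaluate`, and the checkpoint path are hypothetical.

```python
# Sketch: wrap the optimization loop so that a Ctrl+C (KeyboardInterrupt)
# persists the best pipeline found so far instead of losing all progress.
# All names here are hypothetical placeholders, not TPOT's real API.
import pickle


def optimize(generations, evaluate, checkpoint_path="best_pipeline.pkl"):
    best = None
    try:
        for gen in range(generations):
            candidate = evaluate(gen)  # stand-in for one DEAP generation
            if best is None or candidate["score"] > best["score"]:
                best = candidate
    except KeyboardInterrupt:
        # User interrupted the run: fall through and save what we have.
        pass
    if best is not None:
        with open(checkpoint_path, "wb") as f:
            pickle.dump(best, f)
    return best
```

The same `except KeyboardInterrupt` pattern works regardless of how the per-generation evaluation is implemented, as long as the interrupt is not swallowed deeper in the stack.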
@rhiever Yes, that's a bit tricky. I would suggest adding a default option for writing two files on the fly: after each iteration, we write the current parameters to them. Okay, this sounds pretty complex, right? However, writing small files in Python is pretty quick (especially compared to one iteration in the DEAP algorithm), so it wouldn't really impact the computational efficiency. The idea is to store in these YAML files only the parameters that are essential for reconstructing the last state of the model. I can see several reasons why this is useful.
I would therefore suggest implementing a save method that is called in each iteration of the pipeline evaluation by default, along with a corresponding load method for restoring the saved state.
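A minimal sketch of such a save/load pair, assuming the parameters fit in a flat dictionary. The function names are hypothetical, and `json` from the standard library is used here in place of YAML purely to keep the sketch dependency-free; a YAML version would read the same way.

```python
# Sketch of a per-iteration save/load pair (hypothetical names).
# json is used instead of YAML only to stay within the standard library.
import json


def save_params(params, path):
    """Write the current model parameters to disk after each iteration."""
    with open(path, "w") as f:
        json.dump(params, f, indent=2, sort_keys=True)


def load_params(path):
    """Reconstruct the last saved parameter state of the model."""
    with open(path) as f:
        return json.load(f)
```

Because each write is a small text file, calling `save_params` once per generation adds negligible overhead relative to a DEAP evaluation step.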
What about just using pickle?
Yes, I think pickle would generally be more convenient, since you wouldn't have to worry about the structure of the parameter files, etc. However, I think having a parameter file would be better for compatibility (e.g., Python 2 vs. 3, different pickle protocols, etc.). Pickle is fine if you are working on only one machine, but for record keeping, reproducibility, and sharing, a parameter file in a simple, human-readable format like YAML would be much better.
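The protocol concern mentioned above can be illustrated in a couple of lines: pinning an old pickle protocol keeps the file readable by older interpreters, though the result is still a binary format rather than something human-readable like YAML.

```python
# Illustration of the pickle-protocol compatibility point: protocol 2 is
# readable by both Python 2 and Python 3, whereas a newer interpreter's
# default protocol may not be readable by older ones.
import pickle

params = {"max_depth": 3, "n_estimators": 100}

blob = pickle.dumps(params, protocol=2)  # explicitly pin an old protocol
assert pickle.loads(blob) == params
```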
I'd love to see a demo of this if you have a good solution in mind. There certainly is an issue with model persistence in the command-line version: once the Python call ends, the model is gone. My one request is that we try to avoid adding more external dependencies. We already have two major external dependencies (scikit-learn and DEAP), and I'm wary of adding more.
For example, to construct a "clone," we could first initialize a new object with similar parameter settings. Assuming that we wrote the current parameter settings to such a file, we could then initialize the new object from those parameters, and to set the "fitted" parameters, we just assign them to the new object.
Of course, we need to go a few levels further since we have multiple nested objects in a pipeline, but I think this should not be too difficult.
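The "clone" idea above can be sketched with a toy estimator that follows scikit-learn's conventions (`get_params()` for constructor parameters, fitted attributes ending in `_`). The class and helper names here are hypothetical, and a real pipeline would need to recurse through its nested steps, as the thread notes.

```python
# Sketch of the clone idea: (1) rebuild a fresh object from its constructor
# parameters, (2) copy over the fitted attributes (sklearn convention: names
# ending in "_"). MeanRegressor and clone_with_state are hypothetical names.
class MeanRegressor:
    def __init__(self, offset=0.0):
        self.offset = offset

    def get_params(self):
        return {"offset": self.offset}

    def fit(self, y):
        self.mean_ = sum(y) / len(y)  # fitted attribute set during fit()
        return self

    def predict(self):
        return self.mean_ + self.offset


def clone_with_state(est):
    new = type(est)(**est.get_params())      # same settings, unfitted
    for name, value in vars(est).items():
        if name.endswith("_"):
            setattr(new, name, value)        # restore the fitted state
    return new
```

Since `get_params()` returns plain values here, the same dictionary could be round-tripped through a parameter file first, which is exactly the persistence path discussed above.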
Yeah - doing this with nested pipeline objects is going to be a challenge, especially because some of those pipeline objects are functions of custom code. In fact, all of the pipelines are nested functions. I think the saved state should represent that.
Keep in mind the usability of the temp output and what might be possible... it's no big deal if it's just a log of the current state of the model upon exit, but it would be far more valuable if you could use that log as some sort of parameterized object for specifying a new model run based on the last attempt, or (in some blue-sky-thinking way) use the temp output from the previous model to train the new model and skip certain generations.
@MichaelMarkieta I agree with you; maybe we should start with a simple log file to get it going, and we can later come up with a "parameter file" to directly initialize and parameterize the model as I mentioned above. |
From @rasbt:
Let's make sure that we don't lose the model parameters if the run is terminated early.