
Ensure Access to Model Parameters upon Early Termination #2

Closed
rhiever opened this issue Nov 11, 2015 · 9 comments



rhiever commented Nov 11, 2015

From @rasbt:

Let's make sure that we don't lose the model parameters if the run is terminated early.

  • Add a "verbose" parameter that writes progress to stderr. This way, the user can pipe the output (e.g., model parameters and metrics) to a log file. This is especially useful for keeping track of the process when running TPOT as a PBS job, and it ensures that the model parameters are still accessible if the job crashes (or hits the wall time); see the sketch after this list.
  • Also, make sure that the current state is saved gracefully if the program quits unexpectedly.
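
A minimal sketch of what such verbose logging could look like; the log_progress helper and verbosity parameter below are hypothetical illustrations, not existing TPOT options:

import sys

def log_progress(generation, best_score, best_pipeline, verbosity=1):
    """Write the current optimization state to stderr so it can be piped to a log file."""
    if verbosity > 0:
        print('Generation {}: score={:.4f} pipeline={}'.format(
            generation, best_score, best_pipeline), file=sys.stderr)

Running the job with, e.g., python run_tpot.py 2> progress.log would then capture the parameters and metrics even if the job is killed later.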
@rhiever rhiever added the bug label Nov 11, 2015

rhiever commented Nov 11, 2015

@rasbt, I don't think there's much we can do here w.r.t. early termination from running out of time on a PBS job. AFAIK those terminations immediately kill the job, and there's no way to gracefully exit the program at that point.

However, we can certainly catch a keyboard interrupt and store the best discovered pipeline so far.
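
A minimal sketch of that interrupt handling, assuming hypothetical evolve_one_generation and export_pipeline helpers (the actual TPOT internals will differ):

best_pipeline = None
try:
    for generation in range(n_generations):
        # evolve the population one generation and remember the best pipeline so far
        best_pipeline = evolve_one_generation(population)
except KeyboardInterrupt:
    # the user pressed Ctrl+C; keep what we have instead of discarding the run
    print('Optimization interrupted; exporting the best pipeline found so far.')
finally:
    if best_pipeline is not None:
        export_pipeline(best_pipeline)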

rhiever added a commit that referenced this issue Nov 11, 2015
During the optimization process, if the user interrupts execution, the
best pipeline discovered so far is stored for further analysis. This
functionality allows the user to prematurely end the optimization
process without losing their progress.

rasbt commented Nov 11, 2015

@rhiever Yes, that's a bit tricky. I would suggest adding a default option for writing two files

  • model_param.yaml
  • model_param.yaml.tmp

on the fly. After each iteration, we write the current parameters to model_param.yaml.tmp; then we use model_param.yaml.tmp to overwrite model_param.yaml. I think this two-step approach is safer (accounting for rare scenarios where the job quits while writing one of the files).

Okay, this sounds pretty complex, right? However, writing small files in Python is pretty quick (especially compared to one iteration of the DEAP algorithm), and it wouldn't really impact the computational efficiency.
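
A minimal sketch of that two-step write, assuming the current parameters are already collected in a plain dict; os.replace makes the final overwrite atomic, so model_param.yaml is never left half-written:

import os
import yaml  # PyYAML; note the dependency discussion further down

def dump_params_safely(params, path='model_param.yaml'):
    """Write params to a temporary file first, then replace the real file."""
    tmp_path = path + '.tmp'
    with open(tmp_path, 'w') as f:
        yaml.safe_dump(params, f)
    # if the job dies while writing, only the .tmp copy is corrupted
    os.replace(tmp_path, path)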

The idea is to store in these YAML files only the parameters that are essential for reconstructing the last state of the model. I can see several reasons why this is useful:

  • avoid starting from scratch if the job crashes
  • run additional iterations if the results are not satisfactory
  • re-use parameters from other models that may come in handy in related projects
  • keep a record of the experiment

I would therefore suggest implementing a dump_params method (or function)

 model.dump_params('current_state.yaml')

that can be called in each iteration of the pipeline evaluation by default.

 # run experiment
 from shutil import copyfile

 for x in range(n_iterations):
     evolve_model()
     model.dump_params('current_state.yaml.tmp')
     # overwrite the "good" copy only after the temp file was written successfully
     copyfile('current_state.yaml.tmp', 'current_state.yaml')

And the load method

new_model = XXX()
new_model.load_params('current_state.yaml')


rhiever commented Nov 11, 2015

What about just pickling the model? Haven't tested pickle with DEAP, but in theory that would make life easier.
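
A minimal sketch of that approach, assuming the evolved model object is picklable (which, as noted, hasn't been verified with DEAP):

import pickle

# after each generation, snapshot the whole model object
with open('current_state.pkl', 'wb') as f:
    pickle.dump(model, f)

# in a later session, restore it
with open('current_state.pkl', 'rb') as f:
    model = pickle.load(f)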


rasbt commented Nov 11, 2015

Yes, I think pickle would generally be more convenient since you wouldn't have to worry about the structure of the parameter files etc. However, I think having a parameter file would be better for compatibility (e.g., python 2 vs 3, different pickle protocols etc.). I think that pickle is fine if you are working only on one machine, but for record keeping, reproducibility, and sharing, a parameter file in a simple, human readable format like yaml would be much better.

@rhiever rhiever added enhancement and removed bug labels Nov 11, 2015

rhiever commented Nov 12, 2015

I'd love to see a demo of this if you have a good solution in mind. There certainly is an issue with model persistence in the command-line version: once the Python call ends, the model is gone.

My one request is that we try to avoid adding more external dependencies. We already have two major external dependencies (scikit-learn and DEAP), and I'm wary of adding more.


rasbt commented Nov 12, 2015

For example, to construct a "clone," we could first initialize a new object with the same parameter settings. Assume that we wrote the contents of lr.get_params() to a YAML file, so that yaml.load(file_stream)['parameters'] returns the same dict as lr.get_params():

>>> lr = LogisticRegression()
>>> lr.get_params()
{'tol': 0.0001, 'max_iter': 100, 'warm_start': False, 'solver': 'liblinear', 'C': 1.0, 'dual': False, 'fit_intercept': True, 'random_state': None, 'n_jobs': 1, 'multi_class': 'ovr', 'verbose': 0, 'class_weight': None, 'intercept_scaling': 1, 'penalty': 'l2'}

we could then initialize the new object as

>>> lr2 = LogisticRegression(**lr.get_params())

and to set the "fitted" parameters, we just use setattr:

yaml_cont = yaml.load(file_stream)
for a in yaml_cont['attributes']:
    setattr(lr2, a, yaml_cont['attributes'][a])

Practical example:

>>> lr.fit([[1], [2], [3]], [0, 1, 1])
>>> setattr(lr, 'coef_', 99.9)
>>> getattr(lr, 'coef_')
99.9

Of course, we need to go a few levels further since we have multiple nested objects in a pipeline, but I think this should not be too difficult.
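
Putting those pieces together, a minimal sketch of the dump/load round trip for a single (non-nested) estimator; the 'parameters'/'attributes' file layout and the function names are only the illustration used above, not a settled format:

import numpy as np
import yaml
from sklearn.linear_model import LogisticRegression

def dump_estimator(est, path):
    """Store constructor parameters and fitted attributes in a YAML file."""
    state = {
        'parameters': est.get_params(),
        # in scikit-learn, attributes learned by fit() end with an underscore
        'attributes': {k: (v.tolist() if isinstance(v, np.ndarray) else v)
                       for k, v in vars(est).items() if k.endswith('_')},
    }
    with open(path, 'w') as f:
        yaml.safe_dump(state, f)

def load_estimator(path, estimator_cls=LogisticRegression):
    """Recreate an estimator from a file written by dump_estimator."""
    with open(path) as f:
        state = yaml.safe_load(f)
    est = estimator_cls(**state['parameters'])
    for name, value in state['attributes'].items():
        # lists were numpy arrays before dumping; convert them back
        setattr(est, name, np.asarray(value) if isinstance(value, list) else value)
    return est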


rhiever commented Nov 12, 2015

Yeah, doing this with nested pipeline objects is going to be a challenge, especially because some of those pipeline objects are custom code functions. In fact, all of the pipelines are nested functions. I think the saved state should represent that.

@MichaelMarkieta

Keep in mind the usability of the temp output and what might be possible: it's no big deal if it's just a log of the current state of the model upon exit, but you might want to use that log as some sort of parameterized object for specifying a new model run based on the last attempt, or (in some blue-sky thinking) use the temp output from the previous model to train the new model and skip certain generations.


rasbt commented Nov 16, 2015

@MichaelMarkieta I agree with you; maybe we should start with a simple log file to get it going, and we can later come up with a "parameter file" to directly initialize and parameterize the model as I mentioned above.
