
A Python shell that has all configuration loaded into the namespace #22

Closed

Conversation

dnouri (Collaborator) commented on Jul 8, 2016

Not sure if trivial or useful. :-)

But here's an example session:

```
$ pld-shell
Welcome to the palladium shell.  I've loaded your configuration for you,
and here's the variables you have access to:
{'__mode__': 'fit',
 'dataset_loader_test': <palladium.dataset.SQL object at 0x7f745dc19828>,
 'dataset_loader_train': <palladium.dataset.SQL object at 0x7f744c2f9400>,
 'load_data_decorators': [],
 'model': LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True),
 'model_persister': <palladium.persistence.File object at 0x7f744c2f9390>}

>>> model
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True)
```
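
Roughly, the command boils down to something like this (a minimal sketch of the idea, not the actual diff; I'm assuming `get_config()` from `palladium.config` and the same docopt wiring as the other pld-* commands):

```python
import code
from pprint import pformat

from docopt import docopt

from palladium.config import get_config


def shell_cmd(argv=None):
    """\
Start an interactive Python shell with the Palladium configuration loaded.

Usage:
  pld-shell [options]

Options:
  -h --help     Show this screen.
"""
    docopt(shell_cmd.__doc__, argv=argv)
    config = get_config()  # reads the file that PALLADIUM_CONFIG points to
    banner = (
        "Welcome to the palladium shell.  I've loaded your configuration for you,\n"
        "and here's the variables you have access to:\n" + pformat(dict(config))
    )
    code.interact(banner=banner, local=dict(config))
```

Using `code.interact()` keeps it free of extra dependencies; dropping into `IPython.embed()` when IPython happens to be installed would be a nice touch.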

coveralls commented on Jul 8, 2016


Coverage remained the same at 100.0% when pulling 9d64c32 on samsungaccelerator:feature-shell-command into 7ad551e on ottogroup:develop.

alattner (Contributor) commented
@dnouri: Is there any particular use case you have in mind? It might be helpful for debugging, but I'm not sure a simple pdb session wouldn't do the same. Maybe some options would be nice, e.g. --fit to fit the model after loading the config, or --devserver to load a model from the model persister. What do you think?

Can you use the shell history in a simple shell (not in Emacs or the like)? I get something like "^[[A" in my default terminal (see the sketch at the end of this comment).

Also, `docopt(list_cmd.__doc__, argv=argv)` should be `docopt(shell_cmd.__doc__, argv=argv)`.
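
A likely cause for the history problem, assuming the shell is based on `code.interact()`: arrow keys and history only work once the readline module has been imported somewhere. A sketch of a possible fix:

```python
# Possible fix (assumption: the shell is based on code.interact()): importing
# readline as a side effect enables arrow keys and history in a plain terminal.
try:
    import readline  # noqa: F401
except ImportError:
    pass  # readline isn't available on every platform (e.g. Windows)
```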

dnouri (Collaborator, Author) commented on Jul 12, 2016

@alattner: The use case I had in mind was running a prediction (so I'd still have to call the model_persister) and then plotting the predictions against the actual values.

Then again, firing up a Python shell and calling get_config() is easy enough. Maybe we should open another issue about adding documentation on how to "load" the model and environment in a shell to work with, e.g. in an IPython notebook, something like the sketch below. It's pretty easy, but not obvious how to do it.
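
For the record, loading things by hand in a plain Python or IPython session looks roughly like this (assuming the standard interfaces, i.e. `ModelPersister.read()` and a callable dataset loader, and that `PALLADIUM_CONFIG` points at the config file):

```python
from palladium.config import get_config

config = get_config()                     # reads the file from PALLADIUM_CONFIG
model = config['model_persister'].read()  # load the currently active model
X, y = config['dataset_loader_test']()    # dataset loaders return (data, target)

predictions = model.predict(X)            # e.g. to plot predictions vs. actual values
```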
