Add verbosity parameter #7
I'm not seeing any output in Python 3 when running the fit step.
Looks like there is very little in the way of print statements... only at the end of the run.
With maximum verbosity (=2), it should also report the best/worst/average scores every generation. Depending on your data size, each generation may take a while. We've been thinking about ways to provide an update on the best model every generation in #2. What information are you looking for while TPOT is running?
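A per-generation report gated by a verbosity level could look like the toy loop below. This is a hypothetical sketch, not TPOT's actual implementation: the `evolve` function, its parameters, and the random "scores" are all stand-ins for the real evolutionary search.

```python
import random

def evolve(generations=3, population_size=5, verbosity=0, seed=0):
    """Toy evolutionary loop illustrating verbosity-gated output.

    verbosity=0: silent; verbosity=1: final summary only;
    verbosity=2: best/worst/average scores every generation.
    (Hypothetical sketch -- scores here are just random numbers.)
    """
    rng = random.Random(seed)
    best_overall = float("-inf")
    for gen in range(generations):
        # Stand-in for evaluating a generation of pipelines.
        scores = [rng.random() for _ in range(population_size)]
        best, worst = max(scores), min(scores)
        avg = sum(scores) / len(scores)
        best_overall = max(best_overall, best)
        if verbosity >= 2:
            print(f"Gen {gen}: best={best:.3f} "
                  f"worst={worst:.3f} avg={avg:.3f}")
    if verbosity >= 1:
        print(f"Best score: {best_overall:.3f}")
    return best_overall
```

With `verbosity=0` the loop prints nothing at all, which is why a long first generation can look like a hang.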
I got greedy... the dataset was too large and I didn't wait long enough for the first generation (0) to get reported. Working as expected. Carry on... I'm used to the verbosity of http://h2o.ai, which is an unfair comparison.
It would probably help if we added an "Expectations" section explaining that TPOT will be rather slow on large datasets. Although in #59 we're exploring ideas for speeding up the initial seeding phase to get the user something useful right off the bat.
TPOT could also print something right off the bat to tell the user that things have been initialized. The fit statement sits there silently while waiting for generation 0 to finish.
True enough. We do this for the command-line version, so it seems reasonable to do it for the scripted version as well. Filed as issue #74. |
Allow the user to tune the level of output from TPOT. Default to no output.