Training score in *SearchCV.results_ #6895

Closed
jnothman opened this issue Jun 16, 2016 · 2 comments
Labels
Enhancement · Moderate (Anything that requires some knowledge of conventions and best practices) · Sprint

Comments

@jnothman
Member

With the new GridSearchCV/RandomizedSearchCV results_ attribute, we can easily return more information to users. In addition to test scores, we should have an option to return training scores.

This supersedes the old implementation in #1742, which also discussed an example showing how training time varies with the parameters. It would still be good to have such an example.
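For context, a minimal sketch of how returned training scores can be compared against test scores. It assumes the API that eventually shipped in scikit-learn (a return_train_score flag and a cv_results_ dict with mean_train_score / mean_test_score keys); the results_ attribute proposed here was renamed before release, so treat the exact attribute and key names as illustrative rather than part of this proposal.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# return_train_score asks the search to record training scores alongside
# the test scores for every parameter setting and CV split.
search = GridSearchCV(
    SVC(),
    param_grid={"C": [0.1, 1, 10], "gamma": [0.01, 0.1]},
    return_train_score=True,
    cv=5,
)
search.fit(X, y)

# Comparing mean train and test scores per candidate helps spot
# parameter settings that overfit (high train score, low test score).
results = search.cv_results_
for params, train, test in zip(results["params"],
                               results["mean_train_score"],
                               results["mean_test_score"]):
    print(params, "train=%.3f" % train, "test=%.3f" % test)
```

Since training scores require an extra scoring pass per fit, exposing them as an option rather than computing them unconditionally keeps the default cost unchanged.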

jnothman added the Enhancement and Moderate labels Jun 16, 2016
@eyc88

eyc88 commented Jul 16, 2016

I'll pick this one

@amueller
Member

Please try coordinating with @dhanus, as you'll be working on the same code.

eyc88 pushed a commit to eyc88/scikit-learn that referenced this issue Jul 16, 2016
    Now *SearchCV.results_ includes both timing and training scores.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 1, 2016
    Now *SearchCV.results_ includes both timing and training scores.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 8, 2016
    Now *SearchCV.results_ includes both timing and training scores.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 11, 2016
    Now *SearchCV.results_ includes both timing and training scores.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 11, 2016
    Now *SearchCV.results_ includes both timing and training scores.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 12, 2016
    Now *SearchCV.results_ includes both timing and training scores.

wrote new test (sklearn/model_selection/test_search.py)
and new doctest (sklearn/model_selection/_search.py)
added a few more lines in the docstring of GridSearchCV and RandomizedSearchCV.
Revised code according to suggestions.
Add a few more lines to test_grid_search_results():
    1. check test_rank_score always >= 1
    2. check all regular scores (test/train_mean/std_score) and timing >= 0
    3. check all regular scores <= 1
Note that timing can be greater than 1 in general, and std of regular scores
always <= 1 because the scores are bounded between 0 and 1.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 14, 2016
    Now *SearchCV.results_ includes both timing and training scores.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 16, 2016
    Now *SearchCV.results_ includes both timing and training scores.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 17, 2016
    Now *SearchCV.results_ includes both timing and training scores.
raghavrv pushed a commit to raghavrv/scikit-learn that referenced this issue Sep 21, 2016
    Now *SearchCV.results_ includes both timing and training scores.
jnothman pushed a commit that referenced this issue Sep 27, 2016
* Resolved issues #6894 and #6895:
    Now *SearchCV.results_ includes both timing and training scores.

wrote new test (sklearn/model_selection/test_search.py)
and new doctest (sklearn/model_selection/_search.py)
added a few more lines in the docstring of GridSearchCV and RandomizedSearchCV.
Revised code according to suggestions.
Add a few more lines to test_grid_search_results():
    1. check test_rank_score always >= 1
    2. check all regular scores (test/train_mean/std_score) and timing >= 0
    3. check all regular scores <= 1
Note that timing can be greater than 1 in general, and std of regular scores
always <= 1 because the scores are bounded between 0 and 1.

* ENH/FIX timing and training score.

* ENH separate fit / score times
* Make score_time=0 if errored; Ignore warnings in test
* Cleanup docstrings
* ENH Use helper to store the results
* Move fit time computation to else of try...except...else
* DOC readable sample scores
* COSMIT Add a comment on why the time test is >= 0 instead of > 0
  (Windows time.time precision is not accurate enough to be non-zero
   for trivial fits)

* Convey that times are in seconds
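For illustration only, a rough sketch of assertions along the lines described in that commit message. This is not the actual test added to sklearn/model_selection/test_search.py, and it uses the cv_results_ keys of the released API (rank_test_score, mean_fit_time, and so on), which may differ from the names used in the PR.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=100, random_state=0)
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [1, 2, 3]},
    return_train_score=True,
    cv=3,
).fit(X, y)
res = search.cv_results_

# 1. ranks always start at 1
assert np.all(res["rank_test_score"] >= 1)

# 2. scores (accuracy here, bounded in [0, 1]) and timings are non-negative;
#    timings are checked with >= 0 rather than > 0 because time.time() on
#    Windows may not resolve trivially fast fits
for key in ("mean_train_score", "std_train_score",
            "mean_test_score", "std_test_score",
            "mean_fit_time", "mean_score_time"):
    assert np.all(res[key] >= 0)

# 3. accuracy scores (and their std) cannot exceed 1; timings, in seconds, can
for key in ("mean_train_score", "std_train_score",
            "mean_test_score", "std_test_score"):
    assert np.all(res[key] <= 1)
```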
amueller added a commit to amueller/scikit-learn that referenced this issue Sep 27, 2016
TomDLT pushed a commit to TomDLT/scikit-learn that referenced this issue Oct 3, 2016
Sundrique pushed a commit to Sundrique/scikit-learn that referenced this issue Jun 14, 2017
paulha pushed a commit to paulha/scikit-learn that referenced this issue Aug 19, 2017