[MRG+1] Make cross-validators data independent + Reorganize grid_search, cross_validation and learning_curve into model_selection #4294

Merged
merged 8 commits into scikit-learn:master on Oct 23, 2015

Conversation

9 participants
@raghavrv
Member

raghavrv commented Feb 25, 2015

fixes #1848, #2904

Things to be done after this PR - Issue at #5053 ( PR at #5569 )


TODO

  • Make all cross-validators data-independent (Don't initialize cross-validators with data/data-dependent parameters)
  • Reorganize the classes / functions into the model_selection module.
  • Remove all deprecation from new style classes.
  • Use split in all the files that use cv? (or fix check_cv so both old-style and new-style classes can be used)
  • Refactor the tests.
  • Fix all the imports
  • Make all the general tests pass
  • Make all the model_selection tests pass
  • Conclude all discussions into workable solutions/fixes.
  • Clean up the old style classes to use as much from the new members as possible. (duplicate as little code as possible) - raghavrv#2 - We don't want this in until this PR gets to master! - (See #5568)
  • Fix the examples - raghavrv#3 merged into raghavrv#4!
  • Clean up the documentation - raghavrv#4
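The first TODO item is the heart of this PR: strategy parameters go to `__init__`, while the data only arrives at `split()` time. A minimal sketch of that design follows; `KFoldSketch` is a hypothetical illustration of the pattern, not scikit-learn's actual `KFold` implementation:

```python
# Sketch of a data-independent cross-validator: the constructor takes only
# strategy parameters; the data is supplied to split() at iteration time.
# KFoldSketch is a hypothetical stand-in, not scikit-learn's KFold.
class KFoldSketch:
    def __init__(self, n_splits=3):
        self.n_splits = n_splits  # data-independent parameter

    def split(self, X):
        n_samples = len(X)
        indices = list(range(n_samples))
        # Distribute samples as evenly as possible across folds.
        fold_sizes = [n_samples // self.n_splits] * self.n_splits
        for i in range(n_samples % self.n_splits):
            fold_sizes[i] += 1
        current = 0
        for size in fold_sizes:
            test = indices[current:current + size]
            train = indices[:current] + indices[current + size:]
            yield train, test
            current += size

cv = KFoldSketch(n_splits=3)       # no data needed here
splits = list(cv.split(range(6)))  # data arrives only at split time
```

The same `cv` object can then be reused on datasets of any size, which is exactly what the old `KFold(n, n_folds)` style made impossible.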

MINOR

  • Rename p to a better name. Moved to #5053
  • Rename _check_is_partition --> _check_is_permutation?
  • Remove _empty_mask
  • As Joel said here, use binomial coefficient instead of factorial.
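The last MINOR item is about counting LeavePOut splits: there are "n choose p" of them, and computing that from full factorials builds needlessly huge intermediates. A small illustration, using the standard library's `math.comb` as a stand-in for the `scipy.special.comb` the PR adopted:

```python
from math import comb, factorial

# Number of train/test splits LeavePOut generates for n samples with p
# left out is the binomial coefficient C(n, p). The factorial route
# computes three very large integers first; comb() avoids that.
n, p = 20, 3
via_factorial = factorial(n) // (factorial(p) * factorial(n - p))
via_comb = comb(n, p)
```

Both give the same answer, but `comb(n, p)` stays cheap even when `factorial(n)` would be astronomically large.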

Open Discussions


NOTE:
The current implementation will still not support nesting EstimatorCV inside GridSearchCV... This will become possible only after allowing sample_properties to pass from GridSearchCV to EstimatorCV...


PRs whose changes to g_s / c_v / l_c have been manually incorporated into this:
#4714 - Svm decision function shape - 1 commit
#4829 - merge _check_cv into check_cv ... - 1 commit
#4857 - Document missing attribute random_state in RandomizedSearchCV - 1 commit
#4840 - FIX avoid memory cost when sampling from large parameter grids - 1 commit
#5194 (Refer #5238) - Consistent CV docstring
#5161 check for sparse pred in cross_val_predict
#5201 clarify random state in KFold
#5190 LabelKFold
#4583 LabelShuffleSplit
#5283 Remove some warnings in grid search tests
#5300 shuffle labels not idxs and tests to ensure it.


This PR is slightly based upon @pignacio's work in #3340.


@amueller's hack:
if you want to align diffs, you can do this (in an IPython notebook):

import inspect
import difflib
from IPython.display import HTML

def show_func_diff(func_a, func_b):
    return HTML(difflib.HtmlDiff().make_file(
        inspect.getsourcelines(func_a)[0],
        inspect.getsourcelines(func_b)[0]))

from sklearn.cross_validation import cross_val_score as cross_val_score_old
from sklearn.model_selection import cross_val_score

show_func_diff(cross_val_score, cross_val_score_old)
@jnothman


Member

jnothman commented Feb 25, 2015

I'm -1 for moving cross_validation generators if they're getting a rewrite and we want to be able to use the same names.

@raghavrv


Member

raghavrv commented Feb 25, 2015

I thought it would be easier to do it in the 3rd PR once this PR gets merged? ( 2nd PR ( #4254 ) would completely fix #1848 and 3rd ( #3340 ) would completely fix #2904 )?

@jnothman


Member

jnothman commented Feb 25, 2015

It's fine to do so as long as you ensure that they're released at the same
time... which defeats a little of the purpose of splitting it up.


@raghavrv


Member

raghavrv commented Feb 25, 2015

It's fine to do so as long as you ensure that they're released at the same
time...

Sure! I'll make sure all these go into the 0.16 (edit: not scheduled for 0.16, per recent comments)

which defeats a little of the purpose of splitting it up.

I agree, but that would make reviewing a tad easier, with smaller diffs to look at, I feel :)

@raghavrv


Member

raghavrv commented Feb 25, 2015

Or wait... like @amueller said in an earlier comment, we could put this in a branch and send the next two PRs to that... would it be better?

@amueller


Member

amueller commented Feb 25, 2015

Really, as I said in other places, I thought the whole point of doing the move now was having a deprecation path for the cross-validation objects.

@GaelVaroquaux


Member

GaelVaroquaux commented Feb 25, 2015

Agreed

@raghavrv


Member

raghavrv commented Feb 25, 2015

@GaelVaroquaux @amueller @jnothman @pignacio

Thanks for the comments... I'm salvaging this as a fix for #2904 alone (picking up relevant parts from #3340) and will provide a clean deprecation path... Hope that's okay? (The renaming/moving and fixing #1848 could be done in a separate PR.)

@amueller


Member

amueller commented Feb 25, 2015

+1

@raghavrv raghavrv changed the title from Model selection to [WIP] Model selection Mar 16, 2015

@raghavrv


Member

raghavrv commented Mar 16, 2015

As said earlier, I am converting this PR into a full fix for #2904 alone... Though I am not even 20% done toward that goal, it would be helpful to know early on if there are any objections :)

Will this fix (without clubbing together the fix for #1848) be considered for merge?

Andreas has expressed his +1 towards the same... Any more suggestions / critiques?

@raghavrv raghavrv changed the title from [WIP] Model selection to [WIP] Make CV iterators data independent. Mar 16, 2015

@amueller


Member

amueller commented Mar 17, 2015

I think it is pretty safe to go ahead with that. Bundling multiple changes into one PR is rarely a good idea when they can be separated in a sensible way.

@raghavrv


Member

raghavrv commented Mar 17, 2015

Thanks for the comment :)

@property
def n_unique_labels(self):
    _deprecate_label_attribute(self, "n_unique_labels")
    return len(np.unique(self.labels))


@raghavrv

raghavrv Mar 18, 2015

Member

@amueller unique_labels and n_unique_labels were previously defined in the __init__ and hence were publicly available as an attribute... Since we deprecate initializing data in the __init__, I've deprecated them this way... Could you kindly let me know if this seems sane? ( Or should we just remove those attributes without any deprecation? )


@amueller

amueller Mar 18, 2015

Member

You should just use the @deprecated decorator on the properties and not define a new function.


@amueller

amueller Mar 18, 2015

Member

Otherwise this is the right way.


@amueller

amueller Mar 18, 2015

Member

This is not so much about initializing in __init__ (which wouldn't be a problem here, since these are not estimators) as about warning only if someone actually uses them. But you did the right thing.
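The decorator approach being suggested can be sketched in pure Python; `deprecated_property` and `LeaveOneLabelOutSketch` below are hypothetical stand-ins for sklearn's `@deprecated` utility and the old-style cross-validator under discussion, showing the key behavior: the warning fires only when the attribute is actually accessed.

```python
import warnings

def deprecated_property(func):
    # Minimal stand-in for applying sklearn's @deprecated to a property:
    # the DeprecationWarning is emitted only on attribute access.
    @property
    def wrapper(self):
        warnings.warn("Attribute %s was deprecated and will be removed."
                      % func.__name__, DeprecationWarning)
        return func(self)
    return wrapper

class LeaveOneLabelOutSketch:  # hypothetical old-style class for illustration
    def __init__(self, labels):
        self.labels = labels

    @deprecated_property
    def n_unique_labels(self):
        return len(set(self.labels))
```

Constructing the object stays silent; only code that still reads `n_unique_labels` sees the warning, which is exactly the deprecation behavior wanted here.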

@raghavrv


Member

raghavrv commented Mar 18, 2015

    """Generates the train/test indices based on the CV strategy used.

    Parameters
    ----------
    y : array-like, [n_samples]
        Samples to iterate on.
    """

Does this seem like a good docstring for all the split methods?

[Sorry for re-commenting again]

@raghavrv


Member

raghavrv commented Mar 18, 2015

@ogrisel Will this be included for 0.16? (if not is it safe to remove Bootstrap as it was deprecated in 0.15?)

@raghavrv


Member

raghavrv commented Mar 18, 2015

Sorry I am just piling one comment upon another :/

How would you feel if the test_size and train_size parameters were deprecated in favor of a new, data-independent test_train_ratio that can be safely initialized inside __init__ itself?

@@ -80,16 +92,36 @@ def indices(self):
        return self._indices

    def __iter__(self):
        return self.split(None)

    def split(self, y):


@amueller

amueller Mar 18, 2015

Member

Why is this called y? Maybe "array" or something would be better?


@raghavrv

raghavrv Mar 18, 2015

Member

How about data?


@amueller

amueller Mar 18, 2015

Member

Data is X. y isn't data, is it? But this would also be applied to y, right? Or either? Sorry, I haven't read enough of the rest of the PR.

Parameters
----------
y : array-like, [n_samples]
    Samples to iterate on.


@amueller

amueller Mar 18, 2015

Member

Maybe "samples to split into folds"?

@amueller


Member

amueller commented Mar 18, 2015

This will not be included in 0.16, but bootstrap will only be removed in 0.17.

@amueller


Member

amueller commented Mar 18, 2015

You are right, bootstrap can safely be removed, and will be in #4370 (I'll fix that in a second).

@raghavrv


Member

raghavrv commented Mar 18, 2015

@amueller Thanks a lot for the comments!! :) That cleared up a few things... :)

Just one more question: how would you feel if the test_size and train_size parameters were deprecated in favor of a new, data-independent test_train_ratio that can be safely initialized inside __init__ itself?

@amueller


Member

amueller commented Mar 18, 2015

I don't understand why you would want to deprecate test_size or train_size. They both support floats, right? We could deprecate ints, but I don't see why you would want to do that.

@raghavrv


Member

raghavrv commented Mar 18, 2015

We could deprecate ints

Yes that would suffice...

but I don't see why you would want to do that.

I thought of doing that since an int value for either is data dependent, which we don't want in __init__, right?

@amueller


Member

amueller commented Mar 18, 2015

Why would it necessarily be data dependent? Maybe someone always wants to have 100 data points in training, regardless of the data?
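The int/float distinction under discussion can be made concrete with a small sketch; `resolve_test_size` is a hypothetical helper mirroring the ShuffleSplit semantics, not scikit-learn's actual validation code:

```python
def resolve_test_size(test_size, n_samples):
    # A float is read as a proportion of the data; an int is an absolute
    # sample count that stays fixed regardless of dataset size, which is
    # Andy's point: an int need not be data dependent at all.
    if isinstance(test_size, float):
        if not 0.0 < test_size < 1.0:
            raise ValueError("float test_size must be in (0, 1)")
        return int(test_size * n_samples)
    return test_size

n_test_small = resolve_test_size(100, 1000)   # 100 samples
n_test_large = resolve_test_size(100, 50000)  # still 100 samples
n_test_ratio = resolve_test_size(0.25, 1000)  # 250 samples
```

The int form is only resolved against the data inside `split()`, so keeping it in `__init__` does not violate the data-independence goal.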

@raghavrv


Member

raghavrv commented Mar 18, 2015

Ah that makes sense! good that I asked :) thanks!

@jnothman


Member

jnothman commented Mar 19, 2015

This is an aside, but I have wondered whether cross-validation generators should output sample weights rather than lists of indices. This would allow memmapping to work efficiently when appropriate, although lots of zero-weighted content (particularly in test data) will incur runtime cost in many current implementations.


@raghavrv


Member

raghavrv commented Mar 20, 2015

Do you mean that we could set the sample weights and train the model, instead of slicing out the selected samples...? Wouldn't that mean every model is expected to accept sample weights?

We could perhaps have a separate transformer to convert the indices to sample weights... (Sorry if I did not understand your comment correctly)

@amueller


Member

amueller commented Mar 20, 2015

@ragv I think that is what @jnothman meant.
@jnothman maybe not yet, but an interesting idea for the future ;)
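The idea being floated here can be sketched as a simple translation layer; `indices_to_weight_masks` is a hypothetical helper illustrating the concept, not anything in scikit-learn:

```python
def indices_to_weight_masks(train_idx, test_idx, n_samples):
    # Translate a cross-validator's index lists into 0/1 sample-weight
    # vectors. A weight-aware estimator ignores zero-weighted rows, so the
    # data array never needs to be sliced or copied (handy for memmapped
    # arrays), at the cost of passing zero-weight rows through fit/predict.
    train_w = [0] * n_samples
    test_w = [0] * n_samples
    for i in train_idx:
        train_w[i] = 1
    for i in test_idx:
        test_w[i] = 1
    return train_w, test_w

train_w, test_w = indices_to_weight_masks([0, 2], [1, 3], 4)
```

As the discussion notes, this only pays off when every estimator in the pipeline accepts sample weights, which is why it remained a future idea rather than part of this PR.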

@fabianp


Member

fabianp commented Oct 23, 2015

I went through the code and tried it on my laptop. All works as expected. All I could come up with were those very minor comments. So +1 from me

@fabianp fabianp changed the title from [MRG] Reorganize grid_search, cross_validation and learning_curve into model_selection to [MRG+1] Reorganize grid_search, cross_validation and learning_curve into model_selection Oct 23, 2015

@raghavrv


Member

raghavrv commented Oct 23, 2015

@fabianp Thanks for the reviews and +1 Fabian :)

LeavePOut, ShuffleSplit, LabelShuffleSplit, StratifiedKFold,
StratifiedShuffleSplit, PredefinedSplit)
LABEL_CVS = (LabelKFold, LeaveOneLabelOut, LeavePLabelOut, LabelShuffleSplit,)


@arjoly

arjoly Oct 23, 2015

Member

Those constants shouldn't be defined here.


@raghavrv

raghavrv Oct 23, 2015

Member

Can we have a separate file, _constants.py inside model_selection? (This will also hold the best param dict that will be added by @zermelozf soon?)

Also @amueller


@raghavrv

raghavrv Oct 23, 2015

Member

Have moved this to _validation.py as advised by Arnaud and Andy IRL.

raghavrv added some commits Jun 4, 2015

MAINT Refactor cv, gs and lc into the model_selection module.
Main Commits - Major
--------------------

* ENH Reogranize classes/fn from grid_search into search.py
* ENH Reogranize classes/fn from cross_validation into split.py
* ENH Reogranize cls/fn from cross_validation/learning_curve into validate.py

* MAINT Merge _check_cv into check_cv inside the model_selection module
* MAINT Update all the imports to point to the model_selection module
* FIX use iter_cv to iterate throught the new style/old style cv objs
* TST Add tests for the new model_selection members
* ENH Wrap the old-style cv obj/iterables instead of using iter_cv

* ENH Use scipy's binomial coefficient function comb for calucation of nCk
* ENH Few enhancements to the split module
* ENH Improve check_cv input validation and docstring
* MAINT _get_test_folds(X, y, labels) --> _get_test_folds(labels)
* TST if 1d arrays for X introduce any errors
* ENH use 1d X arrays for all tests;
* ENH X_10 --> X (global var)

Minor
-----

* ENH _PartitionIterator --> _BaseCrossValidator;
* ENH CVIterator --> CVIterableWrapper
* TST Import the old SKF locally
* FIX/TST Clean up the split module's tests.
* DOC Improve documentation of the cv parameter
* COSMIT consistently hyphenate cross-validation/cross-validator
* TST Calculate n_samples from X
* COSMIT Use separate lines for each import.
* COSMIT cross_validation_generator --> cross_validator

Commits merged manually
-----------------------

* FIX Document the random_state attribute in RandomSearchCV
* MAINT Use check_cv instead of _check_cv
* ENH refactor OVO decision function, use it in SVC for sklearn-like
  decision_function shape
* FIX avoid memory cost when sampling from large parameter grids
ENH Major to Minor incremental enhancements to the model_selection
Squashed commit messages - (For reference)

Major
-----

* ENH p --> n_labels
* FIX *ShuffleSplit: all float/invalid type errors at init and int error at split
* FIX make PredefinedSplit accept test_folds in constructor; Cleanup docstrings
* ENH+TST KFold: make rng to be generated at every split call for reproducibility
* FIX/MAINT KFold: make shuffle a public attr
* FIX Make CVIterableWrapper private.
* FIX reuse len_cv instead of recalculating it
* FIX Prevent adding *SearchCV estimators from the old grid_search module
* re-FIX In all_estimators: the sorting to use only the 1st item (name)
    To avoid collision between the old and the new GridSearch classes.
* FIX test_validate.py: Use 2D X (1D X is being detected as a single sample)
* MAINT validate.py --> validation.py
* MAINT make the submodules private
* MAINT Support old cv/gs/lc until 0.19
* FIX/MAINT n_splits --> get_n_splits
* FIX/TST test_logistic.py/test_ovr_multinomial_iris:
    pass predefined folds as an iterable
* MAINT expose BaseCrossValidator
* Update the model_selection module with changes from master
  - From #5161
  -  - MAINT remove redundant p variable
  -  - Add check for sparse prediction in cross_val_predict
  - From #5201 - DOC improve random_state param doc
  - From #5190 - LabelKFold and test
  - From #4583 - LabelShuffleSplit and tests
  - From #5300 - shuffle the `labels` not the `indxs` in LabelKFold + tests
  - From #5378 - Make the GridSearchCV docs more accurate.
  - From #5458 - Remove shuffle from LabelKFold
  - From #5466(#4270) - Gaussian Process by Jan Metzen
  - From #4826 - Move custom error / warnings into sklearn.exception

Minor
-----

* ENH Make the KFold shuffling test stronger
* FIX/DOC Use the higher level model_selection module as ref
* DOC in check_cv "y : array-like, optional"
* DOC a supervised learning problem --> supervised learning problems
* DOC cross-validators --> cross-validation strategies
* DOC Correct Olivier Grisel's name ;)
* MINOR/FIX cv_indices --> kfold
* FIX/DOC Align the 'See also' section of the new KFold, LeaveOneOut
* TST/FIX imports on separate lines
* FIX use __class__ instead of classmethod
* TST/FIX import directly from model_selection
* COSMIT Relocate the random_state documentation
* COSMIT remove pass
* MAINT Remove deprecation warnings from old tests
* FIX correct import at test_split
* FIX/MAINT Move P_sparse, X, y defns to top; rm unused W_sparse, X_sparse
* FIX random state to avoid doctest failure
* TST n_splits and split wrapping of _CVIterableWrapper
* FIX/MAINT Use multilabel indicator matrix directly
* TST/DOC clarify why we conflate classes 0 and 1
* DOC add comment that this was taken from BaseEstimator
* FIX use of labels is not needed in stratified k fold
* Fix cross_validation reference
* Fix the labels param doc
FIX/DOC/MAINT Addressing the review comments by Arnaud and Andy
COSMIT Sort the members alphabetically
COSMIT len_cv --> n_splits
COSMIT Merge 2 if; FIX Use kwargs
DOC Add my name to the authors :D
DOC make labels parameter consistent
FIX Remove hack for boolean indices; + COSMIT idx --> indices; DOC Add Returns
COSMIT preds --> predictions
DOC Add Returns and neatly arrange X, y, labels
FIX idx(s)/ind(s)--> indice(s)
COSMIT Merge if and else to elif
COSMIT n --> n_samples
COSMIT Use bincount only once
COSMIT cls --> class_i / class_i (ith class indices) -->
perm_indices_class_i
FIX/ENH/TST Addressing the final reviews
COSMIT c --> count
FIX/TST make check_cv raise ValueError for string cv value
TST nested cv (gs inside cross_val_score) works for diff cvs
FIX/ENH Raise ValueError when labels is None for label based cvs;
TST if labels is being passed correctly to the cv and that the
ValueError is being propagated to the cross_val_score/predict and grid
search
FIX pass labels to cross_val_score
FIX use make_classification
DOC Add Returns; COSMIT Remove scaffolding
TST add a test to check the _build_repr helper
REVERT the old GS/RS should also be tested by the common tests.
ENH Add a tuple of all/label based CVS
FIX raise VE even at get_n_splits if labels is None
FIX Fabian's comments
PEP8

amueller added a commit that referenced this pull request Oct 23, 2015

Merge pull request #4294 from rvraghav93/model_selection
[MRG+1] Reorganize grid_search, cross_validation and learning_curve into model_selection

@amueller amueller merged commit 646e47c into scikit-learn:master Oct 23, 2015

1 of 3 checks passed: ci/circleci failed; continuous-integration/appveyor was waiting for the AppVeyor build to complete; continuous-integration/travis-ci/pr passed.
@amueller


Member

amueller commented Oct 23, 2015

Blame me if anything breaks.

@amueller


Member

amueller commented Oct 23, 2015

🍻

1 similar comment
@arjoly


Member

arjoly commented Oct 23, 2015

🍻

@raghavrv


Member

raghavrv commented Oct 23, 2015

Thanks a lot Andy, Vlad, Joel and Arnaud for the reviews and merge 🍻 :)

@Sandy4321


Sandy4321 commented Nov 1, 2015

so how to use now starting from 0?


@raghavrv raghavrv referenced this pull request Apr 14, 2016

Merged

[MRG+1] ENH/MNT Rename labels --> groups #6660

9 of 9 tasks complete

@raghavrv raghavrv changed the title from [MRG+1] Reorganize grid_search, cross_validation and learning_curve into model_selection to [MRG+1] Make cross-validators data independent + Reorganize grid_search, cross_validation and learning_curve into model_selection May 27, 2016
