
Commit

More doc fixes. Latex builds again.
amueller committed Nov 20, 2015
1 parent a05bc8d commit fb123ed
Showing 22 changed files with 368 additions and 151 deletions.
14 changes: 14 additions & 0 deletions doc/datasets/index.rst
@@ -255,6 +255,20 @@ features::
_`Faster API-compatible implementation`: https://github.com/mblondel/svmlight-loader


.. make sure everything is in a toc tree
.. toctree::
:maxdepth: 2
:hidden:

olivetti_faces
twenty_newsgroups
mldata
labeled_faces
covtype
rcv1


.. include:: olivetti_faces.rst

.. include:: twenty_newsgroups.rst
2 changes: 1 addition & 1 deletion doc/modules/clustering.rst
@@ -4,7 +4,7 @@
Clustering
==========

-`Clustering <https://en.wikipedia.org/wiki/Cluster_analysis>`_ of
+`Clustering <https://en.wikipedia.org/wiki/Cluster_analysis>`__ of
unlabeled data can be performed with the module :mod:`sklearn.cluster`.

Each clustering algorithm comes in two variants: a class, that implements
2 changes: 1 addition & 1 deletion doc/whats_new.rst
@@ -235,7 +235,7 @@ Enhancements

- The "Wisconsin Breast Cancer" classical two-class classification dataset
is now included in scikit-learn, available with
-   :fun:`sklearn.dataset.load_breast_cancer`.
+   :func:`sklearn.dataset.load_breast_cancer`.

- Upgraded to joblib 0.9.3 to benefit from the new automatic batching of
short tasks. This makes it possible for scikit-learn to benefit from
2 changes: 1 addition & 1 deletion examples/applications/face_recognition.py
@@ -10,7 +10,7 @@
.. _LFW: http://vis-www.cs.umass.edu/lfw/
-Expected results for the top 5 most represented people in the dataset::
+Expected results for the top 5 most represented people in the dataset:
================== ============ ======= ========== =======
precision recall f1-score support
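The table fragment above is the header of a precision/recall table; a minimal sketch of how such a table is produced with `sklearn.metrics.classification_report` (the labels below are made up for illustration, not the LFW results):

```python
# Minimal sketch: the precision/recall/f1-score/support table shown above
# is the output format of sklearn.metrics.classification_report.
# The labels here are illustrative, not the actual LFW predictions.
from sklearn.metrics import classification_report

y_true = [0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1]
report = classification_report(y_true, y_pred)
print(report)  # columns: precision  recall  f1-score  support
```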
8 changes: 4 additions & 4 deletions examples/ensemble/plot_partial_dependence.py
@@ -3,7 +3,7 @@
Partial Dependence Plots
========================
-Partial dependence plots show the dependence between the target function [1]_
+Partial dependence plots show the dependence between the target function [2]_
and a set of 'target' features, marginalizing over the
values of all other features (the complement features). Due to the limits
of human perception the size of the target feature set must be small (usually,
@@ -13,7 +13,7 @@
This example shows how to obtain partial dependence plots from a
:class:`~sklearn.ensemble.GradientBoostingRegressor` trained on the California
-housing dataset. The example is taken from [HTF2009]_.
+housing dataset. The example is taken from [1]_.
The plot shows four one-way and one two-way partial dependence plots.
The target variables for the one-way PDP are:
@@ -38,10 +38,10 @@
of the house age, whereas for values less than two there is a strong dependence
on age.
-.. [HTF2009] T. Hastie, R. Tibshirani and J. Friedman,
+.. [1] T. Hastie, R. Tibshirani and J. Friedman,
"Elements of Statistical Learning Ed. 2", Springer, 2009.
-.. [1] For classification you can think of it as the regression score before
+.. [2] For classification you can think of it as the regression score before
the link function.
"""
print(__doc__)
15 changes: 8 additions & 7 deletions sklearn/calibration.py
@@ -46,20 +46,21 @@ class CalibratedClassifierCV(BaseEstimator, ClassifierMixin):
to offer more accurate predict_proba outputs. If cv=prefit, the
classifier must have been fit already on data.
-    method : 'sigmoid' | 'isotonic'
+    method : 'sigmoid' or 'isotonic'
The method to use for calibration. Can be 'sigmoid' which
corresponds to Platt's method or 'isotonic' which is a
non-parameteric approach. It is not advised to use isotonic calibration
-        with too few calibration samples (<<1000) since it tends to overfit.
+        with too few calibration samples ``(<<1000)`` since it tends to overfit.
Use sigmoids (Platt's calibration) in this case.
-    cv : integer/cross-validation generator/iterable or "prefit", optional
+    cv : integer, cross-validation generator, iterable or "prefit", optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
-- None, to use the default 3-fold cross-validation,
-- integer, to specify the number of folds.
-- An object to be used as a cross-validation generator.
-- An iterable yielding train/test splits.
+  - None, to use the default 3-fold cross-validation,
+  - integer, to specify the number of folds.
+  - An object to be used as a cross-validation generator.
+  - An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If ``y`` is neither binary nor
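A minimal sketch of the documented options, assuming the current `sklearn.calibration` import path: Platt scaling (`method='sigmoid'`) with an integer `cv`, on a naive Bayes base classifier chosen here only for illustration.

```python
# Sketch: 3-fold cross-validated Platt scaling ('sigmoid') on top of a
# naive Bayes base classifier; 'isotonic' would be passed the same way.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, random_state=0)

clf = CalibratedClassifierCV(GaussianNB(), method="sigmoid", cv=3)
clf.fit(X, y)
proba = clf.predict_proba(X)  # calibrated class probabilities; rows sum to 1
```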
9 changes: 5 additions & 4 deletions sklearn/covariance/graph_lasso_.py
@@ -468,10 +468,11 @@ class GraphLassoCV(GraphLasso):
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
-- None, to use the default 3-fold cross-validation,
-- integer, to specify the number of folds.
-- An object to be used as a cross-validation generator.
-- An iterable yielding train/test splits.
+  - None, to use the default 3-fold cross-validation,
+  - integer, to specify the number of folds.
+  - An object to be used as a cross-validation generator.
+  - An iterable yielding train/test splits.
For integer/None inputs :class:`KFold` is used.
43 changes: 24 additions & 19 deletions sklearn/cross_validation.py
@@ -1089,7 +1089,7 @@ def __len__(self):


class LabelShuffleSplit(ShuffleSplit):
-    '''Shuffle-Labels-Out cross-validation iterator
+    """Shuffle-Labels-Out cross-validation iterator
Provides randomized train/test indices to split data according to a
third-party provided label. This label information can be used to encode
@@ -1118,7 +1118,7 @@ class LabelShuffleSplit(ShuffleSplit):
Labels of samples
n_iter : int (default 5)
-        Number of re-shuffling & splitting iterations.
+        Number of re-shuffling and splitting iterations.
test_size : float (default 0.2), int, or None
If float, should be between 0.0 and 1.0 and represent the
@@ -1134,7 +1134,8 @@
random_state : int or RandomState
Pseudo-random number generator state used for random sampling.
-    '''
+    """
def __init__(self, labels, n_iter=5, test_size=0.2, train_size=None,
random_state=None):

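A sketch of the behaviour documented above. `LabelShuffleSplit` was later renamed `GroupShuffleSplit` (with `labels` becoming `groups`) and moved to `sklearn.model_selection`, so the sketch uses that path; the splitting semantics are the same.

```python
# Sketch using the renamed successor GroupShuffleSplit: whole label groups
# are assigned to either the train or the test side, never split across both.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

X = np.arange(16).reshape(8, 2)
y = np.array([0, 0, 0, 1, 1, 1, 1, 0])
groups = np.array([1, 1, 2, 2, 3, 3, 4, 4])  # third-party provided labels

splitter = GroupShuffleSplit(n_splits=5, test_size=0.25, random_state=0)
splits = list(splitter.split(X, y, groups))
for train_idx, test_idx in splits:
    # No group appears on both sides of any split.
    assert not set(groups[train_idx]) & set(groups[test_idx])
```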
@@ -1208,10 +1209,11 @@ def cross_val_predict(estimator, X, y=None, cv=None, n_jobs=1,
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
-- None, to use the default 3-fold cross-validation,
-- integer, to specify the number of folds.
-- An object to be used as a cross-validation generator.
-- An iterable yielding train/test splits.
+  - None, to use the default 3-fold cross-validation,
+  - integer, to specify the number of folds.
+  - An object to be used as a cross-validation generator.
+  - An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is a classifier
@@ -1382,10 +1384,11 @@ def cross_val_score(estimator, X, y=None, scoring=None, cv=None, n_jobs=1,
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
-- None, to use the default 3-fold cross-validation,
-- integer, to specify the number of folds.
-- An object to be used as a cross-validation generator.
-- An iterable yielding train/test splits.
+  - None, to use the default 3-fold cross-validation,
+  - integer, to specify the number of folds.
+  - An object to be used as a cross-validation generator.
+  - An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is a classifier
@@ -1643,10 +1646,11 @@ def check_cv(cv, X=None, y=None, classifier=False):
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
-- None, to use the default 3-fold cross-validation,
-- integer, to specify the number of folds.
-- An object to be used as a cross-validation generator.
-- An iterable yielding train/test splits.
+  - None, to use the default 3-fold cross-validation,
+  - integer, to specify the number of folds.
+  - An object to be used as a cross-validation generator.
+  - An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is a classifier
@@ -1716,10 +1720,11 @@ def permutation_test_score(estimator, X, y, cv=None,
cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
Possible inputs for cv are:
-- None, to use the default 3-fold cross-validation,
-- integer, to specify the number of folds.
-- An object to be used as a cross-validation generator.
-- An iterable yielding train/test splits.
+  - None, to use the default 3-fold cross-validation,
+  - integer, to specify the number of folds.
+  - An object to be used as a cross-validation generator.
+  - An iterable yielding train/test splits.
For integer/None inputs, if ``y`` is binary or multiclass,
:class:`StratifiedKFold` used. If the estimator is a classifier
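The cv bullet list repeated across the hunks above accepts several input types; a minimal sketch exercising three of them with `cross_val_score`. It assumes the later `sklearn.model_selection` import path (the `sklearn.cross_validation` module shown in this diff was subsequently renamed).

```python
# Sketch: cv=None (default k-fold), cv=<int> (number of folds), and
# cv=<generator object> are all accepted, per the docstring bullet list.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=60, random_state=0)
est = LogisticRegression(max_iter=1000)

scores_default = cross_val_score(est, X, y)                    # cv=None
scores_int = cross_val_score(est, X, y, cv=4)                  # integer
scores_gen = cross_val_score(est, X, y, cv=KFold(n_splits=4))  # generator
```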
