scikit-learn
Version 0.21 (in development)
The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.

- discriminant_analysis.LinearDiscriminantAnalysis for multiclass classification.
- discriminant_analysis.LinearDiscriminantAnalysis with 'eigen' solver.
- linear_model.BayesianRidge.
- Decision trees and derived ensembles when both max_depth and max_leaf_nodes are set.
- linear_model.LogisticRegression and linear_model.LogisticRegressionCV with 'saga' solver.
- ensemble.GradientBoostingClassifier for multiclass classification.
- svm.SVC.decision_function and multiclass.OneVsOneClassifier.decision_function.
Details are listed in the changelog below.
(While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)
Support for Python 3.4 and below has been officially dropped.
- The R2 score used when calling score on a regressor will use multioutput='uniform_average' from version 0.23 to keep consistent with metrics.r2_score. This will influence the score method of all the multioutput regressors (except for multioutput.MultiOutputRegressor). #13157 by Hanmin Qin <qinhanmin2014>.
- Added support to bin the data passed into calibration.calibration_curve by quantiles instead of uniformly between 0 and 1. #13086 by Scott Cole <srcole>.
- A new clustering algorithm: cluster.OPTICS, an algorithm related to cluster.DBSCAN that has hyperparameters which are easier to set and that scales better. By Shane <espg>, Adrin Jalali <adrinjalali>, and Erich Schubert <kno10>.
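A small sketch of the OPTICS interface on toy data (the points below are made up for illustration):

```python
import numpy as np
from sklearn.cluster import OPTICS

# Two small blobs plus one far-away point (toy data).
X = np.array([[1.0, 2.0], [2.0, 2.0], [2.0, 3.0],
              [8.0, 7.0], [8.0, 8.0], [7.0, 8.0],
              [80.0, 80.0]])

# Unlike DBSCAN, no eps needs to be chosen up front; min_samples is the
# main knob. Points not assigned to any cluster get the label -1.
clust = OPTICS(min_samples=3).fit(X)
labels = clust.labels_
```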
- Added support for 64-bit group IDs and pointers in SVMLight files (datasets.svmlight_format). #10727 by Bryan K Woods <bryan-woods>.
- datasets.load_sample_images returns images with a deterministic order. #13250 by Thomas Fan <thomasjpfan>.
- decomposition.KernelPCA now has deterministic output (resolved sign ambiguity in eigenvalue decomposition of the kernel matrix). #13241 by Aurélien Bellet <bellet>.
- Fixed a bug in decomposition.NMF where init = 'nndsvd', init = 'nndsvda', and init = 'nndsvdar' were allowed when n_components < n_features instead of n_components <= min(n_samples, n_features). #11650 by Hossein Pourbozorg <hossein-pourbozorg> and Zijie (ZJ) Poh <zjpoh>.
- The default value of the init argument in decomposition.non_negative_factorization will change from 'random' to None in version 0.23 to make it consistent with decomposition.NMF. A FutureWarning is raised when the default value is used. #12988 by Zijie (ZJ) Poh <zjpoh>.
- discriminant_analysis.LinearDiscriminantAnalysis now preserves float32 and float64 dtypes. #8769 and #11000 by Thibault Sejourne <thibsej>.
- A ChangedBehaviourWarning is now raised when discriminant_analysis.LinearDiscriminantAnalysis is given n_components > min(n_features, n_classes - 1), and n_components is changed to min(n_features, n_classes - 1) if so. Previously the change was made, but silently. #11526 by William de Vazelhes <wdevazelhes>.
- Fixed a bug in discriminant_analysis.LinearDiscriminantAnalysis where the predicted probabilities would be incorrectly computed in the multiclass case. #6848 by Agamemnon Krasoulis <agamemnonc> and Guillaume Lemaitre <glemaitre>.
- Fixed a bug in discriminant_analysis.LinearDiscriminantAnalysis where the predicted probabilities would be incorrectly computed with the 'eigen' solver. #11727 by Agamemnon Krasoulis <agamemnonc>.
- Fixed a bug in dummy.DummyClassifier where the predict_proba method was returning an int32 array instead of float64 for the 'stratified' strategy. #13266 by Christos Aridas <chkoar>.
- Make ensemble.IsolationForest prefer threads over processes when running with n_jobs > 1, as the underlying decision tree fit calls do release the GIL. This change reduces memory usage and communication overhead. #12543 by Isaac Storch <istorch> and Olivier Grisel.
- Minimized the validation of X in ensemble.AdaBoostClassifier and ensemble.AdaBoostRegressor. #13174 by Christos Aridas <chkoar>.
- Fixed a bug in ensemble.GradientBoostingClassifier and ensemble.GradientBoostingRegressor, which didn't support scikit-learn estimators as the initial estimator. Also added support for an initial estimator that does not support sample weights. #12436 by Jérémie du Boisberranger <jeremiedbb> and #12983 by Nicolas Hug <NicolasHug>.
- Fixed the output of the average path length computed in ensemble.IsolationForest when the input is either 0, 1 or 2. #13251 by Albert Thomas <albertcthomas> and joshuakennethjones <joshuakennethjones>.
- Make ensemble.IsolationForest more memory efficient by avoiding keeping each tree prediction in memory. #13260 by Nicolas Goix.
- Fixed a bug in ensemble.GradientBoostingClassifier where the gradients would be incorrectly computed in multiclass classification problems. #12715 by Nicolas Hug <NicolasHug>.
- Fixed a bug in ensemble.GradientBoostingClassifier where the default initial prediction of a multiclass classifier would predict the class priors instead of the log of the priors. #12983 by Nicolas Hug <NicolasHug>.
- Fixed a bug in ensemble where the predict method would error for multiclass multioutput forest models if any targets were strings. #12834 by Elizabeth Sander <elsander>.
- Fixed a bug in ensemble.gradient_boosting.LossFunction and ensemble.gradient_boosting.LeastSquaresError where the default value of learning_rate in update_terminal_regions was not consistent with the documentation and the caller functions. #6463 by movelikeriver <movelikeriver>.
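The "scikit-learn estimators as the initial estimator" fix above can be sketched as follows; the toy problem from make_classification is purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Toy binary classification problem.
X, y = make_classification(n_samples=200, random_state=0)

# A scikit-learn estimator can now seed the boosting stages via init=;
# for classification the init estimator must provide fit and predict_proba.
clf = GradientBoostingClassifier(
    init=LogisticRegression(), n_estimators=20, random_state=0
)
clf.fit(X, y)
pred = clf.predict(X)
```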
- Deprecated externals.six since we have dropped support for Python 2.7. #12916 by Hanmin Qin <qinhanmin2014>.
- Added impute.IterativeImputer, which is a strategy for imputing missing values by modeling each feature with missing values as a function of other features in a round-robin fashion. #8478 and #12177 by Sergey Feldman <sergeyf> and Ben Lawson <benlawson>.
- In impute.MissingIndicator, avoid implicit densification by raising an exception if the input is sparse and the missing_values property is set to 0. #13240 by Bartosz Telenczuk <btel>.
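A minimal IterativeImputer sketch on a made-up matrix with missing entries. Note that in later scikit-learn releases the class sits behind an experimental-feature import, included below:

```python
import numpy as np
# In later releases IterativeImputer must be enabled explicitly;
# this import is the documented way to do so.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy matrix with missing entries (illustrative data).
X = np.array([[1.0, 2.0],
              [3.0, 6.0],
              [4.0, 8.0],
              [np.nan, 3.0],
              [7.0, np.nan]])

# Each feature with missing values is modelled as a function of the
# other features, round-robin, for up to max_iter passes.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_filled = imputer.fit_transform(X)
```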
- Allow different dtypes (such as float32) in isotonic.IsotonicRegression. #8769 by Vlad Niculae <vene>.
- Fixed a performance issue of the 'saga' and 'sag' solvers when called in a joblib.Parallel setting with n_jobs > 1 and backend="threading", causing them to perform worse than in the sequential case. #13389 by Pierre Glaser <pierreglaser>.
- linear_model.LogisticRegression and linear_model.LogisticRegressionCV now support Elastic-Net penalty, with the 'saga' solver. #11646 by Nicolas Hug <NicolasHug>.
- Added linear_model.lars_path_gram, which is linear_model.lars_path in the sufficient stats mode, allowing users to compute linear_model.lars_path without providing X and y. #11699 by Kuai Yu <yukuairoy>.
- linear_model.make_dataset now preserves float32 and float64 dtypes. #8769 and #11000 by Nelle Varoquaux <NelleV>, Arthur Imbert <Henley13>, Guillaume Lemaitre <glemaitre>, and Joan Massich <massich>.
- linear_model.LogisticRegression now supports an unregularized objective by setting penalty to 'none'. This is equivalent to setting C=np.inf with l2 regularization. Not supported by the liblinear solver. #12860 by Nicolas Hug <NicolasHug>.
- The sparse_cg solver in linear_model.ridge.Ridge now supports fitting the intercept (i.e. fit_intercept=True) when inputs are sparse. #13336 by Bartosz Telenczuk <btel>.
- Fixed a bug in linear_model.LogisticRegression and linear_model.LogisticRegressionCV with the 'saga' solver, where the weights would not be correctly updated in some cases. #11646 by Tom Dupre la Tour.
- Fixed the posterior mean, posterior covariance and returned regularization parameters in linear_model.BayesianRidge. The posterior mean and the posterior covariance were not the ones computed with the last update of the regularization parameters, and the returned regularization parameters were not the final ones. Also fixed the formula of the log marginal likelihood used to compute the score when compute_score=True. #12174 by Albert Thomas <albertcthomas>.
- Fixed a bug in linear_model.LassoLarsIC, where user input copy_X=False at instance creation would be overridden by the default parameter value copy_X=True in fit. #12972 by Lucio Fernandez-Arjona <luk-f-a>.
- Fixed a bug in linear_model.LinearRegression that was not returning the same coefficients and intercepts with fit_intercept=True in the sparse and dense cases. #13279 by Alexandre Gramfort.
- Fixed a bug in linear_model.HuberRegressor that was broken when X was of dtype bool. #13328 by Alexandre Gramfort.
- The use of linear_model.lars_path with X=None while passing Gram is deprecated in version 0.21 and will be removed in version 0.23. Use linear_model.lars_path_gram instead. #11699 by Kuai Yu <yukuairoy>.
- linear_model.logistic_regression_path is deprecated in version 0.21 and will be removed in version 0.23. #12821 by Nicolas Hug <NicolasHug>.
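A sketch of the new Elastic-Net penalty for logistic regression, on a toy problem (the dataset below is illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy binary classification problem with 20 features.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

# Elastic-Net requires the 'saga' solver; l1_ratio blends the penalties
# (l1_ratio=0 is pure L2, l1_ratio=1 is pure L1).
clf = LogisticRegression(
    penalty="elasticnet", solver="saga", l1_ratio=0.5, max_iter=5000
)
clf.fit(X, y)
```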
- Make manifold.tsne.trustworthiness use an inverted index instead of an np.where lookup to find the rank of neighbors in the input space. This improves efficiency in particular when computed with lots of neighbors and/or small datasets. #9907 by William de Vazelhes <wdevazelhes>.
- Added the metrics.max_error metric and a corresponding 'max_error' scorer for single output regression. #12232 by Krishna Sangeeth <whiletruelearn>.
- Added metrics.multilabel_confusion_matrix, which calculates a confusion matrix with true positive, false positive, false negative and true negative counts for each class. This facilitates the calculation of set-wise metrics such as recall, specificity, fall out and miss rate. #11179 by Shangwu Yao <ShangwuYao> and Joel Nothman.
- metrics.jaccard_score has been added to calculate the Jaccard coefficient as an evaluation metric for binary, multilabel and multiclass tasks, with an interface analogous to metrics.f1_score. #13151 by Gaurav Dhingra <gxyd> and Joel Nothman.
- Faster metrics.pairwise.pairwise_distances with n_jobs > 1 by using a thread-based backend, instead of process-based backends. #8216 by Pierre Glaser <pierreglaser> and Romuald Menuet <zanospi>.
- The pairwise manhattan distances with sparse input now use the BLAS shipped with scipy instead of the bundled BLAS. #12732 by Jérémie du Boisberranger <jeremiedbb>.
- Use label accuracy instead of micro-average on metrics.classification_report to avoid confusion. The micro-average is only shown for multi-label or multi-class with a subset of classes, because it is otherwise identical to accuracy. #12334 by Emmanuel Arias <eamanu@eamanu.com>, Joel Nothman and Andreas Müller.
- The metric metrics.r2_score is degenerate with a single sample; it now returns NaN and raises exceptions.UndefinedMetricWarning. #12855 by Pawel Sendyk <psendyk>.
- The parameter labels in metrics.hamming_loss is deprecated in version 0.21 and will be removed in version 0.23. #10580 by Reshama Shaikh <reshamas> and Sandra Mitrovic <SandraMNE>.
- metrics.jaccard_similarity_score is deprecated in favour of the more consistent metrics.jaccard_score. The former's behavior for binary and multiclass targets is broken. #13151 by Joel Nothman.
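A small sketch of the two new metrics above on made-up multilabel data (two samples, three labels):

```python
import numpy as np
from sklearn.metrics import jaccard_score, multilabel_confusion_matrix

# Toy multilabel targets and predictions.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 0, 0],
                   [0, 1, 1]])

# One 2x2 confusion matrix per label, laid out as [[tn, fp], [fn, tp]].
mcm = multilabel_confusion_matrix(y_true, y_pred)

# Jaccard = |intersection| / |union| of the label sets; here averaged
# per sample: each sample shares 1 of 2 labels, so the score is 0.5.
score = jaccard_score(y_true, y_pred, average="samples")
```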
- Fixed a bug in mixture.BaseMixture and therefore in estimators based on it, i.e. mixture.GaussianMixture and mixture.BayesianGaussianMixture, where fit_predict and fit(X).predict(X) were not equivalent. #13142 by Jérémie du Boisberranger <jeremiedbb>.
- model_selection.GridSearchCV and model_selection.RandomizedSearchCV now allow refit=callable to add flexibility in identifying the best estimator. An example for this interface has been added. #11354 by Wenhao Zhang <wenhaoz@ucla.edu>, Joel Nothman and Adrin Jalali <adrinjalali>.
- model_selection.GridSearchCV, model_selection.RandomizedSearchCV, and the functions model_selection.cross_val_score, model_selection.cross_val_predict and model_selection.cross_validate now print train scores when return_train_scores is True and verbose > 2. For model_selection.learning_curve and model_selection.validation_curve only the latter is required. #12613 and #12669 by Marc Torrellas <marctorrellas>.
- Fixed a bug where model_selection.StratifiedKFold shuffles each class's samples with the same random_state, making shuffle=True ineffective. #13124 by Hanmin Qin <qinhanmin2014>.
- Fixed an issue in model_selection.cross_val_predict where method="predict_proba" always returned 0.0 when one of the classes was excluded in a cross-validation fold. #13366 by Guillaume Fournier <gfournier>.
- Fixed an issue in multiclass.OneVsOneClassifier.decision_function where the decision_function value of a given sample was different depending on whether the decision_function was evaluated on the sample alone or on a batch containing this same sample, due to the scaling used in decision_function. #10440 by Jonathan Ohayon <Johayon>.
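A sketch of the refit=callable interface. The callable receives cv_results_ and must return the index of the candidate to refit; the selection rule below (smallest C, ignoring scores) is a deliberately simple hypothetical, just to show the mechanics:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def refit_smallest_c(cv_results):
    """Hypothetical rule: refit the candidate with the smallest C."""
    cs = np.asarray(cv_results["param_C"], dtype=float)
    return int(np.argmin(cs))

X, y = load_iris(return_X_y=True)
grid = GridSearchCV(SVC(), {"C": [0.1, 1.0, 10.0]},
                    refit=refit_smallest_c, cv=3)
grid.fit(X, y)
# grid.best_index_ is whatever the callable returned.
```

In practice such a callable is useful for rules like "simplest model within one standard error of the best score".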
.
- A metric learning algorithm:
neighbors.NeighborhoodComponentsAnalysis
, which implements the Neighborhood Components Analysis algorithm described in Goldberger et al. (2005).10058
byWilliam de Vazelhes <wdevazelhes>
andJohn Chiotellis <johny-c>
. - Methods in
neighbors.NearestNeighbors
:~neighbors.NearestNeighbors.kneighbors
,~neighbors.NearestNeighbors.radius_neighbors
,~neighbors.NearestNeighbors.kneighbors_graph
,~neighbors.NearestNeighbors.radius_neighbors_graph
now raiseNotFittedError
, rather thanAttributeError
, when called beforefit
12279
byKrishna Sangeeth <whiletruelearn>
.
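A minimal sketch of the new NCA transformer on the iris data (chosen here only as a convenient built-in dataset):

```python
from sklearn.datasets import load_iris
from sklearn.neighbors import NeighborhoodComponentsAnalysis

X, y = load_iris(return_X_y=True)

# Learn a supervised 2-D linear transformation that pulls same-class
# neighbors together, typically used before a k-nearest-neighbors model.
nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=0)
X_embedded = nca.fit_transform(X, y)
```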
- Fixed a bug in neural_network.MLPClassifier and neural_network.MLPRegressor where the option shuffle=False was being ignored. #12582 by Sam Waterbury <samwaterbury>.
- pipeline.Pipeline can now use indexing notation (e.g. my_pipeline[0:-1]) to extract a subsequence of steps as another Pipeline instance. A Pipeline can also be indexed directly to extract a particular step (e.g. my_pipeline['svc']), rather than accessing named_steps. #2568 by Joel Nothman.
- pipeline.Pipeline now supports using 'passthrough' as a transformer. #11144 by Thomas Fan <thomasjpfan>.
- pipeline.Pipeline implements __len__, and therefore len(pipeline) returns the number of steps in the pipeline. #13439 by Lakshya KD <LakshKD>.
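The indexing and __len__ additions above can be sketched together (step names "scaler" and "svc" are arbitrary choices for the example):

```python
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

pipe = Pipeline([("scaler", StandardScaler()), ("svc", SVC())])

n_steps = len(pipe)        # __len__: number of steps
preprocessor = pipe[:-1]   # slice: a new Pipeline with all but the last step
final_step = pipe["svc"]   # index by name instead of going via named_steps
```

Slicing off the final estimator is handy for applying just the preprocessing part of a fitted pipeline.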
- preprocessing.OneHotEncoder now supports dropping one feature per category with a new drop parameter. #12908 by Drew Johnston <drewmjohnston>.
- Make preprocessing.MultiLabelBinarizer cache class mappings instead of calculating them on the fly every time. #12116 by Ekaterina Krivich <kiote> and Joel Nothman.
- preprocessing.PolynomialFeatures now supports compressed sparse row (CSR) matrices as input for degrees 2 and 3. This is typically much faster than the dense case as it scales with matrix density and expansion degree (on the order of density^degree), and is much, much faster than the compressed sparse column (CSC) case. #12197 by Andrew Nystrom <awnystrom>.
- Speed improvement in preprocessing.PolynomialFeatures in the dense case. Also added a new parameter order which controls the output order for further speed performance. #12251 by Tom Dupre la Tour.
- Fixed a calculation overflow when using a float16 dtype with preprocessing.StandardScaler. #13007 by Raffaello Baluyot <baluyotraf>.
- Fixed a bug in preprocessing.QuantileTransformer and preprocessing.quantile_transform to force n_quantiles to be at most equal to n_samples. Values of n_quantiles larger than n_samples were either useless or resulted in a wrong approximation of the cumulative distribution function estimator. #13333 by Albert Thomas <albertcthomas>.
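A sketch of the new drop parameter on a toy categorical column:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# One categorical feature with three categories (toy data).
X = np.array([["a"], ["b"], ["a"], ["c"]])

# drop="first" removes the first category of each feature, avoiding the
# redundant column (useful with unregularized linear models, where the
# full one-hot encoding is collinear with the intercept).
enc = OneHotEncoder(drop="first")
X_enc = enc.fit_transform(X).toarray()  # 3 categories -> 2 columns
```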
- Fixed an issue in svm.SVC.decision_function when decision_function_shape='ovr'. The decision_function value of a given sample was different depending on whether the decision_function was evaluated on the sample alone or on a batch containing this same sample, due to the scaling used in decision_function. #10440 by Jonathan Ohayon <Johayon>.
- Decision Trees can now be plotted with matplotlib using tree.plot_tree without relying on the dot library, removing a hard-to-install dependency. #8508 by Andreas Müller.
- Decision Trees can now be exported in a human-readable textual format using tree.export.export_text. #6261 by Giuseppe Vettigli <JustGlowing>.
- get_n_leaves() and get_depth() have been added to tree.BaseDecisionTree and consequently all estimators based on it, including tree.DecisionTreeClassifier, tree.DecisionTreeRegressor, tree.ExtraTreeClassifier, and tree.ExtraTreeRegressor. #12300 by Adrin Jalali <adrinjalali>.
- Fixed an issue with tree.BaseDecisionTree and consequently all estimators based on it, including tree.DecisionTreeClassifier, tree.DecisionTreeRegressor, tree.ExtraTreeClassifier, and tree.ExtraTreeRegressor, where they used to exceed the given max_depth by 1 while expanding the tree if max_leaf_nodes and max_depth were both specified by the user. Please note that this also affects all ensemble methods using decision trees. #12344 by Adrin Jalali <adrinjalali>.
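The textual export and the new introspection helpers can be sketched together (iris is used only as a convenient built-in dataset):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Human-readable if/else rules, no graphviz/dot dependency needed.
report = export_text(clf)

depth = clf.get_depth()        # actual depth of the fitted tree
n_leaves = clf.get_n_leaves()  # number of leaf nodes
```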
- The __repr__() method of all estimators (used when calling print(estimator)) has been entirely re-written, building on Python's pretty printing standard library. All parameters are printed by default, but this can be altered with the print_changed_only option in sklearn.set_config. #11705 by Nicolas Hug <NicolasHug>.
- Memory copies are avoided when casting arrays to a different dtype in multiple estimators. #11973 by Roman Yurchak <rth>.
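A quick sketch of the print_changed_only option (note that later releases changed its default, so both settings are shown explicitly):

```python
import sklearn
from sklearn.linear_model import LogisticRegression

# Only parameters that differ from their defaults are printed.
sklearn.set_config(print_changed_only=True)
compact = repr(LogisticRegression(C=10.0))

# All parameters are printed.
sklearn.set_config(print_changed_only=False)
full = repr(LogisticRegression(C=10.0))
```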
These changes mostly affect library developers.
- Added check_fit_idempotent to utils.estimator_checks.check_estimator, which checks that when fit is called twice with the same data, the output of predict, predict_proba, transform, and decision_function does not change. #12328 by Nicolas Hug <NicolasHug>.