
Version 0.23.0

In Development


Enforcing keyword-only arguments

In an effort to promote clear and non-ambiguous use of the library, most constructor and function parameters are now expected to be passed as keyword arguments (i.e. using the param=value syntax) instead of positional. To ease the transition, a FutureWarning is raised if a keyword-only parameter is used as positional. In version 0.25, these parameters will be strictly keyword-only, and a TypeError will be raised. 15005 by Joel Nothman, Adrin Jalali, Thomas Fan, and Nicolas Hug. See SLEP009 for more details.
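For illustration, a minimal sketch of the new calling convention (which parameters, if any, remain positional varies by estimator)::

    from sklearn.svm import SVC

    clf = SVC(C=0.5, kernel="linear")   # preferred: explicit keyword arguments
    # clf = SVC(0.5, "linear")          # positional: triggers the deprecation FutureWarning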

Changed models

The following estimators and functions, when fit with the same data and parameters, may produce different models from the previous version. This often occurs due to changes in the modelling logic (bug fixes or enhancements), or in random sampling procedures.

  • ensemble.BaggingClassifier, ensemble.BaggingRegressor, and ensemble.IsolationForest.
  • cluster.KMeans with algorithm="elkan" and algorithm="full".
  • cluster.Birch
  • compose.ColumnTransformer.get_feature_names
  • compose.ColumnTransformer.fit
  • datasets.make_multilabel_classification
  • decomposition.PCA with n_components='mle'
  • decomposition.NMF and decomposition.non_negative_factorization with float32 dtype input.
  • decomposition.KernelPCA.inverse_transform
  • ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor
  • estimators_samples_ in ensemble.BaggingClassifier, ensemble.BaggingRegressor and ensemble.IsolationForest
  • ensemble.StackingClassifier and ensemble.StackingRegressor with sample_weight
  • gaussian_process.GaussianProcessRegressor
  • linear_model.RANSACRegressor with sample_weight.
  • linear_model.RidgeClassifierCV
  • metrics.mean_squared_error with squared and multioutput='raw_values'.
  • metrics.mutual_info_score with negative scores.
  • metrics.confusion_matrix with zero length y_true and y_pred
  • neural_network.MLPClassifier
  • preprocessing.StandardScaler with partial_fit and sparse input.
  • preprocessing.Normalizer with norm='max'
  • Any model using the svm.libsvm or the svm.liblinear solver, including svm.LinearSVC, svm.LinearSVR, svm.NuSVC, svm.NuSVR, svm.OneClassSVM, svm.SVC, svm.SVR, linear_model.LogisticRegression.
  • tree.DecisionTreeClassifier, tree.ExtraTreeClassifier and ensemble.GradientBoostingClassifier (predict, decision_path and predict_proba), as well as the predict method of tree.DecisionTreeRegressor, tree.ExtraTreeRegressor, and ensemble.GradientBoostingRegressor, with read-only float32 input.

Details are listed in the changelog below.

(While we are trying to better inform users by providing this information, we cannot assure that this list is complete.)

Changelog

sklearn.cluster

  • The predict method of cluster.Birch now avoids a high memory footprint by computing the distance matrix in chunks. 16149 by Jeremie du Boisberranger <jeremiedbb> and Alex Shacked <alexshacked>.
  • The critical parts of cluster.KMeans have a more optimized implementation. Parallelism is now over the data instead of over initializations, allowing better scalability. 11950 by Jeremie du Boisberranger <jeremiedbb>.
  • cluster.KMeans now supports sparse data when algorithm="elkan" (see the sketch after this list). 11950 by Jeremie du Boisberranger <jeremiedbb>.
  • cluster.AgglomerativeClustering has a faster and more memory efficient implementation of single linkage clustering. 11514 by Leland McInnes <lmcinnes>.
  • cluster.KMeans with algorithm="elkan" now converges with tol=0 as with the default algorithm="full". 16075 by Erich Schubert <kno10>.
  • Fixed a bug in cluster.Birch where the n_clusters parameter could not have a np.int64 type. 16484 by Jeremie du Boisberranger <jeremiedbb>.
  • The n_jobs parameter of cluster.KMeans, cluster.SpectralCoclustering and cluster.SpectralBiclustering is deprecated. They now use OpenMP based parallelism. For more details on how to control the number of threads, please refer to our parallelism notes. 11950 by Jeremie du Boisberranger <jeremiedbb>.
  • The precompute_distances parameter of cluster.KMeans is deprecated. It has no effect. 11950 by Jeremie du Boisberranger <jeremiedbb>.
  • cluster.AgglomerativeClustering now raises a specific error when the distance matrix is not square and affinity='precomputed'. 16257 by Simona Maggio <simonamaggio>.
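A minimal sketch of the new sparse input support for the Elkan algorithm, on random data for illustration::

    from scipy import sparse
    from sklearn.cluster import KMeans

    X = sparse.random(100, 20, density=0.1, format="csr", random_state=0)
    km = KMeans(n_clusters=3, algorithm="elkan", random_state=0).fit(X)
    print(km.inertia_)   # sparse CSR input is now accepted with algorithm="elkan"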

sklearn.compose

  • compose.ColumnTransformer is now faster when working with dataframes and strings are used to specify subsets of the data for transformers. 16431 by Thomas Fan.
  • compose.ColumnTransformer method get_feature_names now supports 'passthrough' columns, with the feature name being either the column name for a dataframe, or 'xi' for column index i (see the sketch after this list). 14048 by Lewis Ball <lrjball>.
  • compose.ColumnTransformer method get_feature_names now returns correct results when one of the transformer steps applies to an empty list of columns. 15963 by Roman Yurchak.
  • compose.ColumnTransformer.fit will error when selecting a column name that is not unique in the dataframe. 16431 by Thomas Fan.
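A minimal sketch of get_feature_names with a 'passthrough' entry (assuming pandas is installed; column and transformer names are illustrative)::

    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder

    df = pd.DataFrame({"city": ["a", "b", "a"], "age": [20, 30, 40]})
    ct = ColumnTransformer([
        ("ohe", OneHotEncoder(), ["city"]),
        ("keep", "passthrough", ["age"]),
    ])
    ct.fit(df)
    print(ct.get_feature_names())   # the passthrough column is reported by its name, e.g. 'age'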

sklearn.datasets

  • datasets.fetch_openml has reduced memory usage because it no longer stores the full dataset text stream in memory. 16084 by Joel Nothman.
  • datasets.fetch_california_housing now supports heterogeneous data using pandas by setting as_frame=True. 15950 by Stephanie Andrews <gitsteph> and Reshama Shaikh <reshamas>.
  • The embedded dataset loaders load_breast_cancer, load_diabetes, load_digits, load_iris, load_linnerud and load_wine now support loading as a pandas DataFrame by setting as_frame=True (see the sketch after this list). 15980 by wconnell and Reshama Shaikh <reshamas>.
  • Added return_centers parameter in datasets.make_blobs, which can be used to return centers for each cluster. 15709 by shivamgargsya and Venkatachalam N <venkyyuvy>.
  • datasets.make_circles and datasets.make_moons now accept a two-element tuple for n_samples. 15707 by Maciej J Mikulski <mjmikulski>.
  • datasets.make_multilabel_classification now raises a ValueError when n_classes < 1 or length < 1. 16006 by Rushabh Vasani <rushabh-v>.
  • The StreamHandler was removed from sklearn.logger to avoid double logging of messages in common cases where a handler is attached to the root logger, and to follow the Python logging documentation recommendation for libraries to leave log message handling to users and application code. 16451 by Christoph Deil <cdeil>.
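A minimal sketch of the new as_frame option (assuming pandas is installed)::

    from sklearn.datasets import load_iris

    bunch = load_iris(as_frame=True)
    print(type(bunch.frame))    # a pandas DataFrame combining features and target
    print(bunch.data.columns)   # the feature matrix is itself a DataFrame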

sklearn.decomposition

  • decomposition.NMF and decomposition.non_negative_factorization now preserve float32 dtype. 16280 by Jeremie du Boisberranger <jeremiedbb>.
  • decomposition.TruncatedSVD.transform is now faster on sparse CSC matrices. 16837 by wornbb.
  • decomposition.PCA with a float n_components parameter now exclusively chooses the components that explain variance greater than n_components. 15669 by Krishna Chaitanya <krishnachaitanya9>.
  • decomposition.PCA with n_components='mle' now correctly handles small eigenvalues, and does not infer 0 as the correct number of components. 16224 by Lisa Schwetlick <lschwetlick>, and Gelavizh Ahmadi <gelavizh1> and Marija Vlajic Wheeler <marijavlajic> and 16841 by Nicolas Hug.
  • decomposition.KernelPCA method inverse_transform now applies the correct inverse transform to the transformed data (see the sketch after this list). 16655 by Lewis Ball <lrjball>.
  • Fixed bug that was causing decomposition.KernelPCA to sometimes raise invalid value encountered in multiply during fit. 16718 by Gui Miotto <gui-miotto>.
  • Added n_components_ attribute to decomposition.SparsePCA and decomposition.MiniBatchSparsePCA. 16981 by Mateusz Górski <Reksbril>.
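A minimal sketch of the corrected KernelPCA inverse transform on random data (fit_inverse_transform=True is required to learn the inverse mapping)::

    import numpy as np
    from sklearn.decomposition import KernelPCA

    X = np.random.RandomState(0).normal(size=(50, 4))
    kpca = KernelPCA(n_components=2, kernel="rbf", fit_inverse_transform=True)
    X_proj = kpca.fit_transform(X)
    X_back = kpca.inverse_transform(X_proj)   # approximate reconstruction in the input space
    print(X_back.shape)                       # (50, 4)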

sklearn.ensemble

  • ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor now support sample_weight. 14696 by Adrin Jalali and Nicolas Hug.
  • Early stopping in ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor is now determined with a new early_stopping parameter instead of n_iter_no_change. Default value is 'auto', which enables early stopping if there are at least 10,000 samples in the training set. 14516 by Johann Faouzi <johannfaouzi>.
  • ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor now support monotonic constraints, useful when features are supposed to have a positive/negative effect on the target. 15582 by Nicolas Hug.
  • Added boolean verbose flag to classes: ensemble.VotingClassifier and ensemble.VotingRegressor. 16069 by Sam Bail <spbail>, Hanna Bruce MacDonald <hannahbrucemacdonald>, Reshama Shaikh <reshamas>, and Chiara Marmo <cmarmo>.
  • Fixed a bug in ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor that would not respect the max_leaf_nodes parameter if the criterion was reached at the same time as the max_depth criterion. 16183 by Nicolas Hug.
  • Changed the convention for the max_depth parameter of ensemble.HistGradientBoostingClassifier and ensemble.HistGradientBoostingRegressor. The depth now corresponds to the number of edges to go from the root to the deepest leaf. Stumps (trees with one split) are now allowed. 16182 by Santhosh B <santhoshbala18>.
  • Fixed a bug in ensemble.BaggingClassifier, ensemble.BaggingRegressor and ensemble.IsolationForest where the attribute estimators_samples_ did not generate the proper indices used during fit. 16437 by Jin-Hwan CHO <chofchof>.
  • Fixed a bug in ensemble.StackingClassifier and ensemble.StackingRegressor where the sample_weight argument was not being passed to cross_val_predict when evaluating the base estimators on cross-validation folds to obtain the input to the meta estimator. 16539 by Bill DeRose <wderose>.
  • Added the option loss="poisson" to ensemble.HistGradientBoostingRegressor, which adds Poisson deviance with log-link, useful for modeling count data (see the sketch after this list). 16692 by Christian Lorentzen <lorentzenchr>.
  • Fixed a bug where ensemble.HistGradientBoostingRegressor and ensemble.HistGradientBoostingClassifier would fail with multiple calls to fit when warm_start=True, early_stopping=True, and there is no validation set. 16663 by Thomas Fan.
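A minimal sketch combining several of the new HistGradientBoostingRegressor options (early_stopping, monotonic_cst, loss="poisson" and sample_weight); the data and parameter values are illustrative only, and the experimental enable import is still required in 0.23::

    import numpy as np
    from sklearn.experimental import enable_hist_gradient_boosting  # noqa: F401
    from sklearn.ensemble import HistGradientBoostingRegressor

    rng = np.random.RandomState(0)
    X = rng.uniform(size=(200, 2))
    y = rng.poisson(lam=np.exp(X[:, 0]))     # non-negative counts suit the Poisson loss

    model = HistGradientBoostingRegressor(
        loss="poisson",          # new Poisson deviance loss with log-link
        early_stopping=True,     # new explicit early-stopping switch
        monotonic_cst=[1, 0],    # constrain the first feature to a positive effect
        max_depth=2,             # depth now counts edges from the root to the deepest leaf
    )
    model.fit(X, y, sample_weight=np.ones(len(y)))   # sample_weight is now supported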

sklearn.feature_extraction

  • feature_extraction.text.CountVectorizer now sorts features after pruning them by document frequency. This improves performance for datasets with large vocabularies combined with min_df or max_df. 15834 by Santiago M. Mola <smola>.

sklearn.feature_selection

  • Added support for multioutput data in feature_selection.RFE and feature_selection.RFECV. 16103 by Divyaprabha M <divyaprabha123>.
  • Added feature_selection.SelectorMixin back to the public API. 16132 by trimeta.

sklearn.gaussian_process

  • gaussian_process.kernels.Matern returns the RBF kernel when nu=np.inf. 15503 by Sam Dixon <sam-dixon>.
  • Fixed bug in gaussian_process.GaussianProcessRegressor that caused predicted standard deviations to only be between 0 and 1 when WhiteKernel is not used. 15782 by plgreenLIRU.

sklearn.impute

  • impute.IterativeImputer accepts both scalar and array-like inputs for max_value and min_value. Array-like inputs allow a different max and min to be specified for each feature (see the sketch after this list). 16403 by Narendra Mukherjee <narendramukherjee>.
  • impute.SimpleImputer and impute.KNNImputer accept pandas' nullable integer dtype with missing values. 16508 by Thomas Fan.
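A minimal sketch of per-feature bounds through array-like min_value/max_value (IterativeImputer still requires the experimental enable import)::

    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    X = np.array([[1.0, 2.0], [3.0, np.nan], [np.nan, 6.0]])
    imp = IterativeImputer(min_value=[0.0, 0.0], max_value=[10.0, 5.0], random_state=0)
    print(imp.fit_transform(X))   # imputed values respect the per-column bounds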

sklearn.inspection

  • inspection.partial_dependence and inspection.plot_partial_dependence now support the fast 'recursion' method for ensemble.RandomForestRegressor and tree.DecisionTreeRegressor (see the sketch after this list). 15864 by Nicolas Hug.
  • inspection.PartialDependenceDisplay now exposes the deciles lines as attributes so they can be hidden or customized. 15785 by Nicolas Hug.
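A minimal sketch of the fast 'recursion' method with a forest, on a toy dataset for illustration::

    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import partial_dependence

    X, y = load_diabetes(return_X_y=True)
    forest = RandomForestRegressor(n_estimators=10, random_state=0).fit(X, y)
    averaged, values = partial_dependence(forest, X, features=[0], method="recursion")
    print(averaged.shape)   # partial dependence of the first feature on its grid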

sklearn.linear_model

  • Added generalized linear models (GLM) with non-normal error distributions, including linear_model.PoissonRegressor, linear_model.GammaRegressor and linear_model.TweedieRegressor, which use Poisson, Gamma and Tweedie distributions respectively (see the sketch after this list). 14300 by Christian Lorentzen <lorentzenchr>, Roman Yurchak, and Olivier Grisel.
  • Support of sample_weight in linear_model.ElasticNet and linear_model.Lasso for dense feature matrix X. 15436 by Christian Lorentzen <lorentzenchr>.
  • linear_model.RidgeCV and linear_model.RidgeClassifierCV no longer allocate a potentially large array to store dual coefficients for all hyperparameters during fit, nor an array to store all error or LOO predictions unless store_cv_values is True. 15652 by Jérôme Dockès <jeromedockes>.
  • linear_model.LassoLars and linear_model.Lars now support a jitter parameter that adds random noise to the target. This might help with stability in some edge cases. 15179 by angelaambroz.
  • Fixed a bug where if a sample_weight parameter was passed to the fit method of linear_model.RANSACRegressor, it would not be passed to the wrapped base_estimator during the fitting of the final model. 15773 by Jeremy Alexandre <J-A16>.
  • Added the best_score_ attribute to linear_model.RidgeCV and linear_model.RidgeClassifierCV. 15653 by Jérôme Dockès <jeromedockes>.
  • Fixed a bug in linear_model.RidgeClassifierCV so that a specific scoring strategy can be passed. Previously, the internal estimator returned scores instead of predictions. 14848 by Venkatachalam N <venkyyuvy>.
  • linear_model.LogisticRegression now avoids an unnecessary iteration when solver='newton-cg' by checking whether the maximum absolute gradient is less than or equal to tol, instead of strictly less than, in utils.optimize._newton_cg. 16266 by Rushabh Vasani <rushabh-v>.
  • Deprecated public attributes standard_coef_, standard_intercept_, average_coef_, and average_intercept_ in linear_model.SGDClassifier, linear_model.SGDRegressor, linear_model.PassiveAggressiveClassifier, linear_model.PassiveAggressiveRegressor. 16261 by Carlos Brandt <chbrandt>.
  • linear_model.ARDRegression is more stable and much faster when n_samples > n_features. It can now scale to hundreds of thousands of samples. The stability fix might imply changes in the number of non-zero coefficients and in the predicted output. 16849 by Nicolas Hug.
  • Fixed a bug in linear_model.ElasticNetCV, linear_model.MultitaskElasticNetCV, linear_model.LassoCV and linear_model.MultitaskLassoCV where fitting would fail when using joblib loky backend. 14264 by Jérémie du Boisberranger <jeremiedbb>.
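A minimal sketch of the new Poisson GLM on random count data::

    import numpy as np
    from sklearn.linear_model import PoissonRegressor

    rng = np.random.RandomState(0)
    X = rng.uniform(size=(100, 3))
    y = rng.poisson(lam=np.exp(X @ np.array([0.5, -0.2, 0.1])))

    glm = PoissonRegressor(alpha=1e-3)   # L2-regularized Poisson regression with log-link
    glm.fit(X, y)
    print(glm.coef_)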

sklearn.metrics

  • metrics.pairwise.pairwise_distances_chunked now allows its reduce_func to not have a return value, enabling in-place operations. 16397 by Joel Nothman.
  • Fixed a bug in metrics.mean_squared_error so that the squared argument is no longer ignored when multioutput='raw_values' (see the sketch after this list). 16323 by Rushabh Vasani <rushabh-v>.
  • Fixed a bug in metrics.mutual_info_score where negative scores could be returned. 16362 by Thomas Fan.
  • Fixed a bug in metrics.confusion_matrix that would raise an error when y_true and y_pred were length zero and labels was not None. In addition, we raise an error when an empty list is given to the labels parameter. 16442 by Kyle Parsons <parsons-kyle-89>.
  • Changed the formatting of values in metrics.ConfusionMatrixDisplay.plot and metrics.plot_confusion_matrix to pick the shorter format (either '2g' or 'd'). 16159 by Rick Mackenbach <Rick-Mackenbach> and Thomas Fan.
  • From version 0.25, metrics.pairwise.pairwise_distances will no longer automatically compute the VI parameter for Mahalanobis distance and the V parameter for seuclidean distance if Y is passed. The user will be expected to compute this parameter on the training data of their choice and pass it to pairwise_distances. 16993 by Joel Nothman.
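A minimal sketch of the corrected interaction between squared=False and multioutput='raw_values'::

    import numpy as np
    from sklearn.metrics import mean_squared_error

    y_true = np.array([[1.0, 2.0], [3.0, 4.0]])
    y_pred = np.array([[1.5, 2.0], [2.0, 4.5]])
    rmse = mean_squared_error(y_true, y_pred, squared=False, multioutput="raw_values")
    print(rmse)   # one root-mean-squared error per output column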

sklearn.model_selection

  • model_selection.GridSearchCV and model_selection.RandomizedSearchCV now yield stack trace information in fit-failed warning messages, in addition to the previously emitted type and details. 15622 by Gregory Morse <GregoryMorse>.
  • model_selection.cross_val_predict supports method="predict_proba" when y=None. 15918 by Luca Kubin <lkubin>.
  • model_selection.fit_grid_point is deprecated in 0.23 and will be removed in 0.25. 16401 by Arie Pratama Sutiono <ariepratama>.

sklearn.multioutput

  • multioutput.RegressorChain now supports fit_params for base_estimator during fit. 16111 by Venkatachalam N <venkyyuvy>.

sklearn.naive_bayes

  • A correctly formatted error message is shown in naive_bayes.CategoricalNB when the number of features in the input differs between predict and fit. 16090 by Madhura Jayaratne <madhuracj>.

sklearn.neural_network

  • neural_network.MLPClassifier and neural_network.MLPRegressor have a reduced memory footprint when using stochastic solvers, 'sgd' or 'adam', and shuffle=True. 14075 by meyer89.
  • Increases the numerical stability of the logistic loss function in neural_network.MLPClassifier by clipping the probabilities. 16117 by Thomas Fan.

sklearn.preprocessing

  • The drop argument of preprocessing.OneHotEncoder now accepts the value 'if_binary', which drops the first category of each feature with exactly two categories (see the sketch after this list). 16245 by Rushabh Vasani <rushabh-v>.
  • preprocessing.OneHotEncoder's drop_idx_ ndarray can now contain None, where drop_idx_[i] = None means that no category is dropped for index i. 16585 by Chiara Marmo <cmarmo>.
  • preprocessing.MaxAbsScaler, preprocessing.MinMaxScaler, preprocessing.StandardScaler, preprocessing.PowerTransformer, preprocessing.QuantileTransformer, and preprocessing.RobustScaler now support pandas' nullable integer dtype with missing values. 16508 by Thomas Fan.
  • preprocessing.OneHotEncoder is now faster at transforming. 15762 by Thomas Fan.
  • Fix a bug in preprocessing.StandardScaler which was incorrectly computing statistics when calling partial_fit on sparse inputs. 16466 by Guillaume Lemaitre <glemaitre>.
  • Fix a bug in preprocessing.Normalizer with norm='max', which was not taking the absolute value of the maximum values before normalizing the vectors. 16632 by Maura Pintor <Maupin1991> and Battista Biggio <bbiggio>.
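A minimal sketch of the new drop='if_binary' option on toy data::

    import numpy as np
    from sklearn.preprocessing import OneHotEncoder

    X = np.array([["a", "x"], ["b", "y"], ["a", "z"]], dtype=object)
    enc = OneHotEncoder(drop="if_binary", sparse=False)
    X_enc = enc.fit_transform(X)
    print(X_enc.shape)     # the binary first feature keeps one column, the second keeps three
    print(enc.drop_idx_)   # None entries mark features where no category was dropped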

sklearn.semi_supervised

  • semi_supervised.LabelSpreading and semi_supervised.LabelPropagation avoid divide-by-zero warnings when normalizing label_distributions_. 15946 by ngshya.

sklearn.svm

  • Improved the libsvm and liblinear random number generators used to randomly select coordinates in the coordinate descent algorithms. The platform-dependent C rand() was used, which is only able to generate numbers up to 32767 on Windows (see this blog post) and also has poor randomization power as suggested by this presentation. It was replaced with C++11 mt19937, a Mersenne Twister that correctly generates 31-bit/63-bit random numbers on all platforms. In addition, the crude "modulo" postprocessor used to get a random number in a bounded interval was replaced by the tweaked Lemire method as suggested by this blog post. Any model using the svm.libsvm or the svm.liblinear solver, including svm.LinearSVC, svm.LinearSVR, svm.NuSVC, svm.NuSVR, svm.OneClassSVM, svm.SVC, svm.SVR, linear_model.LogisticRegression, is affected. In particular, users can expect better convergence when the number of samples (LibSVM) or the number of features (LibLinear) is large. 13511 by Sylvain Marié <smarie>.
  • Fixed the use of custom kernels that do not take float entries, such as string kernels, in svm.SVC and svm.SVR. Note that custom kernels are now expected to validate their input, whereas they previously received valid numeric arrays. 11296 by Alexandre Gramfort and Georgi Peev <georgipeev>.
  • svm.SVR and svm.OneClassSVM attributes, probA_ and probB_, are now deprecated as they were not useful. 15558 by Thomas Fan.

sklearn.tree

  • The rotate parameter of tree.plot_tree was unused and has been deprecated. 15806 by Chiara Marmo <cmarmo>.
  • Fix support of read-only float32 array input in predict, decision_path and predict_proba methods of tree.DecisionTreeClassifier, tree.ExtraTreeClassifier and ensemble.GradientBoostingClassifier as well as predict method of tree.DecisionTreeRegressor, tree.ExtraTreeRegressor, and ensemble.GradientBoostingRegressor. 16331 by Alexandre Batisse <batalex>.

sklearn.utils

  • Improved the error message in utils.validation.column_or_1d. 15926 by Loïc Estève <lesteve>.
  • Added a warning in utils.check_array for pandas sparse DataFrame. 16021 by Rushabh Vasani <rushabh-v>.
  • utils.check_array now constructs a sparse matrix from a pandas DataFrame that contains only SparseArray columns. 16728 by Thomas Fan.
  • utils.validation.check_array supports pandas' nullable integer dtype with missing values when force_all_finite is set to False or 'allow-nan', in which case the data is converted to floating point values where pd.NA values are replaced by np.nan. As a consequence, all sklearn.preprocessing transformers that accept numeric inputs with missing values represented as np.nan now also accept being directly fed pandas dataframes with pd.Int* or pd.UInt* typed columns that use pd.NA as a missing value marker (see the sketch after this list). 16508 by Thomas Fan.
  • Passing classes to utils.estimator_checks.check_estimator and utils.estimator_checks.parametrize_with_checks is now deprecated, and support for classes will be removed in 0.24. Pass instances instead. 17032 by Nicolas Hug.
  • utils.all_estimators now only returns public estimators. 15380 by Thomas Fan.
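A minimal sketch of nullable integer handling in check_array (assuming pandas >= 1.0)::

    import numpy as np
    import pandas as pd
    from sklearn.utils import check_array

    df = pd.DataFrame({"a": pd.array([1, 2, None], dtype="Int64")})
    arr = check_array(df, force_all_finite="allow-nan")
    print(arr.dtype)            # float64
    print(np.isnan(arr[2, 0]))  # the pd.NA entry was converted to np.nan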

Miscellaneous

  • scikit-learn now works with mypy without errors. 16726 by Roman Yurchak.
  • Most estimators now expose a n_features_in_ attribute. This attribute is equal to the number of features passed to the fit method. See SLEP010 for details. 16112 by Nicolas Hug.
  • Estimators now have a requires_y tag, which is False by default except for estimators that inherit from ~sklearn.base.RegressorMixin or ~sklearn.base.ClassifierMixin. This tag is used to ensure that a proper error message is raised when y was expected but None was passed. 16622 by Nicolas Hug.
  • The default setting print_changed_only has been changed from False to True. This means that the repr of estimators is now more concise and only shows the parameters whose default value has been changed when printing an estimator. You can restore the previous behaviour by using sklearn.set_config(print_changed_only=False) (see the sketch below). Also, note that it is always possible to quickly inspect the parameters of any estimator using est.get_params(deep=False). 17061 by Nicolas Hug.
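A minimal sketch of the new repr behaviour and how to restore the previous one::

    import sklearn
    from sklearn.linear_model import LogisticRegression

    print(LogisticRegression(C=0.5))   # concise repr: only parameters changed from their defaults
    sklearn.set_config(print_changed_only=False)
    print(LogisticRegression(C=0.5))   # full parameter listing, as in previous versions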

Code and Documentation Contributors

Thanks to everyone who has contributed to the maintenance and improvement of the project since version 0.20, including: