sklearn

0.14.1

0.14.1 is a minor bug-fix release that fixed a small regression in setup.py.

0.14

Changelog

  • Missing values with sparse and dense matrices can be imputed with the transformer preprocessing.Imputer by Nicolas Trésegnie.
  • The core implementation of decision trees has been rewritten from scratch, allowing for faster tree induction and lower memory consumption in all tree-based estimators. By Gilles Louppe.
  • Added ensemble.AdaBoostClassifier and ensemble.AdaBoostRegressor, by Noel Dawe and Gilles Louppe. See the AdaBoost <adaboost> section of the user guide for details and examples.
  • Added grid_search.RandomizedSearchCV and grid_search.ParameterSampler for randomized hyperparameter optimization (see the sketch after this list). By Andreas Müller.
  • Added biclustering <biclustering> algorithms (sklearn.cluster.bicluster.SpectralCoclustering and sklearn.cluster.bicluster.SpectralBiclustering), data generation methods (sklearn.datasets.make_biclusters and sklearn.datasets.make_checkerboard), and scoring metrics (sklearn.metrics.consensus_score). By Kemal Eren.
  • Added Restricted Boltzmann Machines <rbm> (neural_network.BernoulliRBM). By Yann Dauphin.
  • Python 3 support by Justin Vincent, Lars Buitinck, Subhodeep Moitra and Olivier Grisel. All tests now pass under Python 3.3.
  • Ability to pass one penalty (alpha value) per target in linear_model.Ridge, by @eickenberg and Mathieu Blondel.
  • Fixed an L2 regularization issue in sklearn.linear_model.stochastic_gradient (of minor practical significance). By Norbert Crombach and Mathieu Blondel.
  • Added an interactive version of Andreas Müller's Machine Learning Cheat Sheet (for scikit-learn) to the documentation. See Choosing the right estimator <ml_map>. By Jaques Grobler.
  • grid_search.GridSearchCV and cross_validation.cross_val_score now support the use of advanced scoring functions such as area under the ROC curve and f-beta scores. See scoring_parameter for details. By Andreas Müller and Lars Buitinck. Passing a function from sklearn.metrics as score_func is deprecated.
  • Multi-label classification output is now supported by metrics.accuracy_score, metrics.zero_one_loss, metrics.f1_score, metrics.fbeta_score, metrics.classification_report, metrics.precision_score and metrics.recall_score by Arnaud Joly.
  • Two new metrics metrics.hamming_loss and metrics.jaccard_similarity_score are added with multi-label support by Arnaud Joly.
  • Speed and memory usage improvements in feature_extraction.text.CountVectorizer and feature_extraction.text.TfidfVectorizer, by Jochen Wersdörfer and Roman Sinayev.
  • The min_df parameter in feature_extraction.text.CountVectorizer and feature_extraction.text.TfidfVectorizer, which used to be 2, has been reset to 1 to avoid unpleasant surprises (empty vocabularies) for novice users who try it out on tiny document collections. A value of at least 2 is still recommended for practical use.
  • svm.LinearSVC, linear_model.SGDClassifier and linear_model.SGDRegressor now have a sparsify method that converts their coef_ into a sparse matrix, meaning stored models trained using these estimators can be made much more compact.
  • linear_model.SGDClassifier now produces multiclass probability estimates when trained under log loss or modified Huber loss.
  • Hyperlinks to documentation in example code on the website by Martin Luessi.
  • Fixed bug in preprocessing.MinMaxScaler causing incorrect scaling of the features for non-default feature_range settings. By Andreas Müller.
  • max_features in tree.DecisionTreeClassifier, tree.DecisionTreeRegressor and all derived ensemble estimators now supports percentage values. By Gilles Louppe.
  • Performance improvements in isotonic.IsotonicRegression by Nelle Varoquaux.
  • metrics.accuracy_score has an option normalize to return the fraction or the number of correctly classified samples, by Arnaud Joly.
  • Added metrics.log_loss that computes log loss, aka cross-entropy loss. By Jochen Wersdörfer and Lars Buitinck.
  • A bug that caused ensemble.AdaBoostClassifier to output incorrect probabilities has been fixed.
  • Feature selectors now share a mixin providing consistent transform, inverse_transform and get_support methods. By Joel Nothman.
  • A fitted grid_search.GridSearchCV or grid_search.RandomizedSearchCV can now generally be pickled. By Joel Nothman.
  • Refactored and vectorized implementation of metrics.roc_curve and metrics.precision_recall_curve. By Joel Nothman.
  • The new estimator sklearn.decomposition.TruncatedSVD performs dimensionality reduction using SVD on sparse matrices, and can be used for latent semantic analysis (LSA). By Lars Buitinck.
  • Added a self-contained example of out-of-core learning on text data: example_applications_plot_out_of_core_classification.py. By Eustache Diemert.
  • The default number of components for sklearn.decomposition.RandomizedPCA is now correctly documented to be n_features. This was the default behavior, so programs using it will continue to work as they did.
  • sklearn.cluster.KMeans now fits several orders of magnitude faster on sparse data (the speedup depends on the sparsity). By Lars Buitinck.
  • Reduce memory footprint of FastICA by Denis Engemann and Alexandre Gramfort.
  • Verbose output in sklearn.ensemble.gradient_boosting now uses a column format and prints progress with decreasing frequency. It also shows the remaining time. By Peter Prettenhofer.
  • sklearn.ensemble.gradient_boosting provides out-of-bag improvement estimates (sklearn.ensemble.GradientBoostingRegressor.oob_improvement_) rather than the OOB score for model selection. An example that shows how to use OOB estimates to select the number of trees was added. By Peter Prettenhofer.
  • Most metrics now support string labels for multiclass classification by Arnaud Joly and Lars Buitinck.
  • New OrthogonalMatchingPursuitCV class by Alexandre Gramfort and Vlad Niculae.
  • Fixed a bug in sklearn.covariance.GraphLassoCV: the 'alphas' parameter now works as expected when given a list of values. By Philippe Gervais.
  • Fixed an important bug in sklearn.covariance.GraphLassoCV that prevented all folds provided by a CV object from being used (only the first 3 were used). When providing a CV object, execution time may thus increase significantly compared to the previous version (but results are now correct). By Philippe Gervais.
  • cross_validation.cross_val_score and the grid_search module are now tested with multi-output data, by Arnaud Joly.
  • datasets.make_multilabel_classification can now return the output in label indicator multilabel format by Arnaud Joly.
  • The k-nearest neighbors estimators (neighbors.KNeighborsClassifier and neighbors.KNeighborsRegressor) and the radius neighbors estimators (neighbors.RadiusNeighborsClassifier and neighbors.RadiusNeighborsRegressor) now support multioutput data, by Arnaud Joly.
  • Random state in LibSVM-based estimators (svm.SVC, svm.NuSVC, svm.OneClassSVM, svm.SVR, svm.NuSVR) can now be controlled. This is useful to ensure consistency in the probability estimates for the classifiers trained with probability=True. By Vlad Niculae.
  • Out-of-core learning support for discrete naive Bayes classifiers sklearn.naive_bayes.MultinomialNB and sklearn.naive_bayes.BernoulliNB by adding the partial_fit method by Olivier Grisel.
  • New website design and navigation by Gilles Louppe, Nelle Varoquaux, Vincent Michel and Andreas Müller.
  • Improved documentation on multi-class, multi-label and multi-output classification <multiclass> by Yannick Schwartz and Arnaud Joly.
  • Better input and error handling in the metrics module by Arnaud Joly and Joel Nothman.
  • Speed optimization of the hmm module by Mikhail Korobov.
  • Significant speed improvements for sklearn.cluster.DBSCAN by cleverless.
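
As a concrete illustration of the randomized hyperparameter search added above, here is a minimal sketch (the estimator, distributions and n_iter value are illustrative, and the grid_search module path is the 0.14-era one):

    import scipy.stats as st
    from sklearn.datasets import load_digits
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.grid_search import RandomizedSearchCV

    digits = load_digits()
    # Sample 10 candidate settings from the given distributions instead of
    # exhaustively enumerating a grid.
    param_distributions = {
        "max_depth": [3, None],
        "max_features": st.randint(1, 11),
        "min_samples_split": st.randint(2, 11),
    }
    search = RandomizedSearchCV(RandomForestClassifier(), param_distributions,
                                n_iter=10, random_state=0)
    search.fit(digits.data, digits.target)
    print(search.best_params_)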

API changes summary

  • The auc_score function was renamed to roc_auc_score.
  • Testing scikit-learn with sklearn.test() is deprecated. Use nosetests sklearn from the command line.
  • Feature importances in tree.DecisionTreeClassifier, tree.DecisionTreeRegressor and all derived ensemble estimators are now computed on the fly when accessing the feature_importances_ attribute. Setting compute_importances=True is no longer required. By Gilles Louppe.
  • linear_model.lasso_path and linear_model.enet_path can return their results in the same format as that of linear_model.lars_path. This is done by setting the return_models parameter to False. By Jaques Grobler and Alexandre Gramfort.
  • grid_search.IterGrid was renamed to grid_search.ParameterGrid.
  • Fixed bug in KFold causing imperfect class balance in some cases. By Alexandre Gramfort and Tadej Janež.
  • sklearn.neighbors.BallTree has been refactored, and a sklearn.neighbors.KDTree has been added which shares the same interface. The Ball Tree now works with a wide variety of distance metrics. Both classes have many new methods, including single-tree and dual-tree queries, breadth-first and depth-first searching, and more advanced queries such as kernel density estimation and 2-point correlation functions (see the sketch after this list). By Jake Vanderplas.
  • Support for scipy.spatial.cKDTree within neighbors queries has been removed, and the functionality replaced with the new KDTree class.
  • sklearn.neighbors.KernelDensity has been added, which performs efficient kernel density estimation with a variety of kernels.
  • sklearn.decomposition.KernelPCA now always returns output with n_components components, unless the new parameter remove_zero_eig is set to True. This new behavior is consistent with the way kernel PCA was always documented; previously, the removal of components with zero eigenvalues was tacitly performed on all data.
  • gcv_mode="auto" no longer tries to perform SVD on a densified sparse matrix in sklearn.linear_model.RidgeCV.
  • Sparse matrix support in sklearn.decomposition.RandomizedPCA is now deprecated in favor of the new TruncatedSVD.
  • cross_validation.KFold and cross_validation.StratifiedKFold now enforce n_folds >= 2 otherwise a ValueError is raised. By Olivier Grisel.
  • datasets.load_files's charset and charset_errors parameters were renamed encoding and decode_errors.
  • Attribute oob_score_ in sklearn.ensemble.GradientBoostingRegressor and sklearn.ensemble.GradientBoostingClassifier is deprecated and has been replaced by oob_improvement_.
  • Attributes in OrthogonalMatchingPursuit have been deprecated (copy_X, Gram, ...) and precompute_gram renamed precompute for consistency. See #2224.
  • sklearn.preprocessing.StandardScaler now converts integer input to float, and raises a warning. Previously it rounded for dense integer input.
  • Better input validation, with a warning raised on unexpected shapes for y.
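
To make the refactored tree interface concrete, here is a minimal sketch of a nearest-neighbor query and one of the new advanced queries (the data and parameter values are illustrative):

    import numpy as np
    from sklearn.neighbors import KDTree

    rng = np.random.RandomState(0)
    X = rng.random_sample((10, 3))
    tree = KDTree(X, leaf_size=2)
    # Distances to and indices of the 3 nearest neighbors of the first point.
    dist, ind = tree.query(X[:1], k=3)
    # Kernel density estimate at each training point, one of the new queries.
    density = tree.kernel_density(X, h=0.5)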

People

List of contributors for release 0.14 by number of commits.

  • 277 Gilles Louppe
  • 245 Lars Buitinck
  • 187 Andreas Mueller
  • 124 Arnaud Joly
  • 112 Jaques Grobler
  • 109 Gael Varoquaux
  • 107 Olivier Grisel
  • 102 Noel Dawe
  • 99 Kemal Eren
  • 79 Joel Nothman
  • 75 Jake VanderPlas
  • 73 Nelle Varoquaux
  • 71 Vlad Niculae
  • 65 Peter Prettenhofer
  • 64 Alexandre Gramfort
  • 54 Mathieu Blondel
  • 38 Nicolas Trésegnie
  • 35 eustache
  • 27 Denis Engemann
  • 25 Yann N. Dauphin
  • 19 Justin Vincent
  • 17 Robert Layton
  • 15 Doug Coleman
  • 14 Michael Eickenberg
  • 13 Robert Marchman
  • 11 Fabian Pedregosa
  • 11 Philippe Gervais
  • 10 Jim Holmström
  • 10 Tadej Janež
  • 10 syhw
  • 9 Mikhail Korobov
  • 9 Steven De Gryze
  • 8 sergeyf
  • 7 Ben Root
  • 7 Hrishikesh Huilgolkar
  • 6 Kyle Kastner
  • 6 Martin Luessi
  • 6 Rob Speer
  • 5 Federico Vaggi
  • 5 Raul Garreta
  • 5 Rob Zinkov
  • 4 Ken Geis
  • 3 A. Flaxman
  • 3 Denton Cockburn
  • 3 Dougal Sutherland
  • 3 Ian Ozsvald
  • 3 Johannes Schönberger
  • 3 Robert McGibbon
  • 3 Roman Sinayev
  • 3 Szabo Roland
  • 2 Diego Molla
  • 2 Imran Haque
  • 2 Jochen Wersdörfer
  • 2 Sergey Karayev
  • 2 Yannick Schwartz
  • 2 jamestwebber
  • 1 Abhijeet Kolhe
  • 1 Alexander Fabisch
  • 1 Bastiaan van den Berg
  • 1 Benjamin Peterson
  • 1 Daniel Velkov
  • 1 Fazlul Shahriar
  • 1 Felix Brockherde
  • 1 Félix-Antoine Fortin
  • 1 Harikrishnan S
  • 1 Jack Hale
  • 1 JakeMick
  • 1 James McDermott
  • 1 John Benediktsson
  • 1 John Zwinck
  • 1 Joshua Vredevoogd
  • 1 Justin Pati
  • 1 Kevin Hughes
  • 1 Kyle Kelley
  • 1 Matthias Ekman
  • 1 Miroslav Shubernetskiy
  • 1 Naoki Orii
  • 1 Norbert Crombach
  • 1 Rafael Cunha de Almeida
  • 1 Rolando Espinoza La fuente
  • 1 Seamus Abshere
  • 1 Sergey Feldman
  • 1 Sergio Medina
  • 1 Stefano Lattarini
  • 1 Steve Koch
  • 1 Sturla Molden
  • 1 Thomas Jarosch
  • 1 Yaroslav Halchenko

0.13.1

The 0.13.1 release only fixes some bugs and does not add any new functionality.

Changelog

  • Fixed a testing error caused by the function cross_validation.train_test_split being interpreted as a test, by Yaroslav Halchenko.
  • Fixed a bug in the reassignment of small clusters in cluster.MiniBatchKMeans, by Gael Varoquaux.
  • Fixed default value of gamma in decomposition.KernelPCA by Lars Buitinck.
  • Updated joblib to 0.7.0d by Gael Varoquaux.
  • Fixed scaling of the deviance in ensemble.GradientBoostingClassifier by Peter Prettenhofer.
  • Better tie-breaking in multiclass.OneVsOneClassifier by Andreas Müller.
  • Other small improvements to tests and documentation.

People

List of contributors for release 0.13.1 by number of commits.

0.13

New Estimator Classes

  • dummy.DummyClassifier and dummy.DummyRegressor, two data-independent predictors by Mathieu Blondel. Useful to sanity-check your estimators. See dummy_estimators in the user guide. Multioutput support added by Arnaud Joly.
  • decomposition.FactorAnalysis, a transformer implementing the classical factor analysis, by Christian Osendorfer and Alexandre Gramfort. See FA in the user guide.
  • feature_extraction.FeatureHasher, a transformer implementing the "hashing trick" for fast, low-memory feature extraction from string fields by Lars Buitinck, and feature_extraction.text.HashingVectorizer for text documents by Olivier Grisel. See feature_hashing and hashing_vectorizer for the documentation and sample usage.
  • pipeline.FeatureUnion, a transformer that concatenates results of several other transformers, by Andreas Müller (see the sketch after this list). See feature_union in the user guide.
  • random_projection.GaussianRandomProjection, random_projection.SparseRandomProjection and the function random_projection.johnson_lindenstrauss_min_dim. The first two are transformers implementing Gaussian and sparse random projection matrices by Olivier Grisel and Arnaud Joly. See random_projection in the user guide.
  • kernel_approximation.Nystroem, a transformer for approximating arbitrary kernels by Andreas Müller. See nystroem_kernel_approx in the user guide.
  • preprocessing.OneHotEncoder, a transformer that computes binary encodings of categorical features by Andreas Müller. See preprocessing_categorical_features in the user guide.
  • linear_model.PassiveAggressiveClassifier and linear_model.PassiveAggressiveRegressor, predictors implementing an efficient stochastic optimization for linear models by Rob Zinkov and Mathieu Blondel. See passive_aggressive in the user guide.
  • ensemble.RandomTreesEmbedding, a transformer for creating high-dimensional sparse representations using ensembles of totally random trees by Andreas Müller. See random_trees_embedding in the user guide.
  • manifold.SpectralEmbedding and function manifold.spectral_embedding, implementing the "laplacian eigenmaps" transformation for non-linear dimensionality reduction by Wei Li. See spectral_embedding in the user guide.
  • isotonic.IsotonicRegression by Fabian Pedregosa, Alexandre Gramfort and Nelle Varoquaux.
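
As an illustration of pipeline.FeatureUnion above, a minimal sketch concatenating the outputs of two transformers (the dataset and step names are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.decomposition import PCA
    from sklearn.feature_selection import SelectKBest
    from sklearn.pipeline import FeatureUnion

    iris = load_iris()
    # Concatenate two PCA components with the single best univariate feature.
    union = FeatureUnion([("pca", PCA(n_components=2)),
                          ("kbest", SelectKBest(k=1))])
    X_features = union.fit(iris.data, iris.target).transform(iris.data)
    print(X_features.shape)  # (150, 3)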

Changelog

  • metrics.zero_one_loss (formerly metrics.zero_one) now has an option for normalized output that reports the fraction of misclassifications rather than the raw number of misclassifications (see the sketch after this list). By Kyle Beauchamp.
  • tree.DecisionTreeClassifier and all derived ensemble models now support sample weighting, by Noel Dawe and Gilles Louppe.
  • Speedup improvement when using bootstrap samples in forests of randomized trees, by Peter Prettenhofer and Gilles Louppe.
  • Partial dependence plots for gradient_boosting in ensemble.partial_dependence.partial_dependence by Peter Prettenhofer. See example_ensemble_plot_partial_dependence.py for an example.
  • The table of contents on the website has now been made expandable by Jaques Grobler.
  • feature_selection.SelectPercentile now breaks ties deterministically instead of returning all equally ranked features.
  • feature_selection.SelectKBest and feature_selection.SelectPercentile are more numerically stable since they use scores, rather than p-values, to rank results. This means that they might sometimes select different features than they did previously.
  • Ridge regression and ridge classification fitting with sparse_cg solver no longer has quadratic memory complexity, by Lars Buitinck and Fabian Pedregosa.
  • Ridge regression and ridge classification now support a new fast solver called lsqr, by Mathieu Blondel.
  • Speed up of metrics.precision_recall_curve by Conrad Lee.
  • Added support for reading/writing svmlight files with pairwise preference attribute (qid in svmlight file format) in datasets.dump_svmlight_file and datasets.load_svmlight_file by Fabian Pedregosa.
  • Faster and more robust metrics.confusion_matrix and clustering_evaluation by Wei Li.
  • cross_validation.cross_val_score now works with precomputed kernels and affinity matrices, by Andreas Müller.
  • LARS algorithm made more numerically stable, with heuristics to drop regressors that are too correlated and to stop the path when numerical noise becomes predominant, by Gael Varoquaux.
  • New kernel metrics.chi2_kernel by Andreas Müller, often used in computer vision applications.
  • Fixed a longstanding bug in naive_bayes.BernoulliNB, by Shaun Jackman.
  • Implemented predict_proba in multiclass.OneVsRestClassifier, by Andrew Winterman.
  • Improved consistency in gradient boosting: the estimators ensemble.GradientBoostingRegressor and ensemble.GradientBoostingClassifier use the estimator tree.DecisionTreeRegressor instead of the tree._tree.Tree data structure, by Arnaud Joly.
  • Fixed a floating point exception in the decision trees <tree> module, by Seberg.
  • Fixed a bug where metrics.roc_curve failed when y_true contained only one class, by Wei Li.
  • Add the metrics.mean_absolute_error function which computes the mean absolute error. The metrics.mean_squared_error, metrics.mean_absolute_error and metrics.r2_score metrics support multioutput by Arnaud Joly.
  • Fixed class_weight support in svm.LinearSVC and linear_model.LogisticRegression by Andreas Müller. The meaning of class_weight was reversed in earlier releases: erroneously, a higher weight meant fewer positives of a given class.
  • Improve narrative documentation and consistency in sklearn.metrics for regression and classification metrics by Arnaud Joly.
  • Fixed a bug in sklearn.svm.SVC when using csr-matrices with unsorted indices by Xinfan Meng and Andreas Müller.
  • MiniBatchKMeans: Added random reassignment of cluster centers with few observations attached to them, by Gael Varoquaux.
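
A minimal sketch of the normalize option on metrics.zero_one_loss from the first item of this list (the labels are illustrative):

    from sklearn.metrics import zero_one_loss

    y_true = [0, 1, 1, 0]
    y_pred = [0, 1, 0, 0]
    print(zero_one_loss(y_true, y_pred))                   # 0.25, the fraction
    print(zero_one_loss(y_true, y_pred, normalize=False))  # 1, the raw count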

API changes summary

  • Renamed all occurrences of n_atoms to n_components for consistency. This applies to decomposition.DictionaryLearning, decomposition.MiniBatchDictionaryLearning, decomposition.dict_learning, decomposition.dict_learning_online.
  • Renamed all occurrences of max_iters to max_iter for consistency. This applies to semi_supervised.LabelPropagation and semi_supervised.label_propagation.LabelSpreading.
  • Renamed all occurrences of learn_rate to learning_rate for consistency in ensemble.BaseGradientBoosting and ensemble.GradientBoostingRegressor.
  • The module sklearn.linear_model.sparse is gone. Sparse matrix support was already integrated into the "regular" linear models.
  • sklearn.metrics.mean_square_error, which incorrectly returned the accumulated error, was removed. Use mean_squared_error instead.
  • Passing class_weight parameters to fit methods is no longer supported. Pass them to estimator constructors instead.
  • GMMs no longer have decode and rvs methods. Use the score, predict or sample methods instead.
  • The solver fit option in Ridge regression and classification is now deprecated and will be removed in v0.14. Use the constructor option instead.
  • feature_extraction.text.DictVectorizer now returns sparse matrices in the CSR format, instead of COO.
  • Renamed k in cross_validation.KFold and cross_validation.StratifiedKFold to n_folds, renamed n_bootstraps to n_iter in cross_validation.Bootstrap.
  • Renamed all occurrences of n_iterations to n_iter for consistency. This applies to cross_validation.ShuffleSplit, cross_validation.StratifiedShuffleSplit, utils.randomized_range_finder and utils.randomized_svd.
  • Replaced rho in linear_model.ElasticNet and linear_model.SGDClassifier by l1_ratio. The rho parameter had different meanings; l1_ratio was introduced to avoid confusion. It has the same meaning as previously rho in linear_model.ElasticNet and (1-rho) in linear_model.SGDClassifier (see the sketch after this list).
  • linear_model.LassoLars and linear_model.Lars now store a list of paths in the case of multiple targets, rather than an array of paths.
  • The attribute gmm of hmm.GMMHMM was renamed to gmm_ to adhere more strictly with the API.
  • cluster.spectral_embedding was moved to manifold.spectral_embedding.
  • Renamed eig_tol in manifold.spectral_embedding, cluster.SpectralClustering to eigen_tol, renamed mode to eigen_solver.
  • classes_ and n_classes_ attributes of tree.DecisionTreeClassifier and all derived ensemble models are now flat in case of single output problems and nested in case of multi-output problems.
  • The estimators_ attribute of ensemble.gradient_boosting.GradientBoostingRegressor and ensemble.gradient_boosting.GradientBoostingClassifier is now an array of tree.DecisionTreeRegressor.
  • Renamed chunk_size to batch_size in decomposition.MiniBatchDictionaryLearning and decomposition.MiniBatchSparsePCA for consistency.
  • svm.SVC and svm.NuSVC now provide a classes_ attribute and support arbitrary dtypes for labels y. Also, the dtype returned by predict now reflects the dtype of y during fit (used to be np.float).
  • Changed default test_size in cross_validation.train_test_split to None, added possibility to infer test_size from train_size in cross_validation.ShuffleSplit and cross_validation.StratifiedShuffleSplit.
  • Renamed function sklearn.metrics.zero_one to sklearn.metrics.zero_one_loss. Be aware that the default behavior in sklearn.metrics.zero_one_loss is different from sklearn.metrics.zero_one: normalize=False is changed to normalize=True.
  • Renamed function metrics.zero_one_score to metrics.accuracy_score.
  • datasets.make_circles now has the same number of inner and outer points.
  • In the Naive Bayes classifiers, the class_prior parameter was moved from fit to __init__.
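
A minimal sketch of the rho to l1_ratio rename above (the data and parameter values are illustrative):

    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.RandomState(0)
    X, y = rng.randn(20, 3), rng.randn(20)
    # l1_ratio=1.0 gives the pure lasso penalty, l1_ratio=0.0 pure ridge.
    model = ElasticNet(alpha=0.1, l1_ratio=0.7).fit(X, y)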

People

List of contributors for release 0.13 by number of commits.

0.12.1

The 0.12.1 release is a bug-fix release with no additional features.

Changelog

People

0.12

Changelog

  • Various speed improvements of the decision trees <tree> module, by Gilles Louppe.
  • ensemble.GradientBoostingRegressor and ensemble.GradientBoostingClassifier now support feature subsampling via the max_features argument, by Peter Prettenhofer.
  • Added Huber and Quantile loss functions to ensemble.GradientBoostingRegressor, by Peter Prettenhofer.
  • Decision trees <tree> and forests of randomized trees <forest> now support multi-output classification and regression problems, by Gilles Louppe.
  • Added preprocessing.LabelEncoder, a simple utility class to normalize labels or transform non-numerical labels, by Mathieu Blondel (see the sketch after this list).
  • Added the epsilon-insensitive loss and the ability to make probabilistic predictions with the modified huber loss in sgd, by Mathieu Blondel.
  • Added multidimensional scaling <multidimensional_scaling>, by Nelle Varoquaux.
  • SVMlight file format loader now detects compressed (gzip/bzip2) files and decompresses them on the fly, by Lars Buitinck.
  • SVMlight file format serializer now preserves double precision floating point values, by Olivier Grisel.
  • A common testing framework for all estimators was added, by Andreas Müller.
  • Understandable error messages for estimators that do not accept sparse input, by Gael Varoquaux.
  • Speedups in hierarchical clustering by Gael Varoquaux. In particular building the tree now supports early stopping. This is useful when the number of clusters is not small compared to the number of samples.
  • Add MultiTaskLasso and MultiTaskElasticNet for joint feature selection, by Alexandre Gramfort.
  • Added metrics.auc_score and metrics.average_precision_score convenience functions by Andreas Müller.
  • Improved sparse matrix support in the feature_selection module by Andreas Müller.
  • New word boundaries-aware character n-gram analyzer for the text_feature_extraction module by @kernc.
  • Fixed bug in spectral clustering that led to single point clusters by Andreas Müller.
  • In feature_extraction.text.CountVectorizer, added an option to ignore infrequent words, min_df by Andreas Müller.
  • Add support for multiple targets in some linear models (ElasticNet, Lasso and OrthogonalMatchingPursuit) by Vlad Niculae and Alexandre Gramfort.
  • Fixes in decomposition.ProbabilisticPCA score function by Wei Li.
  • Fixed feature importance computation in gradient_boosting.
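
A minimal sketch of preprocessing.LabelEncoder normalizing string labels, as noted above (the labels are illustrative):

    from sklearn.preprocessing import LabelEncoder

    le = LabelEncoder().fit(["paris", "paris", "tokyo", "amsterdam"])
    print(le.classes_)                       # ['amsterdam' 'paris' 'tokyo']
    print(le.transform(["tokyo", "paris"]))  # [2 1]
    print(le.inverse_transform([0, 2]))      # ['amsterdam' 'tokyo']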

API changes summary

  • The old scikits.learn package has disappeared; all code should import from sklearn instead, which was introduced in 0.9.
  • In metrics.roc_curve, the thresholds array is now returned with its order reversed, in order to keep it consistent with the order of the returned fpr and tpr.
  • In hmm objects, like hmm.GaussianHMM, hmm.MultinomialHMM, etc., all parameters must be passed to the object when initialising it and not through fit. Now fit will only accept the data as an input parameter.
  • For all SVM classes, a faulty behavior of gamma was fixed. Previously, the default gamma value was only computed the first time fit was called and then stored. It is now recalculated on every call to fit.
  • All Base classes are now abstract metaclasses so that they cannot be instantiated.
  • cluster.ward_tree now also returns the parent array. This is necessary for early-stopping in which case the tree is not completely built.
  • In feature_extraction.text.CountVectorizer the parameters min_n and max_n were joined into the parameter ngram_range to enable grid-searching both at once (see the sketch after this list).
  • In feature_extraction.text.CountVectorizer, words that appear only in one document are now ignored by default. To reproduce the previous behavior, set min_df=1.
  • Fixed API inconsistency: linear_model.SGDClassifier.predict_proba now returns 2d array when fit on two classes.
  • Fixed API inconsistency: qda.QDA.decision_function and lda.LDA.decision_function now return 1d arrays when fit on two classes.
  • Grid of alphas used for fitting linear_model.LassoCV and linear_model.ElasticNetCV is now stored in the attribute alphas_ rather than overriding the init parameter alphas.
  • Linear models when alpha is estimated by cross-validation store the estimated value in the alpha_ attribute rather than just alpha or best_alpha.
  • ensemble.GradientBoostingClassifier now supports ensemble.GradientBoostingClassifier.staged_predict_proba, and ensemble.GradientBoostingClassifier.staged_predict.
  • svm.sparse.SVC and other sparse SVM classes are now deprecated. All classes in the svm module now automatically select the sparse or dense representation based on the input.
  • All clustering algorithms now interpret the array X given to fit as input data, in particular cluster.SpectralClustering and cluster.AffinityPropagation which previously expected affinity matrices.
  • For clustering algorithms that take the desired number of clusters as a parameter, this parameter is now called n_clusters.
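
A minimal sketch of the merged ngram_range parameter noted above; min_df=1 restores the previous keep-every-term behavior (the documents are illustrative, and get_feature_names is the accessor of this era):

    from sklearn.feature_extraction.text import CountVectorizer

    docs = ["the cat sat", "the cat sat on the mat"]
    vec = CountVectorizer(ngram_range=(1, 2), min_df=1)  # unigrams and bigrams
    X = vec.fit_transform(docs)
    print(vec.get_feature_names())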

People

0.11

Changelog

Highlights

  • Gradient boosted regression trees (gradient_boosting) for classification and regression by Peter Prettenhofer and Scott White (see the sketch after this list).
  • Simple dict-based feature loader with support for categorical variables (feature_extraction.DictVectorizer) by Lars Buitinck.
  • Added Matthews correlation coefficient (metrics.matthews_corrcoef) and added macro and micro average options to metrics.precision_score, metrics.recall_score and metrics.f1_score by Satrajit Ghosh.
  • Out-of-bag estimates of generalization error for ensemble models by Andreas Müller.
  • randomized_l1: Randomized sparse linear models for feature selection, by Alexandre Gramfort and Gael Varoquaux.
  • label_propagation for semi-supervised learning, by Clay Woolam. Note the semi-supervised API is still work in progress, and may change.
  • Added BIC/AIC model selection to classical gmm and unified the API with the remainder of scikit-learn, by Bertrand Thirion.
  • Added sklearn.cross_validation.StratifiedShuffleSplit, which is a sklearn.cross_validation.ShuffleSplit with balanced splits, by Yannick Schwartz.
  • sklearn.neighbors.NearestCentroid classifier added, along with a shrink_threshold parameter, which implements shrunken centroid classification, by Robert Layton.
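
A minimal sketch of the new gradient boosted trees (the dataset is illustrative; 0.11's learn_rate parameter, later renamed learning_rate, is left at its default here):

    from sklearn.datasets import make_hastie_10_2
    from sklearn.ensemble import GradientBoostingClassifier

    X, y = make_hastie_10_2(n_samples=1000, random_state=0)
    # 100 depth-1 trees fit by stagewise gradient boosting.
    clf = GradientBoostingClassifier(n_estimators=100, max_depth=1).fit(X, y)
    print(clf.score(X, y))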

Other changes

  • Merged dense and sparse implementations of sgd module and exposed utility extension types for sequential datasets seq_dataset and weight vectors weight_vector by Peter Prettenhofer.
  • Added partial_fit (support for online/minibatch learning) and warm_start to the sgd module by Mathieu Blondel.
  • Dense and sparse implementations of svm classes and linear_model.LogisticRegression merged by Lars Buitinck.
  • Regressors can now be used as base estimator in the multiclass module by Mathieu Blondel.
  • Added n_jobs option to metrics.pairwise.pairwise_distances and metrics.pairwise.pairwise_kernels for parallel computation, by Mathieu Blondel.
  • k_means can now be run in parallel, using the n_jobs argument to either k_means or KMeans, by Robert Layton.
  • Improved cross_validation and grid_search documentation and introduced the new cross_validation.train_test_split helper function (see the sketch after this list), by Olivier Grisel.
  • svm.SVC members coef_ and intercept_ changed sign for consistency with decision_function; for kernel==linear, coef_ was fixed in the one-vs-one case, by Andreas Müller.
  • Performance improvements to efficient leave-one-out cross-validated Ridge regression, esp. for the n_samples > n_features case, in linear_model.RidgeCV, by Reuben Fletcher-Costin.
  • Refactoring and simplification of the text_feature_extraction API and fixed a bug that caused possible negative IDF, by Olivier Grisel.
  • Beam pruning option in _BaseHMM module has been removed since it is difficult to cythonize. If you are interested in contributing a cython version, you can use the python version in the git history as a reference.
  • Classes in neighbors now support arbitrary Minkowski metric for nearest neighbors searches. The metric can be specified by argument p.
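
A minimal sketch of the new train_test_split helper mentioned above (0.11-era module path; the test_size spelling follows the ShuffleSplit renames listed in the next section):

    import numpy as np
    from sklearn.cross_validation import train_test_split

    X, y = np.arange(20).reshape(10, 2), np.arange(10)
    # Hold out 25% of the samples as a test set.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)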

API changes summary

  • covariance.EllipticEnvelop is now deprecated; please use covariance.EllipticEnvelope instead.
  • NeighborsClassifier and NeighborsRegressor are gone in the module neighbors. Use the classes KNeighborsClassifier, RadiusNeighborsClassifier, KNeighborsRegressor and/or RadiusNeighborsRegressor instead.
  • Sparse classes in the sgd module are now deprecated.
  • In mixture.GMM, mixture.DPGMM and mixture.VBGMM, parameters must be passed to an object when initialising it and not through fit. Now fit will only accept the data as an input parameter.
  • methods rvs and decode in GMM module are now deprecated. sample and score or predict should be used instead.
  • The attributes _scores and _pvalues in univariate feature selection objects are now deprecated; scores_ or pvalues_ should be used instead.
  • In LogisticRegression, LinearSVC, SVC and NuSVC, the class_weight parameter is now an initialization parameter, not a parameter to fit. This makes grid searches over this parameter possible.
  • LFW data is now always shape (n_samples, n_features) to be consistent with the Olivetti faces dataset. Use images and pairs attribute to access the natural images shapes instead.
  • In svm.LinearSVC, the meaning of the multi_class parameter changed. Options now are 'ovr' and 'crammer_singer', with 'ovr' being the default. This does not change the default behavior but hopefully is less confusing.
  • Class feature_extraction.text.Vectorizer is deprecated and replaced by feature_extraction.text.TfidfVectorizer.
  • The preprocessor / analyzer nested structure for text feature extraction has been removed. All those features are now directly passed as flat constructor arguments to feature_extraction.text.TfidfVectorizer and feature_extraction.text.CountVectorizer, in particular the following parameters are now used:

    • analyzer can be 'word' or 'char' to switch the default analysis scheme, or use a specific python callable (as previously).
    • tokenizer and preprocessor have been introduced to make it still possible to customize those steps with the new API.
    • input explicitly controls how to interpret the sequence passed to fit and predict: filenames, file objects or direct (byte or unicode) strings.
    • charset decoding is explicit and strict by default.
    • the vocabulary, fitted or not, is now stored in the vocabulary_ attribute to be consistent with the project conventions.
  • Class feature_extraction.text.TfidfVectorizer now derives directly from feature_extraction.text.CountVectorizer to make grid search trivial.
  • The rvs methods in the _BaseHMM module are now deprecated; sample should be used instead.
  • The beam pruning option in the _BaseHMM module has been removed since it is difficult to Cythonize. If you are interested, the Python version remains in the git history as a reference.
  • The SVMlight format loader now supports files with both zero-based and one-based column indices, since both occur "in the wild".
  • Arguments in class ShuffleSplit are now consistent with StratifiedShuffleSplit. Arguments test_fraction and train_fraction are deprecated and renamed to test_size and train_size and can accept both float and int.
  • Arguments in class Bootstrap are now consistent with StratifiedShuffleSplit. Arguments n_test and n_train are deprecated and renamed to test_size and train_size and can accept both float and int.
  • Argument p added to classes in neighbors to specify an arbitrary Minkowski metric for nearest neighbors searches.

People

0.10

Changelog

  • Python 2.5 compatibility was dropped; the minimum Python version needed to use scikit-learn is now 2.6.
  • sparse_inverse_covariance estimation using the graph Lasso, with associated cross-validated estimator, by Gael Varoquaux.
  • New Tree <tree> module by Brian Holt, Peter Prettenhofer, Satrajit Ghosh and Gilles Louppe. The module comes with complete documentation and examples.
  • Fixed a bug in the RFE module by Gilles Louppe (issue #378).
  • Fixed a memory leak in the svm module by Brian Holt (issue #367).
  • Faster tests by Fabian Pedregosa and others.
  • Silhouette Coefficient cluster analysis evaluation metric added as sklearn.metrics.silhouette_score (see the sketch after this list), by Robert Layton.
  • Fixed a bug in k_means in the handling of the n_init parameter: the clustering algorithm used to be run n_init times but the last solution was retained instead of the best solution by Olivier Grisel.
  • Minor refactoring in the sgd module; consolidated dense and sparse predict methods; enhanced test-time performance by converting model parameters to Fortran-style arrays after fitting (only multi-class).
  • Adjusted Mutual Information metric added as sklearn.metrics.adjusted_mutual_info_score by Robert Layton.
  • Models like SVC/SVR/LinearSVC/LogisticRegression from libsvm/liblinear now support scaling of C regularization parameter by the number of samples by Alexandre Gramfort.
  • New Ensemble Methods <ensemble> module by Gilles Louppe and Brian Holt. The module comes with the random forest algorithm and the extra-trees method, along with documentation and examples.
  • outlier_detection: outlier and novelty detection, by Virgile Fritsch.
  • kernel_approximation: a transform implementing kernel approximation for fast SGD on non-linear kernels by Andreas Müller.
  • Fixed a bug due to atom swapping in OMP by Vlad Niculae.
  • SparseCoder by Vlad Niculae.
  • mini_batch_kmeans performance improvements by Olivier Grisel.
  • k_means support for sparse matrices by Mathieu Blondel.
  • Improved documentation for developers and for the sklearn.utils module, by Jake Vanderplas.
  • Vectorized 20newsgroups dataset loader (sklearn.datasets.fetch_20newsgroups_vectorized) by Mathieu Blondel.
  • multiclass by Lars Buitinck.
  • Utilities for fast computation of mean and variance for sparse matrices by Mathieu Blondel.
  • Make sklearn.preprocessing.scale and sklearn.preprocessing.Scaler work on sparse matrices, by Olivier Grisel.
  • Feature importances using decision trees and/or forest of trees, by Gilles Louppe.
  • Parallel implementation of forests of randomized trees by Gilles Louppe.
  • sklearn.cross_validation.ShuffleSplit can subsample the train sets as well as the test sets by Olivier Grisel.
  • Errors in the build of the documentation fixed by Andreas Müller.
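
A minimal sketch of the new silhouette_score metric (the toy clustering is illustrative; the n_clusters spelling follows the rename noted in the 0.12 API changes above):

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=100, centers=3, random_state=0)
    labels = KMeans(n_clusters=3, random_state=0).fit(X).labels_
    print(silhouette_score(X, labels))  # close to 1 for well-separated blobs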

API changes summary

Here are the code migration instructions when upgrading from scikit-learn version 0.9:

  • Some estimators that may overwrite their inputs to save memory previously had overwrite_ parameters; these have been replaced with copy_ parameters with exactly the opposite meaning.

    This particularly affects some of the estimators in linear_model. The default behavior is still to copy everything passed in.

  • The SVMlight dataset loader sklearn.datasets.load_svmlight_file no longer supports loading two files at once; use load_svmlight_files instead. Also, the (unused) buffer_mb parameter is gone.
  • Sparse estimators in the sgd module use dense parameter vector coef_ instead of sparse_coef_. This significantly improves test time performance.
  • The covariance module now has a robust estimator of covariance, the Minimum Covariance Determinant estimator.
  • Cluster evaluation metrics in metrics.cluster have been refactored but the changes are backwards compatible. They have been moved to metrics.cluster.supervised, while metrics.cluster.unsupervised contains the Silhouette Coefficient.
  • The permutation_test_score function now behaves the same way as cross_val_score (i.e. uses the mean score across the folds).
  • Cross Validation generators now use integer indices (indices=True) by default instead of boolean masks. This makes it more intuitive to use with sparse matrix data.
  • The functions used for sparse coding, sparse_encode and sparse_encode_parallel have been combined into sklearn.decomposition.sparse_encode, and the shapes of the arrays have been transposed for consistency with the matrix factorization setting, as opposed to the regression setting.
  • Fixed an off-by-one error in the SVMlight/LibSVM file format handling; files generated using sklearn.datasets.dump_svmlight_file should be re-generated. (They should continue to work, but accidentally had one extra column of zeros prepended.)
  • BaseDictionaryLearning class replaced by SparseCodingMixin.
  • sklearn.utils.extmath.fast_svd has been renamed sklearn.utils.extmath.randomized_svd and the default oversampling is now fixed to 10 additional random vectors instead of doubling the number of components to extract. The new behavior follows the reference paper.

People

The following people contributed to scikit-learn since last release:

0.9

scikit-learn 0.9 was released in September 2011, three months after the 0.8 release, and includes the new modules manifold and dirichlet_process, as well as several new algorithms and documentation improvements.

This release also includes the dictionary-learning work developed by Vlad Niculae as part of the Google Summer of Code program.


Changelog

  • New manifold module by Jake Vanderplas and Fabian Pedregosa.
  • New Dirichlet Process <dirichlet_process> Gaussian Mixture Model by Alexandre Passos.
  • neighbors module refactoring by Jake Vanderplas: general refactoring, support for sparse matrices in input, speed and documentation improvements. See the next section for a full list of API changes.
  • Improvements on the feature_selection module by Gilles Louppe: refactoring of the RFE classes, documentation rewrite, increased efficiency and minor API changes.
  • SparsePCA by Vlad Niculae, Gael Varoquaux and Alexandre Gramfort.
  • Printing an estimator now behaves independently of architectures and Python version thanks to Jean Kossaifi.
  • Loader for libsvm/svmlight format <libsvm_loader> by Mathieu Blondel and Lars Buitinck.
  • Documentation improvements: thumbnails in example gallery <examples-index> by Fabian Pedregosa.
  • Important bugfixes in svm module (segfaults, bad performance) by Fabian Pedregosa.
  • Added multinomial_naive_bayes and bernoulli_naive_bayes by Lars Buitinck.
  • Text feature extraction optimizations by Lars Buitinck.
  • Chi-Square feature selection (feature_selection.univariate_selection.chi2) by Lars Buitinck.
  • sample_generators module refactoring by Gilles Louppe.
  • multiclass by Mathieu Blondel.
  • Ball tree rewrite by Jake Vanderplas.
  • Implementation of the DBSCAN algorithm by Robert Layton.
  • KMeans predict and transform methods by Robert Layton.
  • Preprocessing module refactoring by Olivier Grisel.
  • Faster mean shift by Conrad Lee.
  • New Bootstrap, ShuffleSplit and various other improvements in cross validation schemes by Olivier Grisel and Gael Varoquaux.
  • Adjusted Rand index and V-Measure clustering evaluation metrics by Olivier Grisel.
  • Added Orthogonal Matching Pursuit <linear_model.OrthogonalMatchingPursuit> by Vlad Niculae.
  • Added 2D-patch extractor utilities in the feature_extraction module by Vlad Niculae.
  • Implementation of linear_model.LassoLarsCV (cross-validated Lasso solver using the Lars algorithm) and linear_model.LassoLarsIC (BIC/AIC model selection in Lars) by Gael Varoquaux and Alexandre Gramfort.
  • Scalability improvements to metrics.roc_curve by Olivier Hervieu.
  • Distance helper functions metrics.pairwise.pairwise_distances and metrics.pairwise.pairwise_kernels (see the sketch after this list) by Robert Layton.
  • Mini-Batch K-Means <cluster.MiniBatchKMeans> by Nelle Varoquaux and Peter Prettenhofer.
  • mldata utilities by Pietro Berkes.
  • olivetti_faces by David Warde-Farley.
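
A minimal sketch of the pairwise distance helpers above (the arrays are illustrative):

    import numpy as np
    from sklearn.metrics.pairwise import pairwise_distances

    X = np.array([[0.0, 1.0], [1.0, 1.0]])
    Y = np.array([[0.0, 0.0]])
    D = pairwise_distances(X, Y, metric="euclidean")  # shape (2, 1)
    print(D)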

API changes summary

Here are the code migration instructions when upgrading from scikit-learn version 0.8:

  • The scikits.learn package was renamed sklearn. There is still a scikits.learn package alias for backward compatibility.

    Third-party projects with a dependency on scikit-learn 0.9+ should upgrade their codebase. For instance under Linux / MacOSX just run (make a backup first!):

    find -name "*.py" | xargs sed -i 's/\bscikits.learn\b/sklearn/g'
  • Estimators no longer accept model parameters as fit arguments: instead all parameters must only be passed as constructor arguments or using the now public set_params method inherited from base.BaseEstimator.

    Some estimators can still accept keyword arguments on the fit but this is restricted to data-dependent values (e.g. a Gram matrix or an affinity matrix that is precomputed from the X data matrix).

  • The cross_val package has been renamed to cross_validation although there is also a cross_val package alias in place for backward compatibility.

    Third-party projects with a dependency on scikit-learn 0.9+ should upgrade their codebase. For instance under Linux / MacOSX just run (make a backup first!):

    find -name "*.py" | xargs sed -i 's/\bcross_val\b/cross_validation/g'
  • The score_func argument of the sklearn.cross_validation.cross_val_score function is now expected to accept y_test and y_predicted as only arguments for classification and regression tasks or X_test for unsupervised estimators.
  • gamma parameter for support vector machine algorithms is set to 1 / n_features by default, instead of 1 / n_samples.
  • The sklearn.hmm module has been marked as orphaned: it will be removed from scikit-learn in version 0.11 unless someone steps up to contribute documentation, examples and fix lurking numerical stability issues.
  • sklearn.neighbors has been made into a submodule. The two previously available estimators, NeighborsClassifier and NeighborsRegressor have been marked as deprecated. Their functionality has been divided among five new classes: NearestNeighbors for unsupervised neighbors searches, KNeighborsClassifier & RadiusNeighborsClassifier for supervised classification problems, and KNeighborsRegressor & RadiusNeighborsRegressor for supervised regression problems.
  • sklearn.ball_tree.BallTree has been moved to sklearn.neighbors.BallTree. Using the former will generate a warning.
  • sklearn.linear_model.LARS() and related classes (LassoLARS, LassoLARSCV, etc.) have been renamed to sklearn.linear_model.Lars().
  • All distance metrics and kernels in sklearn.metrics.pairwise now have a Y parameter, which by default is None. If not given, the result is the pairwise distance (or kernel similarity) between the samples in X. If given, the result is the pairwise distance (or kernel similarity) between samples in X and Y.
  • sklearn.metrics.pairwise.l1_distance is now called manhattan_distance, and by default returns the pairwise distance. For the component-wise distance, set the parameter sum_over_features to False.

Backward compatibility package aliases and other deprecated classes and functions will be removed in version 0.11.

People

38 people contributed to this release.

0.8

scikit-learn 0.8 was released in May 2011, one month after the first "international" scikit-learn coding sprint, and is marked by the inclusion of important modules (hierarchical_clustering, cross_decomposition, NMF), initial support for Python 3, and by important enhancements and bug fixes.

Changelog

Several new modules were introduced during this release:

Some other modules benefited from significant improvements or cleanups.

  • Initial support for Python 3: builds and imports cleanly, some modules are usable while others have failing tests by Fabian Pedregosa.
  • decomposition.PCA is now usable from the Pipeline object by Olivier Grisel.
  • Guide performance-howto by Olivier Grisel.
  • Fixes for memory leaks in libsvm bindings, 64-bit safer BallTree by Lars Buitinck.
  • Bug and style fixes in the k_means algorithm by Jan Schlüter.
  • Add attribute converged to Gaussian Mixture Models by Vincent Schut.
  • Implement transform, predict_log_proba in lda.LDA by Mathieu Blondel.
  • Refactoring in the svm module and bug fixes by Fabian Pedregosa, Gael Varoquaux and Amit Aides.
  • Refactored SGD module (removed code duplication, better variable naming), added interface for sample weight by Peter Prettenhofer.
  • Wrapped BallTree with Cython by Thouis (Ray) Jones.
  • Added function svm.l1_min_c by Paolo Losi.
  • Typos, doc style, etc. by Yaroslav Halchenko, Gael Varoquaux, Olivier Grisel, Yann Malet, Nicolas Pinto, Lars Buitinck and Fabian Pedregosa.

People

People that made this release possible preceded by number of commits:

0.7

scikit-learn 0.7 was released in March 2011, roughly three months after the 0.6 release. This release is marked by speed improvements in existing algorithms such as k-nearest neighbors and K-means, and by the inclusion of an efficient algorithm for computing the Ridge generalized cross-validation solution. Unlike the preceding release, no new modules were added to this release.

Changelog

  • Performance improvements for Gaussian Mixture Model sampling [Jan Schlüter].
  • Implementation of efficient leave-one-out cross-validated Ridge in linear_model.RidgeCV (see the sketch after this list) [Mathieu Blondel].
  • Better handling of collinearity and early stopping in linear_model.lars_path [Alexandre Gramfort and Fabian Pedregosa].
  • Fixes for liblinear ordering of labels and sign of coefficients [Dan Yamins, Paolo Losi, Mathieu Blondel and Fabian Pedregosa].
  • Performance improvements for Nearest Neighbors algorithm in high-dimensional spaces [Fabian Pedregosa].
  • Performance improvements for cluster.KMeans [Gael Varoquaux and James Bergstra].
  • Sanity checks for SVM-based classes [Mathieu Blondel].
  • Refactoring of neighbors.NeighborsClassifier and neighbors.kneighbors_graph: added different algorithms for the k-Nearest Neighbor Search and implemented a more stable algorithm for finding barycenter weights. Also added some developer documentation for this module, see notes_neighbors for more information [Fabian Pedregosa].
  • Documentation improvements: Added pca.RandomizedPCA and linear_model.LogisticRegression to the class reference. Also added references of matrices used for clustering and other fixes [Gael Varoquaux, Fabian Pedregosa, Mathieu Blondel, Olivier Grisel, Virgile Fritsch, Emmanuelle Gouillart].
  • Exposed decision_function in classes that make use of liblinear, dense and sparse variants, like svm.LinearSVC or linear_model.LogisticRegression [Fabian Pedregosa].
  • Performance and API improvements to metrics.euclidean_distances and to pca.RandomizedPCA [James Bergstra].
  • Fix compilation issues under NetBSD [Kamel Ibn Hassen Derouiche].
  • Allow input sequences of different lengths in hmm.GaussianHMM [Ron Weiss].
  • Fix bug in affinity propagation caused by incorrect indexing [Xinfan Meng].
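
A minimal sketch of the efficient leave-one-out Ridge above (modern names are used: in 0.7 the package was still scikits.learn and the selected penalty was not yet exposed as alpha_; the alphas grid is illustrative):

    import numpy as np
    from sklearn.linear_model import RidgeCV

    rng = np.random.RandomState(0)
    X = rng.randn(50, 5)
    y = X[:, 0] + 0.1 * rng.randn(50)
    model = RidgeCV(alphas=[0.1, 1.0, 10.0]).fit(X, y)
    print(model.alpha_)  # penalty selected by leave-one-out cross-validation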

People

People that made this release possible preceded by number of commits:

0.6

scikit-learn 0.6 was released in December 2010. It is marked by the inclusion of several new modules and a general renaming of old ones. It is also marked by the inclusion of new examples, including applications to real-world datasets.

Changelog

  • New stochastic gradient descent module by Peter Prettenhofer. The module comes with complete documentation and examples.
  • Improved svm module: memory consumption has been reduced by 50%, heuristic to automatically set class weights, possibility to assign weights to samples (see example_svm_plot_weighted_samples.py for an example).
  • New gaussian_process module by Vincent Dubourg. This module also has great documentation and some very neat examples. See example_gaussian_process_plot_gp_regression.py or example_gaussian_process_plot_gp_probabilistic_classification_after_regression.py for a taste of what can be done.
  • It is now possible to use liblinear’s Multi-class SVC (option multi_class in svm.LinearSVC)
  • New features and performance improvements of text feature extraction.
  • Improved sparse matrix support, both in main classes (grid_search.GridSearchCV) as well as in modules sklearn.svm.sparse and sklearn.linear_model.sparse.
  • Lots of cool new examples and a new section that uses real-world datasets was created. These include: example_applications_face_recognition.py, example_applications_plot_species_distribution_modeling.py, example_applications_svm_gui.py, example_applications_wikipedia_principal_eigenvector.py and others.
  • Faster least_angle_regression algorithm. It is now 2x faster than the R version in the worst case and up to 10x faster in some cases.
  • Faster coordinate descent algorithm. In particular, the full path version of lasso (linear_model.lasso_path) is more than 200x faster than before.
  • It is now possible to get probability estimates from a linear_model.LogisticRegression model (see the sketch after this list).
  • module renaming: the glm module has been renamed to linear_model, the gmm module has been included into the more general mixture model and the sgd module has been included in linear_model.
  • Lots of bug fixes and documentation improvements.
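
A minimal sketch of probability estimates from a linear_model.LogisticRegression model, as mentioned above (modern module path; in 0.6 the package was still scikits.learn):

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    iris = load_iris()
    clf = LogisticRegression(C=1.0).fit(iris.data, iris.target)
    print(clf.predict_proba(iris.data[:2]))  # one column per class, rows sum to 1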

People

People that made this release possible preceded by number of commits:

0.5

Changelog

New classes

  • Support for sparse matrices in some classifiers of modules svm and linear_model (see svm.sparse.SVC, svm.sparse.SVR, svm.sparse.LinearSVC, linear_model.sparse.Lasso, linear_model.sparse.ElasticNet)
  • New pipeline.Pipeline object to compose different estimators (see the sketch after this list).
  • Recursive Feature Elimination routines in module feature_selection.
  • Addition of various classes capable of cross validation in the linear_model module (linear_model.LassoCV, linear_model.ElasticNetCV, etc.).
  • New, more efficient LARS algorithm implementation. The Lasso variant of the algorithm is also implemented. See linear_model.lars_path, linear_model.Lars and linear_model.LassoLars.
  • New Hidden Markov Models module (see classes hmm.GaussianHMM, hmm.MultinomialHMM, hmm.GMMHMM)
  • New module feature_extraction (see class reference <feature_extraction_ref>)
  • New FastICA algorithm in module sklearn.fastica
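
A minimal sketch of composing estimators with the new pipeline.Pipeline object (class and module names are the modern ones, since 0.5 predates both the sklearn rename and StandardScaler):

    from sklearn.datasets import load_iris
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    iris = load_iris()
    # Each step is a (name, estimator) pair; the last step is the predictor.
    pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC())])
    pipe.fit(iris.data, iris.target)
    print(pipe.predict(iris.data[:3]))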

Documentation

Fixes

  • API changes: variable names adhere to PEP-8 and have been given more meaningful names.
  • Fixes for svm module to run on a shared memory context (multiprocessing).
  • It is again possible to generate latex (and thus PDF) from the sphinx docs.

Examples

  • New examples using some of the mlcomp datasets: example_mlcomp_sparse_document_classification.py, example_document_classification_20newsgroups.py.
  • Many more examples. See here the full list of examples.

External dependencies

  • Joblib is now a dependency of this package, although it is shipped with it (sklearn.externals.joblib).

Removed modules

  • Module ann (Artificial Neural Networks) has been removed from the distribution. Users wanting this sort of algorithms should take a look into pybrain.

Misc

  • New sphinx theme for the web page.

Authors

The following is a list of authors for this release, preceded by number of commits:

  • 262 Fabian Pedregosa
  • 240 Gael Varoquaux
  • 149 Alexandre Gramfort
  • 116 Olivier Grisel
  • 40 Vincent Michel
  • 38 Ron Weiss
  • 23 Matthieu Perrot
  • 10 Bertrand Thirion
  • 9 Virgile Fritsch
  • 7 Yaroslav Halchenko
  • 6 Edouard Duchesnay
  • 4 Mathieu Blondel
  • 1 Ariel Rokem
  • 1 Matthieu Brucher

0.4

Changelog

Major changes in this release include:

  • Coordinate Descent algorithm (Lasso, ElasticNet) refactoring & speed improvements (roughly 100x faster).
  • Coordinate Descent Refactoring (and bug fixing) for consistency with R's package GLMNET.
  • New metrics module.
  • New GMM module contributed by Ron Weiss.
  • Implementation of the LARS algorithm (without Lasso variant for now).
  • feature_selection module redesign.
  • Migration to git as the version control system.
  • Removal of obsolete attrselect module.
  • Rename of private compiled extensions (added underscore).
  • Removal of legacy unmaintained code.
  • Documentation improvements (both docstring and rst).
  • Improvement of the build system to (optionally) link with MKL. Also, provide a lite BLAS implementation in case no system-wide BLAS is found.
  • Lots of new examples.
  • Many, many bug fixes ...

Authors

The committer list for this release is the following (preceded by number of commits):

  • 143 Fabian Pedregosa
  • 35 Alexandre Gramfort
  • 34 Olivier Grisel
  • 11 Gael Varoquaux
  • 5 Yaroslav Halchenko
  • 2 Vincent Michel
  • 1 Chris Filo Gorgolewski

Earlier versions

Earlier versions included contributions by Fred Mailhot, David Cooke, David Huard, Dave Morrill, Ed Schofield, Travis Oliphant, Pearu Peterson.