
Merge pull request #353 from glemaitre/0.3.1
Release 0.3.1
glemaitre committed Oct 9, 2017
2 parents aa489c0 + 29a36cc commit 6880e00
Showing 8 changed files with 29 additions and 16 deletions.
2 changes: 1 addition & 1 deletion build_tools/circle/build_doc.sh
@@ -93,7 +93,7 @@ source activate $CONDA_ENV_NAME

conda install --yes pip numpy scipy scikit-learn pillow matplotlib sphinx \
sphinx_rtd_theme numpydoc
-pip install sphinx-gallery
+pip install sphinx-gallery==0.1.11

# Build and install imbalanced-learn in dev mode
cd "$HOME/$CIRCLE_PROJECT_REPONAME"
8 changes: 6 additions & 2 deletions conda-recipe/imbalanced-learn/meta.yaml
@@ -1,11 +1,15 @@
package:
name: imbalanced-learn
-version: "0.3.0"
+version: "0.3.1"

source:
-git_rev: 0.3.0
+git_rev: 0.3.1
git_url: https://github.com/scikit-learn-contrib/imbalanced-learn.git

build:
number: 0
noarch: python

requirements:
build:
- python
14 changes: 7 additions & 7 deletions doc/combine.rst
@@ -8,19 +8,19 @@ Combination of over- and under-sampling

We previously presented :class:`SMOTE` and showed that this method can generate
noisy samples by interpolating new points between marginal outliers and
-inliers. This issue can be solved by cleaning the resulted space obtained
-after over-sampling.
+inliers. This issue can be solved by cleaning the space resulting
+from over-sampling.

.. currentmodule:: imblearn.combine

In this regard, Tomek's link and edited nearest-neighbours are the two cleaning
-methods which have been added pipeline after SMOTE over-sampling to obtain a
-cleaner space. Therefore, imbalanced-learn implemented two ready-to-use class
-which pipeline both over- and under-sampling methods: (i) :class:`SMOTETomek`
+methods that have been added to the pipeline after applying SMOTE over-sampling
+to obtain a cleaner space. The two ready-to-use classes imbalanced-learn
+implements for combining over- and under-sampling methods are: (i) :class:`SMOTETomek`
and (ii) :class:`SMOTEENN`.

-These two classes can be used as any other sampler with identical parameters
-than their former samplers::
+Those two classes can be used like any other sampler with parameters identical
+to their former samplers::

>>> from collections import Counter
>>> from sklearn.datasets import make_classification
2 changes: 1 addition & 1 deletion doc/conf.py
@@ -103,7 +103,7 @@
# built documents.
#
# The short X.Y version.
-__version__ = '0.3.0'
+__version__ = '0.3.1'
version = __version__
# The full version, including alpha/beta/rc tags.
release = __version__
6 changes: 3 additions & 3 deletions doc/developers_utils.rst
@@ -12,8 +12,8 @@ All the following functions and classes are in the module :mod:`imblearn.utils`.
These utilities are meant to be used internally within the imbalanced-learn
package. They are not guaranteed to be stable between versions of
-imbalance-learn. Backports, in particular, will be removed as the
-imbalance-learn dependencies evolve.
+imbalanced-learn. Backports, in particular, will be removed as the
+imbalanced-learn dependencies evolve.
Validation Tools
@@ -97,7 +97,7 @@ same information as the deprecation warning as explained above. Use the
``k`` was renamed to ``n_clusters`` in version 0.13 and will be removed
in 0.15.

-On the top of all the functionality provided by scikit-learn. Imbalance-learn
+On top of all the functionality provided by scikit-learn, imbalanced-learn
provides :func:`deprecate_parameter`, which is used to deprecate a sampler's
parameter (attribute) in favour of another one.

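The deprecation pattern described above (old parameter ``k`` renamed to ``n_clusters``, with a warning until removal) can be sketched as a standalone helper. The real ``deprecate_parameter`` lives in ``imblearn.utils``; the simplified signature and behaviour here are assumptions for illustration only:

```python
# Sketch of the parameter-deprecation pattern, NOT imblearn's actual code.
import warnings


def deprecate_parameter(sampler, version, old_param, new_param=None):
    """Warn that ``old_param`` is deprecated, aliasing it to ``new_param``."""
    if getattr(sampler, old_param, None) is None:
        return  # old parameter unused, nothing to do
    if new_param is None:
        warnings.warn("'{}' is deprecated from {} and will be "
                      "removed.".format(old_param, version),
                      DeprecationWarning)
    else:
        warnings.warn("'{}' is deprecated from {}; use '{}' "
                      "instead.".format(old_param, version, new_param),
                      DeprecationWarning)
        # Carry the old value over to the new attribute.
        setattr(sampler, new_param, getattr(sampler, old_param))


class DummySampler(object):
    """Hypothetical sampler used only to exercise the helper."""

    def __init__(self, k=None, n_clusters=3):
        self.k = k
        self.n_clusters = n_clusters


sampler = DummySampler(k=5)
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    deprecate_parameter(sampler, "0.2", "k", "n_clusters")
print(sampler.n_clusters)  # the old value is carried over: 5
print(len(caught))         # one DeprecationWarning raised: 1
```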
@@ -73,6 +73,15 @@ class NeighbourhoodCleaningRule(BaseCleaningSampler):
:class:`sklearn.neighbors.base.KNeighborsMixin` that will be used to
find the nearest-neighbors.
threshold_cleaning : float, optional (default=0.5)
Threshold used to decide whether to consider a class during the cleaning
after applying ENN. A class will be considered during cleaning when:
Ci > C x T,
where Ci and C are the number of samples in the class and in the whole
data set, respectively, and T is the threshold.
n_jobs : int, optional (default=1)
The number of threads to open if possible.
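The threshold rule in the docstring above can be checked numerically: with the default T = 0.5, a class joins the cleaning step only when its sample count Ci exceeds T times the dataset size C. The class counts below are made up for the example:

```python
# Illustration of the threshold_cleaning rule with invented class counts.
from collections import Counter

y = [0] * 60 + [1] * 30 + [2] * 10   # Ci: 60, 30, 10; C = 100
counts = Counter(y)
T = 0.5                              # default threshold_cleaning

# Keep only the classes whose count exceeds T * C.
considered = sorted(c for c, ci in counts.items() if ci > T * len(y))
print(considered)  # only class 0 qualifies (60 > 0.5 * 100)
```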
2 changes: 1 addition & 1 deletion imblearn/version.py
@@ -21,7 +21,7 @@
# Dev branch marker is: 'X.Y.dev' or 'X.Y.devN' where N is an integer.
# 'X.Y.dev0' is the canonical version of 'X.Y.dev'
#
-__version__ = '0.3.0'
+__version__ = '0.3.1'

_IMBALANCED_DATASET_INSTALL_MSG = 'See %s for installation information.' % (
'http://contrib.scikit-learn.org/imbalanced-learn/install.html')
2 changes: 1 addition & 1 deletion setup.cfg
@@ -1,5 +1,5 @@
[bumpversion]
-current_version = 0.3.0
+current_version = 0.3.1
tag = False
parse = (?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)(\.(?P<release>[a-z]+)(?P<dev>\d+))?
serialize =
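The bumpversion ``parse`` pattern in the setup.cfg hunk above can be exercised directly with Python's ``re`` module to see what it captures from the new 0.3.1 release string and from a dev version:

```python
# Check what the bumpversion `parse` regex extracts from version strings.
import re

pattern = re.compile(
    r"(?P<major>\d+)\.(?P<minor>\d+)\.(?P<patch>\d+)"
    r"(\.(?P<release>[a-z]+)(?P<dev>\d+))?"
)

m = pattern.match("0.3.1")
print(m.group("major"), m.group("minor"), m.group("patch"))  # 0 3 1

# The optional trailing group handles dev releases such as 0.4.0.dev2.
m_dev = pattern.match("0.4.0.dev2")
print(m_dev.group("release"), m_dev.group("dev"))  # dev 2
```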
