CLN: remove versionadded:: 0.20 #29126

Merged: 1 commit, Oct 22, 2019
2 changes: 0 additions & 2 deletions doc/source/development/contributing.rst
@@ -1197,8 +1197,6 @@ submitting a pull request.

For more, see the `pytest <http://docs.pytest.org/en/latest/>`_ documentation.

.. versionadded:: 0.20.0

Furthermore, one can run

.. code-block:: python
6 changes: 0 additions & 6 deletions doc/source/getting_started/basics.rst
@@ -172,8 +172,6 @@ You are highly encouraged to install both libraries. See the section

These are both enabled to be used by default; you can control this by setting the options:

.. versionadded:: 0.20.0

.. code-block:: python

pd.set_option('compute.use_bottleneck', False)
@@ -891,8 +889,6 @@ functionality.
Aggregation API
~~~~~~~~~~~~~~~

.. versionadded:: 0.20.0

The aggregation API allows one to express possibly multiple aggregation operations in a single concise way.
This API is similar across pandas objects, see :ref:`groupby API <groupby.aggregate>`, the
:ref:`window functions API <stats.aggregate>`, and the :ref:`resample API <timeseries.aggregate>`.
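
For illustration, a minimal sketch of the aggregation API (the frame ``df`` here is an assumed example, not taken from this diff):

.. code-block:: python

   import numpy as np
   import pandas as pd

   # assumed example frame
   df = pd.DataFrame(np.random.randn(4, 3), columns=['A', 'B', 'C'])

   # a single aggregation
   df.agg('sum')

   # several aggregations at once, one result row per function
   df.agg(['sum', 'mean'])

   # different aggregations per column via a dict
   df.agg({'A': 'sum', 'B': ['min', 'max']})
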
@@ -1030,8 +1026,6 @@ to the built in :ref:`describe function <basics.describe>`.
Transform API
~~~~~~~~~~~~~

.. versionadded:: 0.20.0

The :meth:`~DataFrame.transform` method returns an object that is indexed the same (same size)
as the original. This API allows you to provide *multiple* operations at the same
time rather than one-by-one. Its API is quite similar to the ``.agg`` API.
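
A minimal sketch of passing multiple operations to ``transform`` (example frame assumed):

.. code-block:: python

   import numpy as np
   import pandas as pd

   df = pd.DataFrame(np.random.randn(4, 3), columns=['A', 'B', 'C'])

   # each function is applied column-wise; the result keeps the original index
   df.transform([np.abs, lambda x: x + 1])
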
4 changes: 0 additions & 4 deletions doc/source/user_guide/advanced.rst
@@ -206,8 +206,6 @@ highly performant. If you want to see only the used levels, you can use the
To reconstruct the ``MultiIndex`` with only the used levels, the
:meth:`~MultiIndex.remove_unused_levels` method may be used.

.. versionadded:: 0.20.0

.. ipython:: python

new_mi = df[['foo', 'qux']].columns.remove_unused_levels()
@@ -928,8 +926,6 @@ If you need integer based selection, you should use ``iloc``:
IntervalIndex
~~~~~~~~~~~~~

.. versionadded:: 0.20.0

:class:`IntervalIndex` together with its own dtype, :class:`~pandas.api.types.IntervalDtype`
as well as the :class:`Interval` scalar type, allow first-class support in pandas
for interval notation.
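
A minimal sketch of interval-based indexing (example values assumed):

.. code-block:: python

   import pandas as pd

   # an IntervalIndex built from breakpoints; each label is an Interval scalar
   idx = pd.IntervalIndex.from_breaks([0, 1, 2, 3])
   s = pd.Series([10, 20, 30], index=idx)

   # a scalar lookup selects the interval that contains the value
   s[1.5]  # -> 20, since 1.5 falls in (1, 2]
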
2 changes: 0 additions & 2 deletions doc/source/user_guide/categorical.rst
@@ -874,8 +874,6 @@ The below raises ``TypeError`` because the categories are ordered and not identical.
Out[3]:
TypeError: to union ordered Categoricals, all categories must be the same

.. versionadded:: 0.20.0

Ordered categoricals with different categories or orderings can be combined by
using the ``ignore_order=True`` argument.
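
A minimal sketch (example categoricals assumed; the ``ignore_order`` parameter matches the ``union_categoricals`` signature shown later in this diff):

.. code-block:: python

   import pandas as pd
   from pandas.api.types import union_categoricals

   a = pd.Categorical(['a', 'b'], ordered=True)
   b = pd.Categorical(['a', 'b', 'c'], ordered=True)

   # dropping the ordered attribute lets the differing categoricals combine
   union_categoricals([a, b], ignore_order=True)
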

2 changes: 0 additions & 2 deletions doc/source/user_guide/computation.rst
@@ -471,8 +471,6 @@ default of the index) in a DataFrame.
Rolling window endpoints
~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 0.20.0

The inclusion of the interval endpoints in rolling window calculations can be specified with the ``closed``
parameter:
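
A minimal sketch of the ``closed`` parameter on a time-based window (example frame assumed):

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({'x': 1},
                     index=[pd.Timestamp('2013-01-01 09:00:01'),
                            pd.Timestamp('2013-01-01 09:00:02'),
                            pd.Timestamp('2013-01-01 09:00:03')])

   # 'right' (the default) excludes the left endpoint of each window;
   # 'both' includes both endpoints
   df.rolling('2s', closed='right').x.sum()
   df.rolling('2s', closed='both').x.sum()
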

6 changes: 0 additions & 6 deletions doc/source/user_guide/groupby.rst
@@ -311,8 +311,6 @@ Grouping with multiple levels is supported.
s
s.groupby(level=['first', 'second']).sum()

.. versionadded:: 0.20

Index level names may be supplied as keys.

.. ipython:: python
@@ -353,8 +351,6 @@ Index levels may also be specified by name.

df.groupby([pd.Grouper(level='second'), 'A']).sum()

.. versionadded:: 0.20

Index level names may be specified as keys directly to ``groupby``.

.. ipython:: python
@@ -1274,8 +1270,6 @@ To see the order in which each row appears within its group, use the
Enumerate groups
~~~~~~~~~~~~~~~~

.. versionadded:: 0.20.2

To see the ordering of the groups (as opposed to the order of rows
within a group given by ``cumcount``) you can use
:meth:`~pandas.core.groupby.DataFrameGroupBy.ngroup`.
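
A minimal sketch contrasting the two (example frame assumed):

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({'A': list('aaba')})

   # ngroup numbers the groups themselves ...
   df.groupby('A').ngroup()    # 0, 0, 1, 0
   # ... while cumcount numbers the rows within each group
   df.groupby('A').cumcount()  # 0, 1, 0, 2
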
21 changes: 0 additions & 21 deletions doc/source/user_guide/io.rst
@@ -163,9 +163,6 @@ dtype : Type name or dict of column -> type, default ``None``
(unsupported with ``engine='python'``). Use `str` or `object` together
with suitable ``na_values`` settings to preserve and
not interpret dtype.

.. versionadded:: 0.20.0 support for the Python parser.

engine : {``'c'``, ``'python'``}
Parser engine to use. The C engine is faster while the Python engine is
currently more feature-complete.
@@ -417,10 +414,6 @@ However, if you wanted for all the data to be coerced, no matter the type, then
using the ``converters`` argument of :func:`~pandas.read_csv` would certainly be
worth trying.

.. versionadded:: 0.20.0 support for the Python parser.

The ``dtype`` option is supported by the 'python' engine.

.. note::
In some cases, reading in abnormal data with columns containing mixed dtypes
will result in an inconsistent dataset. If you rely on pandas to infer the
@@ -616,8 +609,6 @@ Filtering columns (``usecols``)
The ``usecols`` argument allows you to select any subset of the columns in a
file, either using the column names, position numbers or a callable:

.. versionadded:: 0.20.0 support for callable `usecols` arguments

.. ipython:: python

data = 'a,b,c,d\n1,2,3,foo\n4,5,6,bar\n7,8,9,baz'
@@ -1447,8 +1438,6 @@ is whitespace).
df = pd.read_fwf('bar.csv', header=None, index_col=0)
df

.. versionadded:: 0.20.0

``read_fwf`` supports the ``dtype`` parameter for specifying the types of
parsed columns to be different from the inferred type.
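
A minimal sketch, reusing the ``bar.csv`` file from the example above:

.. code-block:: python

   import pandas as pd

   # force column 2 to object instead of the inferred numeric type
   df = pd.read_fwf('bar.csv', header=None, index_col=0, dtype={2: 'object'})
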

@@ -2221,8 +2210,6 @@ For line-delimited json files, pandas can also return an iterator which reads in
Table schema
''''''''''''

.. versionadded:: 0.20.0

`Table Schema`_ is a spec for describing tabular datasets as a JSON
object. The JSON includes information on the field names, types, and
other attributes. You can use the orient ``table`` to build
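
A minimal sketch of the ``table`` orient (example frame assumed):

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({'A': [1, 2]},
                     index=pd.Index(['a', 'b'], name='idx'))

   # the JSON payload embeds a schema describing field names and types
   df.to_json(orient='table')
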
@@ -3071,8 +3058,6 @@ missing data to recover integer dtype:
Dtype specifications
++++++++++++++++++++

.. versionadded:: 0.20

As an alternative to converters, the type for an entire column can
be specified using the `dtype` keyword, which takes a dictionary
mapping column names to types. To interpret data with
@@ -3345,8 +3330,6 @@ any pickled pandas object (or any other pickled object) from file:
Compressed pickle files
'''''''''''''''''''''''

.. versionadded:: 0.20.0

:func:`read_pickle`, :meth:`DataFrame.to_pickle` and :meth:`Series.to_pickle` can read
and write compressed pickle files. The compression types of ``gzip``, ``bz2``, ``xz`` are supported for reading and writing.
The ``zip`` file format only supports reading and must contain only one data file
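
A minimal round-trip sketch (file names assumed):

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({'A': range(3)})

   # compression inferred from the extension ...
   df.to_pickle('data.pkl.gz')
   pd.read_pickle('data.pkl.gz')

   # ... or passed explicitly
   df.to_pickle('data.pkl.xz', compression='xz')
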
@@ -4323,8 +4306,6 @@ control compression: ``complevel`` and ``complib``.
- `bzip2 <http://bzip.org/>`_: Good compression rates.
- `blosc <http://www.blosc.org/>`_: Fast compression and decompression.

.. versionadded:: 0.20.2

Support for alternative blosc compressors:

- `blosc:blosclz <http://www.blosc.org/>`_ This is the
@@ -4651,8 +4632,6 @@ Performance
Feather
-------

.. versionadded:: 0.20.0

Feather provides binary columnar serialization for data frames. It is designed to make reading and writing data
frames efficient, and to make sharing data across data analysis languages easy.
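
A minimal round-trip sketch (assumes a feather backend such as ``pyarrow`` is installed):

.. code-block:: python

   import pandas as pd

   df = pd.DataFrame({'a': [1, 2, 3], 'b': ['x', 'y', 'z']})

   df.to_feather('example.feather')
   pd.read_feather('example.feather')
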

2 changes: 0 additions & 2 deletions doc/source/user_guide/merging.rst
@@ -843,8 +843,6 @@ resulting dtype will be upcast.
pd.merge(left, right, how='outer', on='key')
pd.merge(left, right, how='outer', on='key').dtypes

.. versionadded:: 0.20.0

Merging will preserve ``category`` dtypes of the merge operands. See also the section on :ref:`categoricals <categorical.merge>`.

The left frame.
2 changes: 0 additions & 2 deletions doc/source/user_guide/options.rst
@@ -561,8 +561,6 @@ However, setting this option incorrectly for your terminal will cause these char
Table schema display
--------------------

.. versionadded:: 0.20.0

``DataFrame`` and ``Series`` can publish a Table Schema representation.
Disabled (``False``) by default, this can be enabled globally with the
``display.html.table_schema`` option:
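
A minimal sketch of enabling it:

.. code-block:: python

   import pandas as pd

   # publish a Table Schema repr for front-ends that understand it
   pd.set_option('display.html.table_schema', True)
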
2 changes: 0 additions & 2 deletions doc/source/user_guide/reshaping.rst
@@ -539,8 +539,6 @@ Alternatively we can specify custom bin-edges:
c = pd.cut(ages, bins=[0, 18, 35, 70])
c

.. versionadded:: 0.20.0

If the ``bins`` keyword is an ``IntervalIndex``, then these will be
used to bin the passed data.::
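
A minimal sketch (bin edges reuse the example above):

.. code-block:: python

   import pandas as pd

   bins = pd.IntervalIndex.from_breaks([0, 18, 35, 70])

   # values are placed into the supplied intervals; out-of-range values get NaN
   pd.cut([25, 40, 5], bins=bins)
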

4 changes: 0 additions & 4 deletions doc/source/user_guide/text.rst
@@ -228,8 +228,6 @@ and ``repl`` must be strings:
dollars.str.replace(r'-\$', '-')
dollars.str.replace('-$', '-', regex=False)

.. versionadded:: 0.20.0

The ``replace`` method can also take a callable as replacement. It is called
on every ``pat`` using :func:`re.sub`. The callable should expect one
positional argument (a regex object) and return a string.
@@ -254,8 +252,6 @@ positional argument (a regex object) and return a string.
pd.Series(['Foo Bar Baz', np.nan],
dtype="string").str.replace(pat, repl)

.. versionadded:: 0.20.0

The ``replace`` method also accepts a compiled regular expression object
from :func:`re.compile` as a pattern. All flags should be included in the
compiled regular expression object.
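
A minimal sketch (example data assumed; ``regex=True`` is passed explicitly for clarity):

.. code-block:: python

   import re

   import pandas as pd

   # flags such as IGNORECASE belong in the compiled pattern itself
   pat = re.compile(r'bar', flags=re.IGNORECASE)
   pd.Series(['foo BAR baz']).str.replace(pat, 'qux', regex=True)
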
2 changes: 0 additions & 2 deletions doc/source/user_guide/timedeltas.rst
@@ -327,8 +327,6 @@ similarly to the ``Series``. These are the *displayed* values of the ``Timedelta
You can convert a ``Timedelta`` to an `ISO 8601 Duration`_ string with the
``.isoformat`` method

.. versionadded:: 0.20.0

.. ipython:: python

pd.Timedelta(days=6, minutes=50, seconds=3,
2 changes: 0 additions & 2 deletions doc/source/user_guide/timeseries.rst
@@ -376,8 +376,6 @@ We subtract the epoch (midnight at January 1, 1970 UTC) and then floor divide by
Using the ``origin`` Parameter
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 0.20.0

Using the ``origin`` parameter, one can specify an alternative starting point for creation
of a ``DatetimeIndex``. For example, to use 1960-01-01 as the starting date:
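
A minimal sketch of the call:

.. code-block:: python

   import pandas as pd

   # offsets of 1, 2 and 3 days from the 1960-01-01 origin
   pd.to_datetime([1, 2, 3], unit='D', origin=pd.Timestamp('1960-01-01'))
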

2 changes: 0 additions & 2 deletions doc/source/user_guide/visualization.rst
@@ -1247,8 +1247,6 @@ in ``pandas.plotting.plot_params`` can be used in a `with statement`:
Automatic date tick adjustment
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. versionadded:: 0.20.0

``TimedeltaIndex`` now uses the native matplotlib
tick locator methods; it is useful to call the automatic
date tick adjustment from matplotlib for figures whose ticklabels overlap.
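
A minimal sketch (example data assumed; requires matplotlib):

.. code-block:: python

   import matplotlib.pyplot as plt
   import numpy as np
   import pandas as pd

   df = pd.DataFrame(np.random.randn(100),
                     index=pd.to_timedelta(np.arange(100), unit='s'))
   fig, ax = plt.subplots()
   df.plot(legend=False, ax=ax)

   # ask matplotlib to rotate and space the tick labels so they do not overlap
   fig.autofmt_xdate()
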
2 changes: 0 additions & 2 deletions pandas/_libs/interval.pyx
@@ -191,8 +191,6 @@ cdef class Interval(IntervalMixin):
"""
Immutable object implementing an Interval, a bounded slice-like interval.

.. versionadded:: 0.20.0

Parameters
----------
left : orderable scalar
2 changes: 0 additions & 2 deletions pandas/_libs/tslibs/timedeltas.pyx
@@ -1157,8 +1157,6 @@ cdef class _Timedelta(timedelta):
``P[n]Y[n]M[n]DT[n]H[n]M[n]S``, where each ``[n]`` is replaced by the
corresponding value. See https://en.wikipedia.org/wiki/ISO_8601#Durations.

.. versionadded:: 0.20.0

Returns
-------
formatted : str
2 changes: 0 additions & 2 deletions pandas/core/dtypes/concat.py
@@ -199,8 +199,6 @@ def union_categoricals(to_union, sort_categories=False, ignore_order=False):
If true, the ordered attribute of the Categoricals will be ignored.
Results in an unordered categorical.

.. versionadded:: 0.20.0

Returns
-------
result : Categorical
4 changes: 0 additions & 4 deletions pandas/core/dtypes/inference.py
@@ -162,8 +162,6 @@ def is_file_like(obj):
Note: file-like objects must be iterable, but
iterable objects need not be file-like.

.. versionadded:: 0.20.0

Parameters
----------
obj : The object to check
@@ -281,8 +279,6 @@ def is_nested_list_like(obj):
Check if the object is list-like, and that all of its elements
are also list-like.

.. versionadded:: 0.20.0

Parameters
----------
obj : The object to check
4 changes: 0 additions & 4 deletions pandas/core/frame.py
@@ -2082,8 +2082,6 @@ def to_feather(self, fname):
"""
Write out the binary feather-format for DataFrames.

.. versionadded:: 0.20.0

Parameters
----------
fname : str
@@ -7870,8 +7868,6 @@ def nunique(self, axis=0, dropna=True):
Return Series with number of distinct observations. Can ignore NaN
values.

.. versionadded:: 0.20.0

Parameters
----------
axis : {0 or 'index', 1 or 'columns'}, default 0
19 changes: 0 additions & 19 deletions pandas/core/generic.py
@@ -897,8 +897,6 @@ def squeeze(self, axis=None):
A specific axis to squeeze. By default, all length-1 axes are
squeezed.

.. versionadded:: 0.20.0

Returns
-------
DataFrame, Series, or scalar
@@ -2163,8 +2161,6 @@ def _repr_data_resource_(self):
Specifies the one-based bottommost row and rightmost column that
is to be frozen.

.. versionadded:: 0.20.0.

See Also
--------
to_csv : Write DataFrame to a comma-separated values (csv) file.
@@ -2756,8 +2752,6 @@ def to_pickle(self, path, compression="infer", protocol=pickle.HIGHEST_PROTOCOL)
default 'infer'
A string representing the compression to use in the output file. By
default, infers from the file extension in specified path.

.. versionadded:: 0.20.0
protocol : int
Int which indicates which protocol should be used by the pickler,
default HIGHEST_PROTOCOL (see [1]_ paragraph 12.1.2). The possible
@@ -3032,22 +3026,15 @@ def to_latex(
multicolumn : bool, default True
Use \multicolumn to enhance MultiIndex columns.
The default will be read from the config module.

.. versionadded:: 0.20.0
multicolumn_format : str, default 'l'
The alignment for multicolumns, similar to `column_format`
The default will be read from the config module.

.. versionadded:: 0.20.0
multirow : bool, default False
Use \multirow to enhance MultiIndex rows. Requires adding a
\usepackage{multirow} to your LaTeX preamble. Will print
centered labels (instead of top-aligned) across the contained
rows, separating groups via clines. The default will be read
from the pandas config module.

.. versionadded:: 0.20.0

caption : str, optional
The LaTeX caption to be placed inside ``\caption{}`` in the output.

@@ -5133,8 +5120,6 @@ def pipe(self, func, *args, **kwargs):
Call ``func`` on self producing a %(klass)s with transformed values
and that has the same axis length as self.

.. versionadded:: 0.20.0

Parameters
----------
func : function, str, list or dict
@@ -5805,8 +5790,6 @@ def astype(self, dtype, copy=True, errors="raise"):
- ``raise`` : allow exceptions to be raised
- ``ignore`` : suppress exceptions. On error return original object.

.. versionadded:: 0.20.0

Returns
-------
casted : same type as caller
@@ -7946,8 +7929,6 @@ def asfreq(self, freq, method=None, how=None, normalize=False, fill_value=None):
Value to use for missing values, applied during upsampling (note
this does not fill NaNs that already were present).

.. versionadded:: 0.20.0

Returns
-------
converted : same type as caller