Merged
8 changes: 1 addition & 7 deletions doc/source/user_guide/io.rst
@@ -303,8 +303,6 @@ compression : {``'infer'``, ``'gzip'``, ``'bz2'``, ``'zip'``, ``'xz'``, ``'zstd'
As an example, the following could be passed for faster compression and to
create a reproducible gzip archive:
``compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}``.

.. versionchanged:: 1.2.0 Previous versions forwarded dict entries for 'gzip' to ``gzip.open``.
thousands : str, default ``None``
Thousands separator.
decimal : str, default ``'.'``
@@ -1472,7 +1470,7 @@ rather than reading the entire file into memory, such as the following:
table


By specifying a ``chunksize`` to ``read_csv``, the return
By specifying a ``chunksize`` to :func:`read_csv` as a context manager, the return
value will be an iterable object of type ``TextFileReader``:

.. ipython:: python
@@ -1482,10 +1480,6 @@ value will be an iterable object of type ``TextFileReader``:
for chunk in reader:
print(chunk)

.. versionchanged:: 1.2

``read_csv/json/sas`` return a context-manager when iterating through a file.

Specifying ``iterator=True`` will also return the ``TextFileReader`` object:

.. ipython:: python
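The context-manager usage documented in the hunk above can be sketched as follows — a minimal runnable example in which an in-memory buffer stands in for a file on disk:

```python
import io

import pandas as pd

# A small CSV held in memory; a real file path works the same way.
csv_data = io.StringIO("a,b\n1,2\n3,4\n5,6\n7,8\n")

# Passing chunksize to read_csv returns a TextFileReader, which can be
# used as a context manager and iterated chunk by chunk.
chunks = []
with pd.read_csv(csv_data, chunksize=2) as reader:
    for chunk in reader:
        chunks.append(chunk)
```

Each chunk is an ordinary ``DataFrame`` of at most ``chunksize`` rows, so the full file is never materialized at once.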
4 changes: 0 additions & 4 deletions doc/source/user_guide/visualization.rst
@@ -326,8 +326,6 @@ The ``by`` keyword can be specified to plot grouped histograms:
In addition, the ``by`` keyword can also be specified in :meth:`DataFrame.plot.hist`.

.. versionchanged:: 1.4.0

.. ipython:: python
data = pd.DataFrame(
@@ -480,8 +478,6 @@ columns:
You could also create groupings with :meth:`DataFrame.plot.box`, for instance:

.. versionchanged:: 1.4.0

.. ipython:: python
:suppress:
4 changes: 0 additions & 4 deletions pandas/core/arrays/categorical.py
@@ -2545,10 +2545,6 @@ def unique(self) -> Self:
Return the ``Categorical`` whose ``categories`` and ``codes`` are
unique.

.. versionchanged:: 1.3.0

Previously, unused categories were dropped from the new categories.

Returns
-------
Categorical
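The ``unique`` behavior described above (unused categories are retained in recent pandas versions, per the versionchanged note this PR removes) can be sketched as:

```python
import pandas as pd

# unique() returns distinct values in order of appearance; since
# pandas 1.3 the full set of categories, including unused ones, is kept.
cat = pd.Categorical(["b", "a", "a", "b"], categories=["a", "b", "c"])
uniq = cat.unique()
```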
4 changes: 0 additions & 4 deletions pandas/core/arrays/masked.py
@@ -1686,8 +1686,6 @@ def any(
missing values are present, the same :ref:`Kleene logic <boolean.kleene>`
as for logical operations is used.

.. versionchanged:: 1.4.0

Parameters
----------
skipna : bool, default True
@@ -1774,8 +1772,6 @@ def all(
missing values are present, the same :ref:`Kleene logic <boolean.kleene>`
as for logical operations is used.

.. versionchanged:: 1.4.0

Parameters
----------
skipna : bool, default True
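The Kleene-logic reductions on nullable boolean arrays referenced in this file can be sketched as — a missing value only propagates to the result when it could change it:

```python
import pandas as pd

arr = pd.array([True, pd.NA], dtype="boolean")

# all(): the NA could be False, so the result is indeterminate (NA).
all_result = arr.all(skipna=False)

# any(): a single True decides the result regardless of the NA.
any_result = arr.any(skipna=False)
```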
4 changes: 1 addition & 3 deletions pandas/core/arrays/string_.py
@@ -573,9 +573,7 @@ class StringArray(BaseStringArray, NumpyExtensionArray): # type: ignore[misc]
:meth:`pandas.array` with ``dtype="string"`` for a stable way of
creating a `StringArray` from any sequence.

.. versionchanged:: 1.5.0

StringArray now accepts array-likes containing
StringArray accepts array-likes containing
nan-likes (``None``, ``np.nan``) for the ``values`` parameter
in addition to strings and :attr:`pandas.NA`.

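The nan-like handling described in the ``StringArray`` docstring above can be sketched as — ``None`` and ``np.nan`` in the input are normalized to ``pd.NA``:

```python
import numpy as np
import pandas as pd

# None and np.nan are accepted alongside strings and pd.NA,
# and both become pd.NA in the resulting StringArray.
arr = pd.array(["x", None, np.nan, "y"], dtype="string")
```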
15 changes: 0 additions & 15 deletions pandas/core/frame.py
@@ -554,8 +554,6 @@ class DataFrame(NDFrame, OpsMixin):
If data is a dict containing one or more Series (possibly of different dtypes),
``copy=False`` will ensure that these inputs are not copied.

.. versionchanged:: 1.3.0

See Also
--------
DataFrame.from_records : Constructor from tuples, also record arrays.
@@ -2686,17 +2684,13 @@ def to_stata(
8 characters and values are repeated.
{compression_options}

.. versionchanged:: 1.4.0 Zstandard support.

{storage_options}

value_labels : dict of dicts
Dictionary containing columns as keys and dictionaries of column value
to labels as values. Labels for a single variable must be 32,000
characters or smaller.

.. versionadded:: 1.4.0

Raises
------
NotImplementedError
@@ -3534,8 +3528,6 @@ def to_xml(
scripts and not later versions is currently supported.
{compression_options}

.. versionchanged:: 1.4.0 Zstandard support.

{storage_options}

Returns
@@ -9487,13 +9479,6 @@ def groupby(
when the result's index (and column) labels match the inputs, and
are included otherwise.

.. versionchanged:: 1.5.0

Warns that ``group_keys`` will no longer be ignored when the
result from ``apply`` is a like-indexed Series or DataFrame.
Specify ``group_keys`` explicitly to include the group keys or
not.

.. versionchanged:: 2.0.0

``group_keys`` now defaults to ``True``.
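The ``group_keys`` behavior documented in the ``groupby`` hunk above can be sketched as follows (default shown is for pandas >= 2.0, where ``group_keys=True``):

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})

# With group_keys=True, apply() prepends the group labels as an extra
# index level even when the result is like-indexed; with False it
# keeps the original index.
with_keys = df.groupby("key", group_keys=True)[["val"]].apply(lambda g: g)
without_keys = df.groupby("key", group_keys=False)[["val"]].apply(lambda g: g)
```

Setting ``group_keys`` explicitly makes the shape of the ``apply`` result independent of the pandas version in use.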
13 changes: 0 additions & 13 deletions pandas/core/generic.py
@@ -2415,8 +2415,6 @@ def to_json(
list-like.
{compression_options}

.. versionchanged:: 1.4.0 Zstandard support.

index : bool or None, default None
The index is only used when 'orient' is 'split', 'index', 'column',
or 'table'. Of these, 'index' and 'column' do not support
@@ -3850,12 +3848,6 @@ def to_csv(
The newline character or character sequence to use in the output
file. Defaults to `os.linesep`, which depends on the OS in which
this method is called (e.g. '\\n' for Linux, '\\r\\n' for Windows).

.. versionchanged:: 1.5.0

Previously was line_terminator, changed for consistency with
read_csv and the standard library 'csv' module.

chunksize : int or None
Rows to write at a time.
date_format : str, default None
@@ -5859,11 +5851,6 @@ def sample(
If int, array-like, or BitGenerator, seed for random number generator.
If np.random.RandomState or np.random.Generator, use as given.
Default ``None`` results in sampling with the current state of np.random.

.. versionchanged:: 1.4.0

np.random.Generator objects now accepted

axis : {0 or 'index', 1 or 'columns', None}, default None
Axis to sample. Accepts axis number or name. Default is stat axis
for given data type. For `Series` this parameter is unused and defaults to `None`.
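The ``random_state`` handling described in the ``sample`` docstring above can be sketched as — a seeded ``np.random.Generator`` makes the draw reproducible:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": range(10)})

# Two generators seeded identically produce the same sample.
sample_a = df.sample(n=3, random_state=np.random.default_rng(42))
sample_b = df.sample(n=3, random_state=np.random.default_rng(42))
```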
47 changes: 13 additions & 34 deletions pandas/core/groupby/generic.py
@@ -251,9 +251,7 @@ def _get_data_to_aggregate(
1 1 2
2 3 4

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the aggregating function.
The resulting dtype will reflect the return value of the aggregating function.

>>> s.groupby([1, 1, 2, 2]).agg(lambda x: x.astype(float).min())
1 1.0
@@ -307,11 +305,8 @@ def apply(self, func, *args, **kwargs) -> Series:

Notes
-----

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.
The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.

Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
@@ -332,9 +327,7 @@ def apply(self, func, *args, **kwargs) -> Series:
its argument and returns a Series. `apply` combines the result for
each group together into a new Series.

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``.
The resulting dtype will reflect the return value of the passed ``func``.

>>> g1.apply(lambda x: x * 2 if x.name == "a" else x / 2)
a 0.0
@@ -455,10 +448,8 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs)
behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
for more details.

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.
The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.

Examples
--------
@@ -497,10 +488,8 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs)
1 1 2
2 3 4

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the aggregating
function.
The resulting dtype will reflect the return value of the aggregating
function.

>>> s.groupby([1, 1, 2, 2]).agg(lambda x: x.astype(float).min())
1 1.0
@@ -705,8 +694,6 @@ def _wrap_applied_output(
Parrot 25.0
Name: Max Speed, dtype: float64

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``,
for example:

@@ -1788,9 +1775,7 @@ class DataFrameGroupBy(GroupBy[DataFrame]):

See :ref:`groupby.aggregate.named` for more.

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the aggregating function.
The resulting dtype will reflect the return value of the aggregating function.

>>> df.groupby("A")[["B"]].agg(lambda x: x.astype(float).min())
B
@@ -1881,10 +1866,8 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs)
behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
for more details.

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.
The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.

Examples
--------
@@ -1964,10 +1947,8 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs)

See :ref:`groupby.aggregate.named` for more.

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the aggregating
function.
The resulting dtype will reflect the return value of the aggregating
function.

>>> df.groupby("A")[["B"]].agg(lambda x: x.astype(float).min())
B
@@ -2326,8 +2307,6 @@ def _transform_general(self, func, engine, engine_kwargs, *args, **kwargs):
4 3.666667 4.0
5 4.000000 5.0

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``,
for example:

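The "resulting dtype reflects the return value of the passed ``func``" statement repeated throughout this file can be sketched as:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4])

# An int-returning aggregation keeps the integer dtype, while a
# float-returning one yields a float result.
int_result = s.groupby([1, 1, 2, 2]).agg("min")
float_result = s.groupby([1, 1, 2, 2]).agg(lambda x: x.astype(float).min())
```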
21 changes: 5 additions & 16 deletions pandas/core/groupby/groupby.py
@@ -406,10 +406,8 @@ class providing the base-class of operations.
The group data and group index will be passed as numpy arrays to the JITed
user defined function, and no alternative execution attempts will be tried.

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.
The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.

.. versionchanged:: 2.0.0

@@ -1518,11 +1516,8 @@ def apply(self, func, *args, include_groups: bool = False, **kwargs) -> NDFrameT

Notes
-----

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.
The resulting dtype will reflect the return value of the passed ``func``,
see the examples below.

Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
@@ -1562,9 +1557,7 @@ def apply(self, func, *args, include_groups: bool = False, **kwargs) -> NDFrameT
its argument and returns a Series. `apply` combines the result for
each group together into a new DataFrame.

.. versionchanged:: 1.3.0

The resulting dtype will reflect the return value of the passed ``func``.
The resulting dtype will reflect the return value of the passed ``func``.

>>> g1[["B", "C"]].apply(lambda x: x.astype(float).max() - x.min())
B C
@@ -5563,10 +5556,6 @@ def sample(
If np.random.RandomState or np.random.Generator, use as given.
Default ``None`` results in sampling with the current state of np.random.

.. versionchanged:: 1.4.0

np.random.Generator objects now accepted

Returns
-------
Series or DataFrame
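The ``GroupBy.sample`` signature touched above accepts the same ``random_state`` values as ``DataFrame.sample``; a small sketch of per-group sampling with a seeded generator:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"grp": ["a", "a", "a", "b", "b", "b"], "val": range(6)})

# One row is drawn from each group; the seeded Generator makes the
# draw deterministic.
picked = df.groupby("grp").sample(n=1, random_state=np.random.default_rng(7))
```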
5 changes: 0 additions & 5 deletions pandas/core/indexes/base.py
@@ -1953,18 +1953,13 @@ def set_names(self, names, *, level=None, inplace: bool = False) -> Self | None:

Parameters
----------

names : Hashable or a sequence of the previous or dict-like for MultiIndex
Name(s) to set.

.. versionchanged:: 1.3.0

level : int, Hashable or a sequence of the previous, optional
If the index is a MultiIndex and names is not dict-like, level(s) to set
(None for all levels). Otherwise level must be None.

.. versionchanged:: 1.3.0

inplace : bool, default False
Modifies the object directly, instead of creating a new Index or
MultiIndex.
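The two ``set_names`` parameter forms described above (dict-like names for a ``MultiIndex``, or a sequence with ``level``) can be sketched as:

```python
import pandas as pd

mi = pd.MultiIndex.from_arrays(
    [[1, 1, 2], ["x", "y", "x"]], names=["num", "let"]
)

# A dict renames only the levels it mentions...
renamed = mi.set_names({"num": "number"})

# ...while a sequence paired with level= targets levels positionally.
also_renamed = mi.set_names(["letter"], level=[1])
```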
2 changes: 0 additions & 2 deletions pandas/core/series.py
@@ -5928,8 +5928,6 @@ def between(
inclusive : {"both", "neither", "left", "right"}
Include boundaries. Whether to set each bound as closed or open.

.. versionchanged:: 1.3.0

Returns
-------
Series
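The ``inclusive`` options for ``Series.between`` documented above can be sketched as:

```python
import pandas as pd

s = pd.Series([1, 2, 3, 4, 5])

# inclusive controls whether each bound is closed or open.
both = s.between(2, 4)                          # 2 <= x <= 4 (default "both")
left = s.between(2, 4, inclusive="left")        # 2 <= x < 4
neither = s.between(2, 4, inclusive="neither")  # 2 < x < 4
```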
10 changes: 0 additions & 10 deletions pandas/core/shared_docs.py
@@ -139,13 +139,6 @@
when the result's index (and column) labels match the inputs, and
are included otherwise.

.. versionchanged:: 1.5.0

Warns that ``group_keys`` will no longer be ignored when the
result from ``apply`` is a like-indexed Series or DataFrame.
Specify ``group_keys`` explicitly to include the group keys or
not.

.. versionchanged:: 2.0.0

``group_keys`` now defaults to ``True``.
@@ -620,9 +613,6 @@
4 None
dtype: object

.. versionchanged:: 1.4.0
Previously the explicit ``None`` was silently ignored.

When ``regex=True``, ``value`` is not ``None`` and `to_replace` is a string,
the replacement will be applied in all columns of the DataFrame.

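The explicit-``None`` replacement shown in the ``replace`` example above can be sketched as follows (behavior as of pandas 2.x; older versions silently ignored ``value=None``):

```python
import pandas as pd

s = pd.Series(["a", "b", "a"])

# An explicit None replacement value is honoured, producing
# missing values rather than falling back to a fill method.
replaced = s.replace("a", None)
```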