
ARROW-13404: [Doc][Python] Improve PyArrow documentation for new users #10999

Closed
wants to merge 8 commits
Changes from 7 commits

10 changes: 10 additions & 0 deletions docs/source/index.rst
@@ -55,6 +55,16 @@ target environment.**
Rust <https://docs.rs/crate/arrow/>
status

.. _toc.cookbook:

.. toctree::
:maxdepth: 1
:caption: Cookbooks

C++ <https://arrow.apache.org/cookbook/cpp/>
Python <https://arrow.apache.org/cookbook/py/>
R <https://arrow.apache.org/cookbook/r/>

.. _toc.columnar:

.. toctree::
145 changes: 145 additions & 0 deletions docs/source/python/getstarted.rst
@@ -0,0 +1,145 @@
.. Licensed to the Apache Software Foundation (ASF) under one
.. or more contributor license agreements. See the NOTICE file
.. distributed with this work for additional information
.. regarding copyright ownership. The ASF licenses this file
.. to you under the Apache License, Version 2.0 (the
.. "License"); you may not use this file except in compliance
.. with the License. You may obtain a copy of the License at

.. http://www.apache.org/licenses/LICENSE-2.0

.. Unless required by applicable law or agreed to in writing,
.. software distributed under the License is distributed on an
.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
.. KIND, either express or implied. See the License for the
.. specific language governing permissions and limitations
.. under the License.

.. _getstarted:

Getting Started
===============

Arrow manages data in arrays (:class:`pyarrow.Array`), which can be
grouped into tables (:class:`pyarrow.Table`) to represent columns of
tabular data.

Arrow also provides support for various formats to get that tabular
data in and out of disk and networks. The most commonly used formats
are Parquet (:ref:`parquet`) and the IPC format (:ref:`ipc`).

Member:
I'm not sure the IPC format is really commonly used compared to, say, CSV :-)

Contributor Author:
Here I mentioned the formats that are column major. As CSV and JSON are
row oriented, I didn't mention them as primary choices, but they are
mentioned in the "Saving and Loading Tables" section together with the
other available formats.

Creating Arrays and Tables
--------------------------

Arrays in Arrow are collections of data of uniform type. That allows
Arrow to use the best-performing implementation to store the data and
perform computations on it, so each array is meant to have data and
a type:

.. ipython:: python

import pyarrow as pa

days = pa.array([1, 12, 17, 23, 28], type=pa.int8())

Multiple arrays can be combined into a table to form the columns of
tabular data, with each array attached to a column name:

.. ipython:: python

months = pa.array([1, 3, 5, 7, 1], type=pa.int8())
years = pa.array([1990, 2000, 1995, 2000, 1995], type=pa.int16())

birthdays_table = pa.table([days, months, years],
names=["days", "months", "years"])

birthdays_table
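
Once created, a table's columns can be accessed by name. As a small
illustrative sketch (not part of the original example), the table's
schema and a single column look like this:

.. ipython:: python

    # Inspect the schema (column names and types) and
    # retrieve one column by its name
    birthdays_table.schema
    birthdays_table["days"]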

See :ref:`data` for more details.

Saving and Loading Tables
-------------------------

Once you have tabular data, Arrow provides out-of-the-box features
to save and restore that data in common formats like Parquet:

.. ipython:: python

import pyarrow.parquet as pq

pq.write_table(birthdays_table, 'birthdays.parquet')

Once you have your data on disk, loading it back is a single function
call, and Arrow is heavily optimized for memory and speed, so loading
data will be as quick as possible:

.. ipython:: python

reloaded_birthdays = pq.read_table('birthdays.parquet')

reloaded_birthdays

Saving and loading back data in Arrow is usually done through the
:ref:`Parquet <parquet>`, :ref:`IPC <ipc>` (:ref:`feather`),
:ref:`CSV <csv>`, or :ref:`Line-Delimited JSON <json>` formats.
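
As a minimal sketch of the other formats (assuming the
:mod:`pyarrow.feather` and :mod:`pyarrow.csv` modules described in
:ref:`feather` and :ref:`csv`), the same table round-trips through
Feather and CSV with analogous one-line calls:

.. ipython:: python

    import pyarrow.feather as feather
    import pyarrow.csv as pcsv

    # Feather is the on-disk flavour of the Arrow IPC format
    feather.write_feather(birthdays_table, 'birthdays.feather')
    feather.read_table('birthdays.feather')

    # CSV is row oriented, but pyarrow reads it back into columns
    pcsv.write_csv(birthdays_table, 'birthdays.csv')
    pcsv.read_csv('birthdays.csv')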

Performing Computations
-----------------------

Arrow ships with a suite of compute functions that can be applied to
its arrays and tables, so through the compute functions it's possible
to apply transformations to the data:

.. ipython:: python

import pyarrow.compute as pc

pc.value_counts(birthdays_table["years"])
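
Compute functions can also be combined. As a sketch (pairing
:func:`pyarrow.compute.greater` with :func:`pyarrow.compute.filter`,
one possible combination among many), a comparison builds a boolean
mask that then selects the matching values:

.. ipython:: python

    # Build a boolean mask of birthdays after 1995,
    # then keep only the matching days
    after_1995 = pc.greater(birthdays_table["years"], 1995)
    pc.filter(birthdays_table["days"], after_1995)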

See :ref:`compute` for a list of available compute functions and
how to use them.

Member:
This links to the python page, which doesn't actually have a list of
them ... (but not sure if directly linking to the C++ ones is better,
though, it's just not ideal ;))

Contributor Author (amol-, Aug 30, 2021):
Yes, it's actually something I want to fix (the empty compute page), as
we already have https://arrow.apache.org/docs/python/api/compute.html
which does list compute functions in Python.

Working with large data
-----------------------

Arrow also provides the :mod:`pyarrow.dataset` API to work with large
data, which will handle partitioning your data into smaller chunks
for you:

.. ipython:: python

import pyarrow.dataset as ds

ds.write_dataset(birthdays_table, "savedir", format="parquet",
partitioning=ds.partitioning(
pa.schema([birthdays_table.schema.field("years")])
))

Loading back the partitioned dataset will detect the chunks:

.. ipython:: python

birthdays_dataset = ds.dataset("savedir", format="parquet", partitioning=["years"])

birthdays_dataset.files

and will lazily load chunks of data only when iterating over them:

.. ipython:: python

import datetime

current_year = datetime.datetime.utcnow().year
for table_chunk in birthdays_dataset.to_batches():
print("AGES", pc.subtract(current_year, table_chunk["years"]))

For further details on how to work with big datasets, how to filter
them, how to project them, etc., refer to the :ref:`dataset`
documentation.

Continuing from here
--------------------

To dig further into Arrow, you might want to read the
:doc:`PyArrow Documentation <./index>` itself or the
`Arrow Python Cookbook <https://arrow.apache.org/cookbook/py/>`_.
17 changes: 11 additions & 6 deletions docs/source/python/index.rst
@@ -15,12 +15,16 @@
.. specific language governing permissions and limitations
.. under the License.

-Python bindings
-===============
+PyArrow - Apache Arrow Python bindings
+======================================

-This is the documentation of the Python API of Apache Arrow. For more details
-on the Arrow format and other language bindings see the
-:doc:`parent documentation <../index>`.
+This is the documentation of the Python API of Apache Arrow.
+
+Apache Arrow is a development platform for in-memory analytics. It
+contains a set of technologies that enable big data systems to store,
+process, and move data fast.
+
+See the :doc:`parent documentation <../index>` for additional details
+on the Arrow project itself, on the Arrow format, and on the other
+language bindings.

The Arrow Python bindings (also named "PyArrow") have first-class integration
with NumPy, pandas, and built-in Python objects. They are based on the C++
@@ -34,9 +38,10 @@ files into Arrow structures.
:maxdepth: 2

   install
-  memory
+  getstarted
   data
+  compute
+  memory
   ipc
filesystems
filesystems_deprecated