Merge remote-tracking branch 'upstream/master'
Carlos Souza committed Mar 28, 2017
2 parents 8b463cb + 34c6bd0 commit 43456a5
Showing 54 changed files with 982 additions and 963 deletions.
51 changes: 23 additions & 28 deletions .travis.yml
@@ -1,12 +1,15 @@
sudo: false
language: python
# Default Python version is usually 2.7
python: 3.5

# To turn off cached cython files and compiler cache
# set NOCACHE=true
# To delete caches go to https://travis-ci.org/OWNER/REPOSITORY/caches or run
# travis cache --delete inside the project directory from the travis command line client
# The cache directories will be deleted if anything in ci/ changes in a commit
cache:
ccache: true
directories:
- $HOME/.cache # cython cache
- $HOME/.ccache # compiler cache
@@ -23,75 +26,67 @@ git:

matrix:
fast_finish: true
exclude:
# Exclude the default Python 3.5 build
- python: 3.5
include:
- language: objective-c
os: osx
compiler: clang
cache:
ccache: true
directories:
- $HOME/.cache # cython cache
- $HOME/.ccache # compiler cache
- os: osx
language: generic
env:
- JOB="3.5_OSX" TEST_ARGS="--skip-slow --skip-network" TRAVIS_PYTHON_VERSION=3.5
- python: 2.7
- JOB="3.5_OSX" TEST_ARGS="--skip-slow --skip-network"
- os: linux
env:
- JOB="2.7_LOCALE" TEST_ARGS="--only-slow --skip-network" LOCALE_OVERRIDE="zh_CN.UTF-8"
addons:
apt:
packages:
- language-pack-zh-hans
- python: 2.7
- os: linux
env:
- JOB="2.7" TEST_ARGS="--skip-slow" LINT=true
addons:
apt:
packages:
- python-gtk2
- python: 3.5
- os: linux
env:
- JOB="3.5" TEST_ARGS="--skip-slow --skip-network" COVERAGE=true
addons:
apt:
packages:
- xsel
- python: 3.6
- os: linux
env:
- JOB="3.6" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate" CONDA_FORGE=true
addons:
apt:
packages:
- libatlas-base-dev
- gfortran
# In allow_failures
- python: 2.7
- os: linux
env:
- JOB="2.7_SLOW" TEST_ARGS="--only-slow --skip-network"
# In allow_failures
- python: 2.7
- os: linux
env:
- JOB="2.7_BUILD_TEST" TEST_ARGS="--skip-slow" BUILD_TEST=true
# In allow_failures
- python: 3.6
- os: linux
env:
- JOB="3.6_NUMPY_DEV" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate"
# In allow_failures
- python: 3.5
- os: linux
env:
- JOB="3.5_DOC_BUILD" DOC_BUILD=true
- JOB="3.5_DOC" DOC=true
allow_failures:
- python: 2.7
- os: linux
env:
- JOB="2.7_SLOW" TEST_ARGS="--only-slow --skip-network"
- python: 2.7
- os: linux
env:
- JOB="2.7_BUILD_TEST" TEST_ARGS="--skip-slow" BUILD_TEST=true
- python: 3.6
- os: linux
env:
- JOB="3.6_NUMPY_DEV" TEST_ARGS="--skip-slow --skip-network" PANDAS_TESTING_MODE="deprecate"
- python: 3.5
- os: linux
env:
- JOB="3.5_DOC_BUILD" DOC_BUILD=true
- JOB="3.5_DOC" DOC=true

before_install:
- echo "before_install"
2 changes: 1 addition & 1 deletion ci/build_docs.sh
@@ -17,7 +17,7 @@ if [ "$?" != "0" ]; then
fi


if [ x"$DOC_BUILD" != x"" ]; then
if [ "$DOC" ]; then

echo "Will build docs"

3 changes: 2 additions & 1 deletion ci/install_travis.sh
@@ -77,8 +77,9 @@ if [ -z "$NOCACHE" ] && [ "${TRAVIS_OS_NAME}" == "linux" ]; then
echo "[ccache]: $ccache"
export CC='ccache gcc'
elif [ -z "$NOCACHE" ] && [ "${TRAVIS_OS_NAME}" == "osx" ]; then
echo "[Install ccache]"
brew install ccache > /dev/null 2>&1
echo "[Using ccache]"
time brew install ccache
export PATH=/usr/local/opt/ccache/libexec:$PATH
gcc=$(which gcc)
echo "[gcc]: $gcc"
@@ -1,6 +1,5 @@
python=3.5*
python-dateutil
pytz
nomkl
numpy
cython
File renamed without changes.
File renamed without changes.
7 changes: 2 additions & 5 deletions ci/script_multi.sh
@@ -4,11 +4,6 @@ echo "[script multi]"

source activate pandas

# don't run the tests for the doc build
if [ x"$DOC_BUILD" != x"" ]; then
exit 0
fi

if [ -n "$LOCALE_OVERRIDE" ]; then
export LC_ALL="$LOCALE_OVERRIDE";
echo "Setting LC_ALL to $LOCALE_OVERRIDE"
@@ -26,6 +21,8 @@ echo PYTHONHASHSEED=$PYTHONHASHSEED
if [ "$BUILD_TEST" ]; then
cd /tmp
python -c "import pandas; pandas.test(['-n 2'])"
elif [ "$DOC" ]; then
echo "We are not running pytest as this is a doc-build"
elif [ "$COVERAGE" ]; then
echo pytest -s -n 2 -m "not single" --cov=pandas --cov-report xml:/tmp/cov-multiple.xml --junitxml=/tmp/multiple.xml $TEST_ARGS pandas
pytest -s -n 2 -m "not single" --cov=pandas --cov-report xml:/tmp/cov-multiple.xml --junitxml=/tmp/multiple.xml $TEST_ARGS pandas
9 changes: 3 additions & 6 deletions ci/script_single.sh
@@ -4,11 +4,6 @@ echo "[script_single]"

source activate pandas

# don't run the tests for the doc build
if [ x"$DOC_BUILD" != x"" ]; then
exit 0
fi

if [ -n "$LOCALE_OVERRIDE" ]; then
export LC_ALL="$LOCALE_OVERRIDE";
echo "Setting LC_ALL to $LOCALE_OVERRIDE"
@@ -18,7 +13,9 @@ if [ -n "$LOCALE_OVERRIDE" ]; then
fi

if [ "$BUILD_TEST" ]; then
echo "We are not running pytest as this is simply a build test."
echo "We are not running pytest as this is a build test."
elif [ "$DOC" ]; then
echo "We are not running pytest as this is a doc-build"
elif [ "$COVERAGE" ]; then
echo pytest -s -m "single" --cov=pandas --cov-report xml:/tmp/cov-single.xml --junitxml=/tmp/single.xml $TEST_ARGS pandas
pytest -s -m "single" --cov=pandas --cov-report xml:/tmp/cov-single.xml --junitxml=/tmp/single.xml $TEST_ARGS pandas
43 changes: 21 additions & 22 deletions doc/source/10min.rst
@@ -84,29 +84,28 @@ will be completed:

@verbatim
In [1]: df2.<TAB>
df2.A df2.boxplot
df2.abs df2.C
df2.add df2.clip
df2.add_prefix df2.clip_lower
df2.add_suffix df2.clip_upper
df2.align df2.columns
df2.all df2.combine
df2.any df2.combineAdd
df2.A df2.bool
df2.abs df2.boxplot
df2.add df2.C
df2.add_prefix df2.clip
df2.add_suffix df2.clip_lower
df2.align df2.clip_upper
df2.all df2.columns
df2.any df2.combine
df2.append df2.combine_first
df2.apply df2.combineMult
df2.applymap df2.compound
df2.as_blocks df2.consolidate
df2.asfreq df2.convert_objects
df2.as_matrix df2.copy
df2.astype df2.corr
df2.at df2.corrwith
df2.at_time df2.count
df2.axes df2.cov
df2.B df2.cummax
df2.between_time df2.cummin
df2.bfill df2.cumprod
df2.blocks df2.cumsum
df2.bool df2.D
df2.apply df2.compound
df2.applymap df2.consolidate
df2.as_blocks df2.convert_objects
df2.asfreq df2.copy
df2.as_matrix df2.corr
df2.astype df2.corrwith
df2.at df2.count
df2.at_time df2.cov
df2.axes df2.cummax
df2.B df2.cummin
df2.between_time df2.cumprod
df2.bfill df2.cumsum
df2.blocks df2.D

As you can see, the columns ``A``, ``B``, ``C``, and ``D`` are automatically
tab completed. ``E`` is there as well; the rest of the attributes have been
1 change: 1 addition & 0 deletions doc/source/api.rst
@@ -1277,6 +1277,7 @@ Attributes
Index.nbytes
Index.ndim
Index.size
Index.empty
Index.strides
Index.itemsize
Index.base
9 changes: 9 additions & 0 deletions doc/source/categorical.rst
@@ -230,6 +230,15 @@ Categories must be unique or a `ValueError` is raised:
except ValueError as e:
print("ValueError: " + str(e))
Categories must also not be ``NaN`` or a `ValueError` is raised:

.. ipython:: python
try:
s.cat.categories = [1,2,np.nan]
except ValueError as e:
print("ValueError: " + str(e))
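The uniqueness constraint described above can be sketched as follows. This is a hypothetical example; it uses ``rename_categories`` rather than the doc-era assignment to ``.cat.categories``, since direct assignment was removed in later pandas versions:

```python
import pandas as pd

cat = pd.Categorical(['a', 'b', 'c'])
# renaming to a list with duplicates violates the uniqueness constraint
try:
    cat = cat.rename_categories(['x', 'x', 'y'])
    err = None
except ValueError as e:
    err = str(e)
print(err)
```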
Appending new categories
~~~~~~~~~~~~~~~~~~~~~~~~

4 changes: 2 additions & 2 deletions doc/source/ecosystem.rst
@@ -93,8 +93,8 @@ targets the IPython Notebook environment.

`Plotly’s <https://plot.ly/>`__ `Python API <https://plot.ly/python/>`__ enables interactive figures and web shareability. Maps, 2D, 3D, and live-streaming graphs are rendered with WebGL and `D3.js <http://d3js.org/>`__. The library supports plotting directly from a pandas DataFrame and cloud-based collaboration. Users of `matplotlib, ggplot for Python, and Seaborn <https://plot.ly/python/matplotlib-to-plotly-tutorial/>`__ can convert figures into interactive web-based plots. Plots can be drawn in `IPython Notebooks <https://plot.ly/ipython-notebooks/>`__ , edited with R or MATLAB, modified in a GUI, or embedded in apps and dashboards. Plotly is free for unlimited sharing, and has `cloud <https://plot.ly/product/plans/>`__, `offline <https://plot.ly/python/offline/>`__, or `on-premise <https://plot.ly/product/enterprise/>`__ accounts for private use.

Visualizing Data in Qt applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
`QtPandas <https://github.com/draperjames/qtpandas>`__
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Spun off from the main pandas library, the `qtpandas <https://github.com/draperjames/qtpandas>`__
library enables DataFrame visualization and manipulation in PyQt4 and PySide applications.
25 changes: 20 additions & 5 deletions doc/source/io.rst
@@ -91,11 +91,12 @@ filepath_or_buffer : various
locations), or any object with a ``read()`` method (such as an open file or
:class:`~python:io.StringIO`).
sep : str, defaults to ``','`` for :func:`read_csv`, ``\t`` for :func:`read_table`
Delimiter to use. If sep is ``None``,
will try to automatically determine this. Separators longer than 1 character
and different from ``'\s+'`` will be interpreted as regular expressions, will
force use of the python parsing engine and will ignore quotes in the data.
Regex example: ``'\\r\\t'``.
Delimiter to use. If sep is ``None``, the C engine cannot automatically detect
the separator, but the Python parsing engine can, meaning the latter will be
used automatically. In addition, separators longer than 1 character and
different from ``'\s+'`` will be interpreted as regular expressions and
will also force the use of the Python parsing engine. Note that regex
delimiters are prone to ignoring quoted data. Regex example: ``'\\r\\t'``.
delimiter : str, default ``None``
Alternative argument name for sep.
delim_whitespace : boolean, default False
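As a rough illustration of the ``sep`` behaviour described above, the following sketch (made-up data; ``engine='python'`` passed explicitly, since a regex separator forces that engine anyway) parses a file whose fields are separated by a semicolon plus optional whitespace:

```python
import io

import pandas as pd

data = "a; b; c\n1; 2; 3\n4; 5; 6\n"
# a multi-character, regex separator cannot be handled by the C engine,
# so the Python parsing engine is used
df = pd.read_csv(io.StringIO(data), sep=r';\s*', engine='python')
print(df)
```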
@@ -2766,6 +2767,20 @@ indices to be parsed.
read_excel('path_to_file.xls', 'Sheet1', parse_cols=[0, 2, 3])
Parsing Dates
+++++++++++++

Datetime-like values are normally automatically converted to the appropriate
dtype when reading the excel file. But if you have a column of strings that
*look* like dates (but are not actually formatted as dates in excel), you can
use the `parse_dates` keyword to parse those strings to datetimes:

.. code-block:: python
read_excel('path_to_file.xls', 'Sheet1', parse_dates=['date_strings'])
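Since a real ``.xls`` file is not at hand, the same idea can be sketched with ``read_csv``, which shares the ``parse_dates`` keyword; the data and the ``date_strings`` column name are made up to mirror the doc example:

```python
import io

import pandas as pd

data = "id,date_strings\n1,2017-03-28\n2,2017-03-29\n"
# the string column is parsed to datetime64 instead of staying as object
df = pd.read_csv(io.StringIO(data), parse_dates=['date_strings'])
print(df.dtypes)
```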
Cell Converters
+++++++++++++++

11 changes: 5 additions & 6 deletions doc/source/text.rst
@@ -146,8 +146,8 @@ following code will cause trouble because of the regular expression meaning of
# We need to escape the special character (for >1 len patterns)
dollars.str.replace(r'-\$', '-')
The ``replace`` method can also take a callable as replacement. It is called
on every ``pat`` using :func:`re.sub`. The callable should expect one
positional argument (a regex object) and return a string.

.. versionadded:: 0.20.0
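A minimal sketch of the callable-replacement behaviour described above (made-up data; ``regex=True`` is passed because the default changed in later pandas versions, after this doc was written):

```python
import pandas as pd

s = pd.Series(['foo 123', 'bar 45'])
# the callable receives each regex match object and returns the replacement
out = s.str.replace(r'\d+', lambda m: str(len(m.group())), regex=True)
print(out)
```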
@@ -372,21 +372,20 @@ You can check whether elements contain a pattern:

.. ipython:: python
pattern = r'[a-z][0-9]'
pattern = r'[0-9][a-z]'
pd.Series(['1', '2', '3a', '3b', '03c']).str.contains(pattern)
or match a pattern:


.. ipython:: python
pd.Series(['1', '2', '3a', '3b', '03c']).str.match(pattern, as_indexer=True)
pd.Series(['1', '2', '3a', '3b', '03c']).str.match(pattern)
The distinction between ``match`` and ``contains`` is strictness: ``match``
relies on strict ``re.match``, while ``contains`` relies on ``re.search``.
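The strictness distinction can be seen side by side on the doc's own example data:

```python
import pandas as pd

s = pd.Series(['1', '2', '3a', '3b', '03c'])
pattern = r'[0-9][a-z]'
contains = s.str.contains(pattern)  # re.search: pattern anywhere in the string
match = s.str.match(pattern)        # re.match: pattern at the start only
print(contains.tolist(), match.tolist())
```

``'03c'`` illustrates the difference: ``contains`` finds ``'3c'`` in the middle, while ``match`` fails because the string does not start with a digit-letter pair.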

Methods like ``match``, ``contains``, ``startswith``, and ``endswith`` take
an extra ``na`` argument so missing values can be considered True or False:

.. ipython:: python
