Commit

Merge branch 'master' into plot-1d-array-symbol
core-man committed Mar 24, 2021
2 parents c922870 + 65f5aee commit 54db5e8
Showing 13 changed files with 288 additions and 90 deletions.
14 changes: 9 additions & 5 deletions .github/workflows/ci_tests_dev.yaml
@@ -1,5 +1,4 @@
# This workflow installs PyGMT dependencies, builds documentation and runs tests on GMT dev version
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
# This workflow installs PyGMT and runs tests on GMT dev version

name: GMT Dev Tests

@@ -10,10 +9,13 @@ on:
types: [ready_for_review]
paths-ignore:
- 'doc/**'
- 'examples/**'
- '*.md'
- '*.json'
- 'README.rst'
- 'LICENSE.txt'
- '.gitignore'
- '.pylintrc'
repository_dispatch:
types: [test-gmt-dev-command]
# Schedule daily tests
@@ -28,7 +30,7 @@ jobs:
fail-fast: false
matrix:
python-version: [3.9]
os: [ubuntu-20.04, macOS-10.15, windows-latest]
os: [ubuntu-latest, macOS-11.0, windows-latest]
gmt_git_ref: [master]
defaults:
run:
@@ -73,6 +75,7 @@ jobs:
- name: Setup Miniconda
uses: conda-incubator/setup-miniconda@v2.0.1
with:
activate-environment: pygmt
python-version: ${{ matrix.python-version }}
channels: conda-forge
miniconda-version: "latest"
@@ -81,8 +84,9 @@
- name: Install dependencies
run: |
conda install ninja cmake libblas libcblas liblapack fftw gdal \
ghostscript libnetcdf hdf5 zlib curl pcre ipython \
dvc pytest pytest-cov pytest-mpl
ghostscript libnetcdf hdf5 zlib curl pcre make dvc
pip install --pre numpy pandas xarray netCDF4 packaging \
ipython pytest-cov pytest-mpl pytest>=6.0 sphinx-gallery
# Build and install latest GMT from GitHub
- name: Install GMT ${{ matrix.gmt_git_ref }} branch (Linux/macOS)
48 changes: 22 additions & 26 deletions CONTRIBUTING.md
@@ -369,35 +369,13 @@ or run tests which contain names that match a specific keyword expression:
Writing an image-based test is only slightly more difficult than a simple test.
The main consideration is that you must specify the "baseline" or reference
image, and compare it with a "generated" or test image. This is handled using
the *decorator* functions `@check_figures_equal` and
`@pytest.mark.mpl_image_compare`, whose usage is further described below.

#### Using check_figures_equal

This approach draws the same figure using two different methods (the reference
method and the tested method), and checks that both of them are the same.
It takes two `pygmt.Figure` objects ('fig_ref' and 'fig_test'), generates a png
image, and checks for the Root Mean Square (RMS) error between the two.
Here's an example:

```python
@check_figures_equal()
def test_my_plotting_case():
"Test that my plotting function works"
fig_ref, fig_test = Figure(), Figure()
fig_ref.grdimage("@earth_relief_01d_g", projection="W120/15c", cmap="geo")
fig_test.grdimage(grid, projection="W120/15c", cmap="geo")
return fig_ref, fig_test
```

Note: This is the recommended way to test plots whenever possible, such as when
we want to compare a reference GMT plot created from NetCDF files with one
generated by PyGMT that passes through several layers of virtualfile machinery.
Using this method will help save space in the git repository by not having to
store baseline images as with the other method below.
the *decorator* functions `@pytest.mark.mpl_image_compare` and `@check_figures_equal`,
whose usage is further described below.

#### Using mpl_image_compare

> **This is the preferred way to test plots whenever possible.**
This method uses the [pytest-mpl](https://github.com/matplotlib/pytest-mpl)
plug-in to test plot generating code.
Every time the tests are run, `pytest-mpl` compares the generated plots with known
@@ -502,6 +480,24 @@ summarized as follows:
git push
dvc push
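
For reference, an `mpl_image_compare` test typically looks like the sketch below. It is a minimal illustration, not code from this commit: the module call, region, and colors are made up, and the baseline image is assumed to be tracked with dvc as described above.

```python
import pytest
from pygmt import Figure


@pytest.mark.mpl_image_compare
def test_coast_region():
    "Illustrative test that plots a simple coastline"
    fig = Figure()
    # pytest-mpl saves the returned figure and compares it pixel-by-pixel
    # against the stored baseline image for this test
    fig.coast(
        region=[-10, 10, -10, 10],
        projection="M10c",
        land="black",
        water="skyblue",
    )
    return fig
```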

#### Using check_figures_equal

This approach draws the same figure using two different methods (the reference
method and the tested method), and checks that both of them are the same.
It takes two `pygmt.Figure` objects ('fig_ref' and 'fig_test'), generates a png
image, and checks for the Root Mean Square (RMS) error between the two.
Here's an example:

```python
@check_figures_equal()
def test_my_plotting_case():
"Test that my plotting function works"
fig_ref, fig_test = Figure(), Figure()
fig_ref.grdimage("@earth_relief_01d_g", projection="W120/15c", cmap="geo")
fig_test.grdimage(grid, projection="W120/15c", cmap="geo")
return fig_ref, fig_test
```

### Documentation

#### Building the documentation
1 change: 1 addition & 0 deletions doc/api/index.rst
@@ -67,6 +67,7 @@ Operations on tabular data:
.. autosummary::
:toctree: generated

blockmean
blockmedian
surface

1 change: 1 addition & 0 deletions pygmt/__init__.py
@@ -29,6 +29,7 @@
from pygmt.session_management import begin as _begin
from pygmt.session_management import end as _end
from pygmt.src import (
blockmean,
blockmedian,
config,
grd2cpt,
2 changes: 1 addition & 1 deletion pygmt/datasets/__init__.py
@@ -3,7 +3,7 @@
# Load sample data included with GMT (downloaded from the GMT cache server).

from pygmt.datasets.earth_relief import load_earth_relief
from pygmt.datasets.tutorial import (
from pygmt.datasets.samples import (
load_japan_quakes,
load_ocean_ridge_points,
load_sample_bathymetry,
2 changes: 1 addition & 1 deletion pygmt/datasets/tutorial.py → pygmt/datasets/samples.py
@@ -1,5 +1,5 @@
"""
Functions to load sample data from the GMT tutorials.
Functions to load sample data.
"""
import pandas as pd
from pygmt.src import which
2 changes: 1 addition & 1 deletion pygmt/src/__init__.py
@@ -3,7 +3,7 @@
"""
# pylint: disable=import-outside-toplevel
from pygmt.src.basemap import basemap
from pygmt.src.blockmedian import blockmedian
from pygmt.src.blockm import blockmean, blockmedian
from pygmt.src.coast import coast
from pygmt.src.colorbar import colorbar
from pygmt.src.config import config
146 changes: 118 additions & 28 deletions pygmt/src/blockmedian.py → pygmt/src/blockm.py
@@ -1,5 +1,5 @@
"""
blockmedian - Block average (x,y,z) data tables by median estimation.
blockm - Block average (x,y,z) data tables by mean or median estimation.
"""
import pandas as pd
from pygmt.clib import Session
@@ -15,6 +15,122 @@
)


def _blockm(block_method, table, outfile, **kwargs):
r"""
Block average (x,y,z) data tables by mean or median estimation.
Reads arbitrarily located (x,y,z) triples [or optionally weighted
quadruples (x,y,z,w)] from a table and writes to the output a mean or
median (depending on ``block_method``) position and value for every
non-empty block in a grid region defined by the ``region`` and ``spacing``
parameters.
Parameters
----------
block_method : str
Name of the GMT module to call. Must be "blockmean" or "blockmedian".
Returns
-------
output : pandas.DataFrame or None
Return type depends on whether the ``outfile`` parameter is set:
- :class:`pandas.DataFrame` table with (x, y, z) columns if ``outfile``
is not set
- None if ``outfile`` is set (filtered output will be stored in file
set by ``outfile``)
"""

kind = data_kind(table)
with GMTTempFile(suffix=".csv") as tmpfile:
with Session() as lib:
if kind == "matrix":
if not hasattr(table, "values"):
raise GMTInvalidInput(f"Unrecognized data type: {type(table)}")
file_context = lib.virtualfile_from_matrix(table.values)
elif kind == "file":
if outfile is None:
raise GMTInvalidInput("Please pass in a str to 'outfile'")
file_context = dummy_context(table)
else:
raise GMTInvalidInput(f"Unrecognized data type: {type(table)}")

with file_context as infile:
if outfile is None:
outfile = tmpfile.name
arg_str = " ".join([infile, build_arg_string(kwargs), "->" + outfile])
lib.call_module(module=block_method, args=arg_str)

# Read temporary csv output to a pandas table
if outfile == tmpfile.name: # if user did not set outfile, return pd.DataFrame
result = pd.read_csv(tmpfile.name, sep="\t", names=table.columns)
elif outfile != tmpfile.name: # return None if outfile set, output in outfile
result = None

return result


@fmt_docstring
@use_alias(
I="spacing",
R="region",
V="verbose",
a="aspatial",
f="coltypes",
r="registration",
)
@kwargs_to_strings(R="sequence")
def blockmean(table, outfile=None, **kwargs):
r"""
Block average (x,y,z) data tables by mean estimation.
Reads arbitrarily located (x,y,z) triples [or optionally weighted
quadruples (x,y,z,w)] from a table and writes to the output a mean
position and value for every non-empty block in a grid region defined by
the ``region`` and ``spacing`` parameters.
Full option list at :gmt-docs:`blockmean.html`
{aliases}
Parameters
----------
table : pandas.DataFrame or str
Either a pandas dataframe with (x, y, z) or (longitude, latitude,
elevation) values in the first three columns, or a file name to an
ASCII data table.
spacing : str
*xinc*\[\ *unit*\][**+e**\|\ **n**]
[/*yinc*\ [*unit*][**+e**\|\ **n**]].
*xinc* [and optionally *yinc*] is the grid spacing.
region : str or list
*xmin/xmax/ymin/ymax*\[\ **+r**\][**+u**\ *unit*].
Specify the region of interest.
outfile : str
Required if ``table`` is a file. The file name for the output ASCII
file.
{V}
{a}
{f}
{r}
Returns
-------
output : pandas.DataFrame or None
Return type depends on whether the ``outfile`` parameter is set:
- :class:`pandas.DataFrame` table with (x, y, z) columns if ``outfile``
is not set
- None if ``outfile`` is set (filtered output will be stored in file
set by ``outfile``)
"""
return _blockm(block_method="blockmean", table=table, outfile=outfile, **kwargs)


@fmt_docstring
@use_alias(
I="spacing",
@@ -73,30 +189,4 @@ def blockmedian(table, outfile=None, **kwargs):
- None if ``outfile`` is set (filtered output will be stored in file
set by ``outfile``)
"""
kind = data_kind(table)
with GMTTempFile(suffix=".csv") as tmpfile:
with Session() as lib:
if kind == "matrix":
if not hasattr(table, "values"):
raise GMTInvalidInput(f"Unrecognized data type: {type(table)}")
file_context = lib.virtualfile_from_matrix(table.values)
elif kind == "file":
if outfile is None:
raise GMTInvalidInput("Please pass in a str to 'outfile'")
file_context = dummy_context(table)
else:
raise GMTInvalidInput(f"Unrecognized data type: {type(table)}")

with file_context as infile:
if outfile is None:
outfile = tmpfile.name
arg_str = " ".join([infile, build_arg_string(kwargs), "->" + outfile])
lib.call_module(module="blockmedian", args=arg_str)

# Read temporary csv output to a pandas table
if outfile == tmpfile.name: # if user did not set outfile, return pd.DataFrame
result = pd.read_csv(tmpfile.name, sep="\t", names=table.columns)
elif outfile != tmpfile.name: # return None if outfile set, output in outfile
result = None

return result
return _blockm(block_method="blockmedian", table=table, outfile=outfile, **kwargs)
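
As a quick illustration of the new `blockmean` wrapper defined above, the sketch below block-averages the sample bathymetry table; the region and spacing values are arbitrary and chosen only for demonstration.

```python
import pygmt

# (longitude, latitude, bathymetry) sample table downloaded from the GMT cache
data = pygmt.datasets.load_sample_bathymetry()

# Block-average the points into 5x5 arc-minute bins; with no ``outfile``
# the result is returned as a pandas.DataFrame with the same column names
df = pygmt.blockmean(table=data, region=[245, 255, 20, 30], spacing="5m")
print(df.head())
```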
24 changes: 16 additions & 8 deletions pygmt/src/plot3d.py
@@ -111,11 +111,14 @@ def plot3d(
Offset the plot symbol or line locations by the given amounts
*dx*/*dy*\ [/*dz*] [Default is no offset].
{G}
intensity : float or bool
Provide an *intens* value (nominally in the -1 to +1 range) to
modulate the fill color by simulating illumination [Default is None].
If using ``intensity=True``, we will instead read *intens* from the
first data column after the symbol parameters (if given).
intensity : float or bool or 1d array
Provide an *intensity* value (nominally in the -1 to +1 range) to
modulate the fill color by simulating illumination. If using
``intensity=True``, we will instead read *intensity* from the first
data column after the symbol parameters (if given). *intensity* can
also be a 1d array to set varying intensity for symbols, but it is only
valid for ``x``/``y``/``z``.
close : str
[**+b**\|\ **d**\|\ **D**][**+xl**\|\ **r**\|\ *x0*]\
[**+yl**\|\ **r**\|\ *y0*][**+p**\ *pen*].
@@ -183,9 +186,14 @@
)
extra_arrays.append(sizes)

if "t" in kwargs and is_nonstr_iter(kwargs["t"]):
extra_arrays.append(kwargs["t"])
kwargs["t"] = ""
for flag in ["I", "t"]:
if flag in kwargs and is_nonstr_iter(kwargs[flag]):
if kind != "vectors":
raise GMTInvalidInput(
f"Can't use arrays for {plot3d.aliases[flag]} if data is matrix or file."
)
extra_arrays.append(kwargs[flag])
kwargs[flag] = ""

with Session() as lib:
# Choose how data will be passed in to the module
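
To show what the new 1-D `intensity` support enables, here is a hedged sketch of a `plot3d` call with a per-symbol intensity array; the coordinates, symbol style, and perspective settings are invented for illustration.

```python
import numpy as np
import pygmt

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 3.0, 4.0])
z = np.array([3.0, 4.0, 5.0])

fig = pygmt.Figure()
fig.plot3d(
    x=x,
    y=y,
    z=z,
    style="u0.4c",               # cube symbols, 0.4 cm wide
    color="blue",
    intensity=[-0.8, 0.0, 0.8],  # one illumination value per symbol
    region=[0, 4, 1, 5, 2, 6],
    projection="X6c",
    zscale=1,
    frame=["xaf", "yaf", "zaf"],
    perspective=[135, 30],
)
fig.show()
```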
4 changes: 4 additions & 0 deletions pygmt/tests/baseline/test_plot3d_varying_intensity.png.dvc
@@ -0,0 +1,4 @@
outs:
- md5: 79f7d8062dbb6f29ffa0a3aaa7382f13
size: 24052
path: test_plot3d_varying_intensity.png