Commit 377ba97

Merge branch 'master' into inject-aliases

Meghan Jones committed May 24, 2021
2 parents f37faea + d2a43b6
Showing 6 changed files with 135 additions and 125 deletions.
134 changes: 73 additions & 61 deletions CONTRIBUTING.md
@@ -49,7 +49,6 @@ read it carefully.
- [Testing your code](#testing-your-code)
- [Testing plots](#testing-plots)
- [Documentation](#documentation)
- [Code Review](#code-review)


## What Can I Do?
@@ -205,36 +204,73 @@ hesitate to [ask questions](#how-can-i-talk-to-you)):

### General guidelines

We follow the [git pull request workflow](http://www.asmeurer.com/git-workflow/) to
make changes to our codebase.
We follow the [git pull request workflow](http://www.asmeurer.com/git-workflow)
to make changes to our codebase.
Every change made goes through a pull request, even our own, so that our
[continuous integration](https://en.wikipedia.org/wiki/Continuous_integration) services
have a change to check that the code is up to standards and passes all our tests.
[continuous integration](https://en.wikipedia.org/wiki/Continuous_integration)
services have a chance to check that the code is up to standards and passes all
our tests.
This way, the *master* branch is always stable.

General guidelines for pull requests (PRs):

* **Open an issue first** describing what you want to do. If there is already an issue
that matches your PR, leave a comment there instead to let us know what you plan to
do.
* Each pull request should consist of a **small** and logical collection of changes.
* Larger changes should be broken down into smaller components and integrated
separately. For example, break the wrapping of aliases into multiple pull requests.
* Bug fixes should be submitted in separate PRs.
* Use underscores for all Python (*.py) files as per [PEP8](https://www.python.org/dev/peps/pep-0008/),
not hyphens. Directory names should also use underscores instead of hyphens.
* Describe what your PR changes and *why* this is a good thing. Be as specific as you
can. The PR description is how we keep track of the changes made to the project over
time.
* Do not commit changes to files that are irrelevant to your feature or bugfix (eg:
`.gitignore`, IDE project files, etc).
* Write descriptive commit messages. Chris Beams has written a
[guide](https://chris.beams.io/posts/git-commit/) on how to write good commit
messages.
* Be willing to accept criticism and work on improving your code; we don't want to break
other users' code, so care must be taken not to introduce bugs.
* Be aware that the pull request review process is not immediate, and is generally
proportional to the size of the pull request.
General guidelines for making a Pull Request (PR):

* What should be included in a PR
- Have a quick look at the titles of all the existing issues first. If there
is already an issue that matches your PR, leave a comment there to let us
know what you plan to do. Otherwise, **open an issue** describing what you
want to do.
- Each pull request should consist of a **small** and logical collection of
changes; larger changes should be broken down into smaller parts and
integrated separately.
- Bug fixes should be submitted in separate PRs.
* How to write and submit a PR
- Use underscores for all Python (*.py) files as per
[PEP8](https://www.python.org/dev/peps/pep-0008/), not hyphens. Directory
names should also use underscores instead of hyphens.
- Describe what your PR changes and *why* this is a good thing. Be as
specific as you can. The PR description is how we keep track of the changes
made to the project over time.
- Do not commit changes to files that are irrelevant to your feature or
bugfix (e.g.: `.gitignore`, IDE project files, etc).
- Write descriptive commit messages. Chris Beams has written a
[guide](https://chris.beams.io/posts/git-commit/) on how to write good
commit messages.
* PR review
- Be willing to accept criticism and work on improving your code; we don't
want to break other users' code, so care must be taken not to introduce
bugs.
- Be aware that the pull request review process is not immediate, and is
generally proportional to the size of the pull request.
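
A commit message written in the style of the guide linked above might look like this (contents illustrative, mirroring the test refactor in this commit):

```text
Refactor blockmean tests to share a fixture

The sample bathymetry table was loaded separately in every test.
Move the load into a module-scoped pytest fixture so it runs once.
```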

#### Code Review

After you've submitted a pull request, you should expect to hear at least a
comment within a couple of days. We may suggest some changes, improvements or
alternative implementation details.

To increase the chances of getting your pull request accepted quickly, try to:

* Submit a friendly PR
- Write a good and detailed description of what the PR does.
- Write some documentation for your code (docstrings) and leave comments
explaining the *reason* behind non-obvious things.
- Write tests for the code you wrote/modified if needed.
Please refer to [Testing your code](#testing-your-code) or
[Testing plots](#testing-plots).
- Include an example of new features in the gallery or tutorials.
Please refer to [Gallery plots](#gallery-plots) or [Tutorials](#tutorials).
* Have a good coding style
- Use readable code, as it is better than clever code (even with comments).
- Follow the [PEP8](http://pep8.org) style guide for code and the
[numpy style guide](https://numpydoc.readthedocs.io/en/latest/format.html)
for docstrings. Please refer to [Code style](#code-style).

Pull requests will automatically have tests run by GitHub Actions.
This includes running both the unit tests as well as code linters.
GitHub will show the status of these checks on the pull request.
Try to get them all passing (green).
If you have any trouble, leave a comment in the PR or
[get in touch](#how-can-i-talk-to-you).
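
For docstrings, a minimal sketch of the numpy style mentioned above (the function and its behavior are hypothetical, not part of PyGMT; only the docstring layout matters):

```python
def block_count(table, spacing):
    """
    Count points falling in each block of a regular grid.

    Parameters
    ----------
    table : pandas.DataFrame or str
        Input (x, y, z) values, or the name of an ASCII data table.
    spacing : str
        Grid spacing, e.g. "5m" for 5 arc-minutes.

    Returns
    -------
    int
        Number of non-empty blocks.
    """
    return 0  # placeholder body; only the docstring format matters here
```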

### Setting up your environment

@@ -510,11 +546,17 @@ def test_my_plotting_case():

### Documentation

Most documentation sources are in Python `*.py` files under the `examples/`
folder, and the code docstrings can be found e.g. under the `pygmt/src/` and
`pygmt/datasets/` folders. The documentation are written in
[reStructuredText](https://docutils.sourceforge.io/rst.html) and
built by [Sphinx](http://www.sphinx-doc.org/). Please refer to
[reStructuredText Cheatsheet](https://docs.generic-mapping-tools.org/latest/rst-cheatsheet.html)
if you are new to reStructuredText.
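
A short reStructuredText sketch of the kind of markup these files use (the heading and cross-reference are illustrative; the directive names are standard Sphinx):

```rst
Blockmean
---------

Calculate block averages; see :func:`pygmt.blockmean` for details.

.. code-block:: python

   import pygmt
```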

#### Building the documentation

Most documentation sources are in the `doc` folder.
We use [sphinx](http://www.sphinx-doc.org/) to build the web pages from these sources.
To build the HTML files:
To build the HTML files from sources:

```bash
cd doc
```

@@ -560,33 +602,3 @@ https://docs.generic-mapping-tools.org/latest/gmt.conf.html#term-COLOR_FOREGROUN

Sphinx will create a link to the automatically generated page for that
function/class/module.

**All docstrings** should follow the
[numpy style guide](https://numpydoc.readthedocs.io/en/latest/format.html).
All functions/classes/methods should have docstrings with a full description of all
arguments and return values.

### Code Review

After you've submitted a pull request, you should expect to hear at least a comment
within a couple of days.
We may suggest some changes or improvements or alternatives.

Some things that will increase the chance that your pull request is accepted quickly:

* Write a good and detailed description of what the PR does.
* Write tests for the code you wrote/modified.
* Readable code is better than clever code (even with comments).
* Write documentation for your code (docstrings) and leave comments explaining the
*reason* behind non-obvious things.
* Include an example of new features in the gallery or tutorials.
* Follow the [PEP8](http://pep8.org) style guide for code and the
[numpy guide](https://numpydoc.readthedocs.io/en/latest/format.html)
for documentation.

Pull requests will automatically have tests run by GitHub Actions.
This includes running both the unit tests as well as code linters.
GitHub will show the status of these checks on the pull request.
Try to get them all passing (green).
If you have any trouble, leave a comment in the PR or
[get in touch](#how-can-i-talk-to-you).
4 changes: 2 additions & 2 deletions pygmt/clib/session.py
@@ -734,7 +734,7 @@ def _check_dtype_and_dim(self, array, ndim):
return self[DTYPES[array.dtype.type]]

def put_vector(self, dataset, column, vector):
"""
r"""
Attach a numpy 1D array as a column on a GMT dataset.
Use this function to attach numpy array data to a GMT dataset and pass
@@ -744,7 +744,7 @@ def put_vector(self, dataset, column, vector):
first. Use ``family='GMT_IS_DATASET|GMT_VIA_VECTOR'``.
Not all numpy dtypes are supported, only: float64, float32, int64,
int32, uint64, uint32, datetime64 and str_.
int32, uint64, uint32, datetime64 and str\_.
.. warning::
The numpy array must be C contiguous in memory. If it comes from a
48 changes: 19 additions & 29 deletions pygmt/src/blockm.py
@@ -3,12 +3,9 @@
"""
import pandas as pd
from pygmt.clib import Session
from pygmt.exceptions import GMTInvalidInput
from pygmt.helpers import (
GMTTempFile,
build_arg_string,
data_kind,
dummy_context,
fmt_docstring,
kwargs_to_strings,
use_alias,
@@ -41,29 +38,24 @@ def _blockm(block_method, table, outfile, **kwargs):
set by ``outfile``)
"""

kind = data_kind(table)
with GMTTempFile(suffix=".csv") as tmpfile:
with Session() as lib:
if kind == "matrix":
if not hasattr(table, "values"):
raise GMTInvalidInput(f"Unrecognized data type: {type(table)}")
file_context = lib.virtualfile_from_matrix(table.values)
elif kind == "file":
if outfile is None:
raise GMTInvalidInput("Please pass in a str to 'outfile'")
file_context = dummy_context(table)
else:
raise GMTInvalidInput(f"Unrecognized data type: {type(table)}")

with file_context as infile:
# Choose how data will be passed into the module
table_context = lib.virtualfile_from_data(check_kind="vector", data=table)
# Run blockm* on data table
with table_context as infile:
if outfile is None:
outfile = tmpfile.name
arg_str = " ".join([infile, build_arg_string(kwargs), "->" + outfile])
lib.call_module(module=block_method, args=arg_str)

# Read temporary csv output to a pandas table
if outfile == tmpfile.name: # if user did not set outfile, return pd.DataFrame
result = pd.read_csv(tmpfile.name, sep="\t", names=table.columns)
try:
column_names = table.columns.to_list()
result = pd.read_csv(tmpfile.name, sep="\t", names=column_names)
except AttributeError: # 'str' object has no attribute 'columns'
result = pd.read_csv(tmpfile.name, sep="\t", header=None, comment=">")
elif outfile != tmpfile.name: # return None if outfile set, output in outfile
result = None
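
A minimal, self-contained sketch of the duck-typing pattern the new `_blockm` code uses: try to read DataFrame-style column names, and fall back when the input is a plain file path (a `str`). The helper name and `FakeTable` class are illustrative, not part of the pygmt API:

```python
def output_column_names(table):
    """Return column names for DataFrame-like input, or None for a path."""
    try:
        return list(table.columns)
    except AttributeError:  # 'str' object has no attribute 'columns'
        return None


class FakeTable:
    """Stand-in for a pandas.DataFrame with named columns."""
    columns = ("x", "y", "z")


print(output_column_names(FakeTable()))   # ['x', 'y', 'z']
print(output_column_names("points.txt"))  # None
```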

@@ -95,10 +87,10 @@ def blockmean(table, outfile=None, **kwargs):
Parameters
----------
table : pandas.DataFrame or str
Either a pandas dataframe with (x, y, z) or (longitude, latitude,
elevation) values in the first three columns, or a file name to an
ASCII data table.
table : str or {table-like}
Pass in (x, y, z) or (longitude, latitude, elevation) values by
providing a file name to an ASCII data table, a 2D
{table-classes}.
spacing : str
*xinc*\[\ *unit*\][**+e**\|\ **n**]
@@ -110,8 +102,7 @@
Specify the region of interest.
outfile : str
Required if ``table`` is a file. The file name for the output ASCII
file.
The file name for the output ASCII file.
{V}
{a}
@@ -156,10 +147,10 @@ def blockmedian(table, outfile=None, **kwargs):
Parameters
----------
table : pandas.DataFrame or str
Either a pandas dataframe with (x, y, z) or (longitude, latitude,
elevation) values in the first three columns, or a file name to an
ASCII data table.
table : str or {table-like}
Pass in (x, y, z) or (longitude, latitude, elevation) values by
providing a file name to an ASCII data table, a 2D
{table-classes}.
spacing : str
*xinc*\[\ *unit*\][**+e**\|\ **n**]
@@ -171,8 +162,7 @@
Specify the region of interest.
outfile : str
Required if ``table`` is a file. The file name for the output ASCII
file.
The file name for the output ASCII file.
{V}
{a}
2 changes: 1 addition & 1 deletion pygmt/src/wiggle.py
@@ -56,7 +56,7 @@ def wiggle(self, x=None, y=None, z=None, data=None, **kwargs):
{B}
position : str
[**g**\|\ **j**\|\ **J**\|\ **n**\|\ **x**]\ *refpoint*\
**+w**\ *length*\ [**+j**\ *justify*]\ [**+al**\ |\ **r**]\
**+w**\ *length*\ [**+j**\ *justify*]\ [**+al**\|\ **r**]\
[**+o**\ *dx*\ [/*dy*]][**+l**\ [*label*]].
Defines the reference point on the map for the vertical scale bar.
color : str
36 changes: 20 additions & 16 deletions pygmt/tests/test_blockmean.py
@@ -12,38 +12,42 @@
from pygmt.helpers import GMTTempFile, data_kind


def test_blockmean_input_dataframe():
@pytest.fixture(scope="module", name="dataframe")
def fixture_dataframe():
"""
Load the sample bathymetry dataset used as blockmean input.
"""
return load_sample_bathymetry()


def test_blockmean_input_dataframe(dataframe):
"""
Run blockmean by passing in a pandas.DataFrame as input.
"""
dataframe = load_sample_bathymetry()
output = blockmean(table=dataframe, spacing="5m", region=[245, 255, 20, 30])
assert isinstance(output, pd.DataFrame)
assert all(dataframe.columns == output.columns)
assert output.shape == (5849, 3)
npt.assert_allclose(output.iloc[0], [245.888877, 29.978707, -384.0])

return output


def test_blockmean_wrong_kind_of_input_table_matrix():
def test_blockmean_input_table_matrix(dataframe):
"""
Run blockmean using table input that is not a pandas.DataFrame but still a
matrix.
"""
dataframe = load_sample_bathymetry()
invalid_table = dataframe.values
assert data_kind(invalid_table) == "matrix"
with pytest.raises(GMTInvalidInput):
blockmean(table=invalid_table, spacing="5m", region=[245, 255, 20, 30])
table = dataframe.values
output = blockmean(table=table, spacing="5m", region=[245, 255, 20, 30])
assert isinstance(output, pd.DataFrame)
assert output.shape == (5849, 3)
npt.assert_allclose(output.iloc[0], [245.888877, 29.978707, -384.0])


def test_blockmean_wrong_kind_of_input_table_grid():
def test_blockmean_wrong_kind_of_input_table_grid(dataframe):
"""
Run blockmean using table input that is not a pandas.DataFrame or file but
a grid.
"""
dataframe = load_sample_bathymetry()
invalid_table = dataframe.bathymetry.to_xarray()
assert data_kind(invalid_table) == "grid"
with pytest.raises(GMTInvalidInput):
@@ -67,12 +71,12 @@ def test_blockmean_input_filename():
assert output.shape == (5849, 3)
npt.assert_allclose(output.iloc[0], [245.888877, 29.978707, -384.0])

return output


def test_blockmean_without_outfile_setting():
"""
Run blockmean by not passing in outfile parameter setting.
"""
with pytest.raises(GMTInvalidInput):
blockmean(table="@tut_ship.xyz", spacing="5m", region=[245, 255, 20, 30])
output = blockmean(table="@tut_ship.xyz", spacing="5m", region=[245, 255, 20, 30])
assert isinstance(output, pd.DataFrame)
assert output.shape == (5849, 3)
npt.assert_allclose(output.iloc[0], [245.888877, 29.978707, -384.0])
