Contributions are highly welcomed and appreciated. Every little bit helps,
so do not hesitate! You can have a high impact on xbitinfo just by using it
and reporting issues.
The following sections cover some general guidelines
regarding development in xbitinfo
for maintainers and contributors.
Nothing here is set in stone; everything can be changed. Feel free to suggest improvements or changes to the workflow.
We are eager to hear about your requests for new features and any suggestions
about the API, infrastructure, and so on. Feel free to submit these as issues
with the label "enhancement".

Please make sure to explain in detail how the feature should work and keep the
scope as narrow as possible. This will make it easier to implement in small PRs.
Report bugs for xbitinfo in the issue tracker with the label "bug".

If you are reporting a bug, please include:

- Any details about your local setup that might be helpful in troubleshooting,
  specifically the Python interpreter version, installed libraries, and
  xbitinfo version.
- `Detailed steps to reproduce the bug <https://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports>`__.
If you can write a demonstration test that currently fails but should pass, that is a very useful commit to make as well, even if you cannot fix the bug itself.
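As a sketch, such a demonstration test could look like the following. The asserted behaviour (attribute preservation) and all values are hypothetical; adapt them to the bug you are actually reporting.

.. code-block:: python

    import numpy as np
    import xarray as xr

    import xbitinfo as xb


    def test_xr_bitround_keeps_attrs():
        # Hypothetical failing test accompanying a bug report: here we
        # assume attributes should survive rounding, which may not be
        # the behaviour your bug concerns.
        da = xr.DataArray(
            np.random.rand(10, 10), name="air", attrs={"units": "K"}
        )
        rounded = xb.xr_bitround(da, 7)  # keep 7 mantissa bits
        assert rounded.attrs == da.attrs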
Look through the GitHub issues for bugs.
Talk to developers to find out how you can fix specific bugs.
Fork the xbitinfo GitHub repository. It's fine to use ``xbitinfo`` as your
fork repository name because it will live under your username.

Clone your fork locally using git, connect your repository to the upstream
(main project), and create a branch::

    $ git clone git@github.com:YOUR_GITHUB_USERNAME/xbitinfo.git
    $ cd xbitinfo
    $ git remote add upstream git@github.com:observingClouds/xbitinfo.git

    # now, to fix a bug or add feature create your own branch off "main":
    $ git checkout -b your-bugfix-feature-branch-name main
If you need some help with Git, follow this quick start guide.
Install dependencies into a new conda environment::

    $ conda env create -f environment.yml
    $ conda activate bitinfo
Make an editable install of xbitinfo by running::

    $ pip install -e .
Install pre-commit and its hook on the xbitinfo repo::

    $ pip install --user pre-commit
    $ pre-commit install
``pre-commit`` automatically beautifies the code, makes it more maintainable,
and catches syntax errors. Afterwards, ``pre-commit`` will run whenever you
commit.

Now you have an environment called ``bitinfo`` that you can work in. You'll
need to activate that environment again the next time you want to use it
after closing the terminal or restarting your system.

You can now edit your local working copy and run/add tests as necessary.
Please try to follow PEP-8 for naming. When committing, ``pre-commit`` will
modify the files as needed, or will generally be quite clear about what you
need to do to pass the commit test. ``pre-commit`` also runs:

* `ruff <https://docs.astral.sh/ruff/>`_ code formatter
* `black <https://black.readthedocs.io/en/stable/>`_ code formatter
* `flake8 <https://flake8.pycqa.org/en/latest/>`_ code linter

.. * `blackdoc <https://blackdoc.readthedocs.io/en/latest/>`_ docstring code formatter
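For illustration, the formatters rewrite code automatically at commit time. A made-up example (not from the xbitinfo codebase) of the kind of change black/ruff would make:

.. code-block:: python

    # before committing: inconsistent spacing and quoting
    coords = dict( lat=(  'lat', [10, 20]), lon=('lon',[30, 40]) )

    # after the pre-commit hook reformats the file
    coords = dict(lat=("lat", [10, 20]), lon=("lon", [30, 40]))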
Break your edits up into reasonably sized commits::

    $ git commit -m "<commit message>"
    $ git push -u
Run all tests.

Once commits are pushed to ``origin``, GitHub Actions runs continuous
integration of all tests with ``pytest`` on all new commits. However, you can
already run tests locally::

    $ pytest  # all
    $ pytest tests/test_bitround.py::test_xr_bitround_dask  # specific tests
Check that doctests are passing::

    $ pytest --doctest-modules xbitinfo
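Doctests live in the docstrings themselves. As a sketch, a hypothetical helper (function name and values invented purely for illustration) with a passing doctest would look like:

.. code-block:: python

    def double_keepbits(keepbits):
        """Double the number of kept mantissa bits.

        Examples
        --------
        >>> double_keepbits(3)
        6
        """
        return 2 * keepbits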
Please stick to xarray's testing recommendations.
Running the performance test suite
If you made considerable changes to the core code of xbitinfo, it is worth
considering whether your code has introduced performance regressions.
xbitinfo has a suite of benchmarking tests using asv to enable easy
monitoring of the performance of critical xbitinfo operations. These
benchmarks are all found in the ``asv_bench`` directory.

If you need to run a benchmark, change your directory to ``asv_bench/`` and
run::

    $ asv continuous -f 1.1 upstream/main HEAD
You can replace ``HEAD`` with the name of the branch you are working on; the
factor ``-f 1.1`` makes asv report only benchmarks that changed by more than
10%. The command uses ``conda`` by default for creating the benchmark
environments.

Running the full benchmark suite can take some time and use up a few GBs of
RAM. Usually it is sufficient to paste only a subset of the results into the
pull request to show that the committed changes do not cause unexpected
performance regressions. If you want to only run a specific group of tests
from a file, you can do it using ``.`` as a separator. For example::

    $ asv continuous -f 1.1 upstream/main HEAD -b benchmarks_bitround.rasm.time_xr_bitround
will only run the ``time_xr_bitround`` benchmark of class ``rasm`` (which
loads ``xr.tutorial.load_dataset("rasm")``) defined in
``benchmarks_bitround.py``.
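For orientation, an asv benchmark is a plain Python class whose ``time_*`` methods are timed. A minimal sketch of such a class, written here to mirror the setup described above rather than copied from ``benchmarks_bitround.py``:

.. code-block:: python

    import xarray as xr

    import xbitinfo as xb


    class rasm:
        def setup(self):
            # runs before each timing and is excluded from the measurement
            self.ds = xr.tutorial.load_dataset("rasm")

        def time_xr_bitround(self):
            # asv times every method whose name starts with time_
            xb.xr_bitround(self.ds, 7)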
Create a new changelog entry in ``CHANGELOG.rst``. The entry should be
entered as::

    <description> (:pr:`#<pull request number>`) `<author's names>`_

where ``<description>`` is the description of the PR related to the change,
``<pull request number>`` is the pull request number, and
``<author's names>`` are your first and last names.

Add yourself to the list of authors at the end of the ``CHANGELOG.rst`` file,
in alphabetical order, if you are not there yet.
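A hypothetical entry (description, PR number, and author are all made up for illustration) could read::

    Speed up ``xr_bitround`` for dask-backed datasets (:pr:`#123`) `Jane Doe`_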
Finally, submit a Pull Request through the GitHub website using this data::

    head-fork: YOUR_GITHUB_USERNAME/xbitinfo
    compare: your-branch-name
    base-fork: observingClouds/xbitinfo
    base: main
Note that you can create the Pull Request
while you're working on this.
The PR will update as you add more commits. xbitinfo
developers and
contributors can then review your code and offer suggestions.