update org Lightning-AI
Borda committed on Jun 21, 2022 · 1 parent 08d588f · commit ba42363
Showing 20 changed files with 55 additions and 56 deletions.
2 changes: 1 addition & 1 deletion .github/CONTRIBUTING.md
@@ -40,7 +40,7 @@ help you or finish it with you :\]_

1. Add/update the relevant tests!

- [This PR](https://github.com/PyTorchLightning/metrics/pull/98) is a good example for adding a new metric
- [This PR](https://github.com/Lightning-AI/metrics/pull/98) is a good example for adding a new metric

### Test cases:

2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/config.yml
@@ -1,7 +1,7 @@
blank_issues_enabled: false
contact_links:
- name: Ask a Question
url: https://github.com/PyTorchLightning/metrics/discussions/new
url: https://github.com/Lightning-AI/metrics/discussions/new
about: Ask and answer TorchMetrics related questions
- name: 💬 Slack
url: https://app.slack.com/client/TR9DVT48M/CQXV8BRH9/thread/CQXV8BRH9-1591382895.254600
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE.md
@@ -5,7 +5,7 @@ Fixes #\<issue_number>
## Before submitting

- [ ] Was this **discussed/approved** via a Github issue? (no need for typos and docs improvements)
- [ ] Did you read the [contributor guideline](https://github.com/PyTorchLightning/metrics/blob/master/.github/CONTRIBUTING.md), Pull Request section?
- [ ] Did you read the [contributor guideline](https://github.com/Lightning-AI/metrics/blob/master/.github/CONTRIBUTING.md), Pull Request section?
- [ ] Did you make sure to **update the docs**?
- [ ] Did you write any new **necessary tests**?

2 changes: 1 addition & 1 deletion .github/assistant.py
@@ -99,7 +99,7 @@ def changed_domains(
"""Determine what domains were changed in particular PR."""
if not pr:
return "unittests"
url = f"https://api.github.com/repos/PyTorchLightning/metrics/pulls/{pr}/files"
url = f"https://api.github.com/repos/Lightning-AI/metrics/pulls/{pr}/files"
logging.debug(url)
data = request_url(url, auth_token)
if not data:
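For context, the URL changed above is the GitHub REST endpoint that lists the files touched by a pull request. A minimal sketch of such a call, assuming plain `urllib` with no authentication (the real script goes through its own `request_url` helper and an auth token), might look like this:

```python
import json
import urllib.request


def list_changed_files(pr: int, org: str = "Lightning-AI", repo: str = "metrics") -> list:
    """Hypothetical helper: fetch the file paths touched by a pull request."""
    url = f"https://api.github.com/repos/{org}/{repo}/pulls/{pr}/files"
    with urllib.request.urlopen(url) as resp:  # unauthenticated request, subject to rate limits
        files = json.load(resp)
    # each entry of the response is a dict with a "filename" key
    return [entry["filename"] for entry in files]
```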
2 changes: 1 addition & 1 deletion .github/mergify.yml
@@ -78,4 +78,4 @@ pull_request_rules:
actions:
request_reviews:
teams:
- "@PyTorchLightning/core-metrics"
- "@Lightning-AI/core-metrics"
52 changes: 26 additions & 26 deletions CHANGELOG.md
@@ -40,63 +40,63 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

### Fixed

- Fixed mAP calculation for areas with 0 predictions ([#1080](https://github.com/PyTorchLightning/metrics/pull/1080))
- Fixed mAP calculation for areas with 0 predictions ([#1080](https://github.com/Lightning-AI/metrics/pull/1080))


- Fixed bug where avg precision state and auroc state was not merge when using MetricCollections ([#1086](https://github.com/PyTorchLightning/metrics/pull/1086))
- Fixed bug where avg precision state and auroc state was not merge when using MetricCollections ([#1086](https://github.com/Lightning-AI/metrics/pull/1086))


- Skip box conversion if no boxes are present in `MeanAveragePrecision` ([#1097](https://github.com/PyTorchLightning/metrics/pull/1097))
- Skip box conversion if no boxes are present in `MeanAveragePrecision` ([#1097](https://github.com/Lightning-AI/metrics/pull/1097))


## [0.9.1] - 2022-06-08

### Added

- Added specific `RuntimeError` when metric object is on the wrong device ([#1056](https://github.com/PyTorchLightning/metrics/pull/1056))
- Added an option to specify own n-gram weights for `BLEUScore` and `SacreBLEUScore` instead of using uniform weights only. ([#1075](https://github.com/PyTorchLightning/metrics/pull/1075))
- Added specific `RuntimeError` when metric object is on the wrong device ([#1056](https://github.com/Lightning-AI/metrics/pull/1056))
- Added an option to specify own n-gram weights for `BLEUScore` and `SacreBLEUScore` instead of using uniform weights only. ([#1075](https://github.com/Lightning-AI/metrics/pull/1075))

### Fixed

- Fixed aggregation metrics when input only contains zero ([#1070](https://github.com/PyTorchLightning/metrics/pull/1070))
- Fixed `TypeError` when providing superclass arguments as `kwargs` ([#1069](https://github.com/PyTorchLightning/metrics/pull/1069))
- Fixed bug related to state reference in metric collection when using compute groups ([#1076](https://github.com/PyTorchLightning/metrics/pull/1076))
- Fixed aggregation metrics when input only contains zero ([#1070](https://github.com/Lightning-AI/metrics/pull/1070))
- Fixed `TypeError` when providing superclass arguments as `kwargs` ([#1069](https://github.com/Lightning-AI/metrics/pull/1069))
- Fixed bug related to state reference in metric collection when using compute groups ([#1076](https://github.com/Lightning-AI/metrics/pull/1076))


## [0.9.0] - 2022-05-30

### Added

- Added `RetrievalPrecisionRecallCurve` and `RetrievalRecallAtFixedPrecision` to retrieval package ([#951](https://github.com/PyTorchLightning/metrics/pull/951))
- Added `RetrievalPrecisionRecallCurve` and `RetrievalRecallAtFixedPrecision` to retrieval package ([#951](https://github.com/Lightning-AI/metrics/pull/951))
- Added class property `full_state_update` that determines `forward` should call `update` once or twice (
[#984](https://github.com/PyTorchLightning/metrics/pull/984),
[#1033](https://github.com/PyTorchLightning/metrics/pull/1033))
- Added support for nested metric collections ([#1003](https://github.com/PyTorchLightning/metrics/pull/1003))
- Added `Dice` to classification package ([#1021](https://github.com/PyTorchLightning/metrics/pull/1021))
- Added support to segmentation type `segm` as IOU for mean average precision ([#822](https://github.com/PyTorchLightning/metrics/pull/822))
[#984](https://github.com/Lightning-AI/metrics/pull/984),
[#1033](https://github.com/Lightning-AI/metrics/pull/1033))
- Added support for nested metric collections ([#1003](https://github.com/Lightning-AI/metrics/pull/1003))
- Added `Dice` to classification package ([#1021](https://github.com/Lightning-AI/metrics/pull/1021))
- Added support to segmentation type `segm` as IOU for mean average precision ([#822](https://github.com/Lightning-AI/metrics/pull/822))

### Changed

- Renamed `reduction` argument to `average` in Jaccard score and added additional options ([#874](https://github.com/PyTorchLightning/metrics/pull/874))
- Renamed `reduction` argument to `average` in Jaccard score and added additional options ([#874](https://github.com/Lightning-AI/metrics/pull/874))

### Removed

- Removed deprecated `compute_on_step` argument (
[#962](https://github.com/PyTorchLightning/metrics/pull/962),
[#967](https://github.com/PyTorchLightning/metrics/pull/967),
[#979](https://github.com/PyTorchLightning/metrics/pull/979),
[#990](https://github.com/PyTorchLightning/metrics/pull/990),
[#991](https://github.com/PyTorchLightning/metrics/pull/991),
[#993](https://github.com/PyTorchLightning/metrics/pull/993),
[#1005](https://github.com/PyTorchLightning/metrics/pull/1005),
[#1004](https://github.com/PyTorchLightning/metrics/pull/1004),
[#1007](https://github.com/PyTorchLightning/metrics/pull/1007)
[#962](https://github.com/Lightning-AI/metrics/pull/962),
[#967](https://github.com/Lightning-AI/metrics/pull/967),
[#979](https://github.com/Lightning-AI/metrics/pull/979),
[#990](https://github.com/Lightning-AI/metrics/pull/990),
[#991](https://github.com/Lightning-AI/metrics/pull/991),
[#993](https://github.com/Lightning-AI/metrics/pull/993),
[#1005](https://github.com/Lightning-AI/metrics/pull/1005),
[#1004](https://github.com/Lightning-AI/metrics/pull/1004),
[#1007](https://github.com/Lightning-AI/metrics/pull/1007)
)

### Fixed

- Fixed non-empty state dict for a few metrics ([#1012](https://github.com/PyTorchLightning/metrics/pull/1012))
- Fixed bug when comparing states while finding compute groups ([#1022](https://github.com/PyTorchLightning/metrics/pull/1022))
- Fixed non-empty state dict for a few metrics ([#1012](https://github.com/Lightning-AI/metrics/pull/1012))
- Fixed bug when comparing states while finding compute groups ([#1022](https://github.com/Lightning-AI/metrics/pull/1022))
- Fixed `torch.double` support in stat score metrics ([#1023](https://github.com/PyTorchLightning/metrics/pull/1023))
- Fixed `FID` calculation for non-equal size real and fake input ([#1028](https://github.com/PyTorchLightning/metrics/pull/1028))
- Fixed case where `KLDivergence` could output `Nan` ([#1030](https://github.com/PyTorchLightning/metrics/pull/1030))
2 changes: 1 addition & 1 deletion CITATION.cff
@@ -31,7 +31,7 @@ authors:
doi: 10.21105/joss.04101
license: "Apache-2.0"
url: "https://www.pytorchlightning.ai"
repository-code: "https://github.com/PyTorchLightning/metrics"
repository-code: "https://github.com/Lightning-AI/metrics"
date-released: 2022-02-11
keywords:
- machine learning
15 changes: 7 additions & 8 deletions README.md
@@ -24,16 +24,15 @@ ______________________________________________________________________
![Conda](https://img.shields.io/conda/dn/conda-forge/torchmetrics)
[![license](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/Lightning-AI/metrics/blob/master/LICENSE)

[![CI testing - base](https://github.com/PyTorchLightning/metrics/actions/workflows/ci_test-base.yml/badge.svg?branch=master&event=push)](https://github.com/PyTorchLightning/metrics/actions/workflows/ci_test-base.yml)
[![PyTorch & Conda](https://github.com/PyTorchLightning/metrics/actions/workflows/ci_test-conda.yml/badge.svg?branch=master&event=push)](https://github.com/PyTorchLightning/metrics/actions/workflows/ci_test-conda.yml)
[![Build Status](https://dev.azure.com/Lightning-AI/Metrics/_apis/build/status/PyTorchLightning.metrics?branchName=master)](https://dev.azure.com/Lightning-AI/Metrics/_build/latest?definitionId=3&branchName=master)
[![codecov](https://codecov.io/gh/PyTorchLightning/metrics/branch/master/graph/badge.svg?token=NER6LPI3HS)](https://codecov.io/gh/PyTorchLightning/metrics)
[![PyTorch & Conda](https://github.com/Lightning-AI/metrics/actions/workflows/ci_test-conda.yml/badge.svg?branch=master&event=push)](https://github.com/Lightning-AI/metrics/actions/workflows/ci_test-conda.yml)
[![Build Status](https://dev.azure.com/Lightning-AI/Metrics/_apis/build/status/Lightning-AI.metrics?branchName=master)](https://dev.azure.com/Lightning-AI/Metrics/_build/latest?definitionId=3&branchName=master)
[![codecov](https://codecov.io/gh/Lightning-AI/metrics/branch/master/graph/badge.svg?token=NER6LPI3HS)](https://codecov.io/gh/Lightning-AI/metrics)

[![Slack](https://img.shields.io/badge/slack-chat-green.svg?logo=slack)](https://www.pytorchlightning.ai/community)
[![Documentation Status](https://readthedocs.org/projects/torchmetrics/badge/?version=latest)](https://torchmetrics.readthedocs.io/en/latest/?badge=latest)
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.5844769.svg)](https://doi.org/10.5281/zenodo.5844769)
[![JOSS status](https://joss.theoj.org/papers/561d9bb59b400158bc8204e2639dca43/status.svg)](https://joss.theoj.org/papers/561d9bb59b400158bc8204e2639dca43)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/PyTorchLightning/metrics/master.svg)](https://results.pre-commit.ci/latest/github/PyTorchLightning/metrics/master)
[![pre-commit.ci status](https://results.pre-commit.ci/badge/github/Lightning-AI/metrics/master.svg)](https://results.pre-commit.ci/latest/github/Lightning-AI/metrics/master)

______________________________________________________________________

@@ -66,7 +65,7 @@ pip install git+https://github.com/Lightning-AI/metrics.git@release/latest
Pip from archive

```bash
pip install https://github.com/PyTorchLightning/metrics/archive/refs/heads/release/latest.zip
pip install https://github.com/Lightning-AI/metrics/archive/refs/heads/release/latest.zip
```

Extra dependencies for specialized metrics:
@@ -81,7 +80,7 @@ pip install torchmetrics[all] # install all of the above
Install latest developer version

```bash
pip install https://github.com/PyTorchLightning/metrics/archive/master.zip
pip install https://github.com/Lightning-AI/metrics/archive/master.zip
```

</details>
@@ -329,7 +328,7 @@ For help or questions, join our huge community on [Slack](https://www.pytorchlig
We’re excited to continue the strong legacy of open source software and have been inspired
over the years by Caffe, Theano, Keras, PyTorch, torchbearer, ignite, sklearn and fast.ai.

If you want to cite this framework feel free to use GitHub's built-in citation option to generate a bibtex or APA-Style citation based on [this file](https://github.com/PyTorchLightning/metrics/blob/master/CITATION.cff) (but only if you loved it 😊).
If you want to cite this framework feel free to use GitHub's built-in citation option to generate a bibtex or APA-Style citation based on [this file](https://github.com/Lightning-AI/metrics/blob/master/CITATION.cff) (but only if you loved it 😊).

## License

2 changes: 1 addition & 1 deletion docs/paper_JOSS/paper.md
@@ -99,7 +99,7 @@ In addition to stateful metrics (called modular metrics in TorchMetrics), we als

TorchMetrics exhibits high test coverage on the various configurations, including all three major OS platforms (Linux, macOS, and Windows), and various Python, CUDA, and PyTorch versions. We test both minimum and latest package requirements for all combinations of OS and Python versions and include additional tests for each PyTorch version from 1.3 up to future development versions. On every pull request and merge to master, we run a full test suite. All standard tests run on CPU. In addition, we run all tests on a multi-GPU setting which reflects realistic Deep Learning workloads. For usability, we have auto-generated HTML documentation (hosted at [readthedocs](https://torchmetrics.readthedocs.io/en/stable/)) from the source code which updates in real-time with new merged pull requests.

TorchMetrics is released under the Apache 2.0 license. The source code is available at https://github.com/PyTorchLightning/metrics.
TorchMetrics is released under the Apache 2.0 license. The source code is available at https://github.com/Lightning-AI/metrics.

# Acknowledgement

2 changes: 1 addition & 1 deletion docs/source/_templates/theme_variables.jinja
@@ -1,7 +1,7 @@
{%- set external_urls = {
'github': 'https://github.com/Lightning-AI/metrics',
'github_issues': 'https://github.com/Lightning-AI/metrics/issues',
'contributing': 'https://github.com/PyTorchLightning/metrics/blob/master/.github/CONTRIBUTING.md',
'contributing': 'https://github.com/Lightning-AI/metrics/blob/master/.github/CONTRIBUTING.md',
'docs': 'https://torchmetrics.readthedocs.io/en/latest',
'twitter': 'https://twitter.com/PyTorchLightnin',
'discuss': 'https://pytorch-lightning.slack.com',
2 changes: 1 addition & 1 deletion docs/source/conf.py
@@ -44,7 +44,7 @@

# Options for the linkcode extension
# ----------------------------------
github_user = "PyTorchLightning"
github_user = "Lightning-AI"
github_repo = "metrics"

# -- Project documents -------------------------------------------------------
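The `github_user` and `github_repo` values set above feed Sphinx's `linkcode` extension, which turns documented objects into "view source" links on GitHub. As a rough sketch only (the actual resolver in `conf.py` is not part of this diff and likely differs), a resolver using these module-level names could look like:

```python
def linkcode_resolve(domain, info):
    """Hypothetical resolver: map a documented Python object to its GitHub source URL."""
    if domain != "py" or not info.get("module"):
        return None
    path = info["module"].replace(".", "/")
    # assumes the src/ layout used elsewhere in this repository
    return f"https://github.com/{github_user}/{github_repo}/blob/master/src/{path}.py"
```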
2 changes: 1 addition & 1 deletion docs/source/governance.rst
@@ -47,7 +47,7 @@ and start tracking the development. It is possible that priorities change over t

Commits to the project are exclusively to be added by pull requests on GitHub and anyone in the community is welcome to review them.
However, reviews submitted by
`code owners <https://github.com/PyTorchLightning/metrics/blob/master/.github/CODEOWNERS>`_
`code owners <https://github.com/Lightning-AI/metrics/blob/master/.github/CODEOWNERS>`_
have higher weight and it is necessary to get the approval of code owners before a pull request can be merged.
Additional requirements may apply case by case.

2 changes: 1 addition & 1 deletion docs/source/links.rst
@@ -64,7 +64,7 @@
.. _Demystifying MMD GANs: https://arxiv.org/abs/1801.01401
.. _Computes Peak Signal-to-Noise Ratio: https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio
.. _Turn a Metric into a Bootstrapped: https://en.wikipedia.org/wiki/Bootstrapping_(statistics)
.. _Metric Test for Reset: https://github.com/PyTorchLightning/pytorch-lightning/pull/7055
.. _Metric Test for Reset: https://github.com/Lightning-AI/pytorch-lightning/pull/7055
.. _Computes Mean Absolute Error: https://en.wikipedia.org/wiki/Mean_absolute_error
.. _Mean Absolute Percentage Error: https://en.wikipedia.org/wiki/Mean_absolute_percentage_error
.. _mean squared error: https://en.wikipedia.org/wiki/Mean_squared_error
6 changes: 3 additions & 3 deletions docs/source/pages/implement.rst
@@ -137,7 +137,7 @@ and tests gets formatted in the following way:
makes up the functional interface for the metric.

.. note::
The `functional accuracy <https://github.com/PyTorchLightning/metrics/blob/master/src/torchmetrics/functional/classification/accuracy.py>`_
The `functional accuracy <https://github.com/Lightning-AI/metrics/blob/master/src/torchmetrics/functional/classification/accuracy.py>`_
metric is a great example of this division of logic.

3. In a corresponding file placed in ``torchmetrics/"domain"/"new_metric".py`` create the module interface:
@@ -150,7 +150,7 @@ and tests gets formatted in the following way:
We do this to not have duplicate code to maintain.

.. note::
The module `Accuracy <https://github.com/PyTorchLightning/metrics/blob/master/src/torchmetrics/classification/accuracy.py>`_
The module `Accuracy <https://github.com/Lightning-AI/metrics/blob/master/src/torchmetrics/classification/accuracy.py>`_
metric that corresponds to the above functional example showcases these steps.

4. Remember to add binding to the different relevant ``__init__`` files.
@@ -171,7 +171,7 @@ and tests gets formatted in the following way:
5. (optional) If your metric raises any exception, please add tests that showcase this.

.. note::
The `test file for accuracy <https://github.com/PyTorchLightning/metrics/blob/master/tests/unittests/classification/test_accuracy.py>`_ metric
The `test file for accuracy <https://github.com/Lightning-AI/metrics/blob/master/tests/unittests/classification/test_accuracy.py>`_ metric
shows how to implement such tests.

If you only can figure out part of the steps, do not fear to send a PR. We will much rather receive working
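For orientation, the module-interface steps this file walks through boil down to a class of the following shape; this is a minimal sketch, with the class name and states invented for illustration rather than taken from the repository:

```python
import torch
from torchmetrics import Metric


class MyAccuracy(Metric):
    """Minimal module-interface metric: accumulate states in `update`, reduce them in `compute`."""

    full_state_update = False  # `forward` only needs a single `update` call

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self.add_state("correct", default=torch.tensor(0), dist_reduce_fx="sum")
        self.add_state("total", default=torch.tensor(0), dist_reduce_fx="sum")

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        self.correct += (preds == target).sum()
        self.total += target.numel()

    def compute(self) -> torch.Tensor:
        return self.correct.float() / self.total
```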
2 changes: 1 addition & 1 deletion docs/source/pages/lightning.rst
@@ -9,7 +9,7 @@
TorchMetrics in PyTorch Lightning
#################################

TorchMetrics was originally created as part of `PyTorch Lightning <https://github.com/PyTorchLightning/pytorch-lightning>`_, a powerful deep learning research
TorchMetrics was originally created as part of `PyTorch Lightning <https://github.com/Lightning-AI/pytorch-lightning>`_, a powerful deep learning research
framework designed for scaling models without boilerplate.

.. note::
2 changes: 1 addition & 1 deletion requirements/docs.txt
@@ -5,7 +5,7 @@ pandoc>=1.0
docutils>=0.16
sphinxcontrib-fulltoc>=1.0
sphinxcontrib-mockautodoc
https://github.com/PyTorchLightning/lightning_sphinx_theme/archive/master.zip#egg=pt-lightning-sphinx-theme
https://github.com/Lightning-AI/lightning_sphinx_theme/archive/master.zip#egg=pt-lightning-sphinx-theme
sphinx-autodoc-typehints>=1.0
sphinx-paramlinks>=0.5.1
sphinx-togglebutton>=0.2
2 changes: 1 addition & 1 deletion setup.py
@@ -46,7 +46,7 @@ def _load_readme_description(path_dir: str, homepage: str, version: str) -> str:
with open(path_readme, encoding="utf-8") as fp:
text = fp.read()

# https://github.com/PyTorchLightning/torchmetrics/raw/master/docs/source/_static/images/lightning_module/pt_to_pl.png
# https://github.com/Lightning-AI/torchmetrics/raw/master/docs/source/_static/images/lightning_module/pt_to_pl.png
github_source_url = os.path.join(homepage, "raw", version)
# replace relative repository path to absolute link to the release
# do not replace all "docs" as in the readme we replace some other sources with particular path to docs
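The comment edited above describes how `_load_readme_description` rewrites relative README links into absolute links pinned to a release. A stripped-down sketch of that idea (a hypothetical helper, not the actual implementation, which handles several more cases):

```python
import os


def _absolutize_images(text: str, homepage: str, version: str) -> str:
    """Hypothetical helper: point relative image paths at the released sources on GitHub."""
    github_source_url = os.path.join(homepage, "raw", version)
    # e.g. "(docs/source/_static/images/logo.png)" becomes
    #      "(https://github.com/Lightning-AI/metrics/raw/v0.9.1/docs/source/_static/images/logo.png)"
    return text.replace("(docs/source/_static/", f"({github_source_url}/docs/source/_static/")
```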
4 changes: 2 additions & 2 deletions src/torchmetrics/__about__.py
@@ -1,9 +1,9 @@
__version__ = "0.10.0dev"
__author__ = "PyTorchLightning et al."
__author__ = "Lightning-AI et al."
__author_email__ = "name@pytorchlightning.ai"
__license__ = "Apache-2.0"
__copyright__ = f"Copyright (c) 2020-2022, {__author__}."
__homepage__ = "https://github.com/PyTorchLightning/metrics"
__homepage__ = "https://github.com/Lightning-AI/metrics"
__docs__ = "PyTorch native Metrics"
__docs_url__ = "https://torchmetrics.readthedocs.io/en/stable/"
__long_doc__ = """
2 changes: 1 addition & 1 deletion src/torchmetrics/detection/mean_ap.py
@@ -208,7 +208,7 @@ class MeanAveragePrecision(Metric):
See the :meth:`update` method for more information about the input format to this metric.
For an example on how to use this metric check the `torchmetrics examples
<https://github.com/PyTorchLightning/metrics/blob/master/examples/detection_map.py>`_
<https://github.com/Lightning-AI/metrics/blob/master/examples/detection_map.py>`_
.. note::
This metric is following the mAP implementation of
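The docstring above points readers to the repository's detection example; as a quick orientation, a minimal usage sketch of the metric (made-up values, and assuming the detection extras such as `torchvision`/`pycocotools` are installed) is:

```python
import torch
from torchmetrics.detection.mean_ap import MeanAveragePrecision

metric = MeanAveragePrecision()
preds = [  # one dict per image: predicted boxes in xyxy format, confidence scores, labels
    {"boxes": torch.tensor([[10.0, 10.0, 60.0, 60.0]]), "scores": torch.tensor([0.9]), "labels": torch.tensor([0])}
]
target = [  # ground truth for the same image
    {"boxes": torch.tensor([[12.0, 8.0, 58.0, 62.0]]), "labels": torch.tensor([0])}
]
metric.update(preds, target)
print(metric.compute())  # dict of mAP/mAR values such as "map", "map_50", "mar_100", ...
```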