
develop -> release for Release 0.4.0 #175

Merged
merged 88 commits into release from develop
May 17, 2023

Conversation

gomezzz
Collaborator

@gomezzz gomezzz commented May 10, 2023

Description

Following the process in #172

Proposed changelog:

Changelog

Major

  • Support for vectorized multiple integrand compute with one call
  • Better support for custom integrators
  • GaussLegendre integration
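
The two headline features — evaluating several integrands in one vectorized call, and Gauss-Legendre quadrature — can be illustrated with a plain-NumPy sketch (illustrative only; the function name and signature here are not torchquad's actual API):

```python
import numpy as np

def gauss_legendre_integrate(fn, a, b, n=16):
    """Integrate fn over [a, b] with n-point Gauss-Legendre quadrature.

    fn maps an array of sample points (shape (n,)) to one value per
    integrand, shape (n, m) -- so m integrands are evaluated in one call.
    """
    xi, wi = np.polynomial.legendre.leggauss(n)  # nodes/weights on [-1, 1]
    x = 0.5 * (b - a) * xi + 0.5 * (b + a)       # transform nodes to [a, b]
    w = 0.5 * (b - a) * wi                       # rescale weights accordingly
    return w @ fn(x)                             # one result per integrand, shape (m,)

# Two integrands, x and x**2, integrated over [0, 1] in a single vectorized call:
results = gauss_legendre_integrate(lambda x: np.stack([x, x**2], axis=-1), 0.0, 1.0)
```

With 16 nodes the rule is exact for polynomials up to degree 31, so both results match the analytic values.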

Minor

  • Various changes to tests
  • Additional examples in docs and various docstring changes
  • Workflow improvements for the repo
  • Added automatic test coverage reporting

Resolved Issues

How Has This Been Tested?

TBD

Related Pull Requests

gomezzz and others added 30 commits May 5, 2022 13:47
This affects situations where a user explicitly specifies the `backend` argument and uses a tensor for `integration_domain` at the same time.
* If the `integration_domain`'s backend does not correspond to the `backend` argument value, prefer the `backend` argument and show a warning.
  In this case the `integration_domain`'s dtype is ignored, so a globally configured dtype is used instead.
* If the `integration_domain`'s backend corresponds to the `backend` argument value, ignore the `backend` argument.
  In this case the `integration_domain`'s dtype is used and a globally configured dtype is ignored.
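The two rules above amount to a small dispatch decision. A minimal sketch of that logic (the helper name and structure here are illustrative, not torchquad's actual code):

```python
import warnings

def resolve_backend(backend_arg, domain_backend):
    """Decide which backend to use for integration.

    backend_arg: the user's explicit `backend` argument, or None.
    domain_backend: the backend inferred from the `integration_domain`
    tensor, or None if the domain is a plain list.
    """
    if backend_arg is None:
        return domain_backend
    if domain_backend is not None and domain_backend != backend_arg:
        # Mismatch: prefer the explicit `backend` argument and warn.
        # The domain's dtype is then ignored in favour of the globally
        # configured dtype.
        warnings.warn(
            f"integration_domain uses backend {domain_backend!r} but "
            f"backend={backend_arg!r} was requested; using {backend_arg!r}."
        )
    # If they match, the `backend` argument is effectively ignored and the
    # domain's dtype is used.
    return backend_arg
```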
Tests with github actions work again after this change.
The JAX developers have recently moved the wheel files for `jaxlib` with CUDA to a different webpage: jax-ml/jax@e780a40
htoftevaag and others added 14 commits March 8, 2023 10:34
* (fix): change to `static_argnums` to work around decorator

* WORKFLOW: Format Python code with black

---------

Co-authored-by: htoftevaag <htoftevaag@users.noreply.github.com>
* (fix): change to `static_argnums` to work around decorator

* WORKFLOW: Format Python code with black

* (feat): add newton-cotes integrator tests

* WORKFLOW: Format Python code with black

* (fix): re-compile per integrand because of different shapes

* WORKFLOW: Format Python code with black

* (chore): remove erroneous boole comments

* (fix): add mc test

* (chore): add note about `integrand` shape

* (feat): switch to re-use of the jit integral

* (chore): address comments

* (fix): increase test tolerance

---------

Co-authored-by: htoftevaag <htoftevaag@users.noreply.github.com>
Display test coverage reports on PRs
* basic version of gauss-legendre

* fstrings for my sanity

* fstrings for my sanity

* weights and points multidimensional

* transform xi,wi correctly

* let function to integrate accept args, c.f. scipy.nquad

* any edits

* add numpy import

* autoray

* add Gaussian quadrature methods

* fix import

* change anp.inf to numpy.inf

* fix interval transformation and clean up

* make sure tensors are on same device

* make sure tensors are on same device, part 2

* make sure tensors are on same device, part 3

* make sure tensors are on same device, part 4

* make sure tensors are on same device, part 5

* add special import

* add tests to /tests

* run autopep8, add docstring

* (feat): cache for roots.

* (feat): refactor out grid integration procedure

* (feat): gaussian integration refactored, some tests passing

* (fix): scaling constant

* (chore): higher dim integrals testing

* (feat): weights correct for multi-dim integrands.

* (fix): correct number of arguments.

* (fix): remove non-legendre tests.

* (fix): import GaussLegendre

* (fix): ensure grid and weights are correct type

* (style): docstrings.

* (fix): default `grid_func`

* (fix): `_cached_poi...` returns tuple, not ndarray

* (fix): propagate `backend` correctly.

* (chore): export base types

* (feat): add jit for gaussian

* (feat): backward diff

* (fix): env issue

* Fixed tests badge

* (chore): cleanup

* (fix): `intergal` -> `integral`

* (chore): add tutorial

* (fix): change to `argnums` to work around decorator

* (fix): add fix from other PR

* (feat): add (broken) tests for gauss jit

* (chore): remove unused import

* (fix): use `item` for `N` when `jit` with `jax`

* (fix): `domain` for jit gauss `calculate_result`

* (chore): `black`

* (chore): erroneous diff

* (chore): remove erroneous print

* (fix): correct comment

* (fix): clean up gaussian tests

* (chore): add comments.

* (chore): formatting

* (fix): error of 1D integral

* (fix): increase bounds.

---------

Co-authored-by: ilan-gold <ilanbassgold@gmail.com>
Co-authored-by: Pablo Gómez <contact@pablo-gomez.net>
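Several commits above concern caching the Legendre roots and returning them as tuples rather than ndarrays. A minimal sketch of that caching idea (illustrative only; torchquad's actual helper differs):

```python
from functools import lru_cache
import numpy as np

@lru_cache(maxsize=None)
def cached_points_weights(n):
    """Compute (and cache) the n-point Gauss-Legendre nodes and weights
    on [-1, 1].

    Returned as tuples so the cached values are immutable; the roots are
    only computed once per distinct n.
    """
    xi, wi = np.polynomial.legendre.leggauss(n)
    return tuple(xi), tuple(wi)

xi, wi = cached_points_weights(5)
```

The weights of any Gauss-Legendre rule on [-1, 1] sum to the interval length 2, which makes a convenient sanity check.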
@github-actions

github-actions bot commented May 10, 2023

ilan-gold and others added 5 commits May 10, 2023 15:41
* (feat): start small with special grid for thesis

* (fix): correct `_grid_func` for `GridIntegrator`

* (fix): use `static_argnums`

* (feat): add boolean check

* (fix): small dev changes.

* (chore): add docs

* (feat): add basic test

* (style): change integration domain check location

* Update torchquad/integration/utils.py

Co-authored-by: Pablo Gómez <contact@pablo-gomez.net>

* (fix): `linspace` doesn't take arrays for all backends

* (fix): grid creation from flat array

* (fix): no all for `tf`?

* (fix): tf has no flatten

* (chore): black

* (fix): tf has no flatten (!?!?!?!)

* (fix): try `reshape` with -1

* (chore): formatting

---------

Co-authored-by: Erica <elastufka@gmail.com>
Co-authored-by: Pablo Gómez <contact@pablo-gomez.net>
main -> develop for Release 0.4.0
(chore): add and/or clean up `Gaussian`/`GaussLegendre` docs
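Several commits above work around jit recompilation by marking arguments static (in JAX, `jax.jit(f, static_argnums=...)` caches one compiled version per distinct static value). A backend-free sketch of the same per-value caching idea (illustrative names, not torchquad's implementation):

```python
from functools import lru_cache

def compile_integrator(N):
    """Stand-in for an expensive trace/compile step that depends on N.

    Anything derived from N (here, the sample weights) is fixed at
    "compile" time; only the integrand and bounds vary per call.
    """
    weights = [1.0 / N] * N  # midpoint-rule weights, fixed once N is known
    def integrate(fn, a, b):
        return sum(w * fn(a + (b - a) * (i + 0.5) / N)
                   for i, w in enumerate(weights)) * (b - a)
    return integrate

@lru_cache(maxsize=None)
def get_integrator(N):
    # One "compiled" integrator is cached per distinct N, mirroring how
    # static_argnums causes jit to recompile only when N changes.
    return compile_integrator(N)
```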
@gomezzz gomezzz marked this pull request as ready for review May 15, 2023 09:04
@gomezzz gomezzz requested a review from ilan-gold May 15, 2023 09:04
@gomezzz
Collaborator Author

gomezzz commented May 15, 2023

@ilan-gold no need to look at the diff in detail, these are mostly your changes anyway. But feedback on the proposed changelog above would be appreciated ✌️ It's quite possible that I missed something :)

@ilan-gold
Collaborator

Support for custom integrators

Maybe just "better support" - we always supported custom integrators to some degree.

@gomezzz
Collaborator Author

gomezzz commented May 15, 2023

Support for custom integrators

Maybe just "better support" - we always supported custom integrators to some degree.

Updated :)

@gomezzz gomezzz merged commit bdbc84a into Release May 17, 2023
@gomezzz gomezzz mentioned this pull request Jun 14, 2023
3 tasks