
Added double adjoint #49

Merged: 8 commits into dev from dev-adjoint-double, Sep 22, 2020
Conversation

@patrick-kidger (Collaborator) commented Sep 6, 2020

Just creating a draft PR to let you know I'm working on double-backward through the adjoint.

In informal tests it seems to work as expected. The only thing left to do is write some formal tests. I'll put them in test_adjoint.py and switch that file over to pytest while I'm at it.

@lxuechen (Collaborator) commented Sep 7, 2020

Thanks for pulling off a draft!

Some general comments:

  • Do we need to include t and v in _unpack_y_aug? My understanding is that we shouldn't, unless 1) there's an explicit use case where we need to take the derivative w.r.t. t, or 2) there's an explicit use case where v is not from the Brownian motion but some other tensor that requires grad. I haven't seen cases for 1), though I'm not entirely sure there isn't such a case. I don't think we'll ever see cases for 2).

  • It seems to me that since _SdeintAdjointMethod.apply is called in backward, the context returned by torch.is_grad_enabled in the adjoint functions (e.g. f_uncorr) should be False, due to grad mode implicitly being disabled inside forward of a torch.autograd.Function.

  • The current paradigm is adjoints-through-adjoints, and doesn't support naive backprop through adjoints. We should probably document this, even though this may seem trivial to us.

> In informal tests it seems to work as expected.

What are the informal tests? Are there finite-difference tests, or did we directly use torch.autograd.gradgradcheck? Do the old tests still pass? If you could also push the companion tests, I think I'd be able to help more by running and tweaking the code.

Hopefully asking all of these questions at once doesn't seem too annoying; I just want to gain a better understanding of what's happening here. Also, we don't have to do everything or address all of the questions above at once, but it's good to know what works and what doesn't so that we can place meaningful TODOs.

@patrick-kidger (Collaborator, Author) commented:

  • Including t and v: AFAIK both t and v should never require gradients in the current set-up.

But that's a detail that's beyond AdjointSDE's scope. AdjointSDE requires that certain conditions are satisfied, so it should enforce them. Practically: if that detail ever changes down the line, I'd prefer to find out loudly rather than have things silently go wrong. The logic here can get quite hairy once you start getting into adjoints-of-adjoints.

  • Implicit contexts: this isn't true in the double-adjoint case, where f_uncorrected of the double adjoint sets enable_grad and then calls f_uncorrected of the first adjoint (see the sketch below).

  • Naive backprop: yeah, it should probably be added to the docstring for sdeint_adjoint. I'll do that.

Old single-adjoint tests still pass.

Merge branch 'dev' into dev-adjoint-double
@lxuechen added this to the v0.2.0 milestone Sep 7, 2020
@lxuechen (Collaborator) commented Sep 8, 2020

> Including t and v: AFAIK both t and v should never require gradients in the current set-up.

Per current usage, I can't really come up with a case where t and v are not leaf variables, so I would prefer not to have _get_state check these, just to reduce redundancy. The idea is that this PR adds the requires-grad checks for ts upfront, and that v should always be generated by the Brownian motion. I'd leave some comments documenting the situation here if we're really feeling unsure.

> Implicit contexts: this isn't true in the double-adjoint case, where f_uncorrected of the double adjoint sets enable_grad and then calls f_uncorrected of the first adjoint.

You're right, thanks for explaining.

> Naive backprop: yeah, it should probably be added to the docstring for sdeint_adjoint. I'll do that.

Thanks for the documentation fix!

Additionally, could we do a numerical test? Ideally, just replacing the torch.autograd.gradcheck here with torch.autograd.gradgradcheck (and slightly modifying the surrounding code) should tell us something.

As a side note, I'll be quite busy Tuesday and Wednesday this week, but if you're willing to wait a little, I can do the tests on Thursday and Friday. In any case, I think testing is something that must be done. Obviously you're a pretty good coder and software engineer, but I wouldn't feel the code is complete or fully trustworthy before we get those tests.


torchsde/_core/base_sde.py

```diff
########################################
#               gdg_prod               #
########################################

# Computes: sum_{j, l} g_{j, l} (d g_{j, l} / d x_i) v_l.
def gdg_prod_default(self, t, y, v):
-    requires_grad = torch.is_grad_enabled()
+    requires_grad = torch.is_grad_enabled() and (t.requires_grad or y.requires_grad or v.requires_grad)
```
@lxuechen (Collaborator) commented:
I'm a little worried about this line. Consider the scenario where we set y0 to not require gradients. By current standards, neither t nor v should require gradients, so in the backprop-through-solver setting requires_grad evaluates to False.

The problematic scenario is then solving an SDE with parameters that require gradients. Since requires_grad is False, we don't create the extra graph for backpropping through vg_dg_vjp towards the parameters. However, in the ideal situation we should still backprop through this node so that the model parameters receive their gradients.

Overall, I think this possible failure case indicates that we should have a numerical test for backprop-through-solver comparing against finite differences.

@patrick-kidger (Collaborator, Author) commented Sep 8, 2020

I think it's worth noting that we currently don't (a) backprop through the solver with the adjoint, nor do we (b) differentiate wrt parameters without also differentiating wrt y, so we at least shouldn't be running into any issues with the current codebase.

That said, this is a subtle point that we'd probably never catch in the future if this scenario does come up, so I agree that I'd prefer to do something about it now.

I'm not completely sure about a fix.
The current requires_grad is meant to correspond to "if I hadn't forced y to have a gradient, would some_func(t, y, v) require gradient?"
I'm -1 on any approach that just fixes it to True, as otherwise we'll be leaking graphs from the function.
I think my proposed fix would be to:

  • If y requires gradient, set requires_grad to True. (As before.)
  • If y doesn't require gradient, then we're going to be forcing it to require gradient. Then we call some_func(t, y, v) as before. Prior to calling vjp, we manually traverse its autograd history (via .grad_fn and .next_functions) and set requires_grad = not (y is the only leaf). (Sketched below.)

Not a huge fan of that, but I don't have any better ideas. Thoughts?

@lxuechen (Collaborator) commented Sep 9, 2020

> I think it's worth noting that we currently don't (a) backprop through the solver with the adjoint, nor do we (b) differentiate wrt parameters without also differentiating wrt y, so we at least shouldn't be running into any issues with the current codebase.

I don't think the line I'm specifically referring to has much to do with adjoints. gdg_prod is part of ForwardSDE, so in the backprop-through-solver case it would get called when we choose, say, the Milstein method.

More specifically, I think this will cause a problem in the first step of sdeint (w/o adjoints), where y0 doesn't require gradient. The later steps shouldn't be affected much, since the typical y beyond y0 requires gradients, being a sum of tensors that require gradient and tensors that don't. (See the snippet below.)

> I'm not completely sure about a fix.

I'll think about this in more detail tomorrow, when I may actually have a chunk of spare time.

@patrick-kidger (Collaborator, Author) commented:

Oh sorry, you're commenting on base_sde rather than adjoint_sde. You're right, this would break Milstein.

I think the analysis for the forward case is the same as for the backward case that I actually discussed, though, so I think I'd end up proposing the same fix. I'll see what you say.

Unrelatedly, I've updated test_adjoint.py to pytest. Scalar noise + Ito SDE seems to be failing the tests, btw. (It wasn't tested previously.) I've not dug into why that is; I know this isn't a supported case on master, but I thought it was supported now?

@lxuechen (Collaborator) commented:

I think the issue I mentioned is mostly about when sde.parameters() is non-empty. So I would perhaps add a check for whether ForwardSDE has any parameters that require gradients.

I'll fix the Scalar noise + Ito case.

@patrick-kidger (Collaborator, Author) commented:

So that relies on the implicit assumption that everything going into the function is a parameter. That needn't be the case; cf. our discussion a while back on contextualisation.

@patrick-kidger (Collaborator, Author) commented:

I agree that every current case involves t and v already being leaf variables (moreover, non-gradient-requiring ones).
I'm quite strongly in favour of keeping the checks though, for what I think is the same reason you brought up the discussion on requires_grad: we might not be relying on it now, but it's an implicit assumption that will silently do the wrong thing if we ever change that assumption in the future.

Regarding tests: I feel like you think I'm a bit cavalier about tests! Don't worry, I appreciate their importance for any software project; I just tend to put them in a bit later than you do.
For this PR specifically, it sounds like you're offering to write the tests? If you get time and are happy to do it, that would be great.

patrick-kidger and others added commits, September 8, 2020 19:51
* Add gradgrad check for adjoint.

* Relax tolerance.

* Refactor numerical tests.

* Remove unused import.

* Fix bug.

* Fixes from comments.

* Rename for consistency.

* Refactor comment.

* Minor refactor.
@lxuechen (Collaborator) commented:

> I agree that every current case involves t and v already being leaf variables (moreover, non-gradient-requiring ones).
> I'm quite strongly in favour of keeping the checks though, for what I think is the same reason you brought up the discussion on requires_grad: we might not be relying on it now, but it's an implicit assumption that will silently do the wrong thing if we ever change that assumption in the future.
>
> Regarding tests: I feel like you think I'm a bit cavalier about tests! Don't worry, I appreciate their importance for any software project; I just tend to put them in a bit later than you do.
> For this PR specifically, it sounds like you're offering to write the tests? If you get time and are happy to do it, that would be great.

I appreciate the detailed thoughts, though I still believe there's a concrete difference between the issue I mentioned and the argument about t and v. I think we currently do have a use case for backprop through the solver, and the issue I mentioned could affect that behavior. On the other hand, I couldn't really come up with a case where gradients wrt t and v need to be taken.

@patrick-kidger (Collaborator, Author) commented:

Hmm, I don't quite follow. Gradients wrt t and v are a separate issue to this breaking Milstein. (If that's what you're saying then I agree!)
I agree that we never need gradients wrt t or v; I'm just quite assert-happy as a way of making sure things aren't silently misbehaving.
I agree that this PR introduces an issue wrt Milstein (whilst fixing the issue wrt leaking graphs).

Reflecting on the main (gradient) issue, I'm actually inclined to just set requires_grad = torch.is_grad_enabled() and leave it at that. This does leak graphs, but I don't think the overhead should be too large. I think the alternative fix of manually checking the graph is probably a bit too magic.

@lxuechen (Collaborator) commented:

> Hmm, I don't quite follow. Gradients wrt t and v are a separate issue to this breaking Milstein. (If that's what you're saying then I agree!)

Yes, I'm trying to argue that these are separate situations.

> I agree that we never need gradients wrt t or v; I'm just quite assert-happy as a way of making sure things aren't silently misbehaving.

I still don't think it's necessary to check t or v, as I'm unable to come up with any likely scenario where these variables would require grad. Given that this check also adds complexity to the code and a tiny overhead, I'm not sure of the advantage.

> I agree that this PR introduces an issue wrt Milstein (whilst fixing the issue wrt leaking graphs).
>
> Reflecting on the main (gradient) issue, I'm actually inclined to just set requires_grad = torch.is_grad_enabled() and leave it at that. This does leak graphs, but I don't think the overhead should be too large. I think the alternative fix of manually checking the graph is probably a bit too magic.

I agree with this fix, i.e. just leaving requires_grad = torch.is_grad_enabled() (see the sketch below).

Side note: I think this PR might be ready to be converted from draft mode to review mode, modulo the t/v grad-check discussion.

@patrick-kidger marked this pull request as ready for review September 22, 2020 18:28
@patrick-kidger (Collaborator, Author) commented:

Changed requires_grad.

Regarding the asserts: as I understand it, you want to remove the assert t.is_leaf and assert v.is_leaf lines from adjoint_sde.py? I think it's important that AdjointSDE (or indeed any other library component) is self-contained; explicitly checked assumptions are better than implicit assumptions, especially in cases like this where a violated assumption will fail silently rather than loudly.

I can't think of a use case for t or v needing gradients either, but that could change not just because of some unforeseen feature, but because of a bug. (And the cost/complexity of two asserts is very low.)

@lxuechen (Collaborator) commented:

Ok, let's leave the checks as they are for now. Happy to have this merged and thanks again for the great work in getting the adjoint-adjoints working!

@patrick-kidger merged commit d1db41a into dev Sep 22, 2020
@patrick-kidger deleted the dev-adjoint-double branch September 22, 2020 20:41
@patrick-kidger mentioned this pull request Sep 22, 2020
lxuechen added a commit that referenced this pull request Oct 22, 2020
* Added BrownianInterval

* Unified base solvers

* Updated solvers to use interval interface

* Unified solvers.

* Required Python 3.6. Bumped version.

* Updated benchmarks. Fixed several bugs.

* Tweaked BrownianInterval to accept queries outside its range

* tweaked benchmark

* Added midpoint. Tweaked and fixed things.

* Tidied up adjoint.

* Bye bye Python 2; fixes #14.

* tweaks from feedback

* Fix typing.

* changed version

* Rename.

* refactored settings up a level. Fixed bug in BTree.

* fixed bug with non-srk methods

* Fixed? srk noise

* Fixed SRK properly, hopefully

* fixed mistake in adjoint

* Fix broken tests and refresh documentation.

* Output type annotation.

* Rename to reverse bm.

* Fix typo.

* minor refactors in response to feedback

* Tidied solvers a little further.

* Fixed strong order for midpoint

* removed unused code

* Dev kidger2 (#19)

* Many fixes.

Updated diagnostics.
Removed trapezoidal_approx
Fixed error messages for wrong methods etc.
Can now call BrownianInterval with a single value.
Fixed bug in BrownianInterval that it was always returning 0!
There's now a 2-part way of getting levy area: it has to be set as
available during __init__, and then specified that it's wanted during
__call__. This allows one to create general Brownian motions that can be
used in multiple solvers, and have each solver call just the bits it
wants.
Bugfix spacetime -> space-time
Improved checking in check_contract
Various minor tidy-ups
Can use Brownian* with any Levy area sufficient for the solver, rather
than just the minimum the solver needs.
Fixed using bm=None in sdeint and sdeint_adjoint, so that it creates an
appropriate BrownianInterval. This also makes method='srk' easy.

* Fixed ReverseBrownian

* bugfix for midpoint

* Tidied base SDE classes slightly.

* spacetime->space-time; small tidy up; fix latent_sde.py example

* Add efficient gdg_jvp term for log-ODE schemes. (#20)

* Add efficient jvp for general noise and refactor surrounding.

* Add test for gdg_jvp.

* Simplify requires grad logic.

* Add more rigorous numerical tests.

* Fix all issues

* Heun's method (#24)

* Implemented Heun method

* Refactor after review

* Added docstring

* Updated heun docstring

* BrownianInterval tests + bugfixes (#28)

* In progress commit on branch dev-kidger3.

* Added tests+bugfixes for BrownianInterval

* fixed typo in docstring

* Corrections from review

* Refactor tests for BrownianInterval.

* Refactor tests for Brownian path and Brownian tree.

* use default CPU

* Remove loop.

Co-authored-by: Xuechen Li <12689993+lxuechen@users.noreply.github.com>

* bumped numpy version (#32)

* Milstein (Strat), Milstein grad-free (Ito + Strat) (#31)

* Added milstein_grad_free, milstein_strat and milstein_strat_grad_free

* Refactor after first review

* Changes after second review

* Formatted imports

* Changed used Ex. Reversed g_prod

* Add support for Stratonovich adjoint (#21)

* Add efficient jvp for general noise and refactor surrounding.

* Add test for gdg_jvp.

* Simplify requires grad logic.

* Add more rigorous numerical tests.

* Minor refactor.

* Simplify adjoints.

* Add general noise version.

* Refactor adjoint code.

* Fix new interface.

* Add adjoint method checking.

* Fix bug in not indexing the dict.

* Fix broken tests for sdeint.

* Fix bug in selection.

* Fix flatten bug in adjoint.

* Fix zero filling bug in jvp.

* Fix bug.

* Refactor.

* Simplify tuple logic in modified Brownian.

* Remove np.searchsorted in BrownianPath.

* Make init more consistent.

* Replace np.searchsorted with bisect for speed; also fixes #29.

* Prepare levy area support for BrownianPath.

* Use torch.Generator to move to torch 1.6.0.

* Prepare space-time Levy area support for BrownianPath.

* Enable all levy area approximations for BrownianPath.

* Fix for test_sdeint.

* Fix all broken tests; all tests pass.

* Add numerical test for gradient using midpoint method for Strat.

* Support float/int time list.

* Fixes from comments.

* Additional fixes from comments.

* Fix documentation.

* Remove to for BrownianPath.

* Fix insert.

* Use none for default levy area.

* Refactor check tensor info to reduce boilerplate.

* Add a todo regarding get noise.

* Remove type checks in adjoint.

* Fixes from comments.

* Added BrownianReturn (#34)

* Added BrownianReturn

* Update utils.py

* Binterval improvements (#35)

* Tweaked to not hang on adaptive solvers.

* Updated adaptive fix

* Several fixes for tests and adjoint.

Removed some broken tests.
Added error-raising `g` to the adjoint SDE.
Fixed Milstein for adjoint.
Fixed running adjoint at all.

* fixed bug in SRK

* tidied up BInterval

* variable name tweak

* Improved heuristic for BrownianInterval's dependency tree. (#40)

* [On dev branch] Tuple rewrite (#37)

* Rename plot folders from diagnostics.

* Complete tuple rewrite.

* Remove inaccurate comments.

* Minor fixes.

* Fixes.

* Remove comment.

* Fix docstring.

* Fix noise type for problem.

* Binterval recursion fix (#42)

* Improved heuristic for BrownianInterval's dependency tree.

* Inlined the recursive code to reduce number of stack frames

* Add version number.

Co-authored-by: Xuechen <12689993+lxuechen@users.noreply.github.com>

* Refactor.

* Euler-Heun method (#39)

* Implemented euler-heun

* After refactor

* Applied refactor. Added more diagnostics

* Refactor after review

* Corrected order

* Formatting

* Formatting

* BInterval - U fix (#44)

* Improved heuristic for BrownianInterval's dependency tree.

* fixed H aggregation

* Added consistency test

* test fixes

* put seed back

* from comments

* Add log-ODE scheme and simplify typing. (#43)

* Add log-ODE scheme and simplify typing.

* Register log-ODE method.

* Refactor diagnostics and examples.

* Refactor plotting.

* Move btree profile to benchmarks.

* Refactor all ito diagnostics.

* Refactor.

* Split imports.

* Refactor the Stratonovich diagnostics.

* Fix documentation.

* Minor typing fix.

* Remove redundant imports.

* Fixes from comment.

* Simplify.

* Simplify.

* Fix typo caused bug.

* Fix directory issue.

* Fix order issue.

* Change back weak order.

* Fix test problem.

* Add weak order inspection.

* Bugfixes for log-ODE (#45)

* fixed rate diagnostics

* tweak

* adjusted test_strat

* fixed logODE default.

* Fix typo.

Co-authored-by: Xuechen Li <12689993+lxuechen@users.noreply.github.com>

* Default to loop-based. Fixes #46.

* Minor tweak of settings.

* Fix directory structure.

* Speed up experiments.

* Cycle through the possible line styles.

Co-authored-by: Patrick Kidger <33688385+patrick-kidger@users.noreply.github.com>

* Simplify and fix documentation.

* Minor fixes.

- Simplify strong order assignment for euler.
- Fix bug with "space_time".

* Simplify strong order assignment for euler.

* Fix bug with space-time naming.

* Make tensors for grad for adjoint specifiable. (#52)

* Copy of #55 | Created pyproject.toml (#56)

* Skip tests if the optional C++ implementations don't compile; fixes #51.

* Create pyproject.toml

* Version add 1.6.0 and up

Co-authored-by: Xuechen <12689993+lxuechen@users.noreply.github.com>

* Latent experiment (#48)

* Latent experiment

* Refactor after review

* Fixed y0

* Added stable div

* Minor refactor

* Simplify latent sde even further.

* Added double adjoint (#49)

* Added double adjoint

* tweaks

* Updated adjoint tests

* Dev adjoint double test (#57)

* Add gradgrad check for adjoint.

* Relax tolerance.

* Refactor numerical tests.

* Remove unused import.

* Fix bug.

* Fixes from comments.

* Rename for consistency.

* Refactor comment.

* Minor refactor.

* Add adjoint support for general/scalar noise in the Ito case. (#58)

* adjusted requires_grad

Co-authored-by: Xuechen Li <12689993+lxuechen@users.noreply.github.com>

* Dev minor (#63)

* Add requirements and update latent sde.

* Fix requirements.

* Fix.

* Update documentation.

* Use split to speed things up slightly.

* Remove jit standalone.

* Enable no value arguments.

* Fix bug in args.

* Dev adjoint strat (#67)

* Remove logqp test.

* Tidy examples.

* Refactor to class attribute.

* Fix gradcheck.

* Reenable adjoints.

* Typo.

* Simplify tests

* Deprecate this test.

* Add back f ito and strat.

* Simplify.

* Skip more.

* Simplify.

* Disable adaptive.

* Refactor due to change of problems.

* Reduce problem size to prevent general noise test case run for ever.

* Continuous Integration.  (#68)

* Skip tests if the optional C++ implementations don't compile; fixes #51.

* Continuous integration.

* Fix os.

* Install package before test.

* Add torch to dependency list.

* Reduce trials.

* Restrict max number of parallel runs.

* Add scipy.

* Fixes from comment.

* Reduce frequency.

* Fixes.

* Make sure run installed package.

* Add check version on pr towards master.

* Separate with blank lines.

* Loosen tolerance.

* Add badge.

* Brownian unification (#61)

* Added tol. Reduced number of generator creations. Spawn keys now of
finite length. Tidied code.

* Added BrownianPath and BrownianTree as BrownianInterval wrappers

* added trampolining

* Made Path and Tree wrappers on Interval.

* Updated tests. Fixed BrownianTree determinism. Allowed cache_size=0

* done benchmarks. Fixed adjoint bug. Removed C++ from setup.py

* fixes for benchmark

* added base brownian

* BrownianPath/Tree now with the same interface as before

* BInterval(shape->size), changed BPath and BTree to composition-over-inheritance.

* tweaks

* Fixes for CI. (#69)

* Fixes for CI.

* Tweaks to support windows.

* Patch for windows.

* Update patch for windows.

* Fix flaky tests of BInterval.

* Add fail-fast: false (#72)

* Dev methods fixes (#73)

* Fixed adaptivity checks. Improved default method selection.

* Fixes+updated sdeint tests

* adjoint method fixes

* Fixed for Py3.6

* assert->ValueError; tweaks

* Dev logqp (#75)

* Simplify.

* Add stable div utility.

* Deprecate.

* Refactor problems.

* Sync adjoint tests.

* Fix style.

* Fix import style.

* Add h to test problems.

* Add logqp.

* Logqp backwards compatibility.

* Add type annotation.

* Better documentation.

* Fixes.

* Fix notebook. (#74)

* Fix notebook.

* Remove trivial stuff.

* Fixes from comments.

* Fixes.

* More fixes.

* Outputs.

* Clean up.

* Fixes.

* fixed BInterval flakiness+slowness (#76)

* Added documentation (#71)

* Added documentation

* tweaks

* Fix significance level.

* Fix check version.

* Skip confirmation.

* Fix indentation errors.

* Update README.md

Co-authored-by: Patrick Kidger <33688385+patrick-kidger@users.noreply.github.com>
Co-authored-by: Mateusz Sokół <8431159+mtsokol@users.noreply.github.com>
Co-authored-by: Sayantan Das <36279638+ucalyptus@users.noreply.github.com>