Draft: Added computation of derivatives #5
Merged
Conversation
Single-variable derivatives seem to work, and so do mixed, second-order, central, partial derivatives; however, the forward and backward stencils do not appear to be very accurate yet (see the sketch below).
Included the tests as well
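To illustrate why the forward/backward stencils lose accuracy faster than the central one, here is a minimal, self-contained sketch (not the fitk implementation): the two-point forward difference has a truncation error of order `h`, while the central difference has an error of order `h**2`.

```python
import numpy as np

def forward_diff(f, x, h):
    # first-order accurate: truncation error ~ O(h)
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # second-order accurate: truncation error ~ O(h**2)
    return (f(x + h) - f(x - h)) / (2 * h)

f, x, h = np.sin, 1.0, 1e-3
exact = np.cos(x)
print(abs(forward_diff(f, x, h) - exact))  # ~4e-4
print(abs(central_diff(f, x, h) - exact))  # ~1e-7
```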
Now it uses the `D` class, which makes it possible to specify the kind of derivative we wish to compute (center, forward, or backward).
Codecov Report
@@            Coverage Diff             @@
##           master       #5      +/-   ##
==========================================
+ Coverage   92.70%   93.83%   +1.13%
==========================================
  Files           6        8       +2
  Lines        1233     1459     +226
==========================================
+ Hits         1143     1369     +226
  Misses         90       90
The computation can now handle arbitrary order derivatives with arbitrary accuracy, and is limited only by standard floating-point math issues. The tests have been modified accordingly to reflect this change.
Since the numerical errors start to show up at orders much smaller than 10, it's best to limit the computation to that order, and raise an error otherwise.
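For reference, here is a short, self-contained sketch (not the actual fitk code) of how stencil weights for an arbitrary derivative order and accuracy can be obtained by solving a small Taylor-expansion linear system; the rapid growth and cancellation of these weights is exactly the floating-point issue that makes very high orders impractical.

```python
import numpy as np
from math import factorial

def fd_coefficients(offsets, order):
    """Weights c such that sum(c[i] * f(x + offsets[i] * h)) / h**order ~ f^(order)(x)."""
    offsets = np.asarray(offsets, dtype=float)
    n = len(offsets)
    # Taylor-expansion conditions: sum_i c[i] * offsets[i]**j / j! == (1 if j == order else 0)
    matrix = np.array([[s**j / factorial(j) for s in offsets] for j in range(n)])
    rhs = np.zeros(n)
    rhs[order] = 1.0
    return np.linalg.solve(matrix, rhs)

# weights for the 4th derivative on a 5-point central stencil: approximately [1, -4, 6, -4, 1]
print(fd_coefficients([-2, -1, 0, 1, 2], order=4))
```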
Now it raises `NotImplementedError` instead of returning `NotImplemented`, since the former should be clearer to users who attempt to compute the Fisher matrix without first implementing the required methods.
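A minimal sketch of the difference (the method name follows this PR's description; the actual fitk base class may phrase the message differently):

```python
class Base:
    def signal(self, *args, **kwargs):
        # `return NotImplemented` would hand back a sentinel value that only
        # fails later, deep inside the Fisher-matrix computation; raising
        # fails immediately with a message the user can act on.
        raise NotImplementedError(
            "`signal` must be implemented before computing the Fisher matrix"
        )
```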
I forgot to take the inverse of the covariance in the 1D case.
This is where all of the "official" interfaces to the derivatives should be placed.
According to PEP 585, the `typing` module's `Tuple`, `Dict`, and its aliases of the `collections.abc` classes are deprecated and will be removed in future versions. On the other hand, thanks to `from __future__ import annotations`, it's possible to use the built-in `dict`, `tuple`, etc. as type hints already in Python 3.7, and this commit implements this change.
Forgot to change a handful of instances of `Tuple` and `abc.Mapping` in the last commit
Forgot to put `from __future__ import annotations` in the tests
The `Collection`, `Mapping`, etc. from the `typing` module have also been deprecated, and we now use their `collections.abc` versions instead.
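A small illustrative snippet of the style these commits converge on: built-in generics made legal as annotations on Python 3.7+ by the `__future__` import, and abstract container types taken from `collections.abc` instead of `typing` (the function itself is made up):

```python
from __future__ import annotations  # built-in generics usable as hints on Python 3.7+

from collections.abc import Collection, Mapping  # instead of typing.Collection, typing.Mapping

def select(values: dict[str, tuple[float, float]], names: Collection[str]) -> Mapping[str, float]:
    # `dict` and `tuple` replace typing.Dict and typing.Tuple in the annotations
    return {name: sum(values[name]) for name in names if name in values}
```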
The parameter-dependent covariance was not handled properly in all cases; it should now work correctly as long as the return values of `signal` and `covariance` can be matrix-multiplied.
Also enabled passing of arbitrary `kwargs` when calling `fisher_tensor`, to be consistent with the signature of `__call__`.
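To make the "can be matrix-multiplied" requirement concrete, here is a purely illustrative sketch (only the `signal`/`covariance` method names follow the PR; everything else is made up): a signal of shape `(n,)` and a covariance of shape `(n, n)` are exactly what the usual Fisher-matrix contraction needs.

```python
import numpy as np

class ToyModel:
    """Illustrative only: a 3-bin signal and a parameter-dependent covariance."""

    def signal(self, amplitude=1.0):
        return amplitude * np.array([1.0, 2.0, 3.0])  # shape (n,)

    def covariance(self, amplitude=1.0):
        return (1.0 + amplitude**2) * np.eye(3)       # shape (n, n)

model = ToyModel()
s, cov = model.signal(1.2), model.covariance(1.2)
# the shapes are what make this contraction well defined
print(s @ np.linalg.inv(cov) @ s)
```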
It makes sense for now since higher order corrections have not been implemented yet.
Also added tests for it
The implementation uses the Cython interface of COFFE. Note that it has been purposely excluded from coverage, since the code may be difficult to install on the remote machine that runs the tests.
This PR adds some basic functionality to compute derivatives in `fitk` using finite differences (see for instance this Wiki article). In principle, the interface is completely generic, and any code can be made to compute Fisher matrices (and higher-order corrections) by just inheriting from `FisherDerivative` and implementing the `signal` and `covariance` methods with appropriate return types, without having to touch any other aspect of either `fitk` or the other code.

TODO:
- there is a class `D` (to mimic Mathematica's `D`) for specifying details about the parameter (its name/label, the point at which we wish to compute the derivative, the order of the derivative, the absolute step size); this is nice because defaults can be set in the class, but has the disadvantage that it's possible to forget to call it. The actual computation is defined in the dunder `__call__`; maybe calling it `derivative` would be more intuitive (and it'd be easier to display and explain in the docs)
- allow specifying the `kind` when calling `fisher_tensor` (for a concrete example, it's impossible to use central differences in cosmological codes for neutrino masses if we set the fiducial value to zero); currently it uses the default, which is `center`
- for `fisher_tensor`, figure out if that is the most appropriate name for the method (resolved for now: the current name is `fisher_matrix`)
- should `signal` return the tuple `(coords, values)`, or a separate (data) class with properties `coords` and `values`? Note that the coordinates are not needed anywhere in the code; they would just be there as a convenience for the user, in case they want to construct the derivatives themselves, or if they want to easily plot the derivatives without re-computing the coordinates.
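To make the intended workflow concrete, here is a self-contained toy sketch of the pattern described above. It deliberately does not import fitk: apart from the names `signal`, `covariance`, `fisher_matrix`, and `D`, everything here (class names, signatures, defaults) is made up, and the real interface may differ.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class D:
    """Toy stand-in for the parameter descriptor described above."""
    name: str
    fiducial: float       # point at which the derivative is taken
    abs_step: float = 1e-4

class ToyFisherDerivative:
    """Toy base class: subclasses only provide `signal` and `covariance`."""

    def signal(self, **parameters):
        raise NotImplementedError

    def covariance(self, **parameters):
        raise NotImplementedError

    def _dsignal(self, d, fiducials):
        # central difference of the signal with respect to a single parameter
        up = dict(fiducials, **{d.name: d.fiducial + d.abs_step})
        down = dict(fiducials, **{d.name: d.fiducial - d.abs_step})
        return (self.signal(**up) - self.signal(**down)) / (2 * d.abs_step)

    def fisher_matrix(self, *parameters):
        fiducials = {p.name: p.fiducial for p in parameters}
        inv_cov = np.linalg.inv(self.covariance(**fiducials))
        grads = [self._dsignal(p, fiducials) for p in parameters]
        return np.array([[gi @ inv_cov @ gj for gj in grads] for gi in grads])

class LineModel(ToyFisherDerivative):
    """A straight line measured in 5 bins with a fixed diagonal covariance."""
    x = np.linspace(0.0, 1.0, 5)

    def signal(self, a=1.0, b=0.0):
        return a * self.x + b

    def covariance(self, **parameters):
        return 0.01 * np.eye(self.x.size)

print(LineModel().fisher_matrix(D("a", 1.0), D("b", 0.0)))
```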