WIP: Rework and Simplify Internals #109
Co-authored-by: Lyndon White <oxinabox@ucc.asn.au>
I haven't finished reviewing, but I like it so far.

Did we get rid of the history field? That was problematic, so I would be glad to be rid of it.

Can we do some tests on the number of allocations? Ideally, anything that doesn't allocate from a single call should not allocate from having an FDM called on it.
Yeah, I deleted it entirely. Perhaps a bit of a radical change, but I'm not sure that it had a use anywhere.

Do you mean by this that everything FDM does has to be without any allocation?

Yeah, more or less.
I updated Richardson.jl accordingly, and I added an example to the documentation showing how this fixes the problem you pointed out above with the derivative. So, I would require Richardson.jl 1.2.0 and set it accordingly.
Nice! That sounds great; thanks for adding this. I'll default to that.

I am away for another week, but will then get back to this PR to wrap it up.
@wesselb how close are we getting to this being mergeable? What deprecations do we need?
Allocation performance is already better than what it was before, using `@btime` from BenchmarkTools.

Current release:

```julia
julia> @btime $_fdm(sin, 0.2);
  9.843 μs (254 allocations: 6.81 KiB)
```

This PR:

```julia
julia> @btime $_fdm(sin, 0.2);
  4.767 μs (146 allocations: 4.23 KiB)
```
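For anyone wanting to reproduce such a measurement, a minimal sketch follows. The `_fdm` object here is a hypothetical stand-in constructed with `central_fdm` (a 5-point central method for the first derivative); the exact method benchmarked above is not shown in this thread.

```julia
using BenchmarkTools, FiniteDifferences

# Hypothetical stand-in for the benchmarked method: a 5-point central
# finite-difference method estimating the 1st derivative.
_fdm = central_fdm(5, 1)

# Interpolate with $ so that global-variable access is not benchmarked.
@btime $_fdm(sin, 0.2)
```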
I spent an hour or so digging into the remaining allocations. They are all in
```julia
promoted to a suitable floating point type.
"""
add_tiny(x::T) where T<:AbstractFloat = x + convert(T, 1e-40)
add_tiny(x::Integer) = add_tiny(float(x))
```
For this to work with arbitrary real types, including dimensionful types, you should probably use something like

```julia
add_tiny(x::T) where T<:Number = x + oneunit(T) * oftype(float(real(one(T))), 1e-40)
```

However, philosophically I'm very suspicious of this: not only are you assuming an absolute scale for `x`, but you are also using `1e-40` independent of the precision. Furthermore, in most cases this code will be equivalent to `float(x)`, since the `1e-40` will be rounded off. What are you trying to accomplish here?
Perhaps it's better to leave support for dimensionful types for a future PR. I'm not familiar with the Unitful package, so I'm not sure what other things would need to change. The typing should currently be restrictive enough to throw method errors if things are not converted to `AbstractFloat`s, so there should be no infinite loops like there were before.
The purpose of `add_tiny` is to add a number that is small in the absolute sense. An example of where this is used is in the step size estimation. The absolute value of the roundoff error on `x` is computed by `eps`. If `x = 0.0`, then `eps(x)` is very small, and things can NaN out. To prevent this from happening, we use `add_tiny(eps(x))` instead.
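To illustrate the failure mode being guarded against, here is a small sketch (using the `add_tiny` definition quoted earlier in this thread, not code from this PR): `eps(0.0)` is the smallest subnormal, so dividing by it overflows, and the resulting infinities can turn into `NaN` downstream.

```julia
# Definition as quoted above in this PR.
add_tiny(x::T) where T<:AbstractFloat = x + convert(T, 1e-40)

eps(0.0)                     # 5.0e-324, the smallest positive subnormal
1 / eps(0.0)                 # Inf: the division overflows...
1 / eps(0.0) - 1 / eps(0.0)  # NaN: ...and Inf - Inf NaNs out downstream
1 / add_tiny(eps(0.0))       # 1.0e40: nudging the denominator keeps it finite
```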
Should we make some minimal change? If the above works, we can do that, and then later investigate Unitful support more broadly.
The above change would, for complex numbers, add a tiny real number to only the real part. I'm not sure what the behaviour for complex numbers should be, as "tiny" would then (most logically) refer to the modulus, and there are many complex numbers with modulus `1e-40`. I'm happy to go with the above change if you think that this is not important. Alternatively, we can restrict the scope of the function and change its docstring so that it only covers `AbstractFloat`s and `Integer`s; the function is only used internally anyway.

EDIT: For now, I've restricted the docstring. Let me know if you want it otherwise.
> The purpose of `add_tiny` is to add a number that is small in the absolute sense.

There is no such number; there is no such thing as "small in the absolute sense". Smallness is always relative to something else. You are implicitly assuming some absolute scale here, which cannot possibly hold for arbitrary inputs.

For example, if the function I'm differentiating is `log10(x)` and I'm calculating the derivative at `x = 1e-50`, then `1e-40` is not a small number in comparison to `x`.

The whole point of floating-point arithmetic is that it is scale-invariant (until you reach underflow or overflow). I suspect that you are breaking that here. The dimensionful incorrectness is a symptom of that.
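Both failure modes are easy to check at the REPL. A sketch (not code from this PR): at scale 1 the constant vanishes in rounding, while at scale `1e-50` it swamps the input by ten orders of magnitude.

```julia
# At scale 1, adding 1e-40 is a no-op: it is rounded off entirely.
1.0 + 1e-40 == 1.0   # true

# At scale 1e-50, the "tiny" constant dominates the input completely:
x = 1e-50
x + 1e-40            # ≈ 1e-40, so any x-dependence is essentially lost
```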
```toml
Printf = "de0858da-6303-5e67-8744-51eddeeeb8d7"
Random = "9a3f8284-a2c9-5f02-9a11-845980a1fd5c"
Richardson = "708f8203-808e-40c0-ba2d-98a6953ed40d"
```
Need to bump the version at the top of this file.

Are we happy enough with the public API to call this v1? I think we are. Or we might want to do a last release with deprecations, and then remove those as a follow-up, so that v1 has no deprecations.
I'm not sure what's best. For now, I've changed the version to 0.11.0. Let me know if you want to go for a major release instead.
Basically LGTM; see the last few comments. Address those and it should be good to go.

Do we want deprecations? What is this release going to be numbered as?
This PR simplifies the internals, which have become a little convoluted over time. It also adds `extrapolate_fdm` (#107).

Important changes:

- `FiniteDifferenceMethod`, which implements an FDM on an arbitrary grid.
- Estimates no longer go through `fdm`. All estimates are executed by calling the FDM directly.

TODO: