Use expansion in terms of Bessel series and large order expansion #13
Conversation
Interesting. This approach seems a bit delicate, but as long as it works I am okay with it. The speed up is of course impressive. Do you have any idea if this is more or less robust / efficient than (non-adaptive) Gaussian quadrature? Also, I am fine with depending on…
Codecov Report: Base 64.03% // Head 77.46% // increases project coverage by +13.43%.

```diff
@@            Coverage Diff             @@
##           master      #13      +/-   ##
===========================================
+ Coverage   64.03%   77.46%   +13.43%
===========================================
  Files           4        4
  Lines         228      324      +96
===========================================
+ Hits          146      251     +105
+ Misses         82       73       -9
```
Let me add the large order asymptotic expansions here as well. The Bessel series works very well, though for orders above 1,000 it will need more terms (>300), which is fine; we just have to add the log series for it. But at that point the large order expansion should be very convergent, so we should have that in here instead. (We should really favor that expansion over the Bessel series for large orders.) Edit: The large order expansion also requires…
Ok, I've added the large order and large argument expansion. This expression is slightly different than the large argument expansion, but of course only used for large orders. Here is a benchmark:

```julia
julia> @benchmark struveh_expansion((100.0), (x)) setup=(x=rand()*10)
BenchmarkTools.Trial: 10000 samples with 976 evaluations.
 Range (min … max):  68.007 ns … 116.376 ns  ┊ GC (min … max): 0.00% … 0.00%
 Time  (median):     69.117 ns               ┊ GC (median):    0.00%
 Time  (mean ± σ):   69.335 ns ±   1.241 ns  ┊ GC (mean ± σ):  0.00% ± 0.00%

  ▃▂ ▄██▇▆▆▃▂▁ ▂▂▁                                             ▂
  ▄▁▄██▆▅▆▄██████████▇▅▃▄▃▄▅▅▃▅▄▃▄▅▆▃▅▇▅▇██████▆▅▄▄▄▁▁▃▅▃▃▄▅▅▆ █
  68 ns        Histogram: log(frequency) by time        73.6 ns <

 Memory estimate: 0 bytes, allocs estimate: 0.
```

This is not auto-vectorizing, which will cut the speed in half, but hopefully I can figure out why in the future and we can make those changes. Not really sure how often any user hits this range, and it's already much faster than anything else... but it takes care of the region where the Bessel series is slow to converge and puts a nice upper bound on where we employ it, so the number of terms we use is more robust there.
Of course, that makes perfect sense. Thanks for pointing that out.
Yeah, I don't know what's up with that. There is a release in the General registry, but TagBot is not reacting.
alright |
Yeah, CI is finicky, but I think it looks ok now, at least on the General registry? I've added a few more tests to fix the coverage issues so that all branches are hit, and updated the docs. But yeah, there's some movement on moving the gamma functions out of Bessels.jl and SpecialFunctions.jl, so I'll make the PR here when that settles down a little bit! It'll also allow me to better test those changes.
@heltonmc Are you okay to merge this? (Sorry, I was distracted for a while) |
Not a problem! I still wasn't super happy with the errors in that one region I showed above. To try to alleviate that, I derived more terms in the expansion, which helped shrink the range where a slight loss of accuracy occurred. Here is the updated relative error. So there is a small region, where the function goes to zero, where we only get about 5 digits... The issue is that the Bessel series can't converge there because the Bessel function underflows too quickly, so we can't use enough terms. This could be solved by having a scaled version of the Bessel function for large orders that doesn't underflow. I would probably need access to the internals of the…
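To see why the series stalls there, note that for nu ≫ x the magnitude of `J_nu(x)` is roughly its leading power-series term `(x/2)^nu / Γ(nu+1)`. A quick estimate of that magnitude in log space (Python purely for illustration; `log10_besselj_leading` is an ad-hoc helper, not a package function):

```python
import math

def log10_besselj_leading(nu, x):
    # log10 of the leading power-series term (x/2)^nu / Gamma(nu+1),
    # a rough proxy for the magnitude of J_nu(x) when nu >> x
    return (nu * math.log(x / 2.0) - math.lgamma(nu + 1.0)) / math.log(10.0)

# At order 1000 and x = 10 the magnitude sits thousands of orders below
# the smallest positive double (~1e-308), so an unscaled besselj simply
# returns 0.0 and those terms are lost from the series.
mag = log10_besselj_leading(1000.0, 10.0)
```

A scaled Bessel routine that carries this exponent separately would keep those terms representable, which is exactly the internals-access issue raised above.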
For the record, I'm perfectly fine with merging this functionality into a larger package where code sharing is possible. I only started it because, back then, there wasn't an effort like Bessels.jl.
Sorry, this has really slipped by while I worked on my dissertation. I used to be more supportive of having separate packages because of package load and compilation times, so each could be a lightweight dependency. But now, with v1.9 dropping soon and pkgimages, this kind of math code (with only a few types) can be generated and precompiled. Load times are now negligible and these special functions can share already compiled code, so it might make sense to have this under one repo where we can better control package load time, precompilation, and of course maintenance with invalidations etc. I would be in favor of moving this code over as well to take advantage of the Bessel subroutines needed to move this forward.

I did think about this problem some more, and I think the best way to do this is actually a Miller-type scheme with downward recurrence and a normalization condition at the end. That way we normalize against the lowest order, which will avoid overflow, instead of starting at terms that could potentially underflow. It will need some care, but I think that is the best way.
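For reference, the Miller-type scheme described above can be sketched in a few lines (Python purely for illustration; `besselj_miller` and the `extra` padding are ad-hoc names/choices, not anything from Bessels.jl):

```python
import math

def besselj_miller(n, x, extra=30):
    """Minimal sketch of Miller's algorithm: generate J_0(x)..J_n(x) by
    downward recurrence from an arbitrary seed, then normalize with the
    identity J_0(x) + 2*sum_{k>=1} J_{2k}(x) = 1.  Assumes x > 0."""
    m = n + extra                  # starting order, comfortably above n
    j_hi, j = 0.0, 1.0e-30         # seeds standing in for J_{m+1}, J_m
    out = [0.0] * (n + 1)
    even_sum = j if m % 2 == 0 else 0.0
    for k in range(m, 0, -1):
        # downward three-term recurrence: J_{k-1} = (2k/x) J_k - J_{k+1}
        j_hi, j = j, (2.0 * k / x) * j - j_hi
        order = k - 1
        if order <= n:
            out[order] = j
        if order > 0 and order % 2 == 0:
            even_sum += j          # accumulate the normalization sum
    norm = j + 2.0 * even_sum      # j now holds the unnormalized J_0
    return [v / norm for v in out]
```

Because everything is divided by the lowest-order value at the end, the seed magnitude is irrelevant and no quantity near the top of the recurrence has to be representable on its own, which is the overflow/underflow property the comment above is after.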



This PR avoids the fallback to numerical integration in the transition region nu ~ x for `struveh` and `struvek`. It works by instead using the expansion in terms of a Bessel series, http://dlmf.nist.gov/11.4.E19. This formula is kind of tricky because it involves computing the Bessel function within the loop, and it requires a large number of terms to converge (50-150), so if we naively do this our function will not be very fast at all. The other challenge is that the `besselj` provided by `SpecialFunctions.jl` actually allocates, so if we use it within the loop we will get many allocations.

One thing we can exploit is that we only need orders k and k+1 of the Bessel function, so we can use recurrence. The problem is that forward recurrence with `besselj` is unstable when nu > x, which is the dominant region where we will use this method. So to make recurrence always work, we need to use backward recurrence. Though if we work backward through the loop, we need to know where to start. I picked three regions that determine how many terms are needed for good convergence and went from there. Here are the plots of the relative errors of this method. So as long as x is not much larger than nu (the large argument expansions cover that case), this gives us good convergence.
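The forward-recurrence instability for nu > x is easy to reproduce in a standalone experiment (Python purely for illustration; `besselj_series` is an ad-hoc power-series reference, not a package function):

```python
import math

def besselj_series(n, x, terms=30):
    # reference value from the ascending power series (fine for small x):
    # J_n(x) = sum_m (-1)^m (x/2)^(n+2m) / (m! (n+m)!)
    total = 0.0
    for m in range(terms):
        total += (-1) ** m * (x / 2.0) ** (n + 2 * m) / (
            math.factorial(m) * math.factorial(n + m))
    return total

def besselj_forward(n, x):
    # forward recurrence J_{k+1} = (2k/x) J_k - J_{k-1},
    # seeded from accurate J_0 and J_1 values
    jm, j = besselj_series(0, x), besselj_series(1, x)
    for k in range(1, n):
        jm, j = j, (2.0 * k / x) * j - jm
    return j

x, n = 1.0, 30
truth = besselj_series(n, x)    # tiny but positive, roughly 3.5e-42
fwd = besselj_forward(n, x)     # garbage: rounding errors in the seeds
                                # are amplified along the growing Y_n solution
```

Here `fwd` comes out tens of orders of magnitude too large, because any rounding error excites the dominant (growing) solution of the recurrence; running the same recurrence downward keeps the wanted solution dominant, which is why the PR uses backward recurrence.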
One thing is we probably want to use `Bessels.jl` here, which will be much faster and doesn't allocate for `besselj`. Here is a benchmark…