Division by zero in hansenlaw implementation? #342
Thanks for the quick response. Yes, it is clear that the recursion has to stop at some point. But are we computing one point too much (i.e., should the `n` array run only over [cols-1:2])? If the boundary values are fixed afterwards, my rant can probably be labelled as nitpicking. The last line states that in the case of Abel inversion the recursion stops at n = N-2 (giving us 1 in the denominator), but aren't we hitting the same issue for the forward transform? brgds |
It has been a while since I coded this algorithm. The iteration does stop early, so the first column (inverse) and the last column (forward) are missing. Hence the column-duplication code:

```python
# missing column at each side of image
aim[:, 0] = aim[:, 1]
aim[:, -1] = aim[:, -2]
```
|
If you look at the forward Abel transform in Seismic Unix, there is an interesting conclusion about the transformed value at the origin. The Abel transform is defined by

$$g(y) = 2 \int_y^\infty \frac{f(r)\, r}{\sqrt{r^2 - y^2}}\, dr .$$

When y = 0 this becomes

$$g(0) = 2 \int_0^\infty f(r)\, dr ,$$

and when we implement this with a trapezoid summation (for hold-order = 1) the transform at x = 0 becomes

$$g(0) \approx \Delta r \left( 2 \sum_{n=0}^{N-1} f_n - f_0 \right).$$

You can also see this from the implementation above.

But these are just observations... The main thing, I think, is the definition of g(0); that could potentially be useful?! -Morten |
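For reference, a minimal numerical sketch of the trapezoid estimate of g(0) described above, using a sampled Gaussian (the grid size and `sigma` here are arbitrary illustration values, not taken from any particular code):

```python
import numpy as np

# Sample f(r) = exp(-r^2 / (2 sigma^2)) on a uniform grid of spacing dr
sigma = 20.0
dr = 1.0
r = np.arange(201) * dr
f = np.exp(-r**2 / (2 * sigma**2))

# Forward Abel transform at y = 0:  g(0) = 2 * integral_0^inf f(r) dr.
# Trapezoid rule gives  integral ≈ dr * (f[0]/2 + f[1] + f[2] + ...),  hence
g0 = dr * (2 * f.sum() - f[0])

# Analytic value for this Gaussian: 2 * sigma * sqrt(pi/2)
g0_exact = 2 * sigma * np.sqrt(np.pi / 2)
print(g0, g0_exact)  # the two agree closely
```

For a smooth, rapidly decaying function like this Gaussian, the trapezoid estimate matches the analytic value to many digits.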
Thanks, I will take a look at the code. The bottom line is that for real images the image noise accumulates at the last pixel, so the extra effort to determine g(0) exactly may not be worthwhile. |
That is true, but the noise level will probably vary between different use cases, i.e. sometimes I could imagine this being applied to synthetic "images". However, in my current use case I also don't have the need to get g(0) correct; it was more that I tried to understand the code and stumbled over the alternative definition. |
…arameters for hₖ and λₖ, from abel.c of Hale and Deng. See issue PyAbel#342. Update unit test.
Hale and Deng, or someone else, may have refitted the ninth-order model parameters. The π factor scales the Abel transform and may be placed elsewhere. I have added the `alt_params` option; comparison:

```python
import abel
import matplotlib.pyplot as plt

n = 201  # more points, better fit
r_max = 51
ref = abel.tools.analytical.GaussianAnalytical(n, r_max,
                                               symmetric=False, sigma=20)

recon = abel.hansenlaw.hansenlaw_transform(ref.func, ref.dr,
                                           direction='forward')
recon_alt = abel.hansenlaw.hansenlaw_transform(ref.func, ref.dr,
                                               direction='forward',
                                               alt_params=True)

print(f'first value difference = {recon_alt[0] - recon[0]:g}')

gdef = 2*ref.func.sum() - ref.func[0]
print(f'g[0] = {gdef:g}, recon_alt[0] = {recon_alt[0]:g}')

# plots ---------------------
fig, (ax0, ax1, ax2) = plt.subplots(1, 3)
ax0.set_title(r'default $h_k \lambda_k$')
ax0.plot(ref.abel)
ax0.plot(recon)
ax1.set_title(r'Hale & Deng $h_k \lambda_k$')
ax1.plot(ref.abel)
ax1.plot(recon_alt)
ax2.set_title(r'difference')
ax2.plot(recon_alt - recon)
plt.tight_layout(w_pad=0.2)
plt.show()
```

Not sure whether this parameter choice is worth merging into PyAbel.
Easy to code.

Edit: Just an additional comment: the "error" is less for larger `n`.

Edit2: relative error! |
I guess they've fixed h₀ = 1 and then fitted the rest... No idea why, because if this is not to satisfy some reasonable constraint, fixing h₀ reduces the number of free parameters and thus cannot improve the fit quality. Is it possible to contact any of the authors of that code and ask where all these numbers came from? (They cite a different article by Hansen alone, but it gives exactly the same approximation as the Hansen & Law article that we cite.)

@stggh, it's hard to tell visually, but are these “alternative” parameters reproducing the analytical result any better? I mean, could you please plot the differences between each result and the exact solution?

No idea about the centerline, sorry... :–) |
Nothing of significant merit for the alternative parameters. Not worth a PR.
We also cite and use the same article:

```python
# Adapted from (see also PR #211):
#
# [1] E. W. Hansen "Fast Hankel Transform"
#     IEEE Trans. Acoust. Speech, Signal Proc. 33(3), 666-671 (1985)
#     doi: 10.1109/TASSP.1985.1164579
#
# and:
#
# [2] E. W. Hansen and P.-L. Law
#     "Recursive methods for computing the Abel transform and its inverse"
#     J. Opt. Soc. Am. A 2, 510-520 (1985)
#     doi: 10.1364/JOSAA.2.000510
```

The parameters of Ref. [1], Table I, are as coded. |
Oops, didn't pay attention (I was just looking at the articles that I had already downloaded)... :–)
Yes, it's interesting that their error at small radii is actually ~20% larger. However, the average error across the whole range is probably slightly smaller than with the original parameters. I don't know whether this is universal or just an accident for this Gaussian. In any case, at least some justification would be needed (H&L do describe how they've obtained their approximation, although admitting that it was not without problems). |
Despite the error being larger at small radii, you see that the difference between H&D and the analytical solution is smoother than for H&L at r == 0 (there is a small kink in the difference curve for H&L). When you compare gdef = 35.4384 with recon_alt[0] = 35.6889, note that recon_alt[0] is really just a copy of recon_alt[1]. I also agree that I currently don't see a selling point for the alternative parameters, but it would have been nice to have had the H&L parameters with higher precision... :/

I found the Master's thesis of Deng at library.mines.edu, which has a self-reference to Deng, H. L., 1991, Fast Hankel transform and its applications: CWP-107, Project. That could contain some more info. I have had a look at the CWP web site to see if the report was available online, but it's a bit sparse prior to 1992, and only two papers from the CWP-107 report are available here. |
I suspect that in most cases continuity is more desirable than accuracy at a single point. From this point of view, both approaches look wrong: switching to numerical integration for r = 0 cannot provide continuity, and simply interpolating r = 1 to r = 0 completely ignores the data at r = 0. Unfortunately, I don't understand the method enough to suggest any reasonable solution...
As I understand it, Hansen and Law used successive approximations, so essentially all errors due to rounding of coefficients at each approximation order are transferred to the next order and accounted for there. Thus only the “precision” of the last-order coefficients really matters, and those do have enough significant figures. The only concern is that the zero-order amplitude must be 1/π = 0.31830988618… instead of “0.318”, but I'm not sure that this makes any noticeable difference besides the small “hook” visible near the origin. (This, by the way, rebuts my previous comment about fixing h₀ = 1 by Hale and Deng — there is a good reason to do so, because it's the limit of the approximated function at ∞.) |
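To spell out that last remark as a sketch (assuming, as in Hansen & Law, that the kernel is approximated by a sum of exponentials with λ₀ = 0 and all other rates negative):

```latex
h(\xi) \approx \sum_{k=0}^{K} h_k \, e^{\lambda_k \xi},
\qquad \lambda_0 = 0, \quad \lambda_k < 0 \ \ (k \ge 1)
\;\;\Longrightarrow\;\;
\lim_{\xi \to \infty} h(\xi) = h_0 .
```

So under this form the zero-order amplitude h₀ is exactly the ξ → ∞ limit of the approximated function, which gives a natural constraint to fix rather than fit.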
@DanHickstein is the best candidate (I'm currently away from there). However, relying on something that is not easily accessible to our users is a questionable idea. :–) I'm wondering whether it can be requested through interlibrary loan — this usually gets the document scanned, if it isn't already, and then there is hope that once scanned, they would also put it online... |
Good discussion! I put in an inter-library loan request to the University of Colorado Library for the Deng CWP article. They have never failed me in the past. But, I've never given them a request this esoteric. We'll see if they are up to the challenge! If we succeed in getting an electronic version of the article, then we can likely send it to the School of Mines CWP and ask that they put it on their webpage. Alternatively, I suppose we could just ask them to put this article on their page... |
I think CWP-107 refers to their yearly consortium report from 1991, where all the work done at the department is presented to the consortium sponsors. If the request via the Colorado Library does not work out, I can try to contact some CWP-affiliated people, as suggested. |
Yeah, I think that's correct.
You may as well go ahead and ask if they can make it available on their webpage. No harm in trying two methods. If all else fails, I can drive down to Golden and scan the hard-copy. |
@DanHickstein: Contacted people at CWP; they will check their basement within the next few days (a bit of delay is to be expected due to Spring Break)... let's see who gets it first. |
@DanHickstein, just a reminder that the message above says "PICKUP BY: 04-09-22"... |
Thanks for the reminder! I made it to the library this morning and scanned the article. Let me know if this PDF looks alright: https://github.com/PyAbel/abel_info/blob/master/Deng%20CWP.pdf |
LOL! :–D So they indeed just brought the original book instead of making any copies?.. The PDF looks good enough, everything is perfectly readable. Thanks!

On topic: The relevant parts are on p. 6 (“The parameters of the Abel transform filter” — the description) and p. 7 (Table 1 — the results).

By the way, the results are also rounded to ~3 decimal places and are slightly different from the C file mentioned above (h₈ = 77.6009 vs 77.6000213494180286; λ₃ = −5.793 vs −5.78928630565552371).

The description sheds some light on how this was obtained, but not enough. The approximated function has a singularity at zero (h → ∞ as ξ → 0), its domain is infinite, and the loss function that is minimized is not defined — it could be the integral of the squared residual (~RMS error), the same in logarithmic scale (as H & L did), or maybe one of these with some coordinate transform (since ξ itself is a nonlinear transform of the initial coordinate, it's not obvious that uniform weighting in ξ is meaningful). These issues (especially the singularity at 0) are even more important for the numerical fitting that was performed. And I also strongly suspect that the parameters are strongly correlated. In such situations, doing a global fit and then truncating the numbers is usually a bad idea. The sequential process used by H & L is more robust (see my comment above).

I would say, if anybody is interested in repeating this fitting process, can reproduce the results, and can also explain why the particular loss function is suitable for the considered problem, then it would be useful to include these “alternative” parameters, or at least to describe the findings here for the record. |
I have a question on the hansenlaw implementation that I hope someone will be able to answer.

The hansenlaw implementation sets up an `n` array with values in the range cols-1 down to 1. This array is used to compute phi, ratio and ra (the latter two inside I()). For phi and ratio we actually use the ratio n/(n-1).

This ratio must be ill-defined for the range given above, which includes 1?

Is this the reason for copying the values close to the border just before the transform is returned?
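A toy sketch of the indexing concern (a hypothetical standalone illustration, not PyAbel's actual code): if the iteration index `n` ran all the way down to 1, the ratio `n/(n-1)` would divide by zero at the last step, which is presumably why the recursion stops one step early and the boundary column is copied afterwards:

```python
import numpy as np

cols = 10

# If n ran over [cols-1 : 1] inclusive, the last ratio would be 1/0:
n_bad = np.arange(cols - 1, 0, -1)        # [9, 8, ..., 1]
with np.errstate(divide='ignore'):
    ratio_bad = n_bad / (n_bad - 1)       # last entry is inf
print(ratio_bad[-1])

# Stopping the iteration at n = 2 keeps every ratio finite:
n_ok = np.arange(cols - 1, 1, -1)         # [9, 8, ..., 2]
ratio_ok = n_ok / (n_ok - 1)
print(np.isfinite(ratio_ok).all())

# The boundary column skipped by the recursion is then filled by
# duplication, as in the implementation quoted above:
aim = np.random.rand(3, cols)
aim[:, 0] = aim[:, 1]
aim[:, -1] = aim[:, -2]
```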