Zernike enhancements #1327 (Merged)
6 commits:
1553081  Add xycoef method to DoubleZernike (jmeyers314)
9dff874  Enable higher order annular Zernikes (jmeyers314)
65f072f  Remove py3.8 from CI (jmeyers314)
b3f8aba  Mike's comments on test_large_j (jmeyers314)
24f9bcc  Rewrite xy_contribution as cached recursive (jmeyers314)
fee7132  Add robust Zernike evaluation (jmeyers314)
---
I don't think we expect this to converge particularly well when r > 1, right? So we should probably limit this to have x**2 + y**2 < 1. When I tried it, I was able to use significantly lower tolerances:
(10, 1.e-12)
(20, 1.e-12)
(40, 1.e-10)
(60, 1.e-8)
(80, 1.e-6)
(100, 2.e-3)
So it is still becoming less accurate as j gets very large, but not as catastrophically bad as this test implies.
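For illustration, a restricted check along those lines might look like the sketch below. The zernike_direct reference here is hypothetical (a straight evaluation of the Noll formula with exact-integer factorials), standing in for whatever reference the actual test uses:

```python
import numpy as np
from math import factorial
from galsim.zernike import Zernike, noll_to_zern

def zernike_direct(j, r, theta):
    # Hypothetical reference: evaluate Noll-indexed Z_j straight from the
    # factorial formula (unit circle, standard Noll normalization).
    n, m = noll_to_zern(j)
    am = abs(m)
    R = np.zeros_like(r)
    for k in range((n - am)//2 + 1):
        c = (-1)**k * factorial(n - k) / (
            factorial(k) * factorial((n + am)//2 - k) * factorial((n - am)//2 - k))
        R += c * r**(n - 2*k)
    if m == 0:
        return np.sqrt(n + 1) * R
    trig = np.cos(am*theta) if m > 0 else np.sin(am*theta)
    return np.sqrt(2*(n + 1)) * R * trig

rng = np.random.default_rng(1327)
x = rng.uniform(-1, 1, 4000)
y = rng.uniform(-1, 1, 4000)
keep = x**2 + y**2 < 1.0            # restrict test points to the unit disc
x, y = x[keep], y[keep]
r, theta = np.hypot(x, y), np.arctan2(y, x)

# Per-j tolerances from the list above.
for j, tol in [(10, 1e-12), (20, 1e-12), (40, 1e-10),
               (60, 1e-8), (80, 1e-6), (100, 2e-3)]:
    coef = np.zeros(j + 1)
    coef[j] = 1.0                    # pure Z_j
    np.testing.assert_allclose(
        Zernike(coef).evalCartesian(x, y), zernike_direct(j, r, theta),
        rtol=0, atol=tol)
```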
---
The ones that are least accurate are always ones with r close to 1. Like 0.95 or so. This is probably a clue about where the recursion is going unstable. I might play around with the recursion formula to see if I can harden it up at all.
---
Yeah, limiting to 0.5 < r < 1.0 might even be appropriate. Though, OTOH, these are just 2d polynomials. They're orthogonal over an annulus but defined everywhere.
The particular polynomials here are proportional to r^n, so that might explain why the large r are trickier. Other values of j will presumably produce polynomials with larger amplitudes near r=0.
In case it's helpful while you're looking at recursions, here's the closed-form table I got out of Mathematica up to Z28:
[closed-form table attachment not shown]
---
I think the problem is the conversion of the coef array from rrsq format to xy format. The xy version has lots of very large coefficients with alternating sign. The native rho/rhosq version is much more stable numerically.
If I change evalCartesian to evaluate the native rho/rhosq coefficient array directly (a sketch of the idea follows below), then all the tests pass at 1.e-12 tolerance, except Z5151, which needed 1.e-11. (It's also massively faster than the current implementation -- I guess the rrsq_to_xy function must be a slow step?)
I don't know if this means we should actually make the above change though. There might be other downsides (e.g. not getting to use the C++ layer horner2d implementation, at least without a little more work). But I suppose we could at least provide another method that would evaluate this way for users who want more accuracy at very high Zernike order.
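A minimal sketch of that rrsq-style evaluation, assuming a hypothetical _coef_array_rrsq attribute that stores coefficients c[i, j] multiplying rhosq**i * rho**j with rho = x + 1j*y (the attribute name and layout are guesses, not the actual diff):

```python
import numpy as np

def eval_cartesian_rrsq(zk, x, y):
    # Sketch: evaluate the Zernike series from its native (rho, rhosq)
    # coefficients with a 2D Horner scheme, skipping rrsq_to_xy entirely.
    rho = np.asarray(x) + 1j*np.asarray(y)
    rhosq = rho.real**2 + rho.imag**2
    coefs = zk._coef_array_rrsq      # assumed: coefs[i, j] * rhosq**i * rho**j
    ans = np.zeros_like(rho)
    for row in coefs[::-1]:          # highest power of rhosq first
        inner = np.zeros_like(rho)
        for c in row[::-1]:          # highest power of rho first
            inner = inner*rho + c
        ans = ans*rhosq + inner
    return ans.real                  # the Zernike value is the real part
```

The complex arithmetic avoids the large, alternating-sign x/y coefficients described above, which is where the stability gain comes from.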
---
Interesting. I sped up rrsq_to_xy significantly by rewriting it as a cached recursive function. Now the tradeoff seems to be roughly:
- The xy array is about 2x the effort to compute as the rrsq array, which makes sense since the xy is derived from the rrsq. However, most of this effort is cached, so I don't think there's a significant difference after the initial Zernike constructions.
- The xy approach (so C++ horner) is about 10x faster when there are ~100 evaluation points. However, on my machine (M3 Pro) the complex-python approach actually runs faster when the number of evaluation points is ~10_000 or more (!). Maybe this is because the rrsq array is ~2x smaller than the xy array?
Given the improved accuracy and the above, I tentatively vote to make the rrsq approach the default. I want to benchmark some donut fitting first, though.
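As a flavor of the cached-recursive pattern described above, a sketch along these lines (xy_contribution here is an illustrative reconstruction that builds the x/y coefficient table for rho**m * rhosq**k from lower orders, not the actual implementation):

```python
from functools import lru_cache
import numpy as np

@lru_cache(maxsize=None)
def xy_contribution(m, k):
    # Coefficient table C with C[i, j] multiplying x**i * y**j in the
    # expansion of (x + 1j*y)**m * (x**2 + y**2)**k.  Each call reuses
    # the cached lower-order table; treat results as read-only.
    if m == 0 and k == 0:
        return np.ones((1, 1), dtype=complex)
    if m > 0:
        prev = xy_contribution(m - 1, k)
        out = np.zeros((prev.shape[0] + 1, prev.shape[1] + 1), dtype=complex)
        out[1:, :-1] += prev         # multiply by x
        out[:-1, 1:] += 1j*prev      # multiply by 1j*y
        return out
    prev = xy_contribution(0, k - 1)
    out = np.zeros((prev.shape[0] + 2, prev.shape[1] + 2), dtype=complex)
    out[2:, :-2] += prev             # multiply by x**2
    out[:-2, 2:] += prev             # multiply by y**2
    return out
```

The first construction at a given order pays the full recursion; later constructions hit the cache, which matches the "most of this effort is cached" point above.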
---
Hmmm... Maybe the horner step depends on jmax too. I just ran some donut image forward models (jmax ~65 and npoints ~10000) and they were ~4x faster using the xy approach. So maybe we need to leave that as an expert option.
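A rough way to probe that crossover might be the sketch below; it assumes the evalCartesianRobust method mentioned later in this thread as the rrsq path, and the timings are of course machine-dependent:

```python
import time
import numpy as np
import galsim

def best_of(fn, reps=5):
    # Crude best-of-N wall-clock timing; enough to see a several-x gap.
    times = []
    for _ in range(reps):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return min(times)

rng = np.random.default_rng(0)
for jmax, npts in [(65, 100), (65, 10_000), (100, 10_000)]:
    zk = galsim.zernike.Zernike(rng.uniform(-1, 1, jmax + 1))
    x = rng.uniform(-0.7, 0.7, npts)
    y = rng.uniform(-0.7, 0.7, npts)
    zk.evalCartesian(x, y)           # warm the cached coefficient arrays
    zk.evalCartesianRobust(x, y)     # assumed API; see the last comment below
    t_xy = best_of(lambda: zk.evalCartesian(x, y))
    t_rr = best_of(lambda: zk.evalCartesianRobust(x, y))
    print(f"jmax={jmax:3d} npts={npts:6d}: xy {t_xy:.2e}s  rrsq {t_rr:.2e}s")
```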
---
Either way is fine with me, as long as there is a public API way to get the calculation that is accurate for high order.
I think most use cases won't care about the accuracy as much as the speed. But it seems important to have a way to get the accurate one.
---
Added an evalCartesianRobust method and a kwarg to __call__ to optionally branch that way, but left the old method as the default.
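Usage might then look something like the sketch below (the name of the __call__ kwarg isn't given above, so only the method spellings named in this thread are shown):

```python
import numpy as np
import galsim

zk = galsim.zernike.Zernike(np.ones(101))   # jmax = 100
x = np.random.uniform(-0.9, 0.9, 1000)
y = np.random.uniform(-0.9, 0.9, 1000)

fast = zk.evalCartesian(x, y)          # default x/y Horner path
robust = zk.evalCartesianRobust(x, y)  # rho/rhosq path for high-j accuracy
print(np.max(np.abs(fast - robust)))   # difference grows with jmax
```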