
Add support for efficiently reprojecting multiple images with the same wcs #332

Merged — astrofrog merged 6 commits into astropy:main from the multiple-images-same-wcs branch on Jan 26, 2023

Conversation

@svank (Contributor) commented Jan 5, 2023

This addresses #296. @entaylor, want to take a look?

This PR adds support for reprojecting multiple images that share the exact same coordinate information---e.g. a data array and a corresponding error array, or an L1 and L2 image pair. By providing multiple images in one reproject call, the coordinate transformation work can be done only once before reprojecting each image, offering a speedup.

I chose the syntax of

import numpy as np
import reproject
from astropy.io import fits

data, header = fits.getdata('data.fits', header=True)
uncertainty = fits.getdata('uncertainty.fits')
header_out = [...]

image_stack = np.stack((data, uncertainty))
reproject.reproject_adaptive((image_stack, header), header_out)

i.e. multiple images passed as an extra leading input array dimension. There's no support for doing this by passing a multi-HDU FITS file directly to the reproject function---it seems like it would be too much work to verify that each HDU has the same WCS.

I tried to mimic the way numpy handles broadcasting operations between two arrays---the trailing dimensions are matched up, while extra leading dimensions are looped over. For reproject, I'm taking the trailing dimensions of the input and output arrays to correspond to the WCS dimensions, and any extra leading dimensions beyond what the WCS contains are looped over. Any number of extra leading dimensions is supported---maybe a user with data and uncertainty arrays from both L1 and L2 wants to do a 2 × 2 × nx × ny input array.
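For instance, that 2 × 2 case might be set up like this (a sketch only; the l1_/l2_ arrays, wcs_in, wcs_out, and shape_single are hypothetical names, not anything defined in this PR):

import numpy as np
import reproject

# Stack data and uncertainty for both L1 and L2 into one 2 x 2 x nx x ny array
stack = np.stack([np.stack((l1_data, l1_uncertainty)),
                  np.stack((l2_data, l2_uncertainty))])
result = reproject.reproject_interp((stack, wcs_in), wcs_out, shape_single,
                                    return_footprint=False)
# result has shape (2, 2) + shape_single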

When shape_out is provided, it can contain the extra dimensions that the input array contains (and they must match exactly), or shape_out can be just the shape of a single output image, and the extra dimensions are prepended automatically.
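Continuing that hypothetical example, either form of shape_out is accepted (ny_out and nx_out standing in for the single-image output dimensions):

reproject.reproject_interp((stack, wcs_in), wcs_out,
                           shape_out=(2, 2, ny_out, nx_out))  # extra dimensions given explicitly
reproject.reproject_interp((stack, wcs_in), wcs_out,
                           shape_out=(ny_out, nx_out))        # extra dimensions prepended automatically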

The implementation is a bit different for each of the interpolation, adaptive, and exact algorithms:

For interpolation, I'm computing the pixel-to-pixel mapping once, and then looping over the extra dimensions, calling map_coordinates once for each image. To support the new blocked mode (which is only in this algorithm), I did change how that mode approaches 3+ dimensional WCSes. It breaks up the data in only two dimensions---those were originally the first two dimensions, but I changed it to be the last two dimensions, since those will always be WCS dimensions. (The alternatives would be "the first two WCS dimensions, which are not necessarily the first two array dimensions" or "the first two array dimensions, which are not necessarily WCS dimensions", which would both complicate the logic a bit.)
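Schematically, that looping could look like the sketch below (hypothetical function and variable names, not the actual reproject internals):

import numpy as np
from scipy.ndimage import map_coordinates

def interp_stack_sketch(image_stack, pixel_coords, shape_out):
    # pixel_coords maps each output pixel to input pixel coordinates; it is
    # computed once from the input and output WCSes and has shape
    # (2,) + shape_out for a 2D celestial WCS.
    flat = image_stack.reshape((-1,) + image_stack.shape[-2:])
    out = np.empty((flat.shape[0],) + tuple(shape_out))
    for i, image in enumerate(flat):
        # Only this resampling step is repeated for each image in the stack.
        out[i] = map_coordinates(image.astype(float), pixel_coords,
                                 order=1, mode='constant', cval=np.nan)
    return out.reshape(tuple(image_stack.shape[:-2]) + tuple(shape_out))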

For adaptive, I made the changes at the Cython level, since that's where some of the coordinate work occurs. The few spots that update the output array (the inner-most part of the loop) are modified to do everything along the extra dimension.

For exact, the changes are at the Python level, similar to interpolation with a loop over the extra dimensions after calculating the coordinates once. This algorithm has a parallel-processing mode which could in principle parallelize across the extra dimensions, but then there would have to be separate parallel modes depending on whether there are extra dimensions, so I've left the parallelization inside the loop over extra dimensions.

My speed test results with two PSP/WISPR images, each about 1k x 1k:

>>> two_images = np.stack((d1, d2))

>>> %%timeit
... dout = reproject.reproject_interp((two_images, w1), wout, d1.shape, return_footprint=False)
831 ms ± 8.38 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

>>> %%timeit
... dout1 = reproject.reproject_interp((d1, w1), wout, d1.shape, return_footprint=False)
... dout2 = reproject.reproject_interp((d2, w1), wout, d1.shape, return_footprint=False)
1.75 s ± 21.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)


>>> %%timeit
... dout = reproject.reproject_adaptive((two_images, w1), wout, d1.shape, return_footprint=False)
1.58 s ± 23.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

>>> %%timeit
... dout1 = reproject.reproject_adaptive((d1, w1), wout, d1.shape, return_footprint=False)
... dout2 = reproject.reproject_adaptive((d2, w1), wout, d1.shape, return_footprint=False)
3.27 s ± 45.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)


>>> %%timeit
... dout = reproject.reproject_exact((two_images, w1), wout, d1.shape, return_footprint=False)
19 s ± 961 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

>>> %%timeit
... dout1 = reproject.reproject_exact((d1, w1), wout, d1.shape, return_footprint=False)
... dout2 = reproject.reproject_exact((d2, w1), wout, d1.shape, return_footprint=False)
19.7 s ± 733 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

tl;dr A 2x speedup for two images in interpolating or adaptive reprojection

(There's one pre-existing test failure that's gonna tank CI.)

codecov bot commented Jan 5, 2023

Codecov Report

Merging #332 (1650c89) into main (06cab0d) will increase coverage by 0.57%.
The diff coverage is 98.34%.

@@            Coverage Diff             @@
##             main     #332      +/-   ##
==========================================
+ Coverage   93.58%   94.15%   +0.57%     
==========================================
  Files          24       24              
  Lines         717      787      +70     
==========================================
+ Hits          671      741      +70     
  Misses         46       46              
Impacted Files                               Coverage Δ
reproject/interpolation/core.py              95.31% <96.15%> (-0.53%) ⬇️
reproject/spherical_intersect/core.py        98.07% <98.30%> (+0.97%) ⬆️
reproject/adaptive/core.py                   97.77% <100.00%> (+4.67%) ⬆️
reproject/adaptive/high_level.py             100.00% <100.00%> (ø)
reproject/interpolation/high_level.py        100.00% <100.00%> (ø)
reproject/spherical_intersect/high_level.py  100.00% <100.00%> (ø)
reproject/utils.py                           88.07% <100.00%> (+0.24%) ⬆️


@astrofrog (Member) left a comment

This is great @svank, thank you!

I agree with putting the onus on the user to construct the stacked array; otherwise we would indeed need to somehow compare WCSes, etc. However, I wonder if you could put an example in the docs showing how to do this?

Once this is done I would be happy to merge. I'll wait a few days anyway in case @entaylor wants to try this out.

@svank - please ping me once you've added the docs!

@astrofrog (Member) commented

@svank - when you work on this next, could you rebase? (the CI issues should be fixed)

@svank force-pushed the multiple-images-same-wcs branch 3 times, most recently from 7ad360d to 93064af on January 10, 2023 at 00:18
@svank (Contributor, Author) commented Jan 10, 2023

@astrofrog Done! Thanks for reviewing!

@astrofrog (Member) commented

Thanks!

After a quick discussion with @Cadair, I wanted to bring up a point related to the API. At the moment, this PR might tie us into having the leading dimensions be the ones to ignore, but there are some other use cases we might want to consider. For example, one might want to pass a 3D spectral+celestial cube and reproject just the celestial or spectral part. In that case, we might want a way to specify the input WCS as being 3D, give the part of the WCS we want to reproject as the target WCS, and have reproject figure out internally which dimensions to ignore/loop over. This might not be in conflict with what is done in this PR, because I think you are saying that leading dimensions are ignored relative to the dimensionality of the input WCS. However, I wonder if we should allow the dimensions to ignore to be configurable (defaulting to extra leading dimensions). Then again, as long as we know we can add that option in the future, what is here may be fine. At the very least, I think we should make it clear that what matters here is the dimensionality of the data compared to the dimensionality of the input WCS.

@svank (Contributor, Author) commented Jan 10, 2023

Glad the API choice is getting some review. I think you're right that if we're comfortable defaulting to taking the leading dimensions as the extra loop dimensions (and that feels sensible to me), this PR doesn't preclude adding additional flexibility later. Right now, the input and output WCS are required to have the same number of dimensions---this was already explicitly checked in interpolation/core.py, adaptive/core.py, and spherical_intersect/high_level.py---so there's no practical distinction as to which WCS we're comparing the input array shape against, but we could document it as the input WCS to offer flexibility later.

What I'm interpreting as the future API you're envisioning is: match the output WCS dimensions to input WCS dimensions to determine the transformation, and loop over any other input dimensions, whether 'extra' dimensions not in the input WCS or input WCS dimensions not present in the output WCS. Perhaps there would also be some sort of wcs_to_array_axis argument that specifies the input array dimension corresponding to each input WCS dimension and defaults to mapping the WCS dimensions, in their normal order, to the last array dimensions. If that's indeed what you're imagining, I think it's out of scope for this PR but builds naturally on it, with the right future default behavior being the current behavior. The things enabled by those future changes should still be achievable with this PR through suitable edits by the user to the input and output WCSes and array dimensions, so that flexibility would be about convenience. I also think this PR creates a natural point in each algorithm at which to implement that flexibility: in each one there's a spot where I adjust the input array to always have exactly one leading loop dimension (of size one if no looping is actually needed), and then I impose the correct output shape at the end, so those would be good places to do more conversions between what the user provided and a standardized axis order the core algorithms can work with.

So if I make sure the docs and code refer specifically to the input WCS dimensionality, do you think this is in a good current state?

@astrofrog (Member) commented

I agree with your assessment and indeed the PR will be ready to go after those tweaks.

@entaylor commented

@svank: Thank you for this! I will be very excited to give this a spin when I come back from leave early next week.

The API question is worth considering carefully. Indeed, IFS data is one potentially very interesting use case for this functionality.

From my perspective, for what it's worth: typically, IFS data will have 3 WCS dimensions, two spatial and one spectral. If one wanted to reproject IFS data, then I think the way this could be done with the current structure would be to pass the 3D IFS data array along with only two of the three WCS dimensions. And if I wanted to reproject both the IFS data and weight/variance arrays, then I would pass in a 4D array, e.g. of shape (2, nwl, nx, ny).

Does that make sense from your point of view? And/or does it impact at all the structure that you are envisaging?

@svank (Contributor, Author) commented Jan 11, 2023

Yes, you have it right. With this PR you can remove the spectral axis from the WCS and ensure the spectral array axis is first in order to do a spatial resampling with optimized looping (i.e. treat the xy slice at every lambda value as a separate image, but all with the same coordinates), and you can also add in a data/variance axis and pass in a (2, nwl, nx, ny) array, with looping over the first two dimensions. The future API goals would make this more convenient, and I think the IFS case fits well.
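For reference, a minimal sketch of that workflow (flux_cube, variance_cube, wcs_3d, wcs_target, and shape_single are hypothetical names; wcs_3d.celestial is astropy's standard way to extract just the spatial part of a 3D WCS):

import numpy as np
import reproject

cube_stack = np.stack((flux_cube, variance_cube))  # shape (2, nwl, nx, ny)
spatial_wcs = wcs_3d.celestial                     # drop the spectral axis
result = reproject.reproject_interp((cube_stack, spatial_wcs), wcs_target,
                                    shape_out=shape_single,
                                    return_footprint=False)
# result has shape (2, nwl) + shape_single; every wavelength slice and the
# variance planes reuse the same coordinate transformation.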

IFS data is indeed a really compelling use case for this optimization, with so many slices with identical x/y coordinates. In some quick testing, I'm seeing a 20-25x speedup for what I assume is a typical IFS data cube size:
[attached screenshot showing timing results]

@svank (Contributor, Author) commented Jan 11, 2023

Docs should now specifically refer to the input WCS

@astrofrog (Member) left a comment

Thanks, happy to approve this now!

However I will wait for @entaylor to try this out early next week before going ahead and merging in case there are any issues.

@astrofrog (Member) commented

@entaylor - did you get a chance to look at this?

@astrofrog (Member) commented

I am going to go ahead and merge this, as I will be preparing a new release shortly. But @entaylor, if you get to try this out and run into any issues, please post a comment here!

@astrofrog changed the title from "Reproject multiple images with the same wcs" to "Add support for efficiently reprojecting multiple images with the same wcs" on Jan 26, 2023
@astrofrog merged commit 829202a into astropy:main on Jan 26, 2023
@svank deleted the multiple-images-same-wcs branch on June 23, 2023 at 21:39