Ndim fft #113
Conversation
While writing tests I found that the `BoundingBox` API was clunky and supported inconsistent features. The new implementation is cleaner and uses the shortened class name `Box`. The `resize` method was also removed, because it didn't add anything beyond the functionality of `trim`.
Generating a PSF image for given function in a given shape is common enough that a convenience method has been added to generate the x, y pixel grid for the shape and create a PSF image using the function parameters.
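A minimal sketch of what such a convenience helper could look like (the names `psf_image` and `gaussian` are hypothetical illustrations, not the actual scarlet API):

```python
import numpy as np

def psf_image(func, shape, **params):
    """Hypothetical helper: evaluate a PSF function on the pixel
    grid of `shape` and return the resulting image."""
    # build the y, x pixel grid for the requested shape
    y, x = np.mgrid[:shape[0], :shape[1]]
    return func(y, x, **params)

def gaussian(y, x, y0, x0, sigma):
    # simple circular Gaussian, for illustration only
    return np.exp(-((y - y0)**2 + (x - x0)**2) / (2 * sigma**2))

img = psf_image(gaussian, (25, 25), y0=12, x0=12, sigma=2.0)
```

The helper only wires the pixel grid to the function; any radially defined PSF model can be dropped in the same way.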
While creating the tests a few inconsistencies and instances of sloppy coding were fixed:
- `Scene.psfs` is now always 3D, even the `target_psf` for the model scene. This makes the API more uniform for `Scene` and its inherited classes.
- PSF matching and convolutions now use real FFTs. This runs faster, fixes a potential bug in the PSF matching code, and is more similar to the `scipy.signal.convolve` method.
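As a sketch of the real-FFT approach (not the PR's actual code), a full linear convolution built on `numpy.fft.rfftn` matches `scipy.signal.fftconvolve` while only transforming half the spectrum:

```python
import numpy as np
from scipy.signal import fftconvolve

def rfft_convolve(image, kernel):
    # zero-pad to the full linear-convolution size, then use the
    # real-input FFT (rfftn) instead of the complex fftn
    shape = [s1 + s2 - 1 for s1, s2 in zip(image.shape, kernel.shape)]
    F = np.fft.rfftn(image, s=shape) * np.fft.rfftn(kernel, s=shape)
    return np.fft.irfftn(F, s=shape)

img = np.random.rand(32, 32)
psf = np.random.rand(5, 5)
assert np.allclose(rfft_convolve(img, psf), fftconvolve(img, psf))
```

Because the input images are real, `rfftn` stores only the non-redundant half of the last axis, which is where the speed gain comes from.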
In addition to creating the unit tests for the classes in component.py, the commit also fixes the `Prior` API to use the form `sed_grad` and `morph_grad` (instead of `grad_sed`, ...) to match the attributes of `Component`.
Looks good to me, I didn't see anything that Peter didn't already comment on.
Besides the three open comments, it would be nice to have a simple comparison of the run time. The model can be simple zeros.
Sorry, I saw your latest comment just now. I had done similar work already, so I'll do that quickly tomorrow. In the meantime, here is a new commit with Ndim_fft in place where it can be used without losing anything.
OK, this looks good in the regime where we need it. The drop-off at the right is probably because of memory thrashing / cache misses.
Finally, I figured it out! So: for some values of the PSF size, the fast shape for the FFT comes out even, which autograd's fft cannot deal with. To have autograd run smoothly, we tested the fast shape and made it odd, but only along one axis, which apparently was fine for the 2-d fft but not when going Ndim (this is still a little weird to me). So now I make both axes odd whenever the fast shapes are even, and the Fourier operations run nominally.
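The fix described above could be sketched like this (the helper name `odd_fast_shape` is hypothetical; `next_fast_len` is scipy's standard fast-FFT-size helper):

```python
from scipy.fftpack import next_fast_len

def odd_fast_shape(shape):
    # hypothetical helper: pick a fast FFT size for each axis, then
    # bump any even result to the next odd value, since autograd's
    # fft gradients fail for even sizes (plain NumPy does not care)
    fast = (next_fast_len(s) for s in shape)
    return tuple(s + 1 if s % 2 == 0 else s for s in fast)
```

Note the trade-off: an odd size is usually not a 5-smooth "fast" length, so this sacrifices a little FFT speed to keep autograd happy on every axis.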
Weird. It might be worth creating an issue with autograd or even fixing this ourselves upstream because I doubt that we're the only ones to get dinged by this.
Is this a problem from autograd or from numpy FFTs?
It's a problem with autograd. `numpy.fft.rfftn` has no problem with even shapes; in fact I think it prefers them, and this code (without the check for an even last dimension) follows the same basic algorithm. But autograd raised an exception when using `rfftn` with an even-sized last dimension, hence the fix that is in master (https://github.com/fred3m/scarlet/blob/master/scarlet/observation.py#L190-L194). But I had never run across a problem with PSF sizes, so I didn't notice that I needed to use the same fix when we calculated the optimal shape for PSF matching.
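For reference, plain NumPy handles even shapes fine; the half-spectrum only makes the inverse transform ambiguous, which is why `irfftn` should be given the original shape:

```python
import numpy as np

# rfftn stores only the non-redundant half of the last axis, so
# irfftn cannot tell whether the original last dimension was even
# or odd; passing s= resolves the ambiguity and round-trips exactly
x = np.random.rand(4, 6)              # even last axis
spectrum = np.fft.rfftn(x)            # shape (4, 4): 6 // 2 + 1
back = np.fft.irfftn(spectrum, s=x.shape)
assert np.allclose(x, back)
```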
So, it looks like we are in a good state now. @herjy please confirm that we can merge this PR. |
To answer Fred's last comment: it is somewhat strange that the problem did not arise earlier but did now in the ndim case. It could be related to the ndim implementation and the use of
I just spotted this: cb8bd40 deletes the file |
All right, I am mildly fed up with this, so I will submit it for review. The critical part, which is the n-dimensional fft for `Observation`, works marvelously well. The same trick does not work at all for `LowResObservation`; if anything it makes everything slower. Here is a recap of what you'll find:

- `blend().fit` drops from 88 ms to 18 ms. I will do more tests to see how the various dimensions of the arrays affect this, but it looks nice!
- `make_operator` is now a method of `LowResObservation`, which makes more sense. I thought about moving `resampling.match_patches` to a method of that class too, but its use in `match` and `match_psfs` made me think twice.
- You will find timings for `LowRes` and the run times for `blend.fit` in the notebooks. For comparison, the same thing is done on the current implementation in a branch I called 'speed_test', which I will use to investigate further the gain that the new implementation gives us.

On the bad news side (yes, the above was all good news): the larger the `LowRes` arrays, the slower it gets. I tried a full-on ndim fft implementation (it was beautiful, not a single for loop), but after it ran (building the resampling operator) for MANY minutes I lost patience and gave up. You will find a hybrid implementation that still uses scipy.

Since I already spent days on trying to make `LowRes` faster without success, I don't want to delay this any further; I suggest that you guys review the part in `Observation`. If I can find why `LowRes` is slow, I'll update everything. Writing this makes me think that it could be a problem of `fftpack_shapes`, but I am not convinced. Good luck.