New init / box resizing / ... #227
Merged · 49 commits merged into master on Jan 14, 2021

Conversation

pmelchior (Owner)

There's a lot in this one, but the main changes are

  • setting the spectrum from a linear solver before running Blend.fit (see the sketch below)
  • increasing and decreasing the box sizes
  • adding SNR and moment measurements
  • the ability to deconvolve the observation for initialization

The validation tests are now faster and more accurate.
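For orientation, here is a rough sketch of how these pieces could fit together in user code. It is a hedged illustration, not code from this PR: the module path of set_spectra_to_match and the fit arguments are assumptions, and `observations`/`catalog` stand in for whatever data setup comes before.

```python
import scarlet

# `observations` and `catalog` are placeholders assumed to exist already.
model_frame = scarlet.Frame.from_observations(observations)
sources = [
    scarlet.ExtendedSource(model_frame, sky_coord, observations)
    for sky_coord in catalog
]

# New in this PR: solve a linear system for the per-band spectra of all
# sources before running the optimization (module path is an assumption).
scarlet.initialization.set_spectra_to_match(sources, observations)

blend = scarlet.Blend(sources, observations)
blend.fit(200)
```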

properly use deconv detection image (#207); hacks for frame.bbox.origin (#219)
@herjy (Collaborator) left a comment

This PR introduces a few nice things: new initialisations, updates to components that enable box resizing, and init_morph methods for sources, among the main features.
I haven't seen anything that would break, except maybe making sure that the parity changes here and there don't break with fft_shapes. Those could lead to shifts, especially when dealing with LRO.

docs/0-quickstart.ipynb (resolved)
@@ -86,7 +86,7 @@ def test_surveys(self):
            coverage="union",
        )
        interp_scar = obs_lr.render(data_hr[None, :, :])
-       assert SDR(interp_scar, data_lr) > 10
+       # assert SDR(interp_scar, data_lr) > 10
Collaborator:

So this is temporary to make the tests pass, and it's up to me to fix it, right?

Collaborator:

Actually, this means that there is a problem. Most likely misalignment causing shifts.

Owner Author:

I agree, but I think the problem is not in the code but in the test data. It uses a custom WCS stored in the file, which I think you told me is chosen to work well with Frame.from_observations. In this branch, I have replaced the offset of the model frame by a WCS with a shifted reference point. This is the only way of making a consistent model frame with more than two observations. I'm pretty sure I got it right, and the multi-resolution tutorial seems to agree with me, but this test fails. Can you confirm how the WCSs for TestLowResObservation have been constructed?

Collaborator:

It uses my simulation tool, mr_tools, which is part of my multi-resolution package. I'd have to check exactly what you have done in your modifications to the WCS. I'll also "look at the damn" by checking the residuals.

Owner Author:

Yes, I have. I am confused by what I see and I don't like it. Here's the test data (the last image, which throws the assert), with coverage="union":

[image: data_lr from the test set, the rendered scarlet image, and their difference]

First panel: data_lr directly from the test set, second panel the rendered image from scarlet, third the difference. This is a clear dipole. It's present in some of the other test images as well, but this one is the largest residual, and the only one with SDR < 10. This indicates two things to me: SDR 10 is not sharp enough, and this is likely related to the parity discussion below!

Now with coverage="intersection":

[image: the same comparison with coverage="intersection"]

What confuses me is that both union and intersection yield identical cutouts, which cannot generally be the case. What confuses me even more is that despite the same shape, the residuals are different. Something is very weird here. I will keep the line commented out, but this needs to get fixed!

Collaborator:

This sounds like a headache for me. I'll address it.

Owner Author:

Another wrinkle: the test data have some WCSs with non-integer CRPIX. Was that intended? It's not illegal, but it is uncommon to shift the reference point.

Owner Author:

So the problem was caused by a rounding error and integer conversion in from_observations. It's test-specific in that the test takes an HR image with its own WCS and seeks to render it to LR to compare it to an LR image with its own WCS. But render goes from the model frame to the LR frame, not from the HR frame to the LR frame. If the model frame isn't identical to the HR frame, this will go wrong. The test triggered it, and it's now fixed, but the problem is somewhat artificial.

scarlet/spectrum.py (resolved)
"""Step size set at `factor` times the mean of `X` in direction `axis`
"""
return factor * X.mean(axis=axis)
return np.maximum(minimum, factor * X.mean(axis=axis))
Collaborator:

Can you explain the deep reason why you put a minimum step? It could be dangerous.

Owner Author:

I thought about this for a while, but it's the only way to initialize a source with zero (or very small) spectral elements without shutting it out of the optimization. Step sizes are mostly relative, so if you happen to initialize something with 0, it cannot change. The new method set_spectra_to_match is a linear solver, which for very faint or very blended galaxies can produce negative or zero spectra.

The spectra are set with a minimum step size proportional to the noise level in each band, which is fundamentally the uncertainty we should expect for them.
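For concreteness, here is a minimal sketch of the step function from the diff above with the absolute floor applied; the exact signature and the way the noise floor is passed in are assumptions, not the scarlet code verbatim.

```python
import numpy as np

def relative_step(X, factor=0.1, minimum=0, axis=None):
    """Step size set at `factor` times the mean of `X` in direction `axis`,
    floored at `minimum` so a parameter initialized at (or near) zero can
    still move during optimization."""
    return np.maximum(minimum, factor * X.mean(axis=axis))

# Hypothetical usage: floor the spectrum step at the per-band noise RMS,
# which is roughly the uncertainty we expect for the spectrum anyway.
spectrum = np.array([0.0, 0.0, 0.02])      # nearly empty spectrum from the linear solver
noise_rms = np.array([0.05, 0.04, 0.06])   # per-band noise level
step = relative_step(spectrum, factor=0.1, minimum=noise_rms)
# -> the per-band noise floor; without it the step would be ~7e-4,
#    effectively freezing the zero-valued bands
```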

Collaborator:

How about initialising the source with random values on the order of 3 sigma of the noise? It won't matter at this point, and the gradients will take off, even if only slightly.

Owner Author:

Sure, we could do that, but having a minimum absolute step for a relative step size is a good idea.

Collaborator:

Just to be clear, this minimum step size triggers only for faint sources, right? I can live with this, but otherwise I don't see a problem with letting the algorithm take very small steps if it is not converged.

Owner Author:

Yes, only for faint sources.

The problem is related to #211, namely that we should have a way to determine when step sizes are too large or too small. This here is one way of getting out of the problem when the initial value (and thus the step size) is at or close to zero.

scarlet/interpolation.py (resolved)
scarlet/morphology.py (resolved)
        if self._diff_kernels is None:
            return np.mean(self.noise_rms, axis=(1, 2))
        else:
            # deconvolve noise field to estimate its noise level
Collaborator:

Why?? This has a chance to backfire. Why not just use
model_rms = noise_rms / np.sqrt(np.sum(psf**2))?

Owner Author:

Because if self._diff_kernels is None, there is no PSF in the frame

Collaborator:

Yup, that's what the if statement is for. My comment is about what's below the comment.

Owner Author:

I see. I'm not familiar with the equation above, can you say how/where it is derived?

Collaborator:

Quadratic sum of noise: if you take a uniform noise map and convolve it with a PSF, the std of the result is the initial noise times the quadratic sum of the PSF pixels. So just invert the equation and you have the amplitude of the deconvolved noise.
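A quick numerical check of this (illustrative only, not scarlet code): convolving unit-variance white noise with a PSF scales its std by the quadratic sum of the PSF pixels, so dividing by that factor gives the deconvolved noise level.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
noise = rng.standard_normal((512, 512))      # unit-variance white noise

# hypothetical Gaussian PSF, normalized to sum to 1
y, x = np.mgrid[-10:11, -10:11]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()

convolved = fftconvolve(noise, psf, mode="same")
print(convolved.std())             # std of the convolved noise
print(np.sqrt(np.sum(psf**2)))     # predicted scaling: quadratic sum of PSF pixels

# Inverting the relation gives the noise level of the deconvolved image:
# model_rms = noise_rms / np.sqrt(np.sum(psf**2))
```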

Owner Author:

Neat, that's just error propagation. How did I not know this?

We have non-uniform noise in general, but the equation can be generalized.

Collaborator:

Yup! In most of our examples, we consider that the noise is uniform (though strictly it is not). If the noise is not uniform, we are back to solving a deconvolution problem, since the noise of a convolved noisy image has for its amplitude the map of the rms convolved by the square of the PSF and taken to the root (I think). I think for the purpose of initialisation, assuming the noise is uniform will be fine.

Owner Author:

This also seems to be more stable than the current code! Nice!

scarlet/observation.py (resolved)
scarlet/source.py (resolved)
scarlet/source.py (resolved)
@fred3m (Collaborator) left a comment

Overall this looks really good. A lot of the changes were a long time coming, and while there are a number of changes to the user-facing API, I think they are justified. But that also means that when this PR is merged, we should update to a new version (1.1.0) to reflect the API changes.

docs/0-quickstart.ipynb (resolved)
docs/tutorials/multiresolution.ipynb (resolved)
scarlet/morphology.py (resolved)
)

# 0.1 compared to 1 at center
if np.any(edge_pull > 0.1):
Collaborator:

I agree with Remy.

I also worry about large, high-eccentricity sources, where the mean update along a row or column might be low even when a substantial number of pixels are above the threshold. So we might want to implement a threshold property to allow us to test for the best threshold (which might be image/SNR dependent and need to remain a property), and instead of using the mean value of a column/row, come up with a better check (for example, the percentage of pixels in a column or row above the threshold needed to grow the image).
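To make that suggestion concrete, here is a hedged sketch of such a check; should_grow, min_fraction, and the per-edge pixel array are hypothetical names, not code from this PR.

```python
import numpy as np

def should_grow(edge_pixels, threshold=0.1, min_fraction=0.25):
    """Grow the box along an edge if at least `min_fraction` of its pixels
    exceed `threshold` (relative to the value at the source center), rather
    than relying on the mean or on a single pixel crossing the threshold.

    edge_pixels: 1D array of normalized model values along one box edge.
    """
    above = edge_pixels > threshold
    return above.mean() >= min_fraction

# Example: an elongated source that crosses the top edge over ~40% of its width
top_edge = np.concatenate([np.zeros(30), 0.3 * np.ones(20)])
print(should_grow(top_edge))   # True: 40% of the edge pixels are above the threshold
```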

scarlet/morphology.py (resolved)
scarlet/source.py (resolved)
scarlet/frame.py (resolved)
scarlet/blend.py (resolved)
scarlet/blend.py (resolved)
scarlet/measure.py (resolved)
pmelchior merged commit 8f462eb into master on Jan 14, 2021
pmelchior deleted the deconv-init branch on Jan 14, 2021