
Multi-scale deblending #238

Open
fred3m opened this issue Mar 1, 2021 · 4 comments

@fred3m (Collaborator) commented Mar 1, 2021

At the end of last week I started pursuing some alternative approaches to applying our constraints. The one that shows the most promise is a multi-scale approach, which I have since discovered is related to the ideas given in Starck et al. 2014, but with a few tweaks that allow us to take advantage of scarlet.

So we start out with an image, in this case an HSC image that is the sum of all bands (giving maximum blending to illustrate the effectiveness of this algorithm).

[Screenshot: the summed HSC image]

Next we calculate the wavelet coefficients at the first 5 scales using the Starlet class:
[Screenshots: wavelet coefficient planes at the first 5 scales]
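For reference, the starlet (isotropic undecimated, à trous) transform can be sketched in a few lines of numpy/scipy. This is a generic implementation with the B3-spline kernel, not scarlet's `Starlet` class; the function name is mine:

```python
import numpy as np
from scipy.ndimage import convolve1d

def starlet_transform(image, scales=5):
    """À trous starlet transform with the separable B3-spline kernel.

    Returns (coeffs, coarse): coeffs[j] holds the wavelet coefficients
    at scale j; coarse is the remaining smooth plane. The decomposition
    is exact by construction: image == coeffs.sum(axis=0) + coarse.
    """
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # B3 spline
    c = image.astype(float)
    coeffs = []
    for j in range(scales):
        # Dilate the kernel by inserting 2**j - 1 zeros between taps.
        step = 2 ** j
        hj = np.zeros((len(h) - 1) * step + 1)
        hj[::step] = h
        smooth = convolve1d(c, hj, axis=0, mode="mirror")
        smooth = convolve1d(smooth, hj, axis=1, mode="mirror")
        coeffs.append(c - smooth)  # detail (wavelet) plane at this scale
        c = smooth
    return np.array(coeffs), c
```

Because each detail plane is a difference of successive smoothings, reconstruction is just a sum over scales plus the coarse plane.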

Note that at most scales, almost every source is isolated. So we can use a procedure similar to the one outlined in Starck et al. 2014, §4.2.3 (see Figure 10): identify structures that are isolated at each scale and connect them into objects and/or blends at higher scales. We can then deblend each scale separately in scarlet, fitting all bands simultaneously, but only for the objects blended at that scale. All sources isolated at a given scale will have their flux subtracted from the observed coefficients, so we are only fitting the blended structures at each scale. One small technicality is that the current implementation of starlets in scarlet does not guarantee positive coefficients, and in practice there is a ring of negative coefficients around each source that may or may not be an issue. So we would need to either follow section 3.6 of Starck et al. 2014 to create a set of positive coefficients or allow our source models to be negative.
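A minimal sketch of the structure-linking step could look like the following. The function name, the single hard threshold, and the peak-pixel parent rule are my simplifications; the actual procedure in Starck et al. 2014 §4.2.3 uses per-scale significance levels and a full inter-scale tree:

```python
import numpy as np
from scipy.ndimage import label

def link_scales(coeffs, threshold):
    """Connect significant structures between adjacent starlet scales.

    coeffs: (n_scales, H, W) wavelet coefficient stack, fine to coarse.
    Each structure at scale j is attached to the structure at scale j+1
    that contains its peak pixel; a coarse structure claimed by more
    than one fine structure is flagged as a blend at that scale.
    """
    labeled = [label(w > threshold)[0] for w in coeffs]
    blends = []
    for j in range(len(coeffs) - 1):
        fine, coarse = labeled[j], labeled[j + 1]
        children = {}
        for k in range(1, fine.max() + 1):
            ys, xs = np.nonzero(fine == k)
            peak = np.argmax(coeffs[j][ys, xs])  # brightest pixel of structure k
            parent = coarse[ys[peak], xs[peak]]
            if parent:  # 0 means background at the coarser scale
                children.setdefault(parent, []).append(k)
        blends.append({p: c for p, c in children.items() if len(c) > 1})
    return labeled, blends
```

Sources that never share a parent at any scale are the isolated ones, whose flux can be subtracted from the observed coefficients before fitting.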

At first this may still seem enormously inefficient, since a naive implementation would multiply our problem by the number of scales in the wavelet decomposition (5 in this case). But as I have already pointed out, at most scales most sources are not blended. Furthermore, for scales past the first 2 or 3 levels the models are essentially isotropic and their morphologies do not have to be pixel models (ImageMorphology models). Instead we could use something much simpler, like a Gaussian mixture band-limited to the frequencies valid at the given scale, so fitting a given source morphology would require a very limited number of parameters.
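To illustrate how few parameters such a morphology needs, here is a toy fit of a 4-parameter circular Gaussian to a noiseless coefficient cutout; this stands in for the band-limited Gaussian mixture suggested above and is not an existing scarlet model:

```python
import numpy as np
from scipy.optimize import curve_fit

def circular_gaussian(coords, amp, y0, x0, sigma):
    """Isotropic Gaussian on a pixel grid: 4 free parameters,
    versus one parameter per pixel for an ImageMorphology."""
    y, x = coords
    return amp * np.exp(-((y - y0) ** 2 + (x - x0) ** 2) / (2.0 * sigma ** 2))

# Hypothetical 15x15 coefficient cutout generated from the model itself,
# so the fit should recover the input parameters.
y, x = np.mgrid[:15, :15]
coords = (y.ravel().astype(float), x.ravel().astype(float))
w = circular_gaussian(coords, 2.0, 7.0, 7.0, 3.0)
popt, _ = curve_fit(circular_gaussian, coords, w, p0=(1.0, 7.5, 7.5, 2.0))
```

A mixture of a few such components per source would still be tens of parameters at most, instead of one per pixel.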

For the first two or three scales there are a few different ways to proceed. If we only care about generating our models in the observation frame then we only need to fit the sources blended at those scales, which appears to be < 5-10% of the total number of sources, and the blends containing them are extremely small (so using an ImageMorphology for each of them is feasible). Then for each source we can build the total model in the observed frame by combining the isolated structures at lower scales with the deblended (but convolved) structures at higher scales. This seems like the easiest way to proceed, and may actually be faster than the current implementation since it will likely have far fewer degrees of freedom.
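The reassembly step, summing a source's per-scale structures back into an observed-frame model, is just the inverse starlet transform: a sum over scales plus a share of the smooth residual. A sketch, where the fractional ownership of the coarse plane is a hypothetical choice and the function name is mine:

```python
import numpy as np

def combine_scales(per_source_coeffs, coarse_weights, coarse):
    """Reassemble an observed-frame model for each source.

    per_source_coeffs: (n_sources, n_scales, H, W) -- each source's
    coefficients, taken directly from the observation where the source
    is isolated and from the scarlet fit where it was blended.
    coarse_weights: (n_sources, H, W) fractional ownership of the
    smooth residual plane, assumed to sum to one over sources.
    The inverse starlet transform is just the sum over scales.
    """
    return per_source_coeffs.sum(axis=1) + coarse_weights * coarse
```

If the weights sum to one, the per-source models add back up to the full starlet reconstruction of the image.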

If we still want to have a scarlet model for each source in the model_frame then we'll still need to fit the first few levels for each source as an ImageMorphology. Note that this may still take less overall processing time, because all of the sources are in much smaller boxes and there is no blending, so just a few iterations in very small boxes (convolved with smaller PSFs, since we would use the difference kernel for a given scale) would allow us to model the higher-frequency features of all of our sources.

This should solve or incorporate ideas from several other scarlet tickets, including #235 and #222.

@herjy I know that this is now very similar to MuSCADeT, so perhaps you can comment on any problems or strengths of the above procedure. A key difference is that my concern is mainly with modeling sources in ground-based images, where sources not connected at the lowest scale are most likely to be different sources, the exception being resolved spirals. So I'm willing to give up shredding spirals (with future thoughts of using ML to identify shredded spirals and reconstruct them) to have an algorithm that defaults to deblending as opposed to merging.

@herjy (Collaborator) commented Mar 1, 2021

I'm not sure I understand exactly how that would work. It sounds like you want to build a model for galaxies in starlet space, which I am not sure would work given that starlet coefficients look different from galaxies (negative rings, for instance). But I'm looking forward to hearing more about it.

@herjy (Collaborator) commented Mar 1, 2021

Just a quick thought on independent fits across scales: we said that each scale could have a different colour, and this might end up being tricky. Again, a positive decomposition is necessary, but if the scale cut does not correspond to the colour cut (and it has no reason to), we might run into trouble. We might have just the right flexibility to model colour gradients, but if we do have problems, that is where they could come from.

@fred3m (Collaborator, Author) commented Mar 2, 2021

Yes, one of the things that I am slightly concerned about is two-component sources where both components have features at the same scale. Here's an example:
[Screenshots: a two-component source and its wavelet coefficients]

@fred3m (Collaborator, Author) commented Mar 2, 2021

But in the above case it shouldn't be so bad, since the spiral will actually be detected as multiple sources, each with its own color. So that particular example might not be a problem, but for more Sérsic-like galaxies we could certainly run into trouble.
