Select bounding box for each peak #14
I have an idea about how to implement this. The basic idea is that the SED in neighboring pixels changes very little, and in fact differs very little from the peak SED except in blended regions. So we can mask out the image except for the region that encloses a peak, and then allow that region to grow by some reasonable factor to ensure that all of the flux is enclosed. This works well for the complicated blend I tested it on, but more testing is needed to validate the method. For example, some of the objects are given significantly larger regions because nearby sources have similar colors, but the important thing is that (at least in this example) none of the objects appear to be missing any flux. It would be interesting to see how this method works on spiral galaxies that have multiple colors.
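The steps above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name `grow_peak_box`, the threshold, and the growth factor are all hypothetical, and it works on a single band for simplicity (it also assumes the peak lies inside the thresholded mask).

```python
import numpy as np
from scipy.ndimage import label

def grow_peak_box(image, peak, threshold=1e-2, grow_factor=1.5):
    """Mask the image except for the connected region around `peak`,
    then grow the tight bounding box of that region by `grow_factor`
    to try to enclose all of the flux.

    image: 2D array (single band); peak: (y, x) pixel coordinates.
    Returns (y0, y1, x0, x1) slice bounds, clipped to the image.
    """
    # keep only pixels above a small fraction of the image maximum
    mask = image > threshold * image.max()
    labels, _ = label(mask)
    # connected region that contains the peak
    region = labels == labels[peak]
    ys, xs = np.nonzero(region)
    cy, cx = peak
    # half-widths of the tight box around the peak, grown by the factor
    hy = int(np.ceil(grow_factor * max(cy - ys.min(), ys.max() - cy)))
    hx = int(np.ceil(grow_factor * max(cx - xs.min(), xs.max() - cx)))
    ny, nx = image.shape
    return (max(cy - hy, 0), min(cy + hy + 1, ny),
            max(cx - hx, 0), min(cx + hx + 1, nx))
```

In a blended region two peaks can end up sharing pixels, which is consistent with the observation above that nearby sources with similar colors get larger regions.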
That looks quite good already. I don't fully understand the logic that decides the size/shape of the bounding box. However, this is not really what this ticket is about. Even if we had bounding boxes, we wouldn't have a mechanism to use them.
Here's a different thought that should solve all of the problems we've had about separable objects. This requires a few changes.
The second item will be the most important. There are several redundant computations for the likelihood gradients. By pulling those together in a class, we can store the results of previous calculations and control when updates to internal variables (such as the fully assembled S matrix, or the A matrix) are done. It would also be the interface for the user to inspect and render the deblender outputs. Lastly, we already have a class we could build on.
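A minimal sketch of what such a class could look like. The name `Blend`, the attribute layout, and the simplification of the model to a plain product A S (dropping the convolution for brevity) are all illustrative assumptions, not the deblender's actual API; the point is only the caching/invalidation pattern.

```python
import numpy as np

class Blend:
    """Illustrative cache for the redundant likelihood-gradient pieces:
    the assembled model is computed lazily and reused by both gradients,
    and updates to A or S invalidate it explicitly."""

    def __init__(self, Y, A, S):
        self.Y, self.A, self.S = Y, A, S
        self._model = None            # cached A @ S

    def _invalidate(self):
        self._model = None

    @property
    def model(self):
        # recompute only after A or S has changed
        if self._model is None:
            self._model = self.A @ self.S
        return self._model

    def grad_A(self):
        # d/dA of ||Y - A S||^2 / 2 = (A S - Y) S^T, reusing the cache
        return (self.model - self.Y) @ self.S.T

    def grad_S(self):
        return self.A.T @ (self.model - self.Y)

    def update_A(self, step):
        self.A = self.A - step * self.grad_A()
        self._invalidate()
```

The same object would be the natural place for user-facing inspection and rendering methods, since it always holds a consistent model.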
Another piece of logic for item 2), and how we deal with the boxes:
The question is: which of the elements do we need to store/update for the A update, A <- A - 1/L_A (APS - Y)(PS)^T? Note that the convolved sources PS appear in the last term. Formally, every occurrence of S needs to be replaced with the updated S', so we'd have to compute the model twice per iteration. I think that the differences in gradient direction (with and without the update of S) will be rather modest, at least after a few iterations. Given that the convolution operation to build the entire model is expensive, this suggests that we keep either PS or R fixed for the updates of S and A. The simplest thing would thus be to use the old S matrix for the A update. This is a deviation from the coordinate descent method, but we'd only have to build the model APS once.
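A sketch of that iteration, under simplifying assumptions: the per-band convolution/resampling is stood in for by a single matrix P acting on the pixel axis (so the model is A S P), and `step_AS` is a hypothetical name. The key point from above is that the expensive convolved product is built once and the old S is reused for the A update.

```python
import numpy as np

def step_AS(Y, A, S, P, L_A, L_S):
    """One iteration with a shared, stale model: compute SP once, then
    take gradient steps on f = ||Y - A S P||^2 / 2 in both A and S
    using the *old* residual, instead of rebuilding the model twice."""
    SP = S @ P                            # expensive part: done once
    R = Y - A @ SP                        # residual with the old model
    A_new = A + (1.0 / L_A) * (R @ SP.T)  # grad_A = -(Y - ASP)(SP)^T
    S_new = S + (1.0 / L_S) * (A.T @ R @ P.T)
    return A_new, S_new
```

With 1/L_A and 1/L_S chosen conservatively this is simultaneous (rather than coordinate) descent, matching the deviation described above.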
Currently the model of each component covers all pixels in the image. That's overkill for the evaluation of the likelihood gradients. Instead one could work with a subset of pixels, loosely based on previously detected footprints.
One can think of a few ways of doing that:
- `scipy.sparse`

In both cases, we'd automatically prevent the degeneracy between two remote objects having the same color.
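A minimal sketch of the `scipy.sparse` variant, with hypothetical names and inputs (flattened single-row images, no PSF convolution): each source's morphology is zeroed outside its detected footprint and stored sparsely, so the model and the likelihood gradients only ever touch footprint pixels.

```python
import numpy as np
from scipy import sparse

def masked_model(A, S, footprints):
    """Restrict each source to its footprint.

    A: (bands, K) SEDs; S: (K, npix) flattened morphologies;
    footprints: (K, npix) boolean masks from prior detection.
    Returns the dense (bands, npix) model, built via sparse products.
    """
    # zero outside the footprints and keep only the nonzero pixels
    S_masked = sparse.csr_matrix(np.where(footprints, S, 0.0))
    # the product only involves footprint pixels; densify at the end
    return (sparse.csr_matrix(A) @ S_masked).toarray()
```

Because two remote objects have disjoint footprints, their components cannot trade flux even when their SEDs are identical, which is exactly the degeneracy mentioned above.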