astqx changed the title on Jan 20, 2024: "Convolution with segmentation.make_2dgaussian_kernel does not produce desired results for image segmentation" → "Convolution with segmentation.make_2dgaussian_kernel may not produce desired results for image segmentation" → "Convolution with segmentation.make_2dgaussian_kernel for image segmentation"
@astqx You can input your convolved image into any of the source detection tools (detect_sources, deblend_sources, SourceFinder). It's completely up to you how you make that convolved image. make_2dgaussian_kernel is a simple convenience function for making a normalized 2D Gaussian kernel from a FWHM, which is by far the most common choice. If you want to use something else, there is a large list of predefined kernels in astropy.convolution: https://docs.astropy.org/en/stable/convolution/kernels.html#available-kernels. You can also use your own custom kernel. Just convolve the data with your kernel (convolved_data = convolve(data, kernel)) and pass the convolved image into the source detection tools. As you've noted, you may need to adjust the detection threshold based on your convolution kernel.
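The last point can be made concrete: convolution is linear, so scaling the kernel by a constant scales the convolved image, and hence the appropriate detection threshold, by that same constant. A minimal NumPy sketch of this, using a naive direct convolution helper that is purely illustrative (not the photutils or astropy API):

```python
import numpy as np

def convolve2d(img, kernel):
    # Naive zero-padded "same"-size direct convolution (illustrative only).
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + ks, j:j + ks] * kernel[::-1, ::-1])
    return out

rng = np.random.default_rng(0)
data = rng.normal(0.0, 0.1, (16, 16))   # stand-in background-subtracted image
kernel = np.ones((3, 3))                # toy kernel; sums to 9, not 1

scale = kernel.sum()
conv_unnorm = convolve2d(data, kernel)          # unnormalized kernel
conv_norm = convolve2d(data, kernel / scale)    # volume-normalized kernel

# Linearity: the unnormalized result is exactly `scale` times the
# normalized one, so a threshold tuned for conv_norm must be multiplied
# by `scale` to give the same segmentation on conv_unnorm.
assert np.allclose(conv_unnorm, scale * conv_norm)
```

In other words, switching between a normalized and an unnormalized kernel does not change which pixels exceed a correspondingly rescaled threshold.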
Hi,
I have been working on a pipeline for photometry and recently came across this issue (reproducible code is attached at the end). In the figures below:

- convolution enhancement = convolved_data - data
- convolved data residual = convolved_data[i] - convolved_data[i-1]
- colorbar range: ±5 × lowest amplitude
- green pixels: above threshold
For the Gaussian kernel returned by segmentation.make_2dgaussian_kernel, the convolved data lose intensity in the central region as a result of convolution with a volume-normalized kernel, although pixels further out are enhanced. This could still help increase the number of connected pixels above a low enough threshold, but it may not always provide the enhancement required for image segmentation. As in the first two cases, the faintest object loses 8-connectedness as a result of convolution.

I believe it would be good to have an option to create (return) unnormalized kernels, with something like astropy.convolution.discretize_model, to allow more control where appropriate. I understand that using unnormalized kernels may also require adjusting the threshold for the convolved data accordingly, if significant background residual remains after background subtraction.

As an aside, an article (Convolving with Unnormalized Kernels) in the astropy.convolution documentation discusses:

"There are some tasks, such as source finding, where you want to apply a filter with a kernel that is not normalized. For data that are well-behaved (contain no missing or infinite values), this can be done in one step: convolve(image, kernel)"

The text makes sense, but the aforementioned "one step" contradicts the default implementation:
Interestingly, the reproducible example provided in that article calls:
to achieve reasonable results:
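The effect described above can be sketched with plain NumPy. The helpers below are toy stand-ins for make_2dgaussian_kernel and astropy.convolution.convolve (not their actual implementations): convolving a single-pixel point source with a volume-normalized Gaussian spreads the flux and lowers the peak, while the unnormalized kernel preserves the peak value.

```python
import numpy as np

def gaussian_kernel(sigma, size):
    # Discrete 2D Gaussian, NOT normalized (peak value is 1.0).
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))

def convolve2d(img, kernel):
    # Naive zero-padded "same"-size direct convolution (illustrative only).
    ks = kernel.shape[0]
    pad = ks // 2
    padded = np.pad(img, pad)
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + ks, j:j + ks] * kernel[::-1, ::-1])
    return out

img = np.zeros((11, 11))
img[5, 5] = 1.0                      # a single-pixel point source

k = gaussian_kernel(sigma=1.5, size=5)
peak_norm = convolve2d(img, k / k.sum())[5, 5]   # flux-preserving: peak drops
peak_raw = convolve2d(img, k)[5, 5]              # peak-preserving: stays 1.0
```

Here peak_raw equals the original peak of 1.0 while peak_norm is reduced by the kernel sum, which is why a threshold that worked on the raw data can fail on the normalized-convolved data for faint sources.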
Reproducible code