The point spread function (PSF) and background intensity vary significantly among images. The BLISS encoder, however, is currently trained on the PSF and background of a single image. We don't want to retrain BLISS for every new image we process: that would be slow and would negate the advantage of an amortized approach to Bayesian inference. Instead, we'd like to change the encoder architecture so that it takes not just an image as input, but also, as "side information", a background and a PSF. We'd then train this encoder with simulated data generated with a variety of backgrounds and PSFs.
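As a rough illustration of the proposed change, here is a minimal sketch, assuming a PyTorch encoder, of a forward pass that consumes the background as side information by stacking it with the image channels. The class name and layer sizes are hypothetical, not BLISS's actual modules:

```python
import torch
from torch import nn

class SideInfoEncoder(nn.Module):
    """Hypothetical encoder that conditions on the background as extra channels."""

    def __init__(self, n_bands: int, hidden: int = 64):
        super().__init__()
        # one channel per band for the image, plus one per band for the background
        self.conv = nn.Sequential(
            nn.Conv2d(2 * n_bands, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(hidden, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, image: torch.Tensor, background: torch.Tensor) -> torch.Tensor:
        # image, background: (batch, n_bands, H, W)
        x = torch.cat([image, background], dim=1)
        return self.conv(x)
```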
Steps:
- In the decoder, use random backgrounds sourced from many real images, not just one image.
- In the decoder, use random PSFs sourced from many real images, not just one image. (A sampling sketch appears after this list.)
- Verify that the encoder now works with arbitrary backgrounds, and characterize how much performance, if any, we lose by using an encoder that works for any background rather than one specialized to a particular background.
- Train the encoder with images generated with random PSFs, without explicitly providing the encoder with the correct PSF. Benchmark this "unaware" encoder against encoders trained solely with the correct PSF ("specialized" encoders) on several fields.
- Benchmark a "PSF-aware" encoder that concatenates the five SDSS PSF parameters to the input as extra channels (see the sketch after this list).
- Benchmark a "PSF-aware" encoder that includes a deconvolved image in the encoder input (see the deconvolution sketch after this list).
- Benchmark a "PSF-aware" encoder that leverages a low-dimensional representation of the PSF (see the embedding sketch after this list).
- Benchmark combinations of the techniques above to find the best-performing encoder. Ideally, amortizing across PSFs would cost no more than 2% in detection performance relative to a "specialized" encoder.
- Create a pytest test case showing that variable PSFs work: using "cloudy vs. clear" nights in ground-based mock images, demonstrate that varying the PSF affects the apparent galaxy size (checking flux first). A sketch of such a test appears after this list.
- Create a Jupyter notebook containing the results we'd need for a publication on "Amortized Bayesian Inference for Ground-based Astronomical Images".
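For the first two steps, a minimal sketch of the decoder-side sampling, assuming background and PSF stamps harvested from many real images are stored in NumPy arrays (the file names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical files holding stamps harvested from many real images:
# backgrounds: (n_images, H, W); psfs: (n_psfs, k, k)
backgrounds = np.load("real_backgrounds.npy")
psfs = np.load("real_psfs.npy")

def sample_side_info():
    """Draw one random background and one random PSF for the simulator."""
    bg = backgrounds[rng.integers(len(backgrounds))]
    psf = psfs[rng.integers(len(psfs))]
    return bg, psf / psf.sum()  # normalize the PSF to unit flux
```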
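For the PSF-parameter variant, one plausible realization (an assumption; the actual wiring may differ) is to tile each of the five SDSS PSF parameters over the image plane and concatenate them as constant channels:

```python
import torch

def concat_psf_params(image: torch.Tensor, psf_params: torch.Tensor) -> torch.Tensor:
    """image: (batch, n_bands, H, W); psf_params: (batch, 5) SDSS PSF parameters.

    Returns (batch, n_bands + 5, H, W), with each parameter tiled as a
    constant-valued channel so the convolutional encoder can condition on it.
    """
    b, _, h, w = image.shape
    param_channels = psf_params.view(b, 5, 1, 1).expand(b, 5, h, w)
    return torch.cat([image, param_channels], dim=1)
```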
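For the deconvolution variant, scikit-image's Richardson-Lucy routine is one off-the-shelf option; the channel layout below is an assumption:

```python
import numpy as np
from skimage.restoration import richardson_lucy

def add_deconvolved_channel(image: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """image: (H, W) single-band image; psf: (k, k) kernel summing to 1.

    Returns (2, H, W): the raw image stacked with its deconvolution.
    """
    scale = image.max()  # richardson_lucy clips to [-1, 1], so rescale first
    deconv = richardson_lucy(image / scale, psf, num_iter=30) * scale
    return np.stack([image, deconv])
```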
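For the low-dimensional representation, one option (an assumption, not a settled design) is to fit PCA to a library of real PSF stamps and feed the leading coefficients to the encoder as a conditioning vector:

```python
import numpy as np
from sklearn.decomposition import PCA

# psfs: (n_psfs, k, k) stamps drawn from many real images (hypothetical file)
psfs = np.load("real_psfs.npy")
pca = PCA(n_components=8).fit(psfs.reshape(len(psfs), -1))

def embed_psf(psf: np.ndarray) -> np.ndarray:
    """Map one (k, k) PSF stamp to an 8-dimensional embedding."""
    return pca.transform(psf.reshape(1, -1))[0]
```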
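And a sketch of the pytest case: render the same mock galaxy under a "clear" (narrow) and a "cloudy" (broad) Gaussian PSF, then assert that total flux is conserved (the flux first check) while the apparent size grows. The Gaussian-PSF stand-in and tolerances are assumptions:

```python
import numpy as np
import pytest
from scipy.ndimage import gaussian_filter

def apparent_size(img: np.ndarray) -> float:
    """Flux-weighted RMS radius about the image center."""
    h, w = img.shape
    y, x = np.mgrid[:h, :w]
    r2 = (y - h // 2) ** 2 + (x - w // 2) ** 2
    return float(np.sqrt((img * r2).sum() / img.sum()))

def test_cloudy_psf_broadens_galaxy():
    # a compact mock "galaxy": a narrow Gaussian blob
    galaxy = np.zeros((64, 64))
    galaxy[32, 32] = 1000.0
    galaxy = gaussian_filter(galaxy, sigma=1.5)

    clear = gaussian_filter(galaxy, sigma=1.0)   # good seeing
    cloudy = gaussian_filter(galaxy, sigma=3.0)  # poor seeing

    # flux first check: convolution with a unit-sum PSF conserves total flux
    assert clear.sum() == pytest.approx(cloudy.sum(), rel=1e-6)
    # the broader PSF must inflate the apparent galaxy size
    assert apparent_size(cloudy) > apparent_size(clear)
```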