New Observation structure #220
Comments
In other words, the possible optimization parameters of an `Observation` are contained in the renderer. This can include e.g. #212 if we don't trust the photometric calibration.
I like this idea and think that it's a good path forward, but I'm not crazy about the name. If you use PyCharm (or a similar IDE), it will greatly simplify the refactoring process: it can find all instances of the refactored classes/functions and change the code so that it still works exactly the same, with a lot less debugging.
Good tip re PyCharm.
I guess the renderer approach assumes that the content of `match` is moved to the frame, which then makes the decision of how to generate difference kernels; that's OK. I like making observations frames; I think we used to have that, and it makes sense.
I also agree.
There are some shortcomings in the current treatment of multi-`Observation` modeling. A primary example is that a user has to decide what kind of observation they are specifying (e.g. `Observation` vs `LowResObservation`), and we need code to perform such a conversion if needed. This is silly. I propose a few changes, which simplify the interface and make development more modular, which we'll need for adding different types of observations.
1. A `Renderer`, which will automatically be instantiated by calling `Observation.match(model_frame)`. The advantage is that we can replace the renderer without changing the type of the observation (so, no more `LowRes...`). The renderers are responsible for the mapping between model pixels and observed pixels. This approach will work for any data that is essentially a bunch of images with Gaussian noise on them (including 1D or 2D spectra, grisms). They all use the same log-likelihood code, etc. So, when do we need a different type of `Observation`? E.g. for photon counters (X-ray, gamma-ray) or weirder things (MKIDs, compressed sensing, [insert other sci-fi here]).
2. `Observation` derives from `Frame`. So, an observation is a frame with a data payload and a log-likelihood function. This allows for shorter writing in the many places in the code where we currently write e.g. `obs.frame.wcs`.
3. `Observation` payloads are called `data` instead of `images` (because it could be a spectrum), and singular `psf` instead of `psfs`, etc. (because all of them are data cubes). We currently have an inconsistency that `Observation` is called with `psfs`, which sets `Frame.psf` (an instance of `PSF`).

Let me know what you think.
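
A minimal sketch of how the three proposals could fit together. This is illustrative only, not scarlet's actual API: the class and attribute names (`Frame`, `Renderer`, `Observation`, `match`, `log_likelihood`) follow the proposal above, and the identity renderer stands in for real difference-kernel convolution and resampling.

```python
# Hypothetical sketch of the proposed structure; names are illustrative,
# not scarlet's actual API.
import numpy as np


class Frame:
    """Characterizes a data space: shape, WCS, and a single PSF."""
    def __init__(self, shape, wcs=None, psf=None):
        self.shape = shape
        self.wcs = wcs
        self.psf = psf  # singular `psf`, consistent with Frame.psf


class Renderer:
    """Maps model pixels to observed pixels.

    A real renderer would convolve with a PSF difference kernel and/or
    resample to the observed grid; swapping renderers replaces the need
    for subclasses like LowResObservation.
    """
    def __init__(self, model_frame, observation):
        self.model_frame = model_frame
        self.observation = observation

    def __call__(self, model):
        return model  # identity mapping as a placeholder


class Observation(Frame):
    """A Frame with a data payload and a log-likelihood function."""
    def __init__(self, data, weights=None, wcs=None, psf=None):
        super().__init__(data.shape, wcs=wcs, psf=psf)
        self.data = data  # `data`, not `images`: could be a spectrum
        self.weights = weights if weights is not None else np.ones_like(data)
        self.renderer = None

    def match(self, model_frame):
        # Automatically instantiate the renderer for this observation.
        self.renderer = Renderer(model_frame, self)
        return self

    def log_likelihood(self, model):
        # Gaussian log likelihood, shared by all image-like data
        # (images, 1D/2D spectra, grisms).
        residual = self.data - self.renderer(model)
        return -0.5 * np.sum(self.weights * residual**2)


# Usage: match the observation to the model frame once, then evaluate.
model_frame = Frame(shape=(1, 8, 8))
obs = Observation(data=np.zeros((1, 8, 8))).match(model_frame)
obs.wcs  # attribute lives on the observation itself: no more obs.frame.wcs
```

Because `Observation` is itself a `Frame`, call sites shorten from `obs.frame.wcs` to `obs.wcs`, and only genuinely different likelihoods (e.g. Poisson for photon counters) would require a new `Observation` subclass.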