
Script to evaluate Celeste on a single frame #95

Closed
jeff-regier opened this issue Dec 6, 2015 · 14 comments

@jeff-regier
Owner

Through pre-processing, transform the SDSS dataset into "tasks", one per known astronomical object. The tasks are bundled into JLD files; each bulkan/brick gets its own JLD file. For each astronomical object, the corresponding task contains everything necessary for processing that object, including

  • the tiles/pixels relevant to it
  • the parameters for this astronomical object from an existing SDSS catalog (for initialization)
  • the catalog entries for all neighboring astronomical objects (treated as fixed, for now)
  • the point spread functions for the tiles
  • the gain and calibration constants for the tiles
  • local linearized world coordinates for each tile / subimage

For large galaxies, we probably want downsampled pixels rather than the original pixels.

Neighboring astronomical objects are those that may contribute photons to any of the included tiles.

In the future, we may want to include just identifiers for the neighboring astronomical objects, rather than the SDSS catalog entries for those neighbors. That would allow for an iterative procedure, where better estimates of an object's neighbors' parameters also improve estimates of the object's parameters.
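A minimal sketch (assuming JLD.jl, and with illustrative type and field names rather than Celeste's actual types) of what such a per-object task bundle could look like:

    using JLD

    # Illustrative container for everything needed to process one object.
    # The type and field names here are hypothetical, not Celeste's actual types.
    struct ObjectTask
        object_id::String                      # identifier from the existing SDSS catalog
        pixels::Vector{Matrix{Float64}}        # relevant tiles/pixels, one matrix per tile
        init_params::Dict{Symbol,Any}          # SDSS catalog parameters used for initialization
        neighbors::Vector{Dict{Symbol,Any}}    # catalog entries for neighboring objects (fixed for now)
        psfs::Vector{Matrix{Float64}}          # point spread function per tile
        gains::Vector{Float64}                 # gain per tile
        calibrations::Vector{Float64}          # calibration constants per tile
        wcs_offsets::Vector{Vector{Float64}}   # local linearized world coordinates per tile:
        wcs_jacobians::Vector{Matrix{Float64}} #   world ≈ offset + jacobian * (pix - pix0)
    end

    # One JLD file per brick, holding all of that brick's tasks.
    save_tasks(filename, tasks::Vector{ObjectTask}) = JLD.save(filename, "tasks", tasks)
    load_tasks(filename) = JLD.load(filename, "tasks")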

@jeff-regier jeff-regier changed the title Downsample image for very large galaxies Pre-processing for each astronomical object Dec 6, 2015
@rgiordan
Contributor

rgiordan commented Dec 7, 2015

You'll also want to include a local linearized world coordinate system.
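For concreteness, here is one hedged sketch of what a local linearization could mean: approximate the pixel-to-world mapping near an object as an affine map (a reference point plus a 2x2 Jacobian estimated by central differences). The pix_to_world argument below stands in for whatever WCS transform is available; this is not Celeste's actual implementation.

    # Approximate world ≈ world0 + J * (pix - pix0) near pix0, where J is the
    # 2x2 Jacobian of the pixel-to-world mapping, estimated by central differences.
    # `pix_to_world` is any function mapping a length-2 pixel coordinate to a
    # length-2 world coordinate (e.g. from the frame's WCS).
    function linearize_wcs(pix_to_world::Function, pix0::Vector{Float64}; h::Float64=1e-3)
        world0 = pix_to_world(pix0)
        J = Matrix{Float64}(undef, 2, 2)
        for j in 1:2
            dp = zeros(2)
            dp[j] = h
            J[:, j] = (pix_to_world(pix0 .+ dp) .- pix_to_world(pix0 .- dp)) ./ (2h)
        end
        return world0, J
    end

    # Usage: for pixels near pix0, world ≈ world0 .+ J * (pix .- pix0).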

I think we may need to treat large galaxies differently in more ways than subsampling. Take Andromeda, for example, and imagine hypothetically that we had enough computing power to actually fit the ELBO jointly for it and all the celestial objects in front of it. Our galaxy model is pretty primitive and would not capture most of the variation in light from Andromeda. The ELBO would then try to explain the deviations between our primitive model of Andromeda and the actual light, in part, with parameters from the objects in front of it.

In other words, by trying to fit the residuals from Andromeda, which will be highly structured, Celeste will bias the objects in front of it.

For truly large galaxies like this, we might be better off treating the large galaxy as background noise, which we would have to estimate ourselves, possibly iteratively. One way to think of this is that our "model" of Andromeda is just a rasterized pixel map, which we would then fit jointly with all the objects in front of it. Andromeda's catalog entry would then be derived from this pixel map, rather than fit jointly with everything else. I'd be interested to hear other ideas, though. How does the current catalog handle this problem?

@jeff-regier
Owner Author

I added linearized world coordinates to the list.

It's hard to predict exactly how large galaxies are going to give us trouble. How about we first try to optimize each object independently, with the data encoded as described in this issue, to get familiar with running at scale and to see where the model makes mistakes? @rcthomas, is encoding the data this way something you might be up for working on, along with me and Ryan?

@rcthomas
Collaborator

Oops, sorry I missed the mention. Yeah, I think I can help with this; let's talk about it at the meeting tomorrow.

@rgiordan
Contributor

I put the current version of relevant files in bin/staging.

@jeff-regier
Owner Author

Hi @kbarbary -- I've been thinking a bit more about this issue. You might start by consolidating a lot of the scripts in the bin directory into a single script that supports the following kind of sample usage:

./run_celeste.jl --stage preprocess --run 3900 --camcol 4 --field 829

Stages might include "preprocess" (generates a JLD file of Task objects), "infer" (runs OptimizeElbo for each Task and outputs a FITS catalog), and "score" (compares the output catalog to a catalog built from "coadd" images). Each stage reads the previous stage's output from disk and writes its own output to disk, in a file named after its stage, run, camcol, and field.

Operating on run-camcol-field triples, rather than bricks/bulkans, should get us started generating big catalogs more quickly.
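A minimal sketch of how such a driver might parse those flags and dispatch, assuming ArgParse.jl; the preprocess/infer/score functions below are placeholder stubs, not the actual Celeste entry points.

    #!/usr/bin/env julia
    using ArgParse

    # Hypothetical stage functions; real implementations would call into Celeste.
    preprocess(run, camcol, field) = println("preprocess $run-$camcol-$field")
    infer(run, camcol, field)      = println("infer $run-$camcol-$field")
    score(run, camcol, field)      = println("score $run-$camcol-$field")

    function parse_commandline()
        s = ArgParseSettings()
        @add_arg_table! s begin
            "--stage"
                help = "one of: preprocess, infer, score"
                required = true
            "--run"
                arg_type = Int
                required = true
            "--camcol"
                arg_type = Int
                required = true
            "--field"
                arg_type = Int
                required = true
        end
        return parse_args(s)
    end

    function main()
        args = parse_commandline()
        run, camcol, field = args["run"], args["camcol"], args["field"]
        stage = args["stage"]
        if stage == "preprocess"
            preprocess(run, camcol, field)   # writes preprocess-<run>-<camcol>-<field>.jld
        elseif stage == "infer"
            infer(run, camcol, field)        # reads the JLD file, writes a FITS catalog
        elseif stage == "score"
            score(run, camcol, field)        # compares against the coadd-based catalog
        else
            error("unknown stage: $stage")
        end
    end

    main()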

@rgiordan
Contributor

rgiordan commented Feb 4, 2016

To reiterate a point that Rollin made now that Kyle is in the room: running tractor at NERSC is tricky, so we should plan on something like download_fits_files.py having been run to get all the necessary files in one place before julia gets invoked.


@jeff-regier
Owner Author

The fits files are mostly (all?) already in the cosmo repository at NERSC; we shouldn't need to download anything.


@rcthomas
Collaborator

rcthomas commented Feb 4, 2016

I have mapped the files that you are getting via download_fits_files.py to where they are on the NERSC global file system. See e.g. from Cori:

/global/projecta/projectdirs/sdss/data/sdss/dr8/boss/photoObj/301/1000/1

This corresponds to looking here:

http://data.sdss3.org/sas/dr8/groups/boss/photoObj/301/1000/1/

Note also that data.sdss3.org is sdss3data.lbl.gov, so that is the host serving the data you are currently fetching with download_fits_files.py.
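As an illustration, a small sketch of that mapping for the photoObj directories, generalizing from the single example above (the layout beyond that one example is an assumption here):

    # Build the SAS URL and the corresponding NERSC path for a photoObj directory.
    # Generalized from the rerun=301, run=1000, camcol=1 example above; the layout
    # for other values is assumed, not verified.
    const SAS_BASE   = "http://data.sdss3.org/sas/dr8/groups/boss/photoObj"
    const NERSC_BASE = "/global/projecta/projectdirs/sdss/data/sdss/dr8/boss/photoObj"

    photoobj_url(rerun::Int, run::Int, camcol::Int)  = "$SAS_BASE/$rerun/$run/$camcol/"
    photoobj_path(rerun::Int, run::Int, camcol::Int) = "$NERSC_BASE/$rerun/$run/$camcol"

    # photoobj_path(301, 1000, 1) ==
    #     "/global/projecta/projectdirs/sdss/data/sdss/dr8/boss/photoObj/301/1000/1"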

@kbarbary
Collaborator

kbarbary commented Feb 4, 2016

Thanks for the additional pointers.

Off-topic: It's not the highest priority, but I started working on a Julia replacement for the tractor bits, based on the Python astroquery.sdss module. That module is BSD-licensed, so it is vastly preferable to porting tractor, which is GPL (and therefore incompatible with the Celeste and SloanDigitalSkySurvey licenses). It's not much code, so it should be pretty quick.

@rcthomas
Collaborator

rcthomas commented Feb 4, 2016

Ryan's point is that tractor doesn't have to be a "dependency" at all. You generate the inputs once as CSV and you are done, which simplifies the problem a lot. There is a setup that works on Cori under hpcports; you run the setup once and store the result as file data.

@jeff-regier
Owner Author

Were we using tractor for something other than downloading fits files? Because that's a pretty heavyweight way to go, to put it mildly, for downloading fits files; you can just fetch those files with wget from sdss3.org, right? Or was tractor doing something else?
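For what it's worth, a hedged sketch of fetching one such file directly from the SAS in Julia; the photoObj filename pattern used below is an assumption, so adjust it to the actual layout:

    using Downloads
    using Printf

    # Fetch a single photoObj file from the SAS. The zero-padded filename
    # convention (photoObj-<run>-<camcol>-<field>.fits) is assumed here.
    function fetch_photoobj(rerun::Int, run::Int, camcol::Int, field::Int, destdir::String=".")
        fname = @sprintf("photoObj-%06d-%d-%04d.fits", run, camcol, field)
        url = "http://data.sdss3.org/sas/dr8/groups/boss/photoObj/$rerun/$run/$camcol/$fname"
        return Downloads.download(url, joinpath(destdir, fname))
    end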


@rcthomas
Collaborator

rcthomas commented Feb 4, 2016

It's a bit more than wget but it's a limited subset of functionality.

@jeff-regier jeff-regier changed the title Pre-processing for each astronomical object Script to evaluate Celeste on a single frame Feb 8, 2016
@jeff-regier
Owner Author

This issue is getting unwieldy, so I broke it up into multiple issues (#114, #115, #116, and #117).

@kbarbary
Collaborator

Thanks, those smaller, more detailed issues are helpful. Now that the WCS dust has settled, I can move on to working on them.
