Commit
Merge pull request #10 from lsst/tickets/DM-21531
tickets/DM-21531
fred3m committed Oct 10, 2019
2 parents 15dae0f + ac25809 commit 169b417
Showing 32 changed files with 1,330 additions and 719 deletions.
3 changes: 0 additions & 3 deletions .travis.yml
Original file line number Diff line number Diff line change
@@ -11,9 +11,6 @@ before_install:
- ./miniconda.sh -b -p $HOME/miniconda
- export PATH=/$HOME/miniconda/bin:$PATH
- conda update --yes conda
# The next couple lines fix a crash with multiprocessing on Travis and are not specific to using Miniconda
- sudo rm -rf /dev/shm
- sudo ln -s /run/shm /dev/shm
# Install packages
install:
# Useful for debugging any issues with conda
2 changes: 1 addition & 1 deletion README.md
@@ -7,7 +7,7 @@

This package performs source separation (aka "deblending") on multi-band images. It's geared towards optical astronomy, where scenes are composed of stars and galaxies, but it is straightforward to apply it to other imaging data.

**For the full documentation see [the docs](https://fred3m.github.io/scarlet/).**
**For the full documentation see [the docs](https://pmelchior.github.io/scarlet/).**

Separation is achieved through a constrained matrix factorization, which models each source with a Spectral Energy Distribution (SED) and a non-parametric morphology, or multiple such components per source. In astronomy jargon, the code performs forced photometry (with PSF matching if needed) using an optimal weight function given by the signal-to-noise weighted morphology across bands. The approach works well if the sources in the scene have different colors and can be further strengthened by imposing various additional constraints/priors on each source.

Binary file modified data/test_resampling/Cut_HSC.fits
Binary file not shown.
Binary file modified data/test_resampling/Cut_HSC1.fits
Binary file not shown.
Binary file modified data/test_resampling/PSF_HSC.fits
Binary file not shown.
47 changes: 46 additions & 1 deletion docs/changes.rst
@@ -1,3 +1,46 @@
0.6 (in development)
--------------------

General
^^^^^^^
None yet

New Features
^^^^^^^^^^^^
- A `Fourier` class is introduced to perform PSF convolutions.
  It streamlines the bookkeeping involved in calculating FFTs to different shapes and should be
  relatively transparent to the user, apart from the changes listed below in API Changes.

- `psf.generate_psf_image` now returns a properly integrated image.
  Previous versions of scarlet generated a sampled image: the specified PSF function was evaluated
  at the pixel-center locations. The new behavior uses the 2D trapezoid rule to integrate over a
  subsampled version of each pixel and returns an integrated PSF, similar to one that would be
  generated from a CCD image. This also changes the meaning of the morphology and model, which are
  now true images rather than sampled versions of the image.

- `ExtendedSource` initialization is improved by deconvolving the initial morphology used in
  previous versions of scarlet to better match the model PSF. The previous morphology was
  initialized with the observation PSFs, making it too wide in each band and taking extra time to
  converge. The new initialization gives better convergence in fewer steps.
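
The trapezoid-rule integration described above can be sketched as follows. This is a minimal
illustration under stated assumptions, not scarlet's implementation: `gaussian2d` stands in for
`scarlet.psf.gaussian`, and `oversample=10` is an assumed subsampling factor.

```python
import numpy as np

def gaussian2d(y, x, y0, x0, sigma=0.9):
    # Circular Gaussian; a stand-in for scarlet.psf.gaussian
    return np.exp(-((y - y0) ** 2 + (x - x0) ** 2) / (2 * sigma ** 2))

def integrated_psf(func, shape, oversample=10):
    """Integrate `func` over each pixel with the 2D trapezoid rule."""
    ny, nx = shape
    y0, x0 = (ny - 1) / 2, (nx - 1) / 2
    # Subsampled offsets spanning one pixel, from -0.5 to +0.5
    sub = np.linspace(-0.5, 0.5, oversample + 1)
    # Trapezoid weights: h * [1/2, 1, ..., 1, 1/2] with h = 1/oversample
    w = np.full(oversample + 1, 1.0 / oversample)
    w[[0, -1]] *= 0.5
    w2d = np.outer(w, w)
    psf = np.empty(shape)
    for i in range(ny):
        for j in range(nx):
            yy, xx = np.meshgrid(i + sub, j + sub, indexing="ij")
            psf[i, j] = (func(yy, xx, y0, x0) * w2d).sum()
    return psf / psf.sum()

psf = integrated_psf(gaussian2d, (41, 41))
```

Unlike sampling at pixel centers, this averages the function over each pixel's area, which matters
most for PSFs that are narrow relative to the pixel scale.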

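The bookkeeping that the `Fourier` class handles can be sketched roughly as follows; this is an
assumed, simplified design (a cache of zero-padded FFTs keyed by target shape), not the actual
scarlet class:

```python
import numpy as np

class Fourier:
    """Sketch of a wrapper caching FFTs of an image at different shapes.

    Illustrative only; the real scarlet `Fourier` class differs in detail.
    """
    def __init__(self, image):
        self._image = image
        self._fft = {}  # cache keyed by padded shape

    @property
    def image(self):
        # Real-space image, always available without transforming back
        return self._image

    def fft(self, shape):
        # Zero-pad to the requested shape and cache the transform,
        # so repeated convolutions at that shape reuse it
        key = tuple(shape)
        if key not in self._fft:
            pad = [(0, s - n) for s, n in zip(shape, self._image.shape)]
            self._fft[key] = np.fft.fftn(np.pad(self._image, pad))
        return self._fft[key]

f = Fourier(np.ones((4, 4)))
```
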
API Changes
^^^^^^^^^^^
- `Frame.psfs` (or `Observation.Frame.psfs`) is now a `Fourier` object, and `Fourier.image`
  must be used to access the PSF image cube.

- `psf.moffat`, `psf.gaussian`, and `psf.double_gaussian` now accept separate `y` and `x`
  coordinate arrays instead of the 2D coordinate matrices `coords=Y, X`.

- `generate_psf_image` now accepts an additional `normalization` parameter to optionally
  normalize the resulting image.

- `ExtendedSource` now accepts a `sn_weighted_psf` parameter to choose between the PSF with the
  best signal-to-noise ratio and the narrowest PSF. This parameter will likely be removed in the
  future if one of the two methods proves to work better in all or most circumstances.
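
The new `y`/`x` calling convention can be sketched as follows. The function body is illustrative
(a plain circular Gaussian with assumed defaults), not scarlet's exact implementation:

```python
import numpy as np

def gaussian(y, x, y0, x0, amplitude=1.0, sigma=0.9):
    # `y` and `x` are separate coordinate arrays; broadcasting builds the
    # 2D image that previously required precomputed meshgrids `coords=Y, X`
    return amplitude * np.exp(
        -((y[:, None] - y0) ** 2 + (x[None, :] - x0) ** 2) / (2 * sigma ** 2)
    )

img = gaussian(np.arange(41.0), np.arange(41.0), 20, 20)
```

Passing 1D arrays and letting broadcasting expand them avoids allocating the full 2D coordinate
matrices up front.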


0.5 (2019-06-26)
----------------
@@ -8,7 +51,6 @@ General
- Completely restructured code, including using `autograd` package to calculate gradients.
- PSF convolutions are now performed on the model of the entire blend as opposed to
individually for each source.
- It is possible to perform deblending on images with different orientations and resolutions.
- `Blend` no longer fits for source positions. Instead it is up to the user to implement a
  centering algorithm, such as centroiding, or to use `scarlet.update.fix_pixel_center`.
- Updates to all of the docs and tutorials to match the new API.
@@ -34,6 +76,7 @@ New Features
resolutions.
- `Frame` issues warnings when PSF is not specified or not normalized.
- `Frame.channels` is used to identify channels in multiple observations.
- PSF convolutions and FFTs of image cubes are performed using n-dimensional FFTs along selected
  axes for better performance.
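
The n-dimensional FFT idea in the last item can be sketched with NumPy: transforms are taken only
along the spatial axes of a `(bands, ny, nx)` cube, so every band is handled in a single call. The
delta-function kernel here just makes the result easy to check; this is an illustration of the
technique, not scarlet's code.

```python
import numpy as np

rng = np.random.default_rng(0)
cube = rng.normal(size=(3, 32, 32))   # (bands, ny, nx) image cube
kernel = np.zeros((3, 32, 32))
kernel[:, 0, 0] = 1.0                 # delta-function "PSF" in each band

# Transform only the spatial axes; the band axis is batched automatically
Cube = np.fft.fftn(cube, axes=(-2, -1))
Kernel = np.fft.fftn(kernel, axes=(-2, -1))
convolved = np.fft.ifftn(Cube * Kernel, axes=(-2, -1)).real
```

Convolving with a delta function returns the original cube, which confirms the batched transform
behaves like a per-band 2D FFT.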

API Changes
^^^^^^^^^^^
@@ -57,6 +100,8 @@ API Changes
- Sources and components are no longer centered in a small patch that is reprojected
into the model frame. Instead components can exist anywhere on an image and constraints that
require a center, such as symmetry and monotonicity, can use the new `uncentered_operator` method.
- `make_operator` is now a method of the `LowResObservation` class, as it should be.


0.45 (2019-3-27)
----------------
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -44,7 +44,7 @@
'numpydoc',
]

nbsphinx_timeout = 180
nbsphinx_timeout = 1200

class_members_toctree = False

4 changes: 1 addition & 3 deletions docs/operator.ipynb
@@ -11,9 +11,7 @@
},
{
"cell_type": "raw",
"metadata": {
"raw_mimetype": "text/restructuredtext"
},
"metadata": {},
"source": [
"Reference/API\n",
"-------------\n",
66 changes: 30 additions & 36 deletions docs/quickstart.ipynb
@@ -72,7 +72,7 @@
"source": [
"from astropy.visualization.lupton_rgb import AsinhMapping, LinearMapping\n",
"\n",
"stretch = 1\n",
"stretch = 0.1\n",
"Q = 10\n",
"norm = AsinhMapping(minimum=0, stretch=stretch, Q=Q)\n",
"img_rgb = scarlet.display.img_to_rgb(images, norm=norm)\n",
@@ -102,14 +102,7 @@
"metadata": {},
"outputs": [],
"source": [
"X = np.arange(psfs.shape[2])\n",
"Y = np.arange(psfs.shape[1])\n",
"X, Y = np.meshgrid(X, Y)\n",
"coords = np.stack([Y, X])\n",
"y0, x0 = (psfs.shape[1]-1) // 2, (psfs.shape[2]-1) // 2\n",
"model_psf = scarlet.psf.gaussian(coords, y0, x0, 1, .9)\n",
"model_psf /= model_psf.sum()\n",
"model_psf = model_psf[None,:,:]"
"model_psf = scarlet.psf.generate_psf_image(scarlet.psf.gaussian, psfs.shape[1:])[None]"
]
},
{
@@ -189,8 +182,9 @@
"outputs": [],
"source": [
"blend = scarlet.Blend(sources, observation)\n",
"blend.fit(200, e_rel=1e-3)\n",
"print(\"scarlet ran for {0} iterations\".format(blend.it))"
"%time blend.fit(200, e_rel=1e-3)\n",
"print(\"scarlet ran for {0} iterations to MSE = {1}\".format(len(blend.mse), blend.mse[-1]))\n",
"plt.semilogy(blend.mse)"
]
},
{
@@ -205,7 +199,7 @@
"metadata": {},
"source": [
"### View the full model\n",
"First we load the model for the entire blend and its residual. Then we display the model using the same $sinh^{-1}$ stretch as the full image and a linear stretch for the residual."
"First we load the model for the entire blend, render it in the observation frame, and compute its residuals. We then show each using the same $\\sinh^{-1}$ stretch as the full image and a linear stretch for the residual."
]
},
{
@@ -234,8 +228,8 @@
"\n",
"for k,component in enumerate(blend.components):\n",
" y,x = component.pixel_center\n",
" ax[0].text(x, y, k, color=\"b\")\n",
" ax[1].text(x, y, k, color=\"b\")\n",
" ax[0].text(x, y, k, color=\"w\")\n",
" ax[1].text(x, y, k, color=\"w\")\n",
"plt.show()"
]
},
@@ -244,7 +238,7 @@
"metadata": {},
"source": [
"### View the source models\n",
"It can also be useful to view the model for each source. For each source we extract the portion of the image contained in the sources bounding box, the true simulated source flux, and the model of the source, scaled so that all of the images have roughly the same pixel scale."
"It can also be useful to view the model for each source, in its original frame and in its observed frame. In this example, the two frames differ by an extra convolution from the minimal `model_psf` constructed above to the observed PSFs."
]
},
{
@@ -255,21 +249,6 @@
},
"outputs": [],
"source": [
"def get_true_image(m, catalog, filters):\n",
" \"\"\"Create the true multiband image for a source\n",
" \"\"\"\n",
" img = np.array([np.sum(catalog[catalog[\"index\"]==m][\"intensity_\"+f], axis=0) for f in filters])\n",
" return img\n",
"\n",
"# We can only show the true values if the input catalog has the true intensity data for the sources\n",
"# in other words, if you used SEP to build your catalog you do not have the true data.\n",
"if \"intensity_\"+filters[0] in catalog.dtype.names:\n",
" has_truth = True\n",
" axes = 3\n",
"else:\n",
" has_truth = False\n",
" axes = 2\n",
"\n",
"# Set the stretch based on the model\n",
"stretch = .3\n",
"Q = 10\n",
Expand All @@ -279,24 +258,39 @@
" # Get the model for a single source\n",
" model = src.get_model()\n",
" model_ = observation.render(model)\n",
" model_rgb = scarlet.display.img_to_rgb(model_, norm=norm)\n",
" # Get the patch from the original image\n",
" \n",
" # Convert observation and models to RGB\n",
" img_rgb = scarlet.display.img_to_rgb(images, norm=norm)\n",
" model_rgb = scarlet.display.img_to_rgb(model, norm=norm)\n",
" model_rgb_ = scarlet.display.img_to_rgb(model_, norm=norm)\n",
"\n",
" # Set the figure size\n",
" ratio = src.shape[2]/src.shape[1]\n",
" fig_height = 3*src.shape[1]/20\n",
" fig_width = max(2*fig_height*ratio,2)\n",
" fig = plt.figure(figsize=(fig_width, fig_height))\n",
" \n",
" # Generate and show the figure\n",
" ax = [fig.add_subplot(1,2,n+1) for n in range(2)]\n",
" ax = [fig.add_subplot(1,3,n+1) for n in range(3)]\n",
" ax[0].imshow(img_rgb)\n",
" ax[0].set_title(\"Data\")\n",
" ax[1].imshow(model_rgb)\n",
" ax[1].set_title(\"model {0}\".format(k))\n",
" ax[1].imshow(model_rgb_)\n",
" ax[1].set_title(\"Observed model {0}\".format(k))\n",
" ax[2].imshow(model_rgb)\n",
" ax[2].set_title(\"Model {0}\".format(k))\n",
" # Mark the source in the data image\n",
" ax[0].plot(src.pixel_center[1], src.pixel_center[0], \"rx\", mew=2, ms=10)\n",
" ax[1].plot(src.pixel_center[1], src.pixel_center[0], \"rx\", mew=2, ms=10)\n",
" ax[2].plot(src.pixel_center[1], src.pixel_center[0], \"rx\", mew=2, ms=10)\n",
" plt.show()"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -316,7 +310,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
"version": "3.7.2"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion docs/resampling.ipynb
@@ -39,7 +39,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.2"
"version": "3.7.1"
}
},
"nbformat": 4,
