Merge pull request #639 from larrybradley/np114_style

Fix doctests for numpy 1.14 style changes

larrybradley committed Feb 1, 2018
2 parents 572ef6f + 3806ff6 commit 5a06059
Showing 11 changed files with 304 additions and 212 deletions.
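
The motivation for every hunk below: numpy 1.14 changed the default repr of floats and float arrays to the shortest string that round-trips, which broke doctests that had hard-coded the older fixed-precision output. A minimal illustration of the change (the ``legacy`` print option is numpy's own API; the exact output shown assumes numpy >= 1.14):

>>> import numpy as np
>>> np.array([0.1])          # numpy >= 1.14: shortest repr that round-trips
array([0.1])
>>> np.set_printoptions(legacy='1.13')
>>> np.array([0.1])          # pre-1.14 style: fixed precision, sign-padding space
array([ 0.1])

The hunks below take two complementary approaches to making the doctests version-independent: pin an explicit column format (``'%.8g'``) so the printed tables look the same everywhere, or print full precision and compare numerically via the ``FLOAT_CMP`` doctest directive.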
84 changes: 49 additions & 35 deletions docs/aperture.rst
@@ -122,12 +122,13 @@ with the data and the apertures::
>>> from photutils import aperture_photometry
>>> data = np.ones((100, 100))
>>> phot_table = aperture_photometry(data, apertures)
>>> print(phot_table) # doctest: +SKIP
id xcenter ycenter aperture_sum
>>> phot_table['aperture_sum'].info.format = '%.8g' # for consistent table output
>>> print(phot_table)
id xcenter ycenter aperture_sum
pix pix
--- ------- ------- -------------
1 30.0 30.0 28.2743338823
2 40.0 40.0 28.2743338823
--- ------- ------- ------------
1 30.0 30.0 28.274334
2 40.0 40.0 28.274334

This function returns the results of the photometry in an Astropy
`~astropy.table.QTable`. In this example, the table has four columns,
@@ -159,7 +160,7 @@ by a factor of 5 (``subpixels=5``) in each dimension::

>>> phot_table = aperture_photometry(data, apertures, method='subpixel',
... subpixels=5)
>>> print(phot_table) # doctest: +SKIP
>>> print(phot_table)
id xcenter ycenter aperture_sum
pix pix
--- ------- ------- ------------
@@ -192,12 +192,14 @@ Suppose that we wish to use three circular apertures, with radii of 3,
>>> radii = [3., 4., 5.]
>>> apertures = [CircularAperture(positions, r=r) for r in radii]
>>> phot_table = aperture_photometry(data, apertures)
>>> print(phot_table) # doctest: +SKIP
>>> for col in phot_table.colnames:
... phot_table[col].info.format = '%.8g' # for consistent table output
>>> print(phot_table)
id xcenter ycenter aperture_sum_0 aperture_sum_1 aperture_sum_2
pix pix
--- ------- ------- -------------- -------------- --------------
1 30.0 30.0 28.2743338823 50.2654824574 78.5398163397
2 40.0 40.0 28.2743338823 50.2654824574 78.5398163397
1 30 30 28.274334 50.265482 78.539816
2 40 40 28.274334 50.265482 78.539816

For multiple apertures, the output table column names are appended
with the ``positions`` index.
@@ -212,12 +215,14 @@ specify ``a``, ``b``, and ``theta``::
>>> theta = np.pi / 4.
>>> apertures = EllipticalAperture(positions, a, b, theta)
>>> phot_table = aperture_photometry(data, apertures)
>>> print(phot_table) # doctest: +SKIP
id xcenter ycenter aperture_sum
>>> for col in phot_table.colnames:
... phot_table[col].info.format = '%.8g' # for consistent table output
>>> print(phot_table)
id xcenter ycenter aperture_sum
pix pix
--- ------- ------- -------------
1 30.0 30.0 47.1238898038
2 40.0 40.0 47.1238898038
--- ------- ------- ------------
1 30 30 47.12389
2 40 40 47.12389

Again, for multiple apertures one should input a list of aperture
objects, each with identical positions::
@@ -228,12 +233,14 @@ objects, each with identical positions::
>>> apertures = [EllipticalAperture(positions, a=ai, b=bi, theta=theta)
... for (ai, bi) in zip(a, b)]
>>> phot_table = aperture_photometry(data, apertures)
>>> print(phot_table) # doctest: +SKIP
>>> for col in phot_table.colnames:
... phot_table[col].info.format = '%.8g' # for consistent table output
>>> print(phot_table)
id xcenter ycenter aperture_sum_0 aperture_sum_1 aperture_sum_2
pix pix
--- ------- ------- -------------- -------------- --------------
1 30.0 30.0 47.1238898038 75.3982236862 109.955742876
2 40.0 40.0 47.1238898038 75.3982236862 109.955742876
1 30 30 47.12389 75.398224 109.95574
2 40 40 47.12389 75.398224 109.95574


Background Subtraction
@@ -267,12 +274,14 @@ We then perform the photometry in both apertures::

>>> apers = [apertures, annulus_apertures]
>>> phot_table = aperture_photometry(data, apers)
>>> print(phot_table) # doctest: +SKIP
>>> for col in phot_table.colnames:
... phot_table[col].info.format = '%.8g' # for consistent table output
>>> print(phot_table)
id xcenter ycenter aperture_sum_0 aperture_sum_1
pix pix
--- ------- ------- -------------- --------------
1 30.0 30.0 28.2743338823 87.9645943005
2 40.0 40.0 28.2743338823 87.9645943005
1 30 30 28.274334 87.964594
2 40 40 28.274334 87.964594

Note that we cannot simply subtract the aperture sums because the
apertures have different areas.
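
For the record, the arithmetic behind the next hunk, worked with the numbers printed above (the r=3 circle and an r_in=6, r_out=8 annulus are assumptions, but they are consistent with the printed sums 28.274334 and 87.964594):

>>> import numpy as np
>>> annulus_area = np.pi * (8.**2 - 6.**2)   # 87.964594..., matches aperture_sum_1
>>> circle_area = np.pi * 3.**2              # 28.274334..., matches aperture_sum_0
>>> bkg_mean = 87.964594 / annulus_area      # ~1.0, as expected for unit data
>>> round(28.274334 - bkg_mean * circle_area, 6)   # residual ~0, matching below
0.0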
@@ -289,11 +298,12 @@ background times the circular aperture area::
>>> bkg_sum = bkg_mean * apertures.area()
>>> final_sum = phot_table['aperture_sum_0'] - bkg_sum
>>> phot_table['residual_aperture_sum'] = final_sum
>>> print(phot_table['residual_aperture_sum']) # doctest: +FLOAT_CMP
>>> phot_table['residual_aperture_sum'].info.format = '%.8g' # for consistent table output
>>> print(phot_table['residual_aperture_sum'])
residual_aperture_sum
---------------------
-7.1054273576e-15
-7.1054273576e-15
-7.1054274e-15
-7.1054274e-15

The result here should be zero because all of the data values are 1.0
(the tiny difference from 0.0 is due to numerical precision).
@@ -316,12 +326,14 @@ pixel's value and saved it in the array ``error``::

>>> error = 0.1 * data
>>> phot_table = aperture_photometry(data, apertures, error=error)
>>> print(phot_table) # doctest: +SKIP
id xcenter ycenter aperture_sum aperture_sum_err
>>> for col in phot_table.colnames:
... phot_table[col].info.format = '%.8g' # for consistent table output
>>> print(phot_table)
id xcenter ycenter aperture_sum aperture_sum_err
pix pix
--- ------- ------- ------------- ----------------
1 30.0 30.0 28.2743338823 0.531736155272
2 40.0 40.0 28.2743338823 0.531736155272
--- ------- ------- ------------ ----------------
1 30 30 28.274334 0.53173616
2 40 40 28.274334 0.53173616

``'aperture_sum_err'`` values are given by:
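
The formula itself is collapsed out of this diff; it is the usual error propagation for a sum, Δ = sqrt(Σᵢ σᵢ²) over the pixels in the aperture. A quick numerical check against the table above, assuming the σ = 0.1 error array from this hunk:

>>> import numpy as np
>>> aperture_area = np.pi * 3.**2            # exact area of the r=3 aperture
>>> round(float(np.sqrt(0.1**2 * aperture_area)), 8)   # matches aperture_sum_err
0.53173616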

@@ -363,18 +375,20 @@ photometry by providing an image mask via the ``mask`` keyword::
>>> data[2, 2] = 100. # bad pixel
>>> mask[2, 2] = True
>>> t1 = aperture_photometry(data, aperture, mask=mask)
>>> print(t1['aperture_sum']) # doctest: +FLOAT_CMP
aperture_sum
-------------
11.5663706144
>>> t1['aperture_sum'].info.format = '%.8g' # for consistent table output
>>> print(t1['aperture_sum'])
aperture_sum
------------
11.566371

The result is very different if a ``mask`` image is not provided::

>>> t2 = aperture_photometry(data, aperture)
>>> print(t2['aperture_sum']) # doctest: +FLOAT_CMP
>>> t2['aperture_sum'].info.format = '%.8g' # for consistent table output
>>> print(t2['aperture_sum'])
aperture_sum
-------------
111.566370614
------------
111.56637


Aperture Photometry Using Sky Coordinates
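Stepping back from ``docs/aperture.rst``: the recurring edit replaces ``# doctest: +SKIP`` with an explicit column format, so the examples actually run as doctests and produce version-independent output. A self-contained sketch of the pattern, with positions and radius taken from the examples above (assumes photutils and astropy are installed; output alignment is approximate):

>>> import numpy as np
>>> from photutils import CircularAperture, aperture_photometry
>>> apertures = CircularAperture([(30., 30.), (40., 40.)], r=3.)
>>> phot_table = aperture_photometry(np.ones((100, 100)), apertures)
>>> phot_table['aperture_sum'].info.format = '%.8g'   # pin the display format
>>> print(phot_table['aperture_sum'])
aperture_sum
------------
   28.274334
   28.274334
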
38 changes: 19 additions & 19 deletions docs/background.rst
@@ -76,18 +76,18 @@ background level of 5::

>>> import numpy as np
>>> from astropy.stats import biweight_location
>>> print(np.median(data))
5.2255295184
>>> print(biweight_location(data))
5.1867597555
>>> print(np.median(data)) # doctest: +FLOAT_CMP
5.225529518399048
>>> print(biweight_location(data)) # doctest: +FLOAT_CMP
5.186759755495727

Similarly, using the median absolute deviation to estimate the
background noise level gives a value that is larger than the true
value of 2::

>>> from astropy.stats import mad_std
>>> print(mad_std(data)) # doctest: +FLOAT_CMP
2.1443728009
>>> print(mad_std(data)) # doctest: +FLOAT_CMP
2.1443760096598914


Sigma Clipping Sources
@@ -103,8 +103,8 @@ levels::

>>> from astropy.stats import sigma_clipped_stats
>>> mean, median, std = sigma_clipped_stats(data, sigma=3.0, iters=5)
>>> print((mean, median, std)) # doctest: +FLOAT_CMP
(5.1991386516217908, 5.1555874333582912, 2.0942752121329691)
>>> print((mean, median, std)) # doctest: +FLOAT_CMP
(5.199138651621793, 5.155587433358291, 2.094275212132969)


Masking Sources
@@ -132,8 +132,8 @@ source detections and dilate using an 11x11 box:
>>> from photutils import make_source_mask
>>> mask = make_source_mask(data, snr=2, npixels=5, dilate_size=11)
>>> mean, median, std = sigma_clipped_stats(data, sigma=3.0, mask=mask)
>>> print((mean, median, std)) # doctest: +FLOAT_CMP
(5.0010134754755695, 5.0005849056043763, 1.970887100626572)
>>> print((mean, median, std)) # doctest: +FLOAT_CMP
(5.001013475475569, 5.000584905604376, 1.970887100626572)

Of course, the source detection and masking procedure can be iterated
further. Even with one iteration we are within 0.02% of the true
@@ -229,7 +229,7 @@ background gradient to the image defined above::
>>> y, x = np.mgrid[:ny, :nx]
>>> gradient = x * y / 5000.
>>> data2 = data + gradient
>>> plt.imshow(data2, norm=norm, origin='lower', cmap='Greys_r') # doctest: +SKIP
>>> plt.imshow(data2, norm=norm, origin='lower', cmap='Greys_r') # doctest: +SKIP

.. plot::

@@ -271,10 +271,10 @@ attributes, respectively:

.. doctest-requires:: scipy

>>> print(bkg.background_median)
10.8219978626
>>> print(bkg.background_rms_median)
2.29882053968
>>> print(bkg.background_median) # doctest: +FLOAT_CMP
10.821997862561792
>>> print(bkg.background_rms_median) # doctest: +FLOAT_CMP
2.298820539683762
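
For context: the ``bkg`` object in these hunks is constructed earlier in ``docs/background.rst``, outside the lines shown. A representative construction, with the box size, filter size, and estimator as assumptions:

>>> from photutils import Background2D, MedianBackground          # doctest: +SKIP
>>> bkg = Background2D(data2, (50, 50), filter_size=(3, 3),
...                    bkg_estimator=MedianBackground())          # doctest: +SKIP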

Let's plot the background image:

@@ -348,8 +348,8 @@ Let's create such an image and plot it (NOTE: this example requires

>>> from scipy.ndimage import rotate
>>> data3 = rotate(data2, -45.)
>>> norm = ImageNormalize(stretch=SqrtStretch()) # doctest: +SKIP
>>> plt.imshow(data3, origin='lower', cmap='Greys_r', norm=norm) # doctest: +SKIP
>>> norm = ImageNormalize(stretch=SqrtStretch()) # doctest: +SKIP
>>> plt.imshow(data3, origin='lower', cmap='Greys_r', norm=norm) # doctest: +SKIP

.. plot::

@@ -386,8 +386,8 @@ apply the coverage mask to the returned background image:
.. doctest-requires:: scipy

>>> back3 = bkg3.background * ~mask
>>> norm = ImageNormalize(stretch=SqrtStretch()) # doctest: +SKIP
>>> plt.imshow(back3, origin='lower', cmap='Greys_r', norm=norm) # doctest: +SKIP
>>> norm = ImageNormalize(stretch=SqrtStretch()) # doctest: +SKIP
>>> plt.imshow(back3, origin='lower', cmap='Greys_r', norm=norm) # doctest: +SKIP

.. plot::

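``docs/background.rst`` shows the second strategy: rather than pinning a format, print the full-precision value and mark the doctest with ``FLOAT_CMP``. That directive, provided by astropy's doctest plugin (now pytest-doctestplus), parses the floats in the expected and actual output and compares them numerically with a tolerance instead of character-by-character. A hedged sketch of what that buys:

>>> # with FLOAT_CMP active, a truncated expected value still passes,
>>> # because the comparison is numeric rather than textual
>>> print(5.225529518399048)  # doctest: +FLOAT_CMP
5.2255295184
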
54 changes: 29 additions & 25 deletions docs/detection.rst
@@ -62,20 +62,23 @@ above the background. Running this class on the data yields an astropy
>>> from photutils import DAOStarFinder
>>> daofind = DAOStarFinder(fwhm=3.0, threshold=5.*std) # doctest: +REMOTE_DATA
>>> sources = daofind(data - median) # doctest: +REMOTE_DATA
>>> for col in sources.colnames: # doctest: +REMOTE_DATA
... sources[col].info.format = '%.8g' # for consistent table output
>>> print(sources) # doctest: +REMOTE_DATA
id xcentroid ycentroid ... peak flux mag
--- ------------- ------------- ... ------ ------------- ---------------
1 144.247567164 6.37979042704 ... 6903.0 5.70143033038 -1.88995955438
2 208.669068628 6.82058053777 ... 7896.0 6.72306730455 -2.06891864748
3 216.926136655 6.5775933198 ... 2195.0 1.66737467591 -0.555083002864
4 351.625190383 8.5459013233 ... 6977.0 5.90092548147 -1.92730032571
5 377.519909958 12.0655009987 ... 1260.0 1.11856203781 -0.121650189969
... ... ... ... ... ... ...
281 268.049236979 397.925371446 ... 9299.0 6.22022587541 -1.98451538884
282 268.475068392 398.020998272 ... 8754.0 6.05079160593 -1.95453048936
283 299.80943822 398.027911813 ... 8890.0 6.11853416663 -1.96661847383
284 315.689448343 398.70251891 ... 6485.0 5.55471107793 -1.86165368631
285 360.437243037 398.698539555 ... 8079.0 5.26549321379 -1.80359764345
id xcentroid ycentroid sharpness ... sky peak flux mag
--- --------- --------- ---------- ... --- ---- --------- ------------
1 144.24757 6.3797904 0.58156257 ... 0 6903 5.7014303 -1.8899596
2 208.66907 6.8205805 0.48348966 ... 0 7896 6.7230673 -2.0689186
3 216.92614 6.5775933 0.69359525 ... 0 2195 1.6673747 -0.555083
4 351.62519 8.5459013 0.48577834 ... 0 6977 5.9009255 -1.9273003
5 377.51991 12.065501 0.52038488 ... 0 1260 1.118562 -0.12165019
... ... ... ... ... ... ... ... ...
280 345.59306 395.38222 0.384078 ... 0 9350 5.0559084 -1.759498
281 268.04924 397.92537 0.29650715 ... 0 9299 6.2202259 -1.9845154
282 268.47507 398.021 0.28325741 ... 0 8754 6.0507916 -1.9545305
283 299.80944 398.02791 0.32011339 ... 0 8890 6.1185342 -1.9666185
284 315.68945 398.70252 0.29502138 ... 0 6485 5.5547111 -1.8616537
285 360.43724 398.69854 0.81147144 ... 0 8079 5.2654932 -1.8035976
Length = 285 rows

Let's plot the image and mark the location of detected sources:
@@ -143,19 +146,20 @@ sigma above the background and are separated by at least 2 pixels:
>>> mean, median, std = sigma_clipped_stats(data, sigma=3.0)
>>> threshold = median + (10.0 * std)
>>> tbl = find_peaks(data, threshold, box_size=5)
>>> tbl['peak_value'].info.format = '%.8g' # for consistent table output
>>> print(tbl[:10]) # print only the first 10 peaks
x_peak y_peak peak_value
------ ------ -------------
233 0 27.4778521972
236 1 27.339519624
289 22 35.8532759965
442 31 30.2399941373
1 40 35.5482863002
89 59 41.2190469279
7 70 33.2880647048
258 75 26.5624808518
463 80 28.7588206692
182 93 38.0885687202
x_peak y_peak peak_value
------ ------ ----------
233 0 27.477852
236 1 27.33952
289 22 35.853276
442 31 30.239994
1 40 35.548286
89 59 41.219047
7 70 33.288065
258 75 26.562481
463 80 28.758821
182 93 38.088569

And let's plot the location of the detected peaks in the image:

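``docs/detection.rst`` applies the same format pin across every column at once, since the ``DAOStarFinder`` and ``find_peaks`` tables carry many float columns. A standalone sketch of the loop (the two-column toy table is hypothetical; its values are lifted from the diff above; output alignment is approximate):

>>> from astropy.table import Table
>>> tbl = Table({'xcentroid': [144.24756716], 'flux': [5.70143033]})
>>> for col in tbl.colnames:
...     tbl[col].info.format = '%.8g'   # one format string for all columns
>>> print(tbl)
xcentroid   flux
--------- ---------
144.24757 5.7014303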
