
Multi-scale Gaussian Normalisation 2: The Return of MGN #1899

Closed
wants to merge 10 commits into from
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -9,6 +9,7 @@ Latest
* Now the `sunpy.database.tables.display_entries()` prints an astropy table.
* Additional methods added inside the `sunpy.database` class to make it easier
to display the database contents.
* Added an implementation of the Multi-scale Gaussian Normalisation algorithm.

0.7.0
-----
@@ -147,7 +148,6 @@ Bug Fixes:
* Some documentation fixes
* Fix file paths to use os.path.join for platform independence.


0.4.0
-----
Features:
110 changes: 110 additions & 0 deletions sunpy/image/normalization.py
@@ -0,0 +1,110 @@
from __future__ import division

import numpy as np
import scipy.ndimage as ndimage


def multiscale_gaussian(data, sigma=[1.25, 2.5, 5, 10, 20, 40], k=0.7,
gamma=3.2, h=0.7, weights=None, truncate=3):
"""
Multi-scale Gaussian normalization.

Parameters
----------
data : numpy.ndarray
Image to be transformed.

sigma : list, optional
Range of Gaussian widths (standard deviations) to transform over.

k : float, optional
Controls the severity of the arctan transformation.

gamma : float, optional
The value used to calculate the global gamma-transformed image.
Ideally should be between 2.5 and 4.

h : float, optional
Weight of the global gamma-transformed image relative to the
Gaussian-filtered images.

weights : list, optional
Used to weight all the transformed images during the calculation of the
final image. If not specified, all weights are one.

truncate : int
The number of sigmas at which to truncate the Gaussian kernel.

Returns
-------
image : numpy.ndarray
Normalized image.

References
----------
Morgan, Huw, and Miloslav Druckmuller.
"Multi-scale Gaussian normalization for solar image processing."
Sol. Phys. 289, 2945 (2014). doi:10.1007/s11207-014-0523-9
(arXiv preprint arXiv:1403.6613)
Comment (Contributor): Updated Ref: Sol Phys (2014) 289: 2945. doi:10.1007/s11207-014-0523-9


Notes
-----
In practice, the weights and h may be adjusted according to the desired
output, and also according to the type of input image
(e.g. wavelength or channel).
For most purposes, the weights can be set
equal for all scales.
"""

if weights is None:
weights = np.ones(len(sigma))

# 1. Replace spurious non-positive pixels with a small positive value,
# working on a copy so the caller's array is not modified in place
data = np.where(data <= 0, 1e-15, data)
# image accumulates the per-scale terms, so it must start at zero
# (np.empty would leave uninitialised memory in the sum)
image = np.zeros(data.shape, dtype=data.dtype)
conv = np.empty(data.shape, dtype=data.dtype)
sigmaw = np.empty(data.shape, dtype=data.dtype)

for s, weight in zip(sigma, weights):
# 2 & 3 Create kernel and convolve with image
ndimage.filters.gaussian_filter(data, sigma=s,
truncate=truncate, output=conv)
# 5. Calculate difference between image and the local mean image,
# square the difference, and convolve with kernel. Square-root the
# resulting image to give ‘local standard deviation’ image sigmaw
# conv currently holds the local mean image
np.subtract(data, conv, out=conv)
ndimage.filters.gaussian_filter(conv ** 2, sigma=s,
truncate=truncate, output=sigmaw)
np.sqrt(sigmaw, out=sigmaw)
conv /= sigmaw

# 6. Apply arctan transformation on Ci to give C'i
conv *= k
conv *= weight
Comment: Looks computationally inefficient to me: k * weight is a scalar, so conv *= k * weight would be quicker than the two separate in-place multiplies above.
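The reviewer's point about folding the two scalars into one multiply is easy to sanity-check on a toy array (a sketch; `conv`, `k`, and `weight` here are stand-in values, not the PR's data):

```python
import numpy as np

conv = np.linspace(-1.0, 1.0, 6)  # made-up stand-in for the C_i image
k, weight = 0.7, 2.0

# Two in-place passes over the array, as in the PR code
two_pass = conv.copy()
two_pass *= k
two_pass *= weight

# One pass: k * weight is computed once as a scalar first
one_pass = conv.copy()
one_pass *= k * weight

# Same result, but one fewer full traversal of the array
assert np.allclose(two_pass, one_pass)
```

The saving is one full read/write pass over the image per scale, which matters mainly for large arrays.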

Comment: Actually, what the code does here is wrong: if we follow the original algorithm, the weights are applied outside the arctan, not inside the arctan argument.

Comment (Contributor): Could that explain the differences between this version and the IDL output?

Comment: This would be possible only if testing with weights != 1 (I don't know what these differences with the IDL output are).

Comment (Contributor): Well, if I recall, at least for the initial version we had, our output was much noisier than the IDL code gave us. @Cadair initially tracked it down to a difference in how the IDL code applies the transform compared to SciPy. But I assume your code output was compared to the IDL code and you didn't see a noisier output?

Comment (Member Author): I think this implementation was closer than the first one, but yeah, they still seemed noisier.

Comment (Contributor): Would need to check that down the line as well.

Comment: Actually just moving a line would do, to be closer to the paper:

   conv *= k
   np.arctan(conv, out=conv)
   conv *= weight

Again, this doesn't change anything if weights are 1.

But the IDL version does something still different! (An arctan of the mean, as in my previous version, instead of a mean of the arctans, as in the paper.) The IDL code and the paper are not consistent.
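The inconsistency described above (arctan of the mean versus mean of the arctans) is easy to demonstrate numerically, since arctan is nonlinear (a sketch with made-up values; `c` is a toy stack of two locally normalised images, four pixels each):

```python
import numpy as np

k = 0.7
# Toy stack: axis 0 indexes the scale, axis 1 the pixels
c = np.array([[0.5, 2.0, 4.0, 8.0],
              [1.0, 0.1, 3.0, 6.0]])

# Paper: mean over scales of arctan(k * C_i)
mean_of_arctans = np.arctan(k * c).mean(axis=0)

# IDL code (as described above): arctan(k * mean over scales of C_i)
arctan_of_mean = np.arctan(k * c.mean(axis=0))

# arctan is nonlinear, so the two disagree even with unit weights
assert not np.allclose(mean_of_arctans, arctan_of_mean)
```

So the difference between the two formulations is visible even before any weights come into play.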

Comment (Member Author): Oh good!

Comment (Contributor): @Cadair, do I look at sunpy/image/normalization.py and https://git.ias.u-psud.fr/ebuchlin/aia-movie/commit/a3c15fb46653aa5ccb9e3b261cadb189d25e26ce and compare their working mechanisms, or do I look into the IDL code to make it consistent with the paper?

np.arctan(conv, out=conv)

image += conv

# delete these arrays here as it reduces the total memory consumption when
# we create the Cprime_g temp array below.
del conv
del sigmaw

# 8. Take the mean of the weighted C'i to give a locally normalised
# image (with the default unit weights, a simple average over scales)
image /= len(sigma)

# 9. Calculate global gamma-transformed image C'g
data_min = data.min()
data_max = data.max()
Cprime_g = (data - data_min)
Cprime_g /= (data_max - data_min)
Cprime_g **= (1/gamma)
Cprime_g *= h

image *= (1 - h)
image += Cprime_g

return image
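For readers without the sunpy sample data, the pipeline can be exercised end to end on a synthetic image. This is a condensed, self-contained sketch following the paper's steps; the function name `mgn_sketch` and the random toy image are made up here, and the parameter values are the PR's defaults:

```python
import numpy as np
import scipy.ndimage as ndimage

def mgn_sketch(data, sigma=(1.25, 2.5, 5), k=0.7, gamma=3.2, h=0.7,
               truncate=3):
    """Condensed MGN sketch: per-scale local normalisation, arctan
    transform, average over scales, blended with a global gamma image."""
    data = np.where(data <= 0, 1e-15, data)
    image = np.zeros_like(data, dtype=float)
    for s in sigma:
        local_mean = ndimage.gaussian_filter(data, sigma=s, truncate=truncate)
        diff = data - local_mean
        # 'Local standard deviation' image for this scale
        local_std = np.sqrt(
            ndimage.gaussian_filter(diff ** 2, sigma=s, truncate=truncate))
        image += np.arctan(k * diff / local_std)
    image /= len(sigma)
    # Global gamma-transformed image, blended in with weight h
    cg = ((data - data.min()) / (data.max() - data.min())) ** (1 / gamma)
    return h * cg + (1 - h) * image

rng = np.random.default_rng(0)
toy = rng.random((32, 32)) + 0.1  # made-up strictly positive image
out = mgn_sketch(toy)
```

The sketch drops the per-scale weights and the in-place memory optimisations of the PR version, but keeps the same sequence of steps.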
15 changes: 15 additions & 0 deletions sunpy/image/tests/test_normalization.py
@@ -0,0 +1,15 @@
from sunpy.data.sample import AIA_171_IMAGE
from sunpy import map
from sunpy.image.normalization import multiscale_gaussian


def test_gaussian():
data = map.Map(AIA_171_IMAGE).data
new_image = multiscale_gaussian(data)
assert new_image.shape == data.shape


def test_gaussian_weights():
data = map.Map(AIA_171_IMAGE).data
new_image = multiscale_gaussian(data, weights=[1, 2, 3, 4, 5, 6])
assert new_image.shape == data.shape
Comment (Contributor): Can we all have a laugh at my attempt at tests.
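On the tests: shape checks alone pass for almost any wrong implementation. A cheap, self-contained property check on the global gamma-transform step (step 9) is one way to strengthen them; the helper `gamma_transform` below is a made-up name that mirrors the step-9 code in the diff, not part of the PR:

```python
import numpy as np

def gamma_transform(data, gamma=3.2):
    # Mirrors step 9: rescale to [0, 1], then apply the 1/gamma power
    scaled = (data - data.min()) / (data.max() - data.min())
    return scaled ** (1 / gamma)

def test_gamma_transform_range():
    # Synthetic positive image stands in for the AIA sample data
    rng = np.random.default_rng(1)
    data = rng.random((8, 8)) * 100 + 1
    out = gamma_transform(data)
    # The gamma-transformed image must stay within [0, 1]
    assert out.min() >= 0 and out.max() <= 1

test_gamma_transform_range()
```

A similar bounded-range assertion could be applied to the full multiscale_gaussian output, since the arctan and gamma terms are both bounded.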