
Add and do diffraction pattern input tests #35

Open
mxmlnkn opened this issue Jan 27, 2016 · 3 comments

mxmlnkn commented Jan 27, 2016

Currently all tests start from the original object to be reconstructed and then use the function diffractionPattern to create, well, the diffraction pattern :). Normally the input is the diffraction pattern itself. We should test whether the reconstruction works for the example patterns we received.

A problem that occurs here is the expected layout of the diffraction pattern. See issue #34.
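For reference, the artificial test input described above can be sketched in a few lines, assuming diffractionPattern computes the squared magnitude of the object's Fourier transform (the standard far-field model; the function and array names here are illustrative, not the project's actual API):

```python
import numpy as np

def diffraction_pattern(obj):
    """Simulated far-field diffraction pattern: the squared magnitude of
    the object's Fourier transform (the phase information is lost)."""
    return np.abs(np.fft.fftshift(np.fft.fft2(obj))) ** 2

# Tests currently start from a known object like this ...
obj = np.zeros((64, 64))
obj[24:40, 30:34] = 1.0           # a simple slit-like object
pattern = diffraction_pattern(obj)
# ... whereas real input would be a measured pattern such as the photos below.
```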


mxmlnkn commented Mar 4, 2016

Maybe try this self-made image, taken with a cheap laser pointer and, by my calculations, a 40 µm thick hair :). The hair produces the long streak. The circular pattern seems to be caused by a circular opening of 2 mm diameter inside the laser pointer.
circular-diffraction-to-reconstruct-macro
(Note that the setup changed between the two pictures. The hair and the streak are perpendicular to each other, not parallel as suggested here.)
hair-setup-b
Required issues to solve for this are:

  1. Analyze and choose a better threshold mechanism that also works for large images, see Converging problem for large pictures #39
  2. Masking of the center, as mentioned in Problem with input image diffraction layout #34
  3. Finding the diffraction pattern center, as mentioned in Problem with input image diffraction layout #34
  4. [optional] read JPG files (one can just use an external program to convert JPG to PNG)
  5. 3D -> 2D support? See 3D input #43
  6. [optional] leveling, i.e. making e.g. the darkest 20% of the background perfectly black, because a constant "gray" background causes convergence problems (tested with Gaussian noise before: the higher the noise, the higher the minimum reachable convergence error, thereby confusing the convergence condition). Perfect white leveling is not necessary if we use a mask anyway; then we only need to choose a better threshold for the mask, e.g. "all pixels with value 240-255" instead of "all pixels with value 255"
  7. At least this image was taken in poor quality. Instead of shining the light directly onto a CCD sensor, its reflection off a surface was photographed (8 s exposure time). Despite the long exposure time we still see noise, namely laser speckles caused by the roughness of the screen. To account for this, a Gaussian blur preprocessing step may be in order
  8. If the image, like the one above, is in color, then a color-to-grayscale preprocessing filter may be needed.
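A rough NumPy/SciPy sketch of how steps 2, 3, 6, 7 and 8 could fit together (all thresholds and the brightest-smoothed-pixel center estimate are illustrative placeholders, not the values the project should finally use):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(rgb):
    """Illustrative pipeline for steps 2, 3, 6, 7 and 8 above.
    All thresholds here are placeholders, not final choices."""
    gray = rgb.astype(float).mean(axis=2)          # 8. color -> grayscale
    gray = gaussian_filter(gray, sigma=2.0)        # 7. smooth laser speckle noise
    lo, hi = np.percentile(gray, [20.0, 99.0])     # 6. leveling: darkest 20% -> black,
    gray = np.clip((gray - lo) / (hi - lo), 0, 1)  #    brightest ~1% saturates to white
    mask = gray >= 1.0                             # 2. mask the overexposed center
    smooth = gaussian_filter(gray, sigma=8.0)      # 3. crude center estimate:
    cy, cx = np.unravel_index(np.argmax(smooth), gray.shape)  # brightest smoothed pixel
    return gray, mask, (cy, cx)

# Synthetic stand-in for a photographed pattern: a bright blob on a gray background.
yy, xx = np.mgrid[0:64, 0:64]
blob = 255.0 * np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / 50.0)
rgb = np.stack([blob + 30.0] * 3, axis=-1)
gray, mask, center = preprocess(rgb)
```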

This is the picture with steps 3, 4, 6, 7, 8 and partially 2 done in GIMP:
laser-freckles-edited

  1. center at ~(935, 572), found with GIMP
    -> 1870 x 1144 (from top right)
  2. delete the blue and green channels by selecting them in the layers dialog and making them black
  3. Colors -> Components -> Channel Mixer -> choose monochrome and apply
  4. Colors -> Levels -> level from 15 to 242
  5. use the select-by-color tool with threshold 0 and select by red to make the mask (click in the center)
  6. fill the selection with RGB blue
    The leveling is actually of such quality that the blue mask is unnecessary; the algorithm can just assume that all pixels with value 255 (white) are masked.
    This edited version has much more bleeding in the center: in the original, when the red pixels are at full value, the green and blue sensors detect some of the overflowing light intensity, because the sensors aren't perfectly restricted to one wavelength. Also, the laser has a spectral range of 640-660 nm given by the manufacturer with no error margins, so it may, at lower intensity, also emit somewhat shorter wavelengths.
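When loading such a GIMP-edited image, the mask from steps 5-6 could be recovered programmatically, e.g. like this (a sketch assuming an 8-bit RGB array; the pure-blue fill and the 255-white rule come from the steps above):

```python
import numpy as np

def recover_mask(rgb):
    """Pixels filled with pure RGB blue in GIMP, as well as fully
    saturated white pixels, are treated as masked (unknown intensity)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    blue_fill = (r == 0) & (g == 0) & (b == 255)
    saturated = (r == 255) & (g == 255) & (b == 255)
    return blue_fill | saturated

# Tiny demonstration image: one blue-filled, one saturated, one ordinary pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (0, 0, 255)      # GIMP blue fill
img[0, 1] = (255, 255, 255)  # overexposed white
img[1, 0] = (120, 30, 30)    # regular red-ish signal
mask = recover_mask(img)
```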

Just for fun a 3D map:
diffraction-3d-height-map-with-lines


mxmlnkn commented Mar 6, 2016

Well, one relatively minor problem is bit depth. All tests so far were done with artificially created diffraction patterns using 32-bit floats, but they also work with 16-bit integers. However, even 16-bit images aren't supported by any GIMP release yet.
Reconstruction from 32-bit float working precision (which may not even be comparable to 128-bit integer precision, because of the support for exponents): ./miniExample oI_1200x1000.png O (in the working commit, the O means that the input file is the object, not the diffraction pattern; the pattern will be created internally using floats):
oi_1200x1000 png-reconstructed
Note that blue marks values smaller than 0. They will be very close to zero though, because that is what HIO tries to achieve.
Reconstruction from 16-bit integers (saved as PNG): ./miniExample oI_1200x1000.png-diffraction.png:
oi_1200x1000 png-diffraction png-reconstructed
Reconstruction from 8-bit integers (resaved as PNG through GIMP 2.8.x): ./miniExample oI_1200x1000.png-diffraction-8-bit.png:
oi_1200x1000 png-diffraction-8-bit png-reconstructed
My camera only supports 8 bits. Even cameras supporting raw format are said to actually measure only 12 bits of data instead of the 16 bits the raw format itself supports. That's why in this paper they use high dynamic range imaging, meaning they take images at n different exposure settings and combine them somehow to get better precision, where n isn't given in the paper.
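The precision lost by quantizing a float pattern to 16 or 8 bits can be estimated directly (a sketch with a random test object; the exact numbers depend on the object's dynamic range):

```python
import numpy as np

def quantize(pattern, bits):
    """Simulate saving the float pattern as a bits-deep integer image:
    scale to the full integer range, round, scale back."""
    levels = 2 ** bits - 1
    return np.round(pattern / pattern.max() * levels) / levels * pattern.max()

rng = np.random.default_rng(0)
obj = rng.random((64, 64))
pattern = np.abs(np.fft.fft2(obj)) ** 2   # float "measurement"

# Worst-case relative quantization error, bounded by half a quantization step.
errs = {bits: np.abs(quantize(pattern, bits) - pattern).max() / pattern.max()
        for bits in (16, 8)}
# Fraction of pixels that collapse to exactly zero at 8 bits.
zero_fraction = (quantize(pattern, 8) == 0).mean()
```

Because the zero-frequency term dominates the pattern by orders of magnitude, nearly every other pixel rounds to exactly zero at 8 bits, which is the dynamic-range problem the HDR approach mentioned above works around.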

If already-lost precision is a problem, then I don't really want to know how much irrecoverable error the laser speckles, caused by the indirect imaging method and the roughness of the reflecting surface, introduce ...
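For context on why the leftover negative (blue) values end up close to zero: the standard Fienup HIO object-domain update damps constraint-violating pixels in every iteration. A generic sketch (textbook HIO, not necessarily this project's exact implementation; the support region and beta value are illustrative):

```python
import numpy as np

def fourier_projection(x, magnitude):
    """Replace the Fourier magnitudes of x by the measured ones while
    keeping the current phases, then go back to the object domain."""
    F = np.fft.fft2(x)
    F = magnitude * np.exp(1j * np.angle(F))
    return np.fft.ifft2(F).real

def hio_step(x, x_proj, support, beta=0.9):
    """One HIO object-domain update: accept the projected estimate where
    it satisfies the constraints (inside the support and non-negative);
    elsewhere damp it, which drives leftover negative values toward zero."""
    violated = ~support | (x_proj < 0)
    return np.where(violated, x - beta * x_proj, x_proj)

# One illustrative iteration on a toy problem.
rng = np.random.default_rng(0)
obj = np.zeros((32, 32))
obj[10:20, 12:18] = rng.random((10, 6))
magnitude = np.abs(np.fft.fft2(obj))      # "measured" magnitudes
support = np.zeros((32, 32), dtype=bool)
support[8:24, 8:24] = True                # loose support estimate
x = rng.random((32, 32))
x = hio_step(x, fourier_projection(x, magnitude), support)
```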


bussmann commented Mar 6, 2016

Wonderful work!

FYI: Melanie Rödel from FWKH has some excellent SAXS data. Ask Thomas Kluge.

@BeyondEspresso have you seen this?
