Viz: depth coded MinIP #32

JohnTigue opened this issue Oct 19, 2019 · 7 comments
JohnTigue commented Oct 19, 2019

Novelty:

  • Depth coded brightfield stack (not fluorescent)
  • Turbo, not Jet
  • ?
  • import reconstrue.brightfield.depth_coder
  • Colored depth coding (grayscale in, z-axis rainbow out)
  • 2 projection views, one from each side of the slide: stacked 1 to N, and stacked N to 1
  • Colormap to use should be a variable/dropdown
  • Viz: client-side JS for colormapping #74
  • A scale for the eye to map color to depth, axis labeled 0–[z_stack.depth]
  • Want this running in client-side JS for when rotating a volume

Algorithm:

  1. Start with the z-stack's 2D images, grayscale 8-bit
  2. Turn each image into color
     • All pixels in an image get set to the same RGB color
     • That RGB color comes from the z-index, scaled/binned to 0–255 and used to index into a matplotlib colormap
     • Each input pixel's intensity (0–255), inverted (255 − intensity), becomes that pixel's opacity, i.e. RGB => RGBA
     • Save those images to disk
  3. Now do a minimum intensity projection on the colorized z-stack, but it's actually a maximum opacity projection (keeping the color in the accumulator image).

Finally, just show the color accumulator image, with or without transparency (dunno). A sketch of the whole pipeline is below.
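A minimal NumPy/matplotlib sketch of those steps (skipping the save-to-disk step), assuming an (N, H, W) uint8 z_stack and that turbo is available (it ships with matplotlib >= 3.3); depth_coded_minip is a hypothetical name:

import numpy as np
import matplotlib.pyplot as plt

def depth_coded_minip(z_stack, cmap_name='turbo'):
    """Color each slice by depth, inverted intensity as opacity,
    keep the most-opaque pixel. z_stack: (N, H, W) uint8."""
    n_slices = z_stack.shape[0]
    cmap = plt.get_cmap(cmap_name, n_slices)   # one color bin per slice
    best_alpha = np.zeros(z_stack.shape[1:])   # max opacity seen so far
    out = np.zeros(z_stack.shape[1:] + (4,))   # RGBA accumulator
    for z, img in enumerate(z_stack):
        alpha = (255 - img) / 255.0            # inverted intensity -> opacity
        wins = alpha > best_alpha              # the "maximum opacity projection"
        out[wins, :3] = cmap(z)[:3]            # whole slice shares one depth color
        out[wins, 3] = alpha[wins]
        best_alpha[wins] = alpha[wins]
    return out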

Bonus, take those images saved to disk (depth coded with inverse intensity as opacity), start with a pure white brightfield then merge that with each colored z-index image. That's what it would look like with light shining through but color filtered to depth. Then animate that as a GIF/movie. Pair that with the depth colored 2D projection mugshot.

JohnTigue commented Oct 19, 2019

Another interesting projection would be a full RGBA 3D-to-2D projection. Start with a brightfield, then from the deepest image to the top, for each pixel modify the accumulator by the current slice's color, letting transparency decide how much to modify the color by.

This is just normal 3D imaging, with RGBA, i.e. software for this already exists. There's probably a 3D viewer that could interactively move the perspective around that volume. Surely the web has such a tool already.
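That per-pixel blend is the standard back-to-front "over" operator. A sketch, assuming a colorized (N, H, W, 4) float RGBA stack like the one built above:

import numpy as np

def composite_over_white(colored_stack):
    """Back-to-front alpha compositing onto a pure white brightfield.
    colored_stack: (N, H, W, 4) float RGBA, index 0 = shallowest."""
    acc = np.ones(colored_stack.shape[1:3] + (3,))  # white background
    for rgba in colored_stack[::-1]:                # deepest slice first
        a = rgba[..., 3:4]                          # keep last dim for broadcasting
        acc = rgba[..., :3] * a + acc * (1 - a)     # the "over" blend
    return acc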

JohnTigue commented

Not exactly what I need, but neat: NumPy, how to map an array to values in a LUT:
https://stackoverflow.com/a/14448935

You can just use the image to index into the LUT, if the LUT is 1D.
Here's a starter on indexing in NumPy:

In [54]: lut = np.arange(10) * 10

In [55]: img = np.random.randint(0,9,size=(3,3))

In [56]: lut
Out[56]: array([ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90])

In [57]: img
Out[57]: 
array([[2, 2, 4],
       [1, 3, 0],
       [4, 3, 1]])

In [58]: lut[img]
Out[58]: 
array([[20, 20, 40],
       [10, 30,  0],
       [40, 30, 10]])
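
The same fancy-indexing trick extends to a 2D LUT, which is closer to what's needed here. A sketch, with viridis standing in for whatever colormap gets chosen:

import numpy as np
import matplotlib.pyplot as plt

lut = plt.get_cmap('viridis')(np.arange(256))  # (256, 4): one RGBA row per gray level
img = np.random.randint(0, 256, size=(3, 3), dtype=np.uint8)
colored = lut[img]                             # (3, 3, 4): per-pixel RGBA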


JohnTigue commented Oct 21, 2019

A different implementation might be: first, a projection reduction phase. This is just like regular MinIP processing, but there is additionally a 2D array of RGB values.

Start from the deepest part of the stack. At each z-depth:

  • Check for new minimums and accumulate those as in a normal MinIP projection
  • For all newly min'd pixels, color those pixels in the RGB image using the value from turbo (or another colormap) scaled to the percentage deep into the z-stack, i.e. this is the depth-coding-in-color part.

Ties get assigned the shallower depth's color because it's closer to the "eye" and we're coding a depth illusion.

After the projection reduction phase, we know what color each pixel should be, but we then need to darken/lighten the color in proportion to the MinIP value determined in grayscale.

That is, take the minimum intensity value in the grayscale MinIP projection and use it to darken/lighten the rainbow color that was accumulated during the projection reduction phase.

Hopefully, that's a depth coded microscopy stack. A sketch is below.
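A minimal sketch of that two-accumulator reduction, assuming an (N, H, W) uint8 z_stack with index 0 as the shallowest slice; minip_with_depth_color is a hypothetical name:

import numpy as np
import matplotlib.pyplot as plt

def minip_with_depth_color(z_stack, cmap_name='turbo'):
    """Grayscale MinIP plus a parallel RGB accumulator of depth colors."""
    n = z_stack.shape[0]
    cmap = plt.get_cmap(cmap_name, n)           # colormap scaled to stack depth
    minip = np.full(z_stack.shape[1:], 255, dtype=np.uint8)
    rgb = np.zeros(z_stack.shape[1:] + (3,))
    # Deepest slice first; '<=' lets a shallower tie overwrite, so ties
    # get the shallower depth's color (closer to the "eye").
    for z in range(n - 1, -1, -1):
        img = z_stack[z]
        new_min = img <= minip
        minip[new_min] = img[new_min]
        rgb[new_min] = cmap(z)[:3]              # depth coded color
    # Darken/lighten the color in proportion to the grayscale MinIP value.
    rgb *= (minip / 255.0)[..., None]
    return minip, rgb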


JohnTigue commented Oct 21, 2019

The scaled colormap might be as simple as:

import matplotlib.pyplot as plt

cm = plt.get_cmap('gist_rainbow', lut=8)

That maps it down to 8 values. So, we'd want to scale it to max_z_index:

cm = plt.get_cmap('turbo', max_z_index)
# Would need turbo registered for that to work (turbo ships with matplotlib >= 3.3).

# Apply the colormap like a function to any array:
colored_image = cm(image)

Not that I want to call cm(foo) right now. I just want cm(z), which is the value to store in the RGB version of the projection, at pixels that just re-min'd in the grayscale version.
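So in the reduction loop, the stored value would be something like this fragment (rgb_projection and new_min are hypothetical names for the color accumulator and the just-re-min'd mask):

depth_color = cm(z)                        # RGBA 4-tuple for slice z
rgb_projection[new_min] = depth_color[:3]  # store where the min just updated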

JohnTigue commented

Scikit-image has some things to say: Tinting gray-scale images

@JohnTigue JohnTigue changed the title Viz: depth coded minimum intensity projection Viz: depth coded MinIP Apr 26, 2020

JohnTigue commented May 5, 2020

Depth coding can also be used in a volumetric context:

[Screenshot: depth coded volumetric rendering, 2020-05-05]

Video of same

JohnTigue commented

Depth coding is:

  • output_image = np.zeros(grayed_image.shape + (3,)) # same size, blank, but RGB
  • grayscale minip coding
    • when a new min is found:
      • curr_depths_rgb_saturated = turbo(normalize(z) * 256)
      • curr_depths_hsv_saturated = rgb2hsv(curr_depths_rgb_saturated)
      • new_mins_color_hsv = scale V of curr_depths_hsv_saturated to the newly found min
      • new_mins_color_rgb = hsv2rgb(new_mins_color_hsv)
      • output_image[matches_coord] = new_mins_color_rgb
  • So, maintain two accumulators, minip_running and output_image
    • although if the core alg were done all in HSV and converted to RGB at the end, only one accumulator would be needed b/c V holds the same values as minip_running. A sketch of that variant is below.
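
A sketch of that single-accumulator HSV variant, under the same assumptions as the earlier sketches (depth_code_hsv is a hypothetical name):

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def depth_code_hsv(z_stack, cmap_name='turbo'):
    """One (H, W, 3) HSV accumulator: V doubles as the running MinIP."""
    n = z_stack.shape[0]
    cmap = plt.get_cmap(cmap_name, n)
    hsv = np.zeros(z_stack.shape[1:] + (3,))
    hsv[..., 2] = 1.0                        # V starts at max, i.e. a white MinIP buffer
    for z in range(n - 1, -1, -1):           # deepest first; ties go shallower
        v = z_stack[z] / 255.0
        new_min = v <= hsv[..., 2]
        h, s, _ = rgb_to_hsv(np.array(cmap(z)[:3]))
        hsv[new_min, 0] = h                  # hue/sat from the saturated depth color
        hsv[new_min, 1] = s
        hsv[new_min, 2] = v[new_min]         # V is just the MinIP value
    return hsv_to_rgb(hsv)                   # convert once, at the end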
