
Unable to apply CoordinateTransformations with napari-ome-zarr #75

Closed · edyoshikun opened this issue Mar 2, 2023 · 5 comments
Labels: NGFF (OME-NGFF / OME-Zarr format), upstream (issues with upstream dependencies)

@edyoshikun (Contributor)

I am trying to use the coordinateTransformations metadata from iohub.ngff's create_image(), and I am seeing a couple of issues that are possibly related, so I am keeping them in the same issue.

The first issue, which might be more of a napari issue, is that we cannot apply individual coordinate transforms to multiple images within a position. After writing the data with this code, opening it throws the error below, which suggests napari expects some sort of pyramidal (multiscale) dataset.

import numpy as np
from iohub.ngff import open_ome_zarr, TransformationMeta

store_path = "/hpc/projects/comp_micro/sandbox/Ed/tmp/"
# timestamp = datetime.now().strftime("test_translate")

store_path = store_path + "test_translate" + ".zarr"

tczyx_1 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)
tczyx_2 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)
coords_shift = [1.0, 1.0, 1.0, 100.0, 100.0]
coords_shift2 = [1.0, 1.0, 1.0, -100.0, 1.0]
scale_val = [1.0, 1.0, 1.0, 0.5, 0.5]
translation = TransformationMeta(type="translation", translation=coords_shift)
scaling = TransformationMeta(type="scale", scale=scale_val)
translation2 = TransformationMeta(type="translation", translation=coords_shift2)
with open_ome_zarr(
    store_path,
    layout="hcs",
    mode="w-",
    channel_names=["DAPI", "GFP", "Brightfield"],
) as dataset:
    # Create and write to positions
    # This affects the tile arrangement in visualization
    position = dataset.create_position(0, 0, 0)
    position.create_image("0", tczyx_1, transform=[translation])
    position = dataset.create_position(0, 1, 0)
    position.create_image("0", tczyx_2, transform=[scaling])
    # Print dataset summary
    dataset.print_tree()

Error:

File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/bin/napari", line 8, in <module>
    sys.exit(main())
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/__main__.py", line 561, in main
    _run()
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/__main__.py", line 341, in _run
    viewer._window._qt_viewer._qt_open(
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/_qt/qt_viewer.py", line 830, in _qt_open
    self.viewer.open(
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/components/viewer_model.py", line 1014, in open
    self._add_layers_with_plugins(
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/components/viewer_model.py", line 1242, in _add_layers_with_plugins
    added.extend(self._add_layer_from_data(*_data))
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/components/viewer_model.py", line 1316, in _add_layer_from_data
    layer = add_method(data, **(meta or {}))
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/utils/migrations.py", line 44, in _update_from_dict
    return func(*args, **kwargs)
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/components/viewer_model.py", line 823, in add_image
    layerdata_list = split_channels(data, channel_axis, **kwargs)
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/layers/utils/stack_utils.py", line 79, in split_channels
    multiscale, data = guess_multiscale(data)
  File "/hpc/mydata/eduardo.hirata/.conda/envs/pyplay/lib/python3.10/site-packages/napari/layers/image/_image_utils.py", line 76, in guess_multiscale
    raise ValueError(
ValueError: Input data should be an array-like object, or a sequence of arrays of decreasing size. Got arrays of single shape: (1, 3, 3, 32, 64)
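The ValueError above comes from napari's guess_multiscale check: a sequence of arrays is only treated as an image pyramid if their sizes decrease, so same-shape datasets under one image group are rejected. A minimal sketch of that check, assuming napari is installed (guess_multiscale lives in the private _image_utils module shown in the traceback):

import numpy as np
from napari.layers.image._image_utils import guess_multiscale

# A sequence of arrays with decreasing sizes is accepted as a pyramid.
pyramid = [np.zeros((1, 3, 3, 64, 64)), np.zeros((1, 3, 3, 32, 32))]
print(guess_multiscale(pyramid)[0])  # True

# Same-shape arrays trigger the ValueError seen above.
flat = [np.zeros((1, 3, 3, 32, 32)), np.zeros((1, 3, 3, 32, 32))]
try:
    guess_multiscale(flat)
except ValueError as e:
    print(e)  # "... Got arrays of single shape: (1, 3, 3, 32, 32)"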

Now, if we remove the second image (position.create_image("1", tczyx_2)) and open the positions with napari --plugin napari-ome-zarr test_translate.zarr, we see the two 32x32 test images as expected, but without the transformations applied to them.

[screenshot]

If we drag and drop the different positions separately into napari, we can see that both the scaling and the translation are applied, but now we have to slide the bar for positions (a programmatic alternative is sketched after the screenshots).
[screenshot]

[screenshot]
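A hypothetical alternative to drag-and-drop is to open each FOV path individually from a script, so each position gets its own layers with its stored transform applied by the reader (the paths assume the store written above):

import napari

viewer = napari.Viewer()
# Open each position separately with the napari-ome-zarr reader.
viewer.open("test_translate.zarr/0/0/0", plugin="napari-ome-zarr")
viewer.open("test_translate.zarr/0/1/0", plugin="napari-ome-zarr")
napari.run()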

@ziw-liu (Collaborator) commented Mar 3, 2023

If we drag and drop the different positions separately into napari, we can see that both the scaling and the translation are applied, but now we have to slide the bar for positions.

Why would you need to slide the bar, though? It appears that they are loaded as duplicate layers, so blending should work.

@edyoshikun (Contributor, Author)

I don't need the slide bar. The slide bar appears when you drag and drop the two different positions into napari.

@ziw-liu added the NGFF (OME-NGFF / OME-Zarr format) and upstream (issues with upstream dependencies) labels on Mar 24, 2023
@ziw-liu (Collaborator) commented Apr 5, 2023

I modified your code a little bit to write the FOVs in the same well:

import numpy as np

from iohub.ngff import open_ome_zarr, TransformationMeta

store_path = "test_translate.zarr"

tczyx_1 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)
tczyx_2 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)
coords_shift = [1., 1.0, 1.0, 10.0, 10.0]
coords_shift2 = [1., 1.0, 0., -10.0, -10.0]
scale_val = [1., 1.0, 1.0, 0.5, 0.5]
translation = TransformationMeta(type="translation", translation=coords_shift)
scaling = TransformationMeta(type="scale", scale=scale_val)
translation2 = TransformationMeta(
    type="translation", translation=coords_shift2
)
with open_ome_zarr(
    store_path,
    layout="hcs",
    mode="w-",
    channel_names=["DAPI", "GFP", "Brightfield"],
) as dataset:
    # Create and write to positions
    # This affects the tile arrangement in visualization
    position = dataset.create_position(0, 0, 0)
    position.create_image("0", tczyx_1, transform=[translation])
    position = dataset.create_position(0, 0, 1)
    position.create_image("0", tczyx_2, transform=[translation2, scaling])
    # Print dataset summary
    dataset.print_tree()

And it works as expected, either by dragging in all the FOVs or by opening them from the command line with napari test_translate.zarr/0/0/*.

[screenshot]
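As a side check, the per-dataset transforms can be confirmed directly in the stored NGFF metadata; a minimal sketch, assuming zarr-python is available and the store written by the snippet above:

import json
import zarr

# Open one FOV group read-only and print its multiscales metadata,
# which should list the coordinateTransformations for each dataset.
fov = zarr.open("test_translate.zarr/0/0/1", mode="r")
print(json.dumps(fov.attrs["multiscales"], indent=2))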

@edyoshikun (Contributor, Author)

For multiple channels there will be duplicate layers, but I guess that's fine. Writing the multiple positions into one well works.
I think that for movies and good snapshots we will just create a separate zarr store that does some blending between positions (a rough sketch of that idea follows below).

What do you think @mattersoflight?
[screenshot]
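A rough sketch of that idea, assuming two same-shape FOVs and reusing only the iohub API shown above (the maximum blend and the test_blended.zarr name are placeholders, not a real stitching routine):

import numpy as np
from iohub.ngff import open_ome_zarr

tczyx_1 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)
tczyx_2 = np.random.randint(
    0, np.iinfo(np.uint16).max, size=(1, 3, 3, 32, 32), dtype=np.uint16
)

# Naive blend of the two FOVs; real stitching would honor the transforms.
blended = np.maximum(tczyx_1, tczyx_2)

with open_ome_zarr(
    "test_blended.zarr",
    layout="hcs",
    mode="w-",
    channel_names=["DAPI", "GFP", "Brightfield"],
) as dataset:
    position = dataset.create_position(0, 0, 0)
    position.create_image("0", blended)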

@mattersoflight (Collaborator) commented Apr 6, 2023

Writing the multiple positions into one well works.

I think this is a good solution for writing the reconstructed data and getting the analysis started.
When this reconstructed data is further analyzed, we will be stitching volumes, projecting data, overlaying channels, etc. in some order. These reduced datasets should be separate zarr stores, potentially not even OME-Zarr.
