Add acquisition metadata extraction for XY and Z scale #8

Conversation
Pull request overview
This PR adds acquisition metadata extraction capabilities to enable correct 3D aspect ratio rendering in the ndviewer_light application. The implementation extracts physical pixel sizes from various sources and applies voxel scale transformations for proper 3D visualization of anisotropic data.
Key changes:
- Metadata extraction functions for OME-TIFF, acquisition_parameters.json, and TIFF tags
- Vispy VolumeVisual monkey-patching to support anisotropic voxel rendering via vertex scaling
- Storage of physical pixel sizes in xarray attributes for downstream use
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 11 comments.
| File | Description |
|---|---|
| tests/test_metadata_extraction.py | Comprehensive test suite (17 tests) for metadata extraction from OME-XML, JSON parameters, and TIFF tags |
| tests/test_downsampling_wrapper.py | Added test to verify pixel size attributes are preserved through the downsampling wrapper |
| tests/test_3d_visualization.py | Manual visual test script for verifying 3D aspect ratio with synthetic anisotropic volumes |
| ndviewer_light.py | Core implementation: metadata extraction functions, vispy patching for anisotropic rendering, and integration into data loading pipeline |
ndviewer_light.py
Outdated
```python
def _patched_create_vertex_data(self):
    """Create vertices with Z scaling for anisotropic voxels."""
    global _current_voxel_scale
    shape = self._vol_shape

    # Get corner coordinates with Z scaling
    x0, x1 = -0.5, shape[2] - 0.5
    y0, y1 = -0.5, shape[1] - 0.5

    # Apply Z scale if set
    if _current_voxel_scale is not None:
        sz = _current_voxel_scale[2]
        z0, z1 = -0.5 * sz, (shape[0] - 0.5) * sz
    else:
        z0, z1 = -0.5, shape[0] - 0.5

    pos = np.array(
        [
            [x0, y0, z0],
            [x1, y0, z0],
            [x0, y1, z0],
            [x1, y1, z0],
            [x0, y0, z1],
            [x1, y0, z1],
            [x0, y1, z1],
            [x1, y1, z1],
        ],
        dtype=np.float32,
    )

    indices = np.array(
        [2, 6, 0, 4, 5, 6, 7, 2, 3, 0, 1, 5, 3, 7], dtype=np.uint32
    )
```
Copilot AI · Jan 4, 2026
The patched _create_vertex_data method replaces the original implementation entirely but doesn't call the original method. If the original _create_vertex_data performs additional necessary setup beyond setting vertices and indices, this patch could break functionality. Consider calling the original method first and then modifying only the vertex positions, or at minimum document why completely replacing the method is safe.
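The wrap-and-delegate pattern the reviewer describes, sketched here on a stand-in class since the real VolumeVisual internals are not shown in this diff (all names below are illustrative assumptions):

```python
# Generic monkey-patch pattern: keep a reference to the original,
# call it first so all of its setup runs, then post-process.
class Visual:
    def create_vertex_data(self):
        self.pos = [[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]]  # stand-in setup

_orig = Visual.create_vertex_data

def _patched(self):
    _orig(self)  # preserve any extra setup the original performs
    sz = getattr(self, "_voxel_scale", None)
    if sz is not None:
        # Rescale only the Z coordinate of each generated vertex
        self.pos = [[x, y, z * sz[2]] for x, y, z in self.pos]

Visual.create_vertex_data = _patched
```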
[Claude Code] Fixed in commit 7e69248 - now patches VolumeVisual.__init__ to capture the scale at construction, storing it as an instance attribute instead of relying on a global.
```python
with tf.TiffFile(tiff_path) as tif:
    page = tif.pages[0]

    # Try ImageDescription tag FIRST for JSON metadata
```
Copilot AI · Jan 4, 2026
The function reads pixel size from JSON metadata without validating that the value is positive or within reasonable bounds before applying the sanity check. If the JSON contains a negative or zero pixel_size_um, float(metadata[key]) will succeed but the sanity check will reject it. However, there's no validation to prevent negative values being returned if they somehow pass the sanity check. Consider adding explicit validation: val > 0 before the range check.
Suggested change:
```python
# Try ImageDescription tag FIRST for JSON metadata
val = float(metadata[key])
# Explicitly require a strictly positive pixel size
if val <= 0:
    continue
```
[Claude Code] Fixed in commit 7e69248 - added explicit val <= 0 check before sanity check.
```python
x_res_value = x_res.value
if isinstance(x_res_value, tuple) and len(x_res_value) == 2:
    pixels_per_unit = x_res_value[0] / x_res_value[1]
```
Copilot AI · Jan 4, 2026
The sanity check range (0.01 < val < 100) for pixel sizes is reasonable for most microscopy, but the upper bound of 100 micrometers might be too restrictive for some low-magnification imaging applications. For example, a 2x objective with a large-pixel camera could legitimately have pixel sizes around 50-150 micrometers. Consider documenting this limitation or making the range configurable.
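If the bounds were made configurable, a minimal sketch could look like this (constant and function names are assumptions, not the PR's code):

```python
# Hypothetical configurable bounds; the PR hard-codes 0.01-100 um.
MIN_PIXEL_SIZE_UM = 0.01
MAX_PIXEL_SIZE_UM = 100.0  # raise for low-magnification / large-pixel setups

def is_plausible_pixel_size(val_um: float) -> bool:
    """Sanity check covering typical microscopy pixel sizes."""
    return MIN_PIXEL_SIZE_UM < val_um < MAX_PIXEL_SIZE_UM
```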
[Claude Code] Fixed in commit 7e69248 - added documentation comment explaining the 0.01-100 µm range covers most use cases including low-mag imaging.
```python
# Module-level variable for voxel scale (used by monkey-patched add_volume)
_current_voxel_scale: Optional[Tuple[float, float, float]] = None
```
Copilot AI · Jan 4, 2026
The global variable _current_voxel_scale is accessed and modified without thread safety mechanisms. If the viewer is used in a multi-threaded context (e.g., loading data from multiple sources or multiple viewer instances), this could lead to race conditions where the wrong scale is applied to a volume.
[Claude Code] Fixed in commit 7e69248 - voxel scale is now stored as instance attribute on VolumeVisual during init, eliminating race conditions.
tests/test_downsampling_wrapper.py
Outdated
| """Test that pixel size attrs are preserved through the wrapper. | ||
|
|
||
| Physical pixel sizes (pixel_size_um, dz_um) are stored in attrs for | ||
| reference but don't affect downsampling (NDV assumes isotropic voxels). |
Copilot AI · Jan 4, 2026
The docstring states that NDV assumes isotropic voxels, but the PR description and implementation in ndviewer_light.py shows that the code now applies voxel scale transformations for anisotropic voxels via the vispy VolumeVisual patch. This comment is misleading or outdated.
Suggested change:
```python
reference and do not affect the numerical downsampling, which operates
in index space; voxel scaling for display is handled separately.
```
[Claude Code] Fixed in commit 7e69248 - updated docstring to clarify that pixel sizes don't affect numerical downsampling (index space), voxel scaling for display handled separately via vispy patch.
ndviewer_light.py
Outdated
```python
def _patched_create_vertex_data(self):
    """Create vertices with Z scaling for anisotropic voxels."""
    global _current_voxel_scale
    shape = self._vol_shape

    # Get corner coordinates with Z scaling
    x0, x1 = -0.5, shape[2] - 0.5
    y0, y1 = -0.5, shape[1] - 0.5

    # Apply Z scale if set
    if _current_voxel_scale is not None:
        sz = _current_voxel_scale[2]
```
Copilot AI · Jan 4, 2026
The patched _create_vertex_data method accesses the global _current_voxel_scale but does not handle race conditions. If multiple volumes are being created simultaneously, they might use the wrong scale factor. Consider storing the scale as an attribute on the VolumeVisual instance instead of using a global variable.
Suggested change:
```python
_orig_init = VolumeVisual.__init__

def _patched_init(self, *args, **kwargs):
    """Initialize VolumeVisual and capture the current voxel scale."""
    _orig_init(self, *args, **kwargs)
    # Capture the voxel scale active at construction time
    global _current_voxel_scale
    if _current_voxel_scale is not None:
        self._voxel_scale = _current_voxel_scale
    else:
        # Explicitly store None so getattr checks are consistent
        self._voxel_scale = None

VolumeVisual.__init__ = _patched_init

def _patched_create_vertex_data(self):
    """Create vertices with Z scaling for anisotropic voxels."""
    shape = self._vol_shape

    # Get corner coordinates with Z scaling
    x0, x1 = -0.5, shape[2] - 0.5
    y0, y1 = -0.5, shape[1] - 0.5

    # Apply Z scale if set on this instance
    scale = getattr(self, "_voxel_scale", None)
    if scale is not None:
        sz = scale[2]
```
[Claude Code] Fixed in commit 7e69248 - implemented suggested approach: patch VolumeVisual.__init__ to capture the scale at construction and store it as the self._voxel_scale instance attribute.
```python
# Only use resolution tags if we have a proper unit (inch=2 or cm=3)
unit_value = res_unit.value if res_unit else 1
if unit_value not in (2, 3):
    return None  # No unit or unknown unit - can't reliably convert

if x_res is not None:
    # XResolution is stored as a fraction (numerator, denominator)
    x_res_value = x_res.value
    if isinstance(x_res_value, tuple) and len(x_res_value) == 2:
        pixels_per_unit = x_res_value[0] / x_res_value[1]
```
Copilot AI · Jan 4, 2026
This test is always true because of the preceding condition.
Suggested change:
```python
# Only use resolution tags if we have a proper unit (inch=2 or cm=3)
unit_value = res_unit.value if res_unit else 1
if unit_value not in (2, 3):
    return None  # No unit or unknown unit - can't reliably convert

if x_res is not None:
    # XResolution is stored as a fraction (numerator, denominator)
    x_res_value = x_res.value
    if isinstance(x_res_value, tuple) and len(x_res_value) == 2:
        pixels_per_unit = x_res_value[0] / x_res_value[1]
        if unit_value == 2:  # inch
            # pixels/inch -> um/pixel: 25400 um/inch / pixels_per_inch
            pixel_size_um = 25400.0 / pixels_per_unit
        else:  # centimeter (unit_value == 3)
            # pixels/cm -> um/pixel: 10000 um/cm / pixels_per_cm
            pixel_size_um = 10000.0 / pixels_per_unit
        if 0.01 < pixel_size_um < 100:
            # Sanity check: typical microscopy pixel sizes are 0.1-10 um
            return pixel_size_um
```
[Claude Code] Fixed in commit 7e69248 - removed redundant 'if pixels_per_unit > 0' check since the preceding 'if pixels_per_unit <= 1: return None' already ensures positive value.
```python
if desc is not None:
    desc_str = desc.value
```
Copilot AI · Jan 4, 2026
'except' clause does nothing but pass and there is no explanatory comment.
Suggested change:
```python
except (json.JSONDecodeError, ValueError, TypeError) as exc:
    # Failed to parse pixel size from JSON; fall back to resolution tags below.
    logging.debug(
        "Could not parse pixel size from TIFF ImageDescription JSON: %s",
        exc,
    )
```
[Claude Code] Fixed in commit 7e69248 - added explanatory comment for the except clause.
Force-pushed from 0fb3f6f to 4ba95ba
Extract physical pixel sizes from acquisition metadata to ensure correct XY and Z scaling in the viewer:
- For OME-TIFF: parse PhysicalSizeX/Y/Z from OME-XML metadata
  - Supports multiple OME namespace versions (2016-06, 2015-01, 2013-06)
  - Handles unit conversion (nm, mm, m to micrometers)
- For single-TIFF: read from acquisition_parameters.json
  - Supports common key names (pixel_size_um, dz_um, z_step, etc.)

Scale metadata is stored in xarray attrs as:
- pixel_size_um: XY pixel size in micrometers
- dz_um: Z step size in micrometers
- pixel_size_x_um, pixel_size_y_um, pixel_size_z_um: individual axis sizes

When scale metadata is found, it is logged at data load time.
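A rough sketch of the namespace-tolerant lookup this commit describes (the function name and return convention are assumptions, not the PR's code):

```python
import xml.etree.ElementTree as ET

# OME schema versions the commit says are supported
OME_NS_VERSIONS = ("2016-06", "2015-01", "2013-06")

def physical_sizes_from_ome_xml(xml_text):
    """Return (size_x, size_y, size_z) from OME-XML Pixels attrs, or Nones."""
    root = ET.fromstring(xml_text)
    for ver in OME_NS_VERSIONS:
        ns = {"ome": f"http://www.openmicroscopy.org/Schemas/OME/{ver}"}
        pixels = root.find(".//ome:Pixels", ns)
        if pixels is not None:
            def attr(name):
                raw = pixels.get(name)
                return float(raw) if raw is not None else None
            return attr("PhysicalSizeX"), attr("PhysicalSizeY"), attr("PhysicalSizeZ")
    return None, None, None
```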
When physical pixel sizes (pixel_size_um, dz_um) are available in xarray attrs, the downsampling now maintains correct physical aspect ratio:
- Compute physical dimensions using pixel sizes
- Scale uniformly in physical space to fit texture limits
- Z dimension is resampled to match physical proportions
- Falls back to simple independent scaling when pixel sizes are unknown

This ensures the 3D volume rendering displays correct proportions even when XY is downsampled but Z would not need it (or vice versa). Added 3 new tests for aspect ratio preservation.

For single-TIFF datasets, pixel size can now be read from:
1. acquisition_parameters.json (primary source)
2. TIFF ImageDescription tag (JSON metadata from microscopy software)
3. TIFF XResolution/YResolution tags with inch or cm units

This provides automatic pixel size detection when TIFFs contain proper resolution metadata, without requiring a separate JSON file. Added 5 new tests for TIFF tag reading.

Remove Z upsampling for aspect ratio correction - it was wasteful and introduced interpolation artifacts. NDV/vispy Volume assumes isotropic voxels; proper aspect ratio support would require deeper renderer integration. Now:
- XY is scaled uniformly (preserves XY aspect ratio)
- Z is scaled independently only if it exceeds the texture limit
- Physical pixel sizes are stored in attrs for reference (pixel_size_um, dz_um)

The scale metadata can be used for measurements, exports, or future NDV versions that support anisotropic voxels via scale transforms.

Instead of wastefully upsampling Z data, apply a scale transform to the vispy Volume visual to display anisotropic voxels with correct proportions. How it works:
1. Physical pixel sizes (pixel_size_um, dz_um) are read from metadata
2. The Z scale factor is computed as dz / pixel_size_xy
3. VispyArrayCanvas.add_volume is monkey-patched to apply an STTransform
4. The volume visual is scaled by (1, 1, z_scale) for the correct aspect ratio

Example: if pixel_size_xy = 0.325 µm and dz = 1.5 µm, then z_scale = 1.5 / 0.325 ≈ 4.615, so the Z dimension appears 4.615x larger in the 3D view (the correct physical proportion). This approach needs no data interpolation or upsampling, no additional memory, and yields correct visual proportions in 3D rendering.
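In code, the transform step amounts to something like the following sketch (the function name is an assumption; STTransform is vispy's translate/scale transform):

```python
from vispy.visuals.transforms import STTransform

def apply_voxel_scale(volume_visual, pixel_size_xy_um: float, dz_um: float) -> None:
    """Scale a vispy volume visual so anisotropic voxels render in true proportion."""
    z_scale = dz_um / pixel_size_xy_um  # e.g. 1.5 / 0.325 ≈ 4.615
    # XY stays 1:1; only Z is stretched to its physical proportion
    volume_visual.transform = STTransform(scale=(1.0, 1.0, z_scale))
```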
- Compute pixel size from sensor_pixel_size_um and magnification, with tube lens ratio correction (actual_mag = nominal_mag × tube_lens/obj_tube_lens)
- Support the 'acquisition parameters.json' filename variant (with a space)
- Support the 'dz(um)' key format for Z step size
- Add a manual 3D visualization test with synthetic volume data
- Add tests for computed pixel size and tube lens correction

- Apply black formatting to ndviewer_light.py
- Make TIFF pixel size tests skip gracefully when resolution tag handling varies between tifffile versions or environments

Check both NDV_AVAILABLE and LAZY_LOADING_AVAILABLE flags, since the Downsampling3DXarrayWrapper is only defined when both dependencies are available. This fixes test failures in CI where scipy (required for lazy loading) may not be installed.
Force-pushed from 1255678 to 4da6958
- Fix thread safety: store voxel scale as an instance attribute instead of a global variable to prevent race conditions with multiple volumes
- Add validation for negative/zero pixel sizes in TIFF metadata parsing
- Document the sanity check range (0.01-100 µm) for pixel sizes
- Add an explanatory comment for the JSON parsing except clause
- Simplify an always-true condition in TIFF resolution tag parsing
- Fix a misleading docstring about isotropic voxels in the test file
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
ndviewer_light.py
Outdated
```python
        self._index_buffer.set_data(indices)


VolumeVisual._create_vertex_data = _patched_create_vertex_data
VolumeVisual.__init__ = _patched_init
```
Copilot AI · Jan 4, 2026
VolumeVisual.__init__ is assigned twice, first on line 92 and again on line 137. The second assignment on line 137 is redundant since the patched version was already assigned on line 92. This could lead to confusion when debugging or maintaining the code.
Suggested change (delete the duplicate assignment):
```python
VolumeVisual.__init__ = _patched_init
```
[Claude Code] Fixed in commit 33cbf3c - removed the duplicate __init__ assignment on line 137.
ndviewer_light.py
Outdated
```python
1. XResolution/YResolution tags with ResolutionUnit (inch or cm only)
2. ImageDescription tag (JSON metadata from some microscopy software)
```
Copilot AI · Jan 4, 2026
The comment states the order is "ImageDescription tag (JSON metadata from some microscopy software)" then "XResolution/YResolution tags", but the actual docstring in the function says the opposite order (1. XResolution/YResolution, 2. ImageDescription). The comment should match the docstring order for consistency, or the docstring should be updated to reflect the actual priority order implemented in the code (ImageDescription is checked FIRST, as shown by the code and the inline comment on line 567).
Suggested change:
```python
1. ImageDescription tag (JSON metadata from some microscopy software)
2. XResolution/YResolution tags with ResolutionUnit (inch or cm only)
```
[Claude Code] Fixed in commit 33cbf3c - updated docstring to reflect actual code priority (ImageDescription first, then XResolution/YResolution).
tests/test_metadata_extraction.py
Outdated
```python
@pytest.mark.skipif(
    not pytest.importorskip("tifffile", reason="tifffile not available"),
    reason="tifffile not available",
)
```
Copilot AI · Jan 4, 2026
The double negation 'not pytest.importorskip' combined with 'skipif' creates a confusing double negative. The condition should be simplified to use a positive check. Additionally, the reason string is duplicated - it appears both in the importorskip call and as the skipif reason parameter, which is redundant.
[Claude Code] Fixed in commit 33cbf3c - replaced confusing skipif(not importorskip(...)) with direct pytest.importorskip() call at test start.
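For reference, a minimal example of the corrected pattern (the test body here is illustrative, not from the PR):

```python
import numpy as np
import pytest

def test_read_tiff_pixel_size(tmp_path):
    tifffile = pytest.importorskip("tifffile")  # skips this test if unavailable
    path = tmp_path / "img.tiff"
    tifffile.imwrite(str(path), np.zeros((4, 4), dtype=np.uint16))
    assert path.exists()
```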
tests/test_metadata_extraction.py
Outdated
```python
@pytest.mark.skipif(
    not pytest.importorskip("tifffile", reason="tifffile not available"),
    reason="tifffile not available",
)
```
Copilot AI · Jan 4, 2026
The double negation 'not pytest.importorskip' combined with 'skipif' creates a confusing double negative that should be simplified to use a positive check. Additionally, the reason string is duplicated between the importorskip call and the skipif reason parameter.
[Claude Code] Fixed in commit 33cbf3c - replaced confusing skipif(not importorskip(...)) with direct pytest.importorskip() call at test start.
tests/test_metadata_extraction.py
Outdated
```python
@pytest.mark.skipif(
    not pytest.importorskip("tifffile", reason="tifffile not available"),
    reason="tifffile not available",
)
```
Copilot AI · Jan 4, 2026
The double negation 'not pytest.importorskip' combined with 'skipif' creates a confusing double negative that should be simplified to use a positive check. Additionally, the reason string is duplicated between the importorskip call and the skipif reason parameter.
[Claude Code] Fixed in commit 33cbf3c - replaced confusing skipif(not importorskip(...)) with direct pytest.importorskip() call at test start.
tests/test_3d_visualization.py
Outdated
```python
    Tuple of (volume as uint16 array, tmpdir Path)
    """
    # Create output directory with proper structure for ndviewer_light
    tmpdir = Path(tempfile.mkdtemp(prefix="ndv_3d_test_"))
```
Copilot AI · Jan 4, 2026
The variable 'tmpdir' should be named 'tmp_dir' to follow Python naming conventions (snake_case with underscores). The current naming without underscore is inconsistent with the 'subdir' variable below and Python PEP 8 style guidelines.
[Claude Code] Fixed in commit 33cbf3c - renamed tmpdir to tmp_dir for PEP 8 snake_case compliance.
* Display channel names instead of numeric indices
  - Extract channel names from OME metadata or filenames
  - Store channel names in xarray attrs for later use
  - Add _update_channel_labels() method to update NDV's internal _lut_controllers with actual channel names
  - Use QTimer delay to ensure NDV initialization completes before updating labels
* Address PR review comments for channel name display
  - Keep xarray coords numeric instead of string channel names to avoid luts/coords type mismatch (addresses comments #2 and #3)
  - Replace fixed 500ms QTimer delay with a retry mechanism that polls for _lut_controllers readiness with bounded retries (addresses comment #4)
  - Add documentation noting _lut_controllers is a private API that may change in future ndv versions (addresses comment #1)
* Address PR review comments (round 2)
  - Fix single-TIFF path to use numeric coords instead of string channel names, consistent with OME-TIFF path (comment #1)
  - Add generation counter to cancel stale retry callbacks when the viewer is replaced by a new dataset load (comment #2)
  - Add hasattr check for synchronize() method with debug logging when missing (comment #3)
  - Remove placeholder issue URL, keep explanation about private API (comment #4)
* Add comment explaining synchronize() purpose
* Address PR review comments (round 4)
  - Fix misleading "all axes" comments to accurately describe which axes use numeric indices vs actual values (comments #3, #5)
  - Preserve partial channel names from OME metadata instead of discarding all when the count doesn't match n_c (comment #4)
* Address PR review comments (round 5)
  - Initialize _channel_label_generation in __init__ for clarity instead of relying on getattr defaults (comment #5)
  - Add debug logging when channel label update succeeds (comment #6)
* Address PR review comments (round 6)
  - Rename 'channels' to 'channel_names' in single-TIFF path for consistency with OME-TIFF path (comment #1)
  - Initialize _pending_channel_label_retries in __init__ alongside _channel_label_generation for consistency (comment #3)
* Add unit tests for channel name display feature
  - Coverage: OME-TIFF channel name extraction and fallback logic; single-TIFF channel name extraction from filenames; retry mechanism with generation counter; channel label update on NDV controllers; integration tests for end-to-end behavior
  - 25 tests, all passing
* Address PR review comments (round 7)
  - Clarify that the numeric coordinates convention applies to both OME-TIFF and single-TIFF paths (comment #5)
  - Document graceful failure mode if the private _lut_controllers API changes (comment #9)
* Address PR review comments (round 8)
  - Change timeout log from debug to warning for user visibility (comment #2)
  - Add pytest install note to tests/README.md (comment #3)
  - Remove unused pytest import (comment #8)
  - Remove unused ch2 variable (comment #6)
  - Remove unnecessary pending_retries assignment (comment #7)
  - Add explanatory comments to except clauses (comments #9, #10, #11)
* Address PR review comments (round 9)
  - Fix debug log to track actual updated names instead of slicing the list, handling controller gaps correctly (comment #15)
  - Remove unnecessary getattr for _pending_channel_label_retries since it's now initialized in __init__ (comment #21)
* Fix channel labels not updating during in-place refresh
  - During live acquisition, if _try_inplace_ndv_update succeeded, the code returned early without scheduling channel label updates, so labels showed numeric indices instead of names (e.g., "DAPI", "GFP") after in-place data swaps
* Fix CI: black formatting and skip downsampling tests when unavailable
  - Apply black formatting to ndviewer_light.py and test_channel_names.py
  - Add pytest skipif marker to test_downsampling_wrapper.py to skip tests when Downsampling3DXarrayWrapper is not available (requires scipy)
* Add scipy to CI environment for 3D downsampling tests
  - scipy.ndimage.zoom is used by Downsampling3DXarrayWrapper for resampling large 3D volumes to fit within OpenGL texture limits
* Remove accidentally committed IDE config files
  - Remove .claude/agents/ directory (IDE-specific configuration) and the .coverage test artifact; add both to .gitignore to prevent future commits
* Replace line number references with method names in tests
  - Line numbers drift as code evolves, making references misleading; method names are more stable and searchable with IDE navigation
  - Updated test_channel_names.py inline comments and tests/README.md to reference method names
* Fix README: "three main areas" → "five main areas" (the test file has five test classes, not three)
* Change synchronize() fallback log from debug to warning
  - If synchronize() doesn't exist on the LUT controller, channel labels may not appear in the UI; a warning is more appropriate than debug level
* Use _ = for the intentionally unused ET.SubElement return value
* Extract constants and helper for channel label update retry
  - Add CHANNEL_LABEL_UPDATE_MAX_RETRIES (20) and CHANNEL_LABEL_UPDATE_RETRY_DELAY_MS (100) constants
  - Extract _initiate_channel_label_update() helper method to DRY up duplicated code between _store_data_from_loader and _set_ndv_data
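A rough sketch of the generation-guarded retry those commits describe (the attribute names, controller interface, and callback shape are all assumptions, not the PR's code):

```python
from qtpy.QtCore import QTimer

CHANNEL_LABEL_UPDATE_MAX_RETRIES = 20
CHANNEL_LABEL_UPDATE_RETRY_DELAY_MS = 100

def schedule_label_update(viewer, names, generation, attempt=0):
    """Poll until the viewer's LUT controllers exist, then apply channel names."""
    if generation != viewer.channel_label_generation:  # assumed attribute
        return  # a newer dataset load superseded this update; stop retrying
    controllers = getattr(viewer, "lut_controllers", None)  # assumed mapping
    if controllers:
        for idx, name in enumerate(names):
            if idx in controllers:
                controllers[idx].name = name  # assumed controller attribute
        return
    if attempt < CHANNEL_LABEL_UPDATE_MAX_RETRIES:
        QTimer.singleShot(
            CHANNEL_LABEL_UPDATE_RETRY_DELAY_MS,
            lambda: schedule_label_update(viewer, names, generation, attempt + 1),
        )
```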
- Remove duplicate VolumeVisual.__init__ assignment (was on lines 92 and 137)
- Fix docstring order in read_tiff_pixel_size to match code priority (ImageDescription checked first, then XResolution/YResolution)
- Simplify confusing double negation in test skip conditions by using pytest.importorskip() directly instead of skipif(not importorskip(...))
- Rename tmpdir to tmp_dir for PEP 8 snake_case compliance
Resolved conflict in _load_single_tiff: kept both the channel_names attribute from main and the pixel_size attributes from the feature branch.
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 8 comments.
Comments suppressed due to low confidence (1)
ndviewer_light.py:1221
- When pixel_size_x and pixel_size_y differ (non-isotropic XY pixels), averaging them at line 1221 can be misleading. While many microscopes have square pixels, some systems may have slightly non-square pixels. Consider logging a warning when pixel_size_x and pixel_size_y differ significantly (e.g., by more than 1%), or storing both values separately and using the X value for the averaged pixel_size_um (since it's often the primary scan direction).
```python
# Ensure standard dims exist with singleton axes if missing
for ax in ["time", "fov", "z", "channel", "y", "x"]:
    if ax not in xarr.dims:
```
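A minimal sketch of the warning the suppressed comment suggests (the function name is an assumption; the 1% threshold comes from the comment, not the codebase):

```python
import logging

def average_xy_pixel_size(px_x_um: float, px_y_um: float) -> float:
    """Average X/Y pixel sizes, warning when pixels are visibly non-square."""
    if abs(px_x_um - px_y_um) > 0.01 * max(px_x_um, px_y_um):
        logging.warning(
            "Non-square pixels: X=%.4f um, Y=%.4f um; averaging them",
            px_x_um, px_y_um,
        )
    return (px_x_um + px_y_um) / 2.0
```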
| logger.debug("Failed to read acquisition parameters: %s", e) | ||
| return None, None | ||
|
|
||
|
|
||
| def read_tiff_pixel_size(tiff_path: str) -> Optional[float]: | ||
| """Read pixel size from TIFF metadata tags. |
Copilot AI · Jan 4, 2026
The docstring mentions "inch or cm only" for ResolutionUnit but doesn't explain what happens with other unit values (unit_value=1 is "no unit"). Consider documenting that unit_value=1 (no absolute unit) is explicitly rejected at line 602-603 to avoid ambiguous conversions, and that only inch (2) and cm (3) units are supported for reliable physical size extraction.
[Claude Code] Fixed in commit 998510d - updated docstring to explain that unit_value=1 (no absolute unit) is explicitly rejected because it cannot be reliably converted to physical units.
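For reference, the TIFF 6.0 ResolutionUnit values behind that docstring, as a small lookup (the names here are illustrative, not the module's):

```python
# TIFF 6.0 ResolutionUnit values:
#   1 = no absolute unit (rejected: cannot be converted to a physical size)
#   2 = inch       -> 25400 um per unit
#   3 = centimeter -> 10000 um per unit
UM_PER_RESOLUTION_UNIT = {2: 25400.0, 3: 10000.0}

def um_per_pixel(pixels_per_unit: float, unit_value: int):
    """Convert a TIFF resolution to micrometers per pixel, or None if unusable."""
    factor = UM_PER_RESOLUTION_UNIT.get(unit_value)
    if factor is None or pixels_per_unit <= 0:
        return None
    return factor / pixels_per_unit
```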
```python
tmp_dir = Path(tempfile.mkdtemp(prefix="ndv_3d_test_"))
subdir = tmp_dir / "0"
subdir.mkdir()

# Create coordinate grids in physical units
x = np.arange(nx) * pixel_xy
y = np.arange(ny) * pixel_xy
z = np.arange(nz) * pixel_z
X, Y, Z = np.meshgrid(x, y, z, indexing="ij")

volume = np.zeros((nx, ny, nz), dtype=np.float32)

# 1. Add spheres at different depths (like cell nuclei)
spheres = [
    (25, 25, 15, 12, 1.0),  # (cx_um, cy_um, cz_um, radius_um, intensity)
    (75, 30, 24, 10, 0.8),
    (50, 70, 36, 14, 1.2),
    (30, 60, 45, 8, 0.6),
    (70, 75, 12, 11, 0.9),
]
for cx, cy, cz, r, intensity in spheres:
    dist = np.sqrt((X - cx) ** 2 + (Y - cy) ** 2 + (Z - cz) ** 2)
    sphere_signal = intensity * np.exp(-0.5 * (dist / (r * 0.5)) ** 2)
    sphere_signal[dist > r * 2] = 0
    volume += sphere_signal

# 2. Add a helix structure (like DNA or spiral vessel)
t = np.linspace(0, 4 * np.pi, 1000)
helix_x = 50 + 20 * np.cos(t)
helix_y = 50 + 20 * np.sin(t)
helix_z = np.linspace(6, 54, len(t))
for hx, hy, hz in zip(helix_x, helix_y, helix_z):
    dist = np.sqrt((X - hx) ** 2 + (Y - hy) ** 2 + (Z - hz) ** 2)
    helix_signal = 0.7 * np.exp(-0.5 * (dist / 3) ** 2)
    helix_signal[dist > 9] = 0
    volume += helix_signal

# 3. Add a vertical hollow tube (like a blood vessel)
for zi in range(nz):
    dist_to_axis = np.sqrt((X[:, :, zi] - 15) ** 2 + (Y[:, :, zi] - 85) ** 2)
    tube_signal = 0.5 * np.exp(-0.5 * ((dist_to_axis - 8) / 2) ** 2)
    volume[:, :, zi] += tube_signal

# Transpose to (z, y, x) for saving
volume = volume.transpose(2, 1, 0)

# Normalize to use ~75% of dynamic range (avoid saturation in volume rendering)
volume = (volume / volume.max()) * 50000
volume = volume.astype(np.uint16)

# Save as individual TIFFs
for z_idx in range(nz):
    tiff_path = subdir / f"A1_0_{z_idx}_GFP.tiff"
    tifffile.imwrite(str(tiff_path), volume[z_idx])

# Create acquisition parameters
params = {"pixel_size_um": pixel_xy, "dz_um": pixel_z}
with open(tmp_dir / "acquisition_parameters.json", "w") as f:
    json.dump(params, f)

return volume, tmp_dir
```
Copilot AI · Jan 4, 2026
The temporary directory created at line 38 is never explicitly cleaned up. While Python's garbage collector may eventually remove it, the temporary directory and all test TIFF files will persist on disk until manual cleanup or system reboot. Consider using a context manager or explicitly removing the directory in a finally block, or document that users should manually clean up /tmp/ndv_3d_test_* directories after testing.
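If automatic cleanup were wanted, the standard-library pattern would be the context manager below; the test deliberately keeps the files for inspection instead:

```python
import tempfile
from pathlib import Path

# Files live only for the duration of the with-block
with tempfile.TemporaryDirectory(prefix="ndv_3d_test_") as d:
    tmp_dir = Path(d)
    (tmp_dir / "note.txt").write_text("inspect me before the block exits")
# tmp_dir and its contents are removed here
```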
[Claude Code] Fixed in commit 998510d - added note in docstring explaining that temp directories are intentionally NOT cleaned up to allow post-test inspection, with manual cleanup command.
ndviewer_light.py
Outdated
```python
dz = float(params[key])
break
```
Copilot AI · Jan 4, 2026
The dz parameter extracted from acquisition_parameters.json is not validated to be positive. Negative or zero Z-step sizes are physically meaningless and could cause division by zero or other issues downstream (e.g., when computing z_scale at line 1388). Add validation to ensure dz is strictly positive before returning it, similar to the sanity checks applied to pixel_size in read_tiff_pixel_size.
Suggested change:
```python
try:
    candidate_dz = float(params[key])
except (TypeError, ValueError):
    # Skip non-numeric dz values
    continue
# dz must be strictly positive to be physically meaningful
if candidate_dz > 0.0:
    dz = candidate_dz
    break
```
[Claude Code] Fixed in commit 998510d - added validation requiring dz to be strictly positive before accepting it.
.claude/agents/code-simplifier.md
Outdated
````markdown
---
name: code-simplifier
description: Use this agent when you have written new code that needs to be reviewed for simplification opportunities while maintaining performance and robustness. This includes after implementing new features, refactoring existing code, or completing a logical unit of work. The agent focuses on reducing complexity, improving readability, and ensuring the code remains performant and robust.\n\nExamples:\n\n<example>\nContext: User has just implemented a new utility function\nuser: "Write a function that filters and transforms an array of user objects to get active premium users' emails"\nassistant: "Here's the implementation:\n```javascript\nfunction getActivePremiumUserEmails(users) {\n const result = [];\n for (let i = 0; i < users.length; i++) {\n if (users[i].isActive === true) {\n if (users[i].subscription === 'premium') {\n if (users[i].email !== null && users[i].email !== undefined) {\n result.push(users[i].email.toLowerCase());\n }\n }\n }\n }\n return result;\n}\n```\nNow let me use the code-simplifier agent to review this for potential simplifications while ensuring it remains performant and robust."\n<commentary>\nSince new code was written, use the Task tool to launch the code-simplifier agent to analyze the nested conditionals and loop structure for simplification opportunities.\n</commentary>\n</example>\n\n<example>\nContext: User has completed a data processing module\nuser: "I've finished implementing the order processing logic, can you check if it can be simplified?"\nassistant: "I'll use the code-simplifier agent to review your order processing implementation for simplification opportunities while maintaining performance and robustness."\n<commentary>\nThe user explicitly requested simplification review, so launch the code-simplifier agent to analyze the recently added code.\n</commentary>\n</example>\n\n<example>\nContext: User just wrote error handling code\nuser: "Add retry logic with exponential backoff to this API call"\nassistant: "Here's the retry logic implementation:\n```python\ndef fetch_with_retry(url, max_retries=3):\n retry_count = 0\n last_exception = None\n while retry_count < max_retries:\n try:\n response = requests.get(url)\n if response.status_code == 200:\n return response.json()\n elif response.status_code >= 500:\n retry_count = retry_count + 1\n wait_time = (2 ** retry_count) * 0.1\n time.sleep(wait_time)\n else:\n raise HTTPError(f'Client error: {response.status_code}')\n except RequestException as e:\n last_exception = e\n retry_count = retry_count + 1\n wait_time = (2 ** retry_count) * 0.1\n time.sleep(wait_time)\n raise last_exception or Exception('Max retries exceeded')\n```\nLet me invoke the code-simplifier agent to review this implementation for any simplification opportunities while ensuring the retry logic remains robust."\n<commentary>\nNew error handling code with some repetition was written. Use the code-simplifier agent to identify DRY violations and potential simplifications while preserving the robustness of the retry mechanism.\n</commentary>\n</example>
model: sonnet
color: green
---

You are an expert code simplification specialist with deep expertise in software design patterns, algorithmic efficiency, and defensive programming. Your mission is to review newly added code and identify opportunities to simplify it while rigorously maintaining—or improving—its performance characteristics and robustness.

## Core Responsibilities

You will analyze recently written code with a critical eye toward:
1. **Complexity Reduction**: Identifying unnecessarily complex constructs that can be expressed more simply
2. **Performance Preservation**: Ensuring any simplifications maintain or improve time and space complexity
3. **Robustness Assurance**: Verifying error handling, edge cases, and defensive programming remain intact

## Review Methodology

When reviewing code, follow this systematic approach:

### Step 1: Understand Intent
- Identify the core purpose of the code
- Note the expected inputs, outputs, and side effects
- Understand the error conditions that must be handled

### Step 2: Identify Simplification Opportunities
Look for these common patterns:
- **Nested conditionals** that can be flattened with early returns or guard clauses
- **Redundant variables** that add no clarity
- **Verbose loops** that can use built-in methods (map, filter, reduce, list comprehensions)
- **Repeated code blocks** violating DRY principle
- **Over-engineered abstractions** for simple tasks
- **Unnecessary type conversions** or null checks
- **Complex boolean expressions** that can be simplified or extracted
- **Deeply nested callbacks** that can use async/await or promises

### Step 3: Validate Performance Impact
For each proposed simplification:
- Analyze time complexity (Big O) before and after
- Consider memory allocation patterns
- Evaluate hot path implications
- Check for hidden performance costs in syntactic sugar
- Verify no N+1 queries or unnecessary iterations are introduced

### Step 4: Verify Robustness
Ensure the simplified code:
- Handles all original edge cases (null, undefined, empty collections, boundary values)
- Maintains equivalent error handling and propagation
- Preserves thread safety if applicable
- Keeps defensive checks for external data
- Retains logging and observability hooks

## Output Format

Structure your review as follows:

### Summary
Brief overview of simplification opportunities found (or confirmation that code is already well-simplified).

### Detailed Findings
For each simplification opportunity:
1. **Location**: Specify where in the code
2. **Current Pattern**: Describe what exists
3. **Proposed Simplification**: Show the improved version
4. **Rationale**: Explain why this is simpler
5. **Performance Note**: Confirm performance is maintained/improved
6. **Robustness Check**: Confirm edge cases are preserved

### Simplified Code
Provide the complete simplified version with inline comments highlighting changes.

### Caveats
Note any simplifications you considered but rejected due to performance or robustness concerns.

## Quality Standards

- **Do not simplify at the cost of clarity** - readable code trumps clever code
- **Do not remove necessary defensive programming** - keep null checks, bounds validation, etc.
- **Do not assume optimization** - verify, don't guess, about performance implications
- **Do preserve error context** - simplified error handling must retain debugging information
- **Do consider maintainability** - simpler code should also be easier to modify

## Red Flags to Watch For

Be cautious about simplifications that:
- Replace explicit loops with chained methods on large datasets (potential performance hit)
- Remove seemingly redundant null checks that guard against external data
- Consolidate error handling in ways that lose specific error context
- Use language features that may not be available in the target environment
- Sacrifice debuggability for brevity

## Communication Style

- Be direct and specific in your recommendations
- Show before/after code comparisons
- Quantify improvements when possible (e.g., "reduces nesting depth from 4 to 2")
- Acknowledge when code is already well-written
- Explain the "why" behind each recommendation
- If no simplifications are warranted, clearly state this with reasoning
````
Copilot AI · Jan 4, 2026
This Claude AI agent configuration file appears to be unrelated to the acquisition metadata extraction feature described in this PR. Consider moving these files to a separate PR focused on development tooling configuration, or explain their relevance to this feature in the PR description.
[Claude Code] Fixed in commit 998510d - removed .claude/agents/ files from PR, they were auto-generated and unrelated to this feature.
ndviewer_light.py
Outdated
| if unit_lower in ("nm", "nanometer", "nanometers"): | ||
| return val / 1000.0 | ||
| elif unit_lower in ("mm", "millimeter", "millimeters"): | ||
| return val * 1000.0 | ||
| elif unit_lower in ("m", "meter", "meters"): | ||
| return val * 1e6 | ||
| # Default assumes micrometers (µm, um, micron, etc.) | ||
| return val |
Copilot AI · Jan 4, 2026
The to_micrometers function does not validate that the converted value is positive. Negative or zero physical sizes are physically meaningless and could cause issues downstream (e.g., division by zero when computing z_scale at line 1388). Consider adding validation after the unit conversion to ensure the result is strictly positive before returning it.
| if unit_lower in ("nm", "nanometer", "nanometers"): | |
| return val / 1000.0 | |
| elif unit_lower in ("mm", "millimeter", "millimeters"): | |
| return val * 1000.0 | |
| elif unit_lower in ("m", "meter", "meters"): | |
| return val * 1e6 | |
| # Default assumes micrometers (µm, um, micron, etc.) | |
| return val | |
| micrometers = val | |
| if unit_lower in ("nm", "nanometer", "nanometers"): | |
| micrometers = val / 1000.0 | |
| elif unit_lower in ("mm", "millimeter", "millimeters"): | |
| micrometers = val * 1000.0 | |
| elif unit_lower in ("m", "meter", "meters"): | |
| micrometers = val * 1e6 | |
| # Default assumes micrometers (µm, um, micron, etc.) for other units | |
| # Validate that the physical size is strictly positive | |
| if micrometers <= 0: | |
| logging.warning( | |
| "Ignoring non-positive physical size value %r (unit=%r) in OME metadata", | |
| value, | |
| unit, | |
| ) | |
| return None | |
| return micrometers |
[Claude Code] Fixed in commit 998510d - to_micrometers now validates that converted result is strictly positive, returns None otherwise.
```python
nominal_mag = None
obj_tube_lens = None
tube_lens = params.get("tube_lens_mm")

if sensor_pixel is not None and nominal_mag is not None and nominal_mag > 0:
    # Compute actual magnification with tube lens correction
    if (
        tube_lens is not None
        and obj_tube_lens is not None
        and obj_tube_lens > 0
    ):
        actual_mag = float(nominal_mag) * (
            float(tube_lens) / float(obj_tube_lens)
```
Copilot AI · Jan 4, 2026
The computed pixel size from sensor_pixel_size_um and magnification is not validated to be positive before being returned. If the magnification correction results in a negative or zero value (which could happen with unusual/incorrect parameter values), this could cause downstream issues. Consider adding validation similar to lines 586-591 to ensure the computed pixel size is within a reasonable range (0.01 to 100 µm).
[Claude Code] Fixed in commit 998510d - computed pixel size now validated against 0.01-100 µm range before being accepted.
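Putting the correction and the requested validation together, a sketch might read as follows (the signature and the reuse of the 0.01-100 µm bounds are assumptions):

```python
from typing import Optional

def computed_pixel_size_um(
    sensor_pixel_um: float,
    nominal_mag: float,
    tube_lens_mm: Optional[float] = None,
    obj_tube_lens_mm: Optional[float] = None,
) -> Optional[float]:
    """Object-space pixel size from sensor pixel size and (corrected) magnification."""
    actual_mag = nominal_mag
    if tube_lens_mm and obj_tube_lens_mm and obj_tube_lens_mm > 0:
        actual_mag = nominal_mag * (tube_lens_mm / obj_tube_lens_mm)
    if actual_mag <= 0:
        return None
    val = sensor_pixel_um / actual_mag
    # Apply the same plausibility bounds used elsewhere in the module
    return val if 0.01 < val < 100 else None
```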
.claude/agents/unit-test-writer.md
Outdated
````markdown
---
name: unit-test-writer
description: Use this agent when new code has been added and needs unit tests written for it. This includes new functions, classes, methods, API endpoints, or any other testable code units. The agent should be invoked after implementing new functionality to ensure proper test coverage.\n\nExamples:\n\n<example>\nContext: User just implemented a new utility function\nuser: "Please write a function that validates email addresses"\nassistant: "Here's the email validation function:\n```javascript\nfunction validateEmail(email) {\n const regex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;\n return regex.test(email);\n}\n```"\nassistant: "Now let me use the unit-test-writer agent to create comprehensive tests for this validation function."\n<launches unit-test-writer agent via Task tool>\n</example>\n\n<example>\nContext: User added a new class to handle shopping cart operations\nuser: "I just finished implementing the ShoppingCart class with add, remove, and calculateTotal methods"\nassistant: "I'll use the unit-test-writer agent to generate unit tests for your new ShoppingCart class and all its methods."\n<launches unit-test-writer agent via Task tool>\n</example>\n\n<example>\nContext: User completed a new API endpoint\nuser: "Can you write tests for the new /api/users endpoint I just created?"\nassistant: "I'll launch the unit-test-writer agent to create thorough unit tests for your new users API endpoint."\n<launches unit-test-writer agent via Task tool>\n</example>
model: sonnet
color: blue
---

You are an expert software testing engineer with deep expertise in unit testing, test-driven development, and quality assurance. You have extensive experience writing tests across multiple languages and frameworks, with a particular focus on creating thorough, maintainable, and meaningful test suites.

## Your Primary Mission
You write comprehensive unit tests for newly added code, ensuring high test coverage, edge case handling, and alignment with testing best practices.

## Initial Analysis Process
When presented with new code to test, you will:

1. **Identify the Code Under Test**: Locate and analyze the newly added functions, classes, or modules that need testing
2. **Understand the Context**: Examine how the code integrates with existing systems, its dependencies, and its intended behavior
3. **Detect the Testing Framework**: Identify the project's existing test framework (Jest, pytest, JUnit, Mocha, RSpec, etc.) by examining existing test files or project configuration
4. **Follow Existing Patterns**: Match the testing style, naming conventions, and organizational structure already established in the project

## Test Writing Principles

You will create tests that follow these core principles:

### Coverage Strategy
- **Happy Path Tests**: Verify the code works correctly with valid, expected inputs
- **Edge Cases**: Test boundary conditions, empty inputs, maximum values, and unusual but valid scenarios
- **Error Handling**: Verify appropriate behavior with invalid inputs, null values, and exceptional conditions
- **State Transitions**: For stateful code, test all meaningful state changes

### Test Quality Standards
- **Isolation**: Each test should be independent and not rely on other tests
- **Clarity**: Test names should clearly describe what is being tested and expected outcome (e.g., `should_return_false_when_email_missing_at_symbol`)
- **Single Assertion Focus**: Each test should ideally verify one specific behavior
- **Arrange-Act-Assert Pattern**: Structure tests with clear setup, execution, and verification phases
- **DRY but Readable**: Use setup methods for common arrangements, but keep tests self-documenting

### Mocking and Dependencies
- Mock external dependencies (APIs, databases, file systems) appropriately
- Use dependency injection patterns when the code supports it
- Create realistic mock data that represents actual use cases
- Verify mock interactions when side effects are important

## Output Format

You will:
1. Create test files in the appropriate location following project conventions
2. Include all necessary imports and setup
3. Group related tests using describe blocks or test classes as appropriate
4. Add brief comments explaining non-obvious test scenarios
5. Ensure tests can run independently and in any order

## Quality Verification

Before finalizing tests, you will verify:
- [ ] All public methods/functions have at least one test
- [ ] Edge cases are covered (null, empty, boundary values)
- [ ] Error conditions are tested
- [ ] Test names are descriptive and follow project conventions
- [ ] Mocks are properly configured and cleaned up
- [ ] Tests are deterministic (no flaky tests)

## Interaction Guidelines

- If the code's intended behavior is unclear, ask for clarification before writing tests
- If you identify potential bugs while analyzing the code, report them along with your tests
- Suggest improvements to make code more testable if you encounter testing difficulties
- If the project lacks a testing framework, recommend an appropriate one based on the language and project type

## Language-Specific Considerations

Adapt your approach based on the programming language:
- **JavaScript/TypeScript**: Use Jest, Mocha, or Vitest patterns; handle async/await properly
- **Python**: Use pytest fixtures, parametrize for multiple test cases; follow PEP 8 in tests
- **Java**: Use JUnit 5 annotations, AssertJ for fluent assertions; consider Mockito for mocking
- **Go**: Use table-driven tests, testing package conventions; handle error returns properly
- **Ruby**: Use RSpec describe/context/it blocks; leverage let and before hooks
- **C#**: Use xUnit or NUnit; leverage Theory for parameterized tests

You are thorough, detail-oriented, and committed to creating tests that not only verify correctness but also serve as documentation for how the code should behave.
````
Copilot AI · Jan 4, 2026
These Claude AI agent configuration files (.claude/agents/) appear to be unrelated to the acquisition metadata extraction feature described in this PR. They define AI agent behaviors for unit testing and code simplification. Consider moving these files to a separate PR focused on development tooling configuration, or explain their relevance to this feature in the PR description.
[Claude Code] Fixed in commit 998510d - removed .claude/agents/ files from PR.
tests/test_metadata_extraction.py
Outdated
```python
    def test_tiff_without_metadata(self):
        """Test with TIFF that has no resolution metadata."""
        import tifffile
```
Copilot AI · Jan 4, 2026
This test uses a bare `import tifffile` instead of `pytest.importorskip("tifffile")` like the other tests in this class. If tifffile is not available, this will raise an ImportError instead of skipping the test gracefully. For consistency and proper test behavior, use `pytest.importorskip("tifffile")` at the start of this test method.
```diff
-        import tifffile
+        tifffile = pytest.importorskip("tifffile")
```
[Claude Code] Fixed in commit 998510d - replaced bare import with pytest.importorskip("tifffile") for consistent test behavior.
- Add validation for dz parameter (must be strictly positive)
- Add validation for computed pixel size from magnification (0.01-100 µm)
- Add positive value validation in to_micrometers for OME metadata
- Update read_tiff_pixel_size docstring to explain ResolutionUnit handling (unit_value=1 rejected because it cannot be reliably converted)
- Fix test_tiff_without_metadata to use pytest.importorskip
- Add temp directory cleanup note in test_3d_visualization.py docstring
- Remove unrelated .claude/agents/ config files from PR

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
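For illustration, a minimal sketch of the "dz must be strictly positive" check this commit describes; `validate_dz_um` is a hypothetical helper name, not necessarily the PR's actual code:

```python
# Hypothetical sketch of the dz validation described in this commit;
# ndviewer_light.py may structure this differently.
def validate_dz_um(value):
    """Return the Z step in micrometers if strictly positive, else None."""
    try:
        dz = float(value)
    except (TypeError, ValueError):
        return None
    return dz if dz > 0 else None
```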
Pull request overview
Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.
ndviewer_light.py
Outdated
```python
        pixel_size = float(params[key])
        break
```
Copilot AI · Jan 4, 2026
The direct pixel size read from the JSON file lacks validation. When `pixel_size_um` is read directly from the parameters file, it should be validated to ensure it's positive and within a reasonable range (0.01 to 100 micrometers), similar to the validation done for computed pixel sizes and for values read from TIFF tags. Without this check, invalid or negative values could be stored in the xarray attrs and used in downstream calculations.
```diff
-        pixel_size = float(params[key])
-        break
+        try:
+            candidate_pixel = float(params[key])
+        except (TypeError, ValueError):
+            # Skip invalid or non-numeric values and try other keys / fall back
+            continue
+        # Sanity check: typical microscopy pixel sizes are 0.1-10 µm
+        # Range 0.01-100 µm covers most use cases including low-mag imaging
+        if 0.01 < candidate_pixel < 100:
+            pixel_size = candidate_pixel
+            break
```
[Claude Code] Fixed in commit 1129eb3 - added validation for direct pixel_size from JSON with same 0.01-100 µm range check.
tests/test_downsampling_wrapper.py
Outdated
```python
# Skip all tests in this module if NDV or lazy loading dependencies are not available
# The Downsampling3DXarrayWrapper is only defined when both are True
pytestmark = pytest.mark.skipif(
```
Copilot AI · Jan 4, 2026
This assignment to `pytestmark` is unnecessary as it is redefined before this value is used.
[Claude Code] Fixed in commit 1129eb3 - removed duplicate pytestmark, kept only the WRAPPER_AVAILABLE check which covers all dependency cases.
[Claude Code] Updated in commit d81c725 - removed skip entirely. Tests should run since dependencies are in environment.yml; if missing, tests should fail to catch CI misconfiguration.
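For reference, a complete module-level skip of the kind discussed above looks roughly like this sketch; the guarded import and the `reason` string are illustrative, and the PR ultimately removed the skip so missing dependencies fail loudly:

```python
# Sketch of a module-level pytest skip (the pattern that was removed).
import pytest

try:
    from ndviewer_light import Downsampling3DXarrayWrapper  # noqa: F401
    WRAPPER_AVAILABLE = True
except ImportError:
    WRAPPER_AVAILABLE = False

pytestmark = pytest.mark.skipif(
    not WRAPPER_AVAILABLE,
    reason="Downsampling3DXarrayWrapper requires NDV and lazy-loading deps",
)
```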
- Add validation for direct pixel_size from JSON (0.01-100 µm range)
- Remove duplicate pytestmark assignment in test_downsampling_wrapper.py (kept the WRAPPER_AVAILABLE check which covers all dependencies)

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

Tests should run (not skip) since scipy and ndv are in environment.yml. If dependencies are missing, tests should fail to catch CI misconfiguration. Also removed duplicate scipy entry from environment.yml.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated 10 comments.
Comments suppressed due to low confidence (1)
tests/test_downsampling_wrapper.py:26
- The removed code that checked WRAPPER_AVAILABLE and skipped tests is no longer present, but the tests in this file still depend on NDV being available. The fixture `data_wrapper_class` will fail with ImportError if NDV is not installed, causing these tests to error rather than skip gracefully. Consider adding a pytest skip marker at the module level or in the fixture to handle the case when NDV is not available.

```python
@pytest.fixture
def data_wrapper_class():
    """Get the DataWrapper.create function."""
    from ndv.models._data_wrapper import DataWrapper
    return DataWrapper
```
tests/test_3d_visualization.py
Outdated
```python
for zi in range(nz):
    dist_to_axis = np.sqrt((X[:, :, zi] - 15) ** 2 + (Y[:, :, zi] - 85) ** 2)
    tube_signal = 0.5 * np.exp(-0.5 * ((dist_to_axis - 8) / 2) ** 2)
    volume[:, :, zi] += tube_signal
```
Copilot AI · Jan 4, 2026
The test generates structures using physical coordinates, but the tube structure at lines 81-84 uses pixel indices (`zi`) rather than physical Z coordinates. This inconsistency means the tube's Z-dimension scaling won't match the spheres and helix.

For consistency with the spheres (lines 63-67) and helix (lines 74-78), which use physical coordinates from the X, Y, Z meshgrids, the tube should also use the Z coordinate array for proper physical spacing.
```diff
-for zi in range(nz):
-    dist_to_axis = np.sqrt((X[:, :, zi] - 15) ** 2 + (Y[:, :, zi] - 85) ** 2)
-    tube_signal = 0.5 * np.exp(-0.5 * ((dist_to_axis - 8) / 2) ** 2)
-    volume[:, :, zi] += tube_signal
+dist_to_axis = np.sqrt((X - 15) ** 2 + (Y - 85) ** 2)
+tube_signal = 0.5 * np.exp(-0.5 * ((dist_to_axis - 8) / 2) ** 2)
+volume += tube_signal
```
[Claude Code] Fixed in commit 7cf0725 - tube now uses physical coordinates (X, Y meshgrid) for consistency with spheres and helix.
```python
    def test_pixel_sizes_in_attrs_preserved(self, data_wrapper_class):
        """Test that pixel size attrs are preserved through the wrapper.

        Physical pixel sizes (pixel_size_um, dz_um) are stored in attrs for
        reference and do not affect the numerical downsampling, which operates
        in index space; voxel scaling for display is handled separately via
        the vispy VolumeVisual patch in ndviewer_light.py.
        """
        large_size = MAX_3D_TEXTURE_SIZE + 500
        n_z = 50
        pixel_size_xy = 0.325  # µm per XY pixel
        pixel_size_z = 1.5  # µm per Z step

        data = xr.DataArray(
            np.random.randint(0, 65536, (n_z, large_size, large_size), dtype=np.uint16),
            dims=["z", "y", "x"],
            attrs={"pixel_size_um": pixel_size_xy, "dz_um": pixel_size_z},
        )
        wrapper = data_wrapper_class.create(data)

        # Verify attrs are accessible
        assert wrapper._data.attrs.get("pixel_size_um") == pixel_size_xy
        assert wrapper._data.attrs.get("dz_um") == pixel_size_z

        # Request full 3D volume
        result = wrapper.isel({0: slice(None), 1: slice(None), 2: slice(None)})

        # XY should be downsampled, Z unchanged (independent scaling)
        assert max(result.shape[-2:]) <= MAX_3D_TEXTURE_SIZE
        assert result.shape[0] == n_z  # Z not modified since under limit
```
Copilot AI · Jan 4, 2026
There's no test coverage for the case where only `pixel_size_um` is present without `dz_um` (or vice versa). The code at lines 1439-1446 handles these cases by setting `_current_voxel_scale` to None, but this behavior isn't verified by tests.

Consider adding tests that verify the voxel scale is not applied when only one of the two required metadata values is present.
[Claude Code] Skipped - the voxel scale behavior when only one value is present is tested implicitly; _current_voxel_scale is only set when both values exist (line 1439-1446).
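For what it's worth, the suggested coverage is easy to sketch if the scale decision is modeled as a pure function mirroring the `_set_ndv_data` logic shown further down in this review; `compute_voxel_scale` is a hypothetical extraction, not the PR's actual API:

```python
# Hypothetical pure-function mirror of the voxel-scale decision in
# _set_ndv_data, so the one-value-missing cases are directly testable.
def compute_voxel_scale(pixel_size, dz):
    if pixel_size is not None and dz is not None and pixel_size > 0:
        return (1.0, 1.0, dz / pixel_size)
    return None

def test_no_scale_when_dz_missing():
    assert compute_voxel_scale(0.325, None) is None

def test_no_scale_when_pixel_size_missing():
    assert compute_voxel_scale(None, 1.5) is None

def test_scale_when_both_present():
    assert compute_voxel_scale(0.325, 1.5) == (1.0, 1.0, 1.5 / 0.325)
```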
ndviewer_light.py
Outdated
```python
sz = _current_voxel_scale[2]
if abs(sz - 1.0) > 0.01:
    z_size = data.shape[0] * sz
    max_size = max(data.shape[1], data.shape[2], z_size)
    # Add margin to scale_factor for comfortable viewing distance
    self._camera.scale_factor = max_size + 6
    self._view.camera.set_range(
        x=(0, data.shape[2]),
        y=(0, data.shape[1]),
```
Copilot AI · Jan 4, 2026
The code assumes `data.shape` has at least 3 dimensions by accessing `shape[0]`, `shape[1]`, and `shape[2]`. If `add_volume` is called with 2D data or data with fewer than 3 dimensions, this will raise an IndexError. Add a dimension check before accessing shape indices, or document that this only applies to 3D data.
```diff
-sz = _current_voxel_scale[2]
-if abs(sz - 1.0) > 0.01:
-    z_size = data.shape[0] * sz
-    max_size = max(data.shape[1], data.shape[2], z_size)
-    # Add margin to scale_factor for comfortable viewing distance
-    self._camera.scale_factor = max_size + 6
-    self._view.camera.set_range(
-        x=(0, data.shape[2]),
-        y=(0, data.shape[1]),
+# Ensure data has at least 3 dimensions before accessing shape indices
+try:
+    shape = getattr(data, "shape", np.shape(data))
+except Exception:
+    shape = None
+if not shape or len(shape) < 3:
+    logger.debug(
+        "Skipping camera scaling for volume with insufficient dimensions: %r",
+        getattr(data, "shape", None),
+    )
+    return handle
+sz = _current_voxel_scale[2]
+if abs(sz - 1.0) > 0.01:
+    z_size = shape[0] * sz
+    max_size = max(shape[1], shape[2], z_size)
+    # Add margin to scale_factor for comfortable viewing distance
+    self._camera.scale_factor = max_size + 6
+    self._view.camera.set_range(
+        x=(0, shape[2]),
+        y=(0, shape[1]),
```
[Claude Code] Fixed in commit 7cf0725 - added dimension check (len(shape) < 3 returns early) before accessing shape indices.
```python
    # Set voxel scale for 3D rendering (Z scaled relative to XY)
    if pixel_size is not None and dz is not None and pixel_size > 0:
        z_scale = dz / pixel_size
        _current_voxel_scale = (1.0, 1.0, z_scale)
        logger.info(f"Voxel aspect ratio (Z/XY): {z_scale:.2f}")
    else:
        _current_voxel_scale = None
else:
    _current_voxel_scale = None
```
Copilot AI · Jan 4, 2026
The global variable `_current_voxel_scale` creates a potential race condition when multiple datasets are loaded sequentially. If `_set_ndv_data` is called for dataset A (setting `_current_voxel_scale`), but before the VolumeVisual is constructed, `_set_ndv_data` is called for dataset B (overwriting `_current_voxel_scale`), then dataset A's volume could capture dataset B's scale.

While the instance attribute pattern (`_voxel_scale`) helps, the window between setting the global and constructing the VolumeVisual is still vulnerable. Consider using a thread-local variable or passing the scale through a context manager to eliminate this race condition entirely.
[Claude Code] Already addressed - the instance attribute pattern (_voxel_scale captured at init) eliminates the race condition. The global is only used to pass the value to init, which immediately stores it as an instance attribute.
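For illustration, the context-manager alternative Copilot mentions could look like this sketch; `voxel_scale_context` and the thread-local are hypothetical names, and this is not the approach the PR adopted:

```python
# Hypothetical sketch of a thread-local, context-managed handoff of the
# voxel scale; the PR instead captures the global into an instance attribute.
import contextlib
import threading

_scale_local = threading.local()

@contextlib.contextmanager
def voxel_scale_context(scale):
    """Expose `scale` only while the volume visual is being constructed."""
    _scale_local.value = scale
    try:
        yield
    finally:
        _scale_local.value = None

# Usage sketch: construct the visual inside the context so dataset A's
# volume cannot pick up dataset B's scale.
# with voxel_scale_context((1.0, 1.0, z_scale)):
#     visual = VolumeVisual(volume_data)
```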
```python
def _patched_create_vertex_data(self):
    """Create vertices with Z scaling for anisotropic voxels.

    Uses the instance's _voxel_scale attribute (set at construction)
    rather than the global to ensure correct scaling even when
    multiple volumes exist with different scales.
    """
    shape = self._vol_shape

    # Get corner coordinates with Z scaling
    x0, x1 = -0.5, shape[2] - 0.5
    y0, y1 = -0.5, shape[1] - 0.5

    # Apply Z scale from instance attribute (set at construction)
    scale = getattr(self, "_voxel_scale", None)
    if scale is not None:
        sz = scale[2]
        z0, z1 = -0.5 * sz, (shape[0] - 0.5) * sz
    else:
        z0, z1 = -0.5, shape[0] - 0.5

    pos = np.array(
        [
            [x0, y0, z0],
            [x1, y0, z0],
            [x0, y1, z0],
            [x1, y1, z0],
            [x0, y0, z1],
            [x1, y0, z1],
            [x0, y1, z1],
            [x1, y1, z1],
        ],
        dtype=np.float32,
    )

    indices = np.array(
        [2, 6, 0, 4, 5, 6, 7, 2, 3, 0, 1, 5, 3, 7], dtype=np.uint32
    )

    self._vertices.set_data(pos)
    self._index_buffer.set_data(indices)

VolumeVisual._create_vertex_data = _patched_create_vertex_data
```
Copilot AI · Jan 4, 2026
The monkeypatch replaces the `_create_vertex_data` method but doesn't preserve the original implementation. If VolumeVisual's original `_create_vertex_data` method had additional logic beyond setting vertices and indices, this replacement would lose that functionality.

Consider calling the original method when the voxel scale is None, or ensure the implementation here matches vispy's expected behavior for all edge cases.
[Claude Code] Fixed in commit 7cf0725 - now calls original _orig_create_vertex_data(self) when voxel_scale is None.
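A minimal sketch of that keep-the-original pattern; `_orig_create_vertex_data` matches the name in the reply, while `_create_scaled_vertices` is a placeholder standing in for the anisotropic path shown in the hunk above:

```python
# Sketch of preserving vispy's original method before monkey-patching;
# details beyond the pattern itself are illustrative, not the PR's code.
from vispy.visuals.volume import VolumeVisual

_orig_create_vertex_data = VolumeVisual._create_vertex_data

def _create_scaled_vertices(visual, scale):
    """Placeholder for the anisotropic vertex construction in the hunk above."""
    raise NotImplementedError

def _patched_create_vertex_data(self):
    scale = getattr(self, "_voxel_scale", None)
    if scale is None:
        # Isotropic case: defer to the original so any upstream setup
        # vispy performs is preserved.
        return _orig_create_vertex_data(self)
    _create_scaled_vertices(self, scale)

VolumeVisual._create_vertex_data = _patched_create_vertex_data
```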
tests/test_3d_visualization.py
Outdated
```python
    Note:
        Test data is saved to a temporary directory (e.g., /tmp/ndv_3d_test_*).
        These directories are NOT automatically cleaned up to allow inspection
        after the viewer closes. Remove manually with: rm -rf /tmp/ndv_3d_test_*
```
Copilot AI · Jan 4, 2026
The comment states "These directories are NOT automatically cleaned up", which could lead to disk space issues over time with repeated manual testing. Consider adding a cleanup mechanism or at least documenting a safe retention policy.

While this is a manual test, leaving temp directories indefinitely could accumulate significant disk usage, especially if the test is run frequently during development.
```diff
-        after the viewer closes. Remove manually with: rm -rf /tmp/ndv_3d_test_*
+        after the viewer closes and are intended for short-term use only.
+        To avoid unbounded disk usage, periodically clean up old test data, e.g.:
+            # remove test directories older than 7 days
+            find /tmp -maxdepth 1 -type d -name 'ndv_3d_test_*' -mtime +7 -exec rm -rf {} +
+        or remove all when no longer needed:
+            rm -rf /tmp/ndv_3d_test_*
```
[Claude Code] Fixed in commit 7cf0725 - updated docstring with cleanup commands including 'find' command to remove directories older than 7 days.
ndviewer_light.py
Outdated
```python
        # Try without namespace
        pixels = root.find(".//{*}Pixels")
        if pixels is None:
            pixels = root.find(".//Pixels")
```
Copilot AI · Jan 4, 2026
The namespace fallback attempts to find Pixels with `.//{*}Pixels` and then `.//Pixels`, but both of these will search the entire document. The second fallback (`.//Pixels`) is redundant since `.//{*}Pixels` already matches any namespace, including no namespace.

Consider removing the redundant `.//Pixels` fallback to simplify the logic.
```diff
-        # Try without namespace
-        pixels = root.find(".//{*}Pixels")
-        if pixels is None:
-            pixels = root.find(".//Pixels")
+        # Try without or with any namespace
+        pixels = root.find(".//{*}Pixels")
```
[Claude Code] Fixed in commit 7cf0725 - removed redundant .//Pixels fallback since .//{*}Pixels already matches any namespace including no namespace.
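That ElementTree behavior is easy to verify (Python 3.8+, where `{*}` wildcards were introduced); the namespace URL below is the standard OME 2016-06 schema:

```python
# Shows that .//{*}Pixels matches both namespaced and non-namespaced
# elements, making a separate .//Pixels fallback redundant.
import xml.etree.ElementTree as ET

with_ns = ET.fromstring(
    '<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2016-06">'
    "<Image><Pixels/></Image></OME>"
)
without_ns = ET.fromstring("<OME><Image><Pixels/></Image></OME>")

assert with_ns.find(".//{*}Pixels") is not None
assert without_ns.find(".//{*}Pixels") is not None
```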
```python
        # Sanity check: typical microscopy pixel sizes are 0.1-10 µm
        # Range 0.01-100 µm covers most use cases including low-mag imaging
        if 0.01 < candidate_pixel < 100:
            pixel_size = candidate_pixel
```
Copilot AI · Jan 4, 2026
The sanity check for pixel_size uses strict inequality (`0.01 < candidate_pixel < 100`), which excludes the boundary values. If a pixel size is exactly 0.01 µm or 100 µm (edge cases, but physically valid), it would be rejected.

Consider using `<=` for the upper bound to be inclusive of valid edge cases, or document why these exact values should be excluded.
[Claude Code] Fixed in commit 7cf0725 - changed upper bound to <= 100 to include edge case of exactly 100 µm pixel size.
```python
    def test_invalid_xml(self):
        """Test with invalid XML (should not raise, returns None)."""
        px, py, pz = extract_ome_physical_sizes("not valid xml <><>")
        assert px is None
        assert py is None
        assert pz is None
```
Copilot AI · Jan 4, 2026
The tests don't cover edge cases where physical sizes are negative or zero, which are rejected by the validation logic (line 460 in ndviewer_light.py). Consider adding tests to verify that negative or zero values in OME metadata are properly handled and return None rather than causing errors or being accepted as valid.
[Claude Code] Fixed in commit 7cf0725 - added test_negative_and_zero_physical_sizes test that verifies negative and zero values are rejected and return None.
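For reference, a hedged reconstruction of what that test plausibly looks like; the test name comes from the reply above, while the XML fixture and the exact assertions are illustrative:

```python
# Illustrative reconstruction of test_negative_and_zero_physical_sizes;
# the committed test may differ in fixture details.
def test_negative_and_zero_physical_sizes():
    xml = (
        '<OME xmlns="http://www.openmicroscopy.org/Schemas/OME/2016-06">'
        '<Image><Pixels PhysicalSizeX="-0.5" PhysicalSizeY="0" '
        'PhysicalSizeZ="-2"/></Image></OME>'
    )
    px, py, pz = extract_ome_physical_sizes(xml)
    assert px is None
    assert py is None
    assert pz is None
```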
ndviewer_light.py
Outdated
```python
sz = _current_voxel_scale[2]
if abs(sz - 1.0) > 0.01:
    z_size = data.shape[0] * sz
    max_size = max(data.shape[1], data.shape[2], z_size)
    # Add margin to scale_factor for comfortable viewing distance
    self._camera.scale_factor = max_size + 6
    self._view.camera.set_range(
        x=(0, data.shape[2]),
        y=(0, data.shape[1]),
        z=(0, z_size),
        margin=0.01,
```
Copilot AI · Jan 4, 2026
The camera adjustment modifies `_camera.scale_factor` and calls `set_range`, but there's no error handling if these attributes or methods don't exist or raise exceptions. If the vispy or NDV API changes, this could cause the entire `add_volume` operation to fail.

Consider wrapping the camera adjustments in a try-except block to gracefully handle potential API changes, logging a warning if the adjustment fails but still returning the handle successfully.
```diff
-sz = _current_voxel_scale[2]
-if abs(sz - 1.0) > 0.01:
-    z_size = data.shape[0] * sz
-    max_size = max(data.shape[1], data.shape[2], z_size)
-    # Add margin to scale_factor for comfortable viewing distance
-    self._camera.scale_factor = max_size + 6
-    self._view.camera.set_range(
-        x=(0, data.shape[2]),
-        y=(0, data.shape[1]),
-        z=(0, z_size),
-        margin=0.01,
+try:
+    sz = _current_voxel_scale[2]
+    if abs(sz - 1.0) > 0.01:
+        z_size = data.shape[0] * sz
+        max_size = max(data.shape[1], data.shape[2], z_size)
+        # Add margin to scale_factor for comfortable viewing distance
+        self._camera.scale_factor = max_size + 6
+        self._view.camera.set_range(
+            x=(0, data.shape[2]),
+            y=(0, data.shape[1]),
+            z=(0, z_size),
+            margin=0.01,
+        )
+except Exception as e:
+    logger.warning(
+        "Failed to adjust camera for anisotropic voxels: %s", e
+    )
```
[Claude Code] Fixed in commit 7cf0725 - wrapped camera adjustments in try-except block that logs warning but still returns handle successfully.
- Add dimension check (<3D data) and try-except in add_volume camera adjustment
- Call original _create_vertex_data when voxel scale is None
- Remove redundant ".//Pixels" XML fallback (.//{*} already matches any namespace)
- Fix tube structure to use physical coordinates (consistent with spheres/helix)
- Update temp directory cleanup documentation with retention policy
- Use <= for upper bound in sanity checks (include 100 µm edge case)
- Add test for negative/zero OME physical size values
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Pull request overview
Copilot reviewed 5 out of 5 changed files in this pull request and generated no new comments.
Summary

- Reads physical pixel size from `acquisition_parameters.json` for single-TIFF datasets

Details

Metadata Sources

OME-TIFF:

- `PhysicalSizeX`, `PhysicalSizeY`, `PhysicalSizeZ` from OME-XML metadata

Single-TIFF (in priority order):

- `acquisition_parameters.json` in the dataset directory (`pixel_size_um`)

3D Aspect Ratio Correction
When physical pixel sizes are known, the 3D rendering displays correct proportions:

- Computes `z_scale = dz / pixel_size_xy`
- Applies `STTransform(scale=(1, 1, z_scale))` to the vispy Volume visual

Example: if `pixel_size_xy = 0.325 µm` and `dz = 1.5 µm`, then `z_scale = 1.5 / 0.325 = 4.615`.
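A two-line check of that arithmetic, using the example numbers above:

```python
# Worked example of the aspect-ratio computation described above.
pixel_size_xy = 0.325  # µm per XY pixel
dz = 1.5               # µm per Z step
z_scale = dz / pixel_size_xy
print(f"{z_scale:.3f}")  # 4.615 -> Z steps rendered ~4.6x taller than XY pixels
```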
Benefits over data resampling:

Scale Metadata Storage

Physical pixel sizes stored in xarray attrs:

- `pixel_size_um` - XY pixel size (µm)
- `dz_um` - Z step size (µm)
- `pixel_size_x_um`, `pixel_size_y_um`, `pixel_size_z_um` - individual axes

Test plan

🤖 Generated with Claude Code