
NXmx: convert int64 to int32 if bit_depth_readout<=32 #624

Open
rjgildea opened this issue Apr 21, 2023 · 4 comments
@rjgildea

When it comes to determining the bit-depth of a nexus dataset there are two sources of "truth":

  1. the dtype of the numpy array returned for the VDS in /entry/data/data
  2. the bit depth reported in the detector metadata, i.e. /entry/instrument/detector/bit_depth_readout

The current behaviour if dtype==int64 but bit_depth_readout=32 is to convert int64 -> int32:

# Handle integer conversion. Safe to convert if:
# - Is signed and <= 4 bytes
# - Is unsigned and <= 2 bytes
#
# Unsafe conversions to 32-bit integer can occur, but only if
# bit_depth is explicitly set to 32.
if np.issubdtype(dtype, np.integer):
    if (
        (np.issubdtype(dtype, np.signedinteger) and dtype.itemsize <= 4)
        or (np.issubdtype(dtype, np.unsignedinteger) and dtype.itemsize <= 2)
        or bit_depth == 32
    ):
        data_np = data_np.astype(np.int32, copy=False)
    else:
        raise TypeError(f"Unsupported integer dtype {data_np.dtype}")

If however dtype==int64 but bit_depth_readout=16 then we throw our hands up in despair and refuse to touch it. This is inconsistent, and I propose that we also convert to int32 in this case, i.e. we declare that we trust the value of bit_depth_readout (which typically comes directly from the detector metadata) over the dtype of the array (which may come from the DAQ software writing the nexus file).
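A minimal sketch of the proposed behaviour, reusing the names from the snippet above (the helper name and standalone form are illustrative, not the actual dxtbx code):

import numpy as np

def downcast_if_trusted(data_np, bit_depth):
    # Sketch only: trust bit_depth_readout over the stored dtype and
    # downcast to int32 whenever it claims the data fits in 32 bits.
    dtype = data_np.dtype
    if np.issubdtype(dtype, np.integer):
        if (
            (np.issubdtype(dtype, np.signedinteger) and dtype.itemsize <= 4)
            or (np.issubdtype(dtype, np.unsignedinteger) and dtype.itemsize <= 2)
            # Proposed change: any declared bit_depth <= 32, not just == 32
            or (bit_depth is not None and bit_depth <= 32)
        ):
            return data_np.astype(np.int32, copy=False)
        raise TypeError(f"Unsupported integer dtype {data_np.dtype}")
    return data_np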

@graeme-winter (Collaborator)

We have a couple of choices:

  • refuse to consume any data where VDS data type != underlying data type
  • define a hierarchy of trustworthiness which is well documented

The first is the most correct; the latter is probably the most useful. Personally I would trust bit_depth_readout in preference to the VDS, since the former is actually created by the detector. We could also look for a majority / minority opinion by checking the underlying data type of the actual data files (see the sketch below).

If it claims (u)int64_t and there is nothing in the file to indicate that this is wrong, we should refuse to process.
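One cheap way to gather that majority / minority opinion is to look only at the dtypes of the source datasets behind the VDS, via h5py's virtual_sources(). The helper below is a sketch, not existing dxtbx code:

import os
from collections import Counter

import h5py

def underlying_dtypes(master_path, vds_path="/entry/data/data"):
    # Tally the dtypes of the datasets mapped into the VDS, so the VDS
    # dtype can be compared against the consensus of the actual data files.
    counts = Counter()
    master_dir = os.path.dirname(os.path.abspath(master_path))
    with h5py.File(master_path, "r") as f:
        vds = f[vds_path]
        if not vds.is_virtual:
            return Counter({vds.dtype: 1})
        for src in vds.virtual_sources():
            if src.file_name == ".":
                # "." means the source dataset lives in the master file itself
                counts[f[src.dset_name].dtype] += 1
            else:
                # Source paths are typically relative to the master file
                with h5py.File(os.path.join(master_dir, src.file_name), "r") as g:
                    counts[g[src.dset_name].dtype] += 1
    return counts

For a file like the protk_16 example further down, this would tally uint16 source datasets against an int32 VDS.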

@ndevenish (Collaborator)

That change was introduced in fe5b43e, explicitly stated to handle int/long issues. I agree that this is inconsistent, but if it isn't there to handle int/long then we should never be doing this conversion here, and should remove the inconsistency by refusing to convert it.

> …i.e. we declare that we trust the value of bit_depth_readout (which typically comes directly from the detector metadata)

> since the former is actually created by the detector.

"typically" = in this one specific internal case that you are looking at now. In literally every other scenario we trust the data more than a random hdf5 attribute that could have been written from fixed, known metadata.

If you want to do this then it needs to:

  • Explicitly scan the entire data array to verify that it is a safe conversion (see the sketch below)
  • Not be in a function called "get_raw_data", where this would add a second implicit copy and bring us to three scans of the data (in order to guarantee safety)

I would much prefer that, if we have a specific error in the VDS creation/metadata, we a) fix the beamline, and b) handle this in a Format subclass, instead of planting landmines that will come back to cause errors in the future.
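For concreteness, the first bullet amounts to something like the following sketch (hypothetical helper); note that the min()/max() checks are precisely the extra full passes over the data that the second bullet objects to:

import numpy as np

def checked_astype_int32(data_np):
    # Verify every value fits in int32 before downcasting; min() and max()
    # each scan the whole array, on top of the copy made by astype.
    info = np.iinfo(np.int32)
    if data_np.min() < info.min or data_np.max() > info.max:
        raise TypeError(f"{data_np.dtype} data has values outside the int32 range")
    return data_np.astype(np.int32, copy=False)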

@graeme-winter (Collaborator)

> I would much prefer that, if we have a specific error in the VDS creation/metadata, we a) fix the beamline, and b) handle this in a Format subclass, instead of planting landmines that will come back to cause errors in the future.

For the record, (a) is in progress, and I have a lot of sympathy for (b) as a viewpoint - but this will involve adding some tunnels to pass the corrected information down to where it is actually used.

We can scan more cheaply, but in a system-dependent way, which aligns well with your suggestion to do this locally in the Format class.

@rjgildea (Author)

> • refuse to consume any data where VDS data type != underlying data type

FWIW even with data from the last run, collected in 16-bit mode, the VDS data type is inconsistent with both the underlying data type and bit_depth_readout:

>>> import hdf5plugin
>>> import h5py
>>> f = h5py.File("/dls/i03/data/2023/cm33866-1/TestProteinaseK/protk_16/protk_16_94.nxs")
>>> f["/entry/data/data"].dtype
dtype('int32')
>>> f["/entry/instrument/detector/bit_depth_readout"][()]
array([16])
>>> f["/entry/data/data_000001"].dtype
dtype('uint16')
