
Conversation

@jokasimr
Contributor

I'm thinking I'll start by adding the helper here; then we can add an example notebook in codeshelf showing how to use it, and maybe add a command line interface later.

Right now it takes almost no parameters. I imagine it should take at least a parameter determining how many time bins we want, and maybe some more. Let's keep it as simple as we can for now and add more functionality when it is requested.



def _add_to_event_time_offset_in_case_of_pulse_skipping(
Member
Why can we not use the work @nvaytet has done with the TOF workflow? It seems the code below will need thorough testing, redoing work that has already been done?

Member
The requirement was to histogram along event_time_offset as-is, before calculating tof.

We will also have a tool that uses OdinBraggEdgeWorkflow to make a tiff file using tof.

Comment on lines 384 to 390
def tiff_from_nexus(
workflow: sciline.Pipeline,
nexus_file_name: str | Path | io.BytesIO,
output_path: str | Path | io.BytesIO,
*,
time_bins: int | sc.Variable = 50,
) -> None:
Member
Can we set the pulse_stride up here, allowing users to set it manually?

Comment on lines 416 to 422
image = (
sc.concat([data], dim='c')
.hist(event_time_offset=time_bins)
.rename_dims(event_time_offset='t')
.transpose(('t', 'c', 'dim_0', 'dim_1'))
)
imwrite(output_path, image.values)
Member
Can we use scitiff.save_scitiff instead? You'll have to rename dimensions and drop multi-dimensional coordinates first, but the scitiff save interface takes care of the dimension order automatically.



def tiff_from_nexus(
workflow: sciline.Pipeline,
@YooSunYoung
Member
Sep 12, 2025

We're not calculating tof, so we don't need the workflow at all; we can just load the file with scippnexus instead.
Also, this tool should keep working even if there is something wrong in the file.

@jokasimr
Contributor Author

I'm thinking about adding a "chunkwise" version as well, in case they need to reduce files larger than memory.
What do you think @YooSunYoung, is it a good idea?

import io
from pathlib import Path

import h5py
import tifffile
import numpy as np


def _process_chunk(start_index, event_id, event_time_offset, event_index,
                   tbins, image_shape, pulse_stride, pulse_length):
    # Pulses completed before this chunk, so the pulse-skipping phase
    # stays consistent across chunk boundaries.
    first_pulse = np.searchsorted(event_index, start_index, side='right') - 1
    # Pulse boundaries inside this chunk; label each event with its pulse.
    index = np.clip(event_index - start_index, 0, event_time_offset.size)
    index = index[index > 0]
    pulse = first_pulse + np.repeat(
        np.arange(index.size + 1),
        np.diff(index, prepend=0, append=event_time_offset.size),
    )
    # Unwrap event_time_offset over the full pulse_stride * pulse_length frame.
    event_time_offset = event_time_offset + (pulse % pulse_stride) * pulse_length

    shape = (tbins, *image_shape)
    tbin = np.minimum(np.floor(tbins * (event_time_offset / (pulse_stride * pulse_length))), tbins - 1).astype('uint32')
    # Histogram (time bin, pixel) pairs with a single flat bincount.
    return np.bincount(tbin * np.prod(shape[1:]) + event_id - 1, minlength=np.prod(shape)).reshape(shape)

def nexus_to_tiff_chunkwise(
    nexus_file_name: str | Path | io.BytesIO,
    output_path: str | Path | io.BytesIO,
    *,
    time_bins: int,
    pulse_stride: int,
    chunksize: int = 100_000_000,
):
    
    with h5py.File(nexus_file_name) as f:
        event_id = f['/entry/instrument/event_mode_detectors/timepix3/timepix3_events/event_id']
        event_index = f['/entry/instrument/event_mode_detectors/timepix3/timepix3_events/event_index'][()]
        event_time_offset = f['/entry/instrument/event_mode_detectors/timepix3/timepix3_events/event_time_offset']
        image_shape = f['/entry/instrument/event_mode_detectors/timepix3/x_pixel_offset'].shape

        assert event_time_offset.attrs['units'] == 'ns'
        pulse_length = 1 / 14 * 1e9  # 14 Hz source: pulse period in ns

        assert np.all(
            f['/entry/instrument/event_mode_detectors/timepix3/detector_number'][()] ==
            np.arange(1, 1 + np.prod(image_shape)).reshape(image_shape)
        )

        start = 0
        image = None
        while start < event_id.size:
            chunk = _process_chunk(
                start,
                event_id[start:start+chunksize],
                event_time_offset[start:start+chunksize],
                event_index,
                time_bins,
                image_shape,
                pulse_stride,
                pulse_length,
            )
            if image is None:
                image = chunk
            else:
                image += chunk
            start += chunksize

        # Store with the smallest unsigned dtype that can hold the counts.
        maxcounts = image.max()
        for dtype in ('uint8', 'uint16', 'uint32'):
            if np.iinfo(dtype).max >= maxcounts:
                image = image.astype(dtype)
                break

        tifffile.imwrite(output_path, image)


nexus_to_tiff_chunkwise(odin.data.iron_simulation_sample_small(), 'test.tiff', time_bins=50, pulse_stride=2)
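Two of the numpy tricks in `_process_chunk` can be sanity-checked in isolation with toy inputs (all values below are illustrative, not from the PR):

```python
import numpy as np

# Toy check of the diff/repeat pulse labelling: event_index holds the
# first-event index of each pulse; diff turns boundaries into per-pulse
# event counts, and repeat labels every event with its pulse number.
event_index = np.array([0, 3, 5])  # pulses start at events 0, 3 and 5
n_events = 7
index = event_index[event_index > 0]
pulse = np.repeat(np.arange(index.size + 1),
                  np.diff(index, prepend=0, append=n_events))
print(pulse)  # [0 0 0 1 1 2 2]

# Toy check of the flat bincount histogram: fold (time bin, pixel) into
# one flat index so a single bincount builds the whole image stack.
tbins, image_shape = 2, (2, 2)
shape = (tbins, *image_shape)
tbin = np.array([0, 0, 1])      # time bin of each event
event_id = np.array([1, 1, 4])  # 1-based detector_number of each event
flat = tbin * np.prod(shape[1:]) + event_id - 1
image = np.bincount(flat, minlength=np.prod(shape)).reshape(shape)
print(image[0, 0, 0], image[1, 1, 1])  # 2 1
```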

@YooSunYoung
Member

I'm thinking about adding a "chunkwise" version as well, in case they need to reduce files larger than memory.
What do you think @YooSunYoung, is it a good idea?

I think so! That would make the tool more robust.
They should normally have enough memory on VISA, but it would be frustrating if it runs out of memory.

But maybe we can do it after this PR, because we may also want to allow blockifying the images (reducing resolution).
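Blockifying could later be a simple reshape-and-sum over non-overlapping pixel blocks; a minimal numpy sketch (the `blockify` name and block size are illustrative, not part of this PR):

```python
import numpy as np

def blockify(image, b):
    """Sum counts over non-overlapping b x b pixel blocks of a (t, y, x) stack."""
    t, y, x = image.shape
    assert y % b == 0 and x % b == 0, 'image must divide evenly into blocks'
    return image.reshape(t, y // b, b, x // b, b).sum(axis=(2, 4))

stack = np.ones((3, 4, 4), dtype='uint32')
small = blockify(stack, 2)
print(small.shape)     # (3, 2, 2)
print(small[0, 0, 0])  # 4, the sum of a 2x2 block of ones
```

Summing (rather than averaging) keeps the result a count image, so the dtype-narrowing step at the end would still apply.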

@YooSunYoung
Member

The docs build fails because scitiff is not in the docs dependencies.

@jokasimr jokasimr merged commit 67ae24d into main Sep 12, 2025
4 checks passed
@jokasimr jokasimr deleted the nexus-to-tiff branch September 12, 2025 15:28