Too much of a difference between loading .nii.gz vs .npy #1718
Replies: 2 comments
-
Could you please share some comments here? I think you know a lot about NIfTI files and Nibabel. Thanks.
-
NIfTI files are read using Nibabel. I can't vouch for the efficiency of that library, but it involves parsing the header and interpreting the stored binary data into the orientation and memory representation suitable for a NumPy array. You're also loading a compressed NIfTI file, so you're adding the time for gzip decompression, which can indeed be a significant overhead. A .npy file has a very small header describing the dtype and shape, followed by a binary blob that can often be copied directly into a NumPy memory buffer, so loading these is going to be pretty well as fast as can be. They are also not compressed, unlike .npz archives, which are zip files of .npy files. If you want speed and don't need any of the NIfTI header information, you could convert all of them to .npy files if you have the disk space, as in the sketch below.
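A minimal conversion sketch, assuming the directory names are placeholders for your own layout and that discarding the affine/orientation and other header metadata is acceptable:

```python
import glob
import os

import nibabel as nib
import numpy as np

# Hypothetical input/output directories; adjust to your dataset layout.
src_dir = "./images_niigz"
dst_dir = "./images_npy"
os.makedirs(dst_dir, exist_ok=True)

for path in glob.glob(os.path.join(src_dir, "*.nii.gz")):
    img = nib.load(path)              # parses the NIfTI header, data is loaded lazily
    data = np.asarray(img.dataobj)    # decompresses and reads the voxel data once
    name = os.path.basename(path).replace(".nii.gz", ".npy")
    # Note: the affine and other header information are not kept in the .npy file.
    np.save(os.path.join(dst_dir, name), data)
```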
-
I was just wondering if it is normal to have such a big difference between loading .nii.gz and .npy.
Training for 1 epoch with both .nii.gz and .npy (same dataset, same augmentations) resulted in:
This is basically a 32% speed improvement just from changing the way I save my data. I am aware that .nii.gz is compressed, but would that incur a 32% speed penalty?
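For context, a minimal sketch of how the two loaders could be timed in isolation, assuming a pair of placeholder files where image.npy was converted from image.nii.gz:

```python
import time

import nibabel as nib
import numpy as np

# Hypothetical file pair; image.npy is assumed to hold the same volume as image.nii.gz.
nii_path = "image.nii.gz"
npy_path = "image.npy"

t0 = time.perf_counter()
nii_data = nib.load(nii_path).get_fdata()  # gzip decompression + cast to float64
t1 = time.perf_counter()
npy_data = np.load(npy_path)               # near-raw read into memory
t2 = time.perf_counter()

print(f"nii.gz load: {t1 - t0:.3f}s, npy load: {t2 - t1:.3f}s")
```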