ValueError: Data size is not a multiple of the chunk size #40
I was able to read the file simply by not raising the error:

```python
if total_data_size % data_size != 0:
    # raise ValueError(
    print(
        "Data size %d is not a multiple of the "
        "chunk size %d" % (total_data_size, data_size))
# else:
self.num_chunks = total_data_size // data_size
```
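A minimal illustration of what that workaround does (made-up sizes, just to show the floor division): the final `//` rounds down, so whatever is left over in a trailing partial chunk simply never gets read.

```python
# Illustration only, with made-up sizes: floor division drops the partial chunk.
total_data_size = 2500   # bytes actually present in the segment
data_size = 1000         # bytes per full chunk according to the metadata

num_chunks = total_data_size // data_size   # 2 full chunks are read
leftover = total_data_size % data_size      # 500 bytes in the partial chunk are skipped
print(num_chunks, leftover)
```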
Hmm, that's odd. Can you tell if you're missing any data at the beginning or end of the file compared to what LabView reads? The structure of the file looks pretty straightforward, so I'm not sure what's going wrong here. Does this only happen with very large files?
I only have large files :(
Any chance you can upload it somewhere so I can have a look? If you could generate a smaller file that showed the same issue that would be awesome.
Hi guys, I have the same issue here:

```python
b = TdmsFile('small_tdms_file.tdms')
```

```
Data size 1075200 is not a multiple of the chunk size 2293760
```

The file is pretty small, about 1 MB. You may download it here.
Thanks, I can reproduce the error with that file, so hopefully I will be able to figure out what's going on. The metadata says this file should have 448 channels.
I was only provided with the file and a kind of custom Python parser that tries to read it. After reviewing the code and the intermediate results, you are right: there should be 448x1280 values, but in fact there are only 448x600 values. I have no idea whether there should be more or less. And believe me, I completely understand how weird this sounds to you :) Should I consider this a corrupted TDMS file? Or is it something that can be handled somehow?
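As a sanity check, the sizes in the error message match those counts if the values are 4 bytes each (single-precision floats, which is an assumption here since the data type isn't stated in the thread):

```python
# Back-of-the-envelope check, assuming 4-byte (single-precision) values per channel.
n_channels = 448
bytes_per_value = 4                       # assumption: the file's actual type isn't given here

chunk_size = 2293760                      # chunk size from the error message
data_size = 1075200                       # actual data size from the error message

print(chunk_size // (n_channels * bytes_per_value))   # 1280 values per channel expected
print(data_size // (n_channels * bytes_per_value))    # 600 values per channel actually present
```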
It looks like the last chunk is not complete if the actual data is smaller than the chunk size. There is no padding to match the chunk size.
Yeah, it sounds like something that npTDMS should handle, although I couldn't find any mention in NI's documentation of this being valid, or of how exactly to determine which channels have data and how many values are in each channel when the size is less than expected. Hopefully we can just keep reading as normal until hitting the actual end of the segment.
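Very roughly, the idea for interleaved data would be something like the sketch below. This is not the actual npTDMS code; the helper and its parameters are made up for illustration, and it assumes every channel has the same fixed-width 4-byte float type.

```python
import struct

def read_truncated_interleaved(f, n_channels, bytes_per_value, segment_data_size):
    """Read an interleaved segment whose data is shorter than a full chunk.

    Hypothetical helper for illustration only: reads whole interleaved rows
    (one value per channel) until the actual end of the segment data, instead
    of insisting on a full chunk's worth of bytes.
    """
    row_size = n_channels * bytes_per_value
    n_rows = segment_data_size // row_size        # complete rows that actually exist
    channels = [[] for _ in range(n_channels)]
    for _ in range(n_rows):
        row = f.read(row_size)
        values = struct.unpack('<%df' % n_channels, row)  # little-endian single floats
        for channel, value in zip(channels, values):
            channel.append(value)
    return channels
```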
I haven't had a lot of time to look into this but have made a bit of progress, and the changes in this branch should work for interleaved data; non-interleaved will be a bit more work: https://github.com/adamreeve/npTDMS/tree/not-multiple-chunk-size. @Delicate-aRt, your file is interleaved but @Nodd's isn't.
@adamreeve you are right: it is in interleaved mode. Sorry for not providing those details, TDMS is something completely new for me. I'll check whether it works now and let you know!
@Nodd, are you able to test out the latest code on that branch?
Hi, I have the same issue. I also fixed it temporarily by commenting out the exception and handling the case where there is only one chunk that is not long enough. I have now tested your fix and it works for my data, so please merge the changes into the next release. Unfortunately I can't provide you with the data, because it belongs to a customer of ours.
Thanks @fgroes, I've just released 0.8.2 with this fix in it.
I saw this same problem, today, with version 0.13.0. Could the bug have regressed?
It's not a regression. The problem was not fixed in 0.8.2 either ... for my TDMS file.
The traceback produced by v0.13.0 is a bit different:
I'm trying to open a 4.4 GB TDMS file, and I get the following error from `tdmsinfo`:

I saw that there are closed issues with the same error, but I just installed v0.7.1, so I'm using the latest version.
The error appears on multiple files. Those files can be read with a LabView program, so they are valid. Also, LabView didn't crash while writing the files.
I'm on 64-bit Arch Linux with Python 3.5 and 32 GB of RAM.