OverflowError: value too large to convert to int - fastparquet.cencoding.write_thrift #823
Correct, it is not possible to change the data types in the stored thrift object - at least not if you want the file to be readable by anything other than fastparquet. When using …
@martindurant adjusting the row group size doesn't seem to make a difference; the number that ends up exceeding the `int32` threshold seems linked to the total size of the parquet file, and not any of the individual write sizes.
The total footer size (not data size) is given by a 4-byte value, so it's not that. I could check the various byte offsets, but I think they are all 64-bit (or var-int). I suppose your guess at the page header is correct, in which case chopping the data into row-groups (each of which contains one page per column) should help. You could also try using "hive" style output, in which each row group becomes a separate file.
thanks so much for the help @martindurant; I'm aware of "hive" style output but am trying to avoid using it, to keep ease-of-read in another non-Python language which doesn't support that structure. I was surprised by this part of your answer:

> chopping the data into row-groups … should help

...as I thought that's what was already happening under the hood when I call …. Is there another API that I would have to use such that it will write row groups instead?
The argument …

I'm surprised if there are frameworks that wouldn't be able to read this output.
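Presumably the argument meant here is `row_group_offsets` to `fastparquet.write` (an assumption, since the name is lost above); passed an int, it caps the approximate number of rows per row group, so a single file ends up holding many small row groups. A minimal sketch with illustrative values:

```python
import pandas as pd
import fastparquet

df = pd.DataFrame({"x": range(2_000_000)})  # stand-in for a real batch

# Cap row groups at ~500k rows each (illustrative value), so no single
# row group has to describe gigabytes of data in its page headers.
fastparquet.write("data.parquet", df, row_group_offsets=500_000)
```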
@martindurant I am relatively new to parquet and still getting the "lay of the land", so to speak, trying to discern what's part of the format specification and which parts are implementation-specific. Anyway, the other language is Go and the other library is parquet-go; afaik it doesn't seem to support any parquet file hierarchy structures such as hive or drill. Again, thanks so much for the help; I've toyed around with …
"hive" style without partitioning amounts to splitting each row group into a file, but no encoding of information into the path structure. It is certainly worth a try, although I know nothing about parquet-go. Yes, the writing method(s) on ParquetFile are conveniences for append and alter operations on existing datasets, which use the same code beneath. |
@martindurant thanks for all the support and prompt communication; closing this issue as it is not an issue with the lib |
I'm having an issue storing a large dataset (around 40 GB) in a single parquet file. I'm using the `fastparquet` library to append `pandas.DataFrame`s to this parquet dataset file, and everything goes fine until the dataset hits 2.3 GB, at which point I get the following errors: …
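A minimal sketch of the kind of append loop involved, assuming `fastparquet.write` with `append=True` (the batch source here is illustrative, not the actual code):

```python
import pandas as pd
import fastparquet

# Illustrative batches; the real data arrives in much larger chunks.
batches = (pd.DataFrame({"x": range(i * 1000, (i + 1) * 1000)}) for i in range(5))

path = "dataset.parquet"
for i, df in enumerate(batches):
    # The first call creates the file; later calls append row groups to it.
    fastparquet.write(path, df, append=(i > 0))
```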
Having debugged my way through the `fastparquet` code itself, it seems to me that what is happening internally is that to append rows, it has to update the page headers, and it seems that this is done by creating a Thrift object and writing it to a file: …
The problem could be that the `uncompressed_page_size` attribute is typed as `int32`, and so as the file grows, when it reaches that limit in bytes, `fastparquet` begins to throw these errors on write... The fact that this is a Thrift object (where types are rigidly defined) suggests that this typing choice may be an inherent part of the parquet format itself; is this true?

I'm not sure if I'm looking at a bug in `fastparquet`, or if perhaps this is an intended design choice in the parquet format. I've been unable to get clarity on this anywhere else.
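For what it's worth, the parquet-format Thrift definition does declare `uncompressed_page_size` (and `compressed_page_size`) as `i32`, so the ceiling works out to just over 2.1 GB, right around where the writes start failing. A quick check of the bound (nothing fastparquet-specific):

```python
# Maximum value a signed 32-bit Thrift i32 field can hold.
INT32_MAX = 2**31 - 1

print(INT32_MAX)          # 2147483647 bytes
print(INT32_MAX / 10**9)  # ~2.147 GB: a page larger than this cannot
                          # be described by an i32 page-header field
```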