We want to export data from our database into a corresponding Parquet file and then read the data directly from that Parquet file, which requires a corresponding update operation.
No, you cannot update or insert into an existing Parquet file; Parquet files are immutable. This is a restriction inherent to Parquet, not PyArrow. (The spec theoretically supports appending, but no library supports it; details)
So to update an existing Parquet file you have to read the existing data into memory, add the new data, and write the result back to disk as a new file (with the same name). Alternatively, you can use partitioning to add/append new data to a multi-file Parquet dataset by adding new files or overwriting only small partitions. See the pyarrow docs for existing_data_behavior:
‘overwrite_or_ignore’: this behavior, in combination with a unique basename_template for each write, will allow for an append workflow.
‘delete_matching’ is useful when you are writing a partitioned dataset. The first time each partition directory is encountered the entire directory will be deleted. This allows you to overwrite old partitions completely.
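A minimal sketch of both approaches. The file name data.parquet, the directory dataset/, and the column names are illustrative, not taken from the issue:

```python
import uuid

import pyarrow as pa
import pyarrow.dataset as ds
import pyarrow.parquet as pq

# Approach 1: "update" a single parquet file by rewriting it.
# Read the old contents, concatenate the new rows, write everything back.
existing = pq.read_table("data.parquet")
new_rows = pa.table({"id": [6, 7], "value": [60, 70]})
combined = pa.concat_tables([existing, new_rows])
pq.write_table(combined, "data.parquet")  # replaces the old file on disk

# Approach 2: append new files to a multi-file dataset directory.
# A unique basename_template per write keeps new files from clobbering
# files written earlier.
ds.write_dataset(
    new_rows,
    "dataset/",
    format="parquet",
    basename_template=f"part-{{i}}-{uuid.uuid4()}.parquet",
    existing_data_behavior="overwrite_or_ignore",
)

# Reading the dataset back returns the old and new rows together.
all_rows = ds.dataset("dataset/", format="parquet").to_table()
```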
Describe the usage question you have. Please include as many useful details as possible.
First, I save a Parquet file containing 5 rows of data.
Then I want to add two new rows, so that reading the file back gives 7 rows in total. The new data is written as follows:
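A hypothetical sketch of the two writes (column names and values are invented for illustration, not the exact code from the issue):

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical first write: a table with 5 rows.
table = pa.table({"id": [1, 2, 3, 4, 5], "value": [10, 20, 30, 40, 50]})
pq.write_table(table, "data.parquet")

# Hypothetical second write: 2 new rows written to the same path.
new_rows = pa.table({"id": [6, 7], "value": [60, 70]})
pq.write_table(new_rows, "data.parquet")  # replaces the file, leaving only 2 rows
```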
But this overwrites the original file, so only the two new rows remain. How can I add the new data on top of the original?
I have another question: if I want to update the data according to some condition, how should I do that? For example:
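A hypothetical sketch of a conditional update, again by rewriting the file: read it, build the updated column with pyarrow.compute, and write the table back (column names and the condition are invented):

```python
import pyarrow.compute as pc
import pyarrow.parquet as pq

table = pq.read_table("data.parquet")

# Hypothetical condition: rows where id == 3 get their value replaced with 99.
mask = pc.equal(table["id"], 3)
new_value = pc.if_else(mask, 99, table["value"])

# Swap in the updated column, then rewrite the whole file.
table = table.set_column(table.schema.get_field_index("value"), "value", new_value)
pq.write_table(table, "data.parquet")
```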
Component(s)
Parquet, Python