[Python][Parquet] improve reading of partitioned parquet datasets whose schema changed #25089
Comments
Joris Van den Bossche / @jorisvandenbossche: Can you show an example of just reading one of the old and one of the new files in the directory? (You can pass the exact file name instead of the directory to `pyarrow.dataset.dataset`.)
Joris Van den Bossche / @jorisvandenbossche: There are in principle two solutions for this:
So for now, only the first is an option.
Ira Saktor: I know how to load a schema with pyarrow.parquet, but the non-legacy parquet dataset doesn't yet support schema specification, so I was hoping to manage this with pyarrow.dataset, if that's possible.
Joris Van den Bossche / @jorisvandenbossche: I think something like this should work:

```python
schema = pyarrow.dataset.dataset(path_to_hdfs_single_parquet_file, partitioning='hive').schema
dataset = pyarrow.dataset.dataset(path_to_hdfs_directory, partitioning='hive', schema=schema)
dataset.to_table(filter=my_filter_expression).to_pandas()
```
Joris Van den Bossche / @jorisvandenbossche: Yes, will close it.
Hi there, I'm encountering the following issue when reading from HDFS:
My situation:
I have a partitioned parquet dataset in HDFS whose recent partitions contain parquet files with more columns than the older ones. When I try to read data using pyarrow.dataset.dataset and filter on recent data, I still get only the columns that are also present in the old parquet files. I'd like to somehow merge the schemas, or use the schema of the parquet files from which data actually ends up being loaded.
When using:

```python
pyarrow.dataset.dataset(path_to_hdfs_directory, partitioning='hive').to_table(filter=my_filter_expression).to_pandas()
```
Is there a way to handle schema changes so that the read data contains all columns?
Everything works fine when I copy the needed parquet files into a separate folder, but that is a very inconvenient way of working.
Environment: Ubuntu 18.04, latest miniconda with python 3.7, pyarrow 0.17.1
Reporter: Ira Saktor
Note: This issue was originally created as ARROW-8964. Please see the migration documentation for further details.