
Datafusion timestamp type doesn't respect delta lake schema #2408

Closed
Veiasai opened this issue Apr 11, 2024 · 1 comment
Labels
bug Something isn't working

Comments


Veiasai commented Apr 11, 2024

Environment

Delta-rs version:
0.17.1

Binding:

Environment:

  • Cloud provider:
  • OS: WSL2 Ubuntu 22.04
  • Other:

Bug

What happened:
The timestamp in the Delta table definition is micros, but in the Parquet files it is nanos (Spark behavior). When DataFusion reads the table, the schema appears to be inferred from the raw Parquet data rather than taken from the Delta Lake definition:

Field { name: "exchange_time", data_type: Timestamp(Nanosecond, None), nullable: true, dict_id: 0, dict_is_ordered: false, metadata: {} }

What you expected to happen:
DataFusion should honor the Delta schema and read exchange_time with microsecond precision.

How to reproduce it:

More details:

@Veiasai Veiasai added the bug Something isn't working label Apr 11, 2024
@ion-elgreco
Collaborator

This is a limitation of DataFusion; please set the correct Spark conf: SparkSession.config("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS")
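On the write side, the suggested conf would look something like this in PySpark (a sketch; the app name is illustrative). With this set, Spark writes Parquet timestamps as microseconds, matching Delta Lake's timestamp type:

```python
from pyspark.sql import SparkSession

# Configure Spark to emit TIMESTAMP_MICROS in Parquet so the physical type
# matches Delta Lake's microsecond timestamp declaration.
spark = (
    SparkSession.builder
    .appName("delta-micros-writer")  # hypothetical app name
    .config("spark.sql.parquet.outputTimestampType", "TIMESTAMP_MICROS")
    .getOrCreate()
)
```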

@ion-elgreco ion-elgreco closed this as not planned Won't fix, can't repro, duplicate, stale Apr 12, 2024