Support MSSQL data type TIME #285
Support for `TIME` is a valid feature request. It might take a while till I get to it, though.
You are right, MSSQL uses a custom type as described here: https://learn.microsoft.com/en-us/sql/relational-databases/native-client-odbc-date-time/data-type-support-for-odbc-date-and-time-improvements?view=sql-server-ver16. The type is called `SQL_SS_TIME2_STRUCT`:

```c
typedef struct tagSS_TIME2_STRUCT {
    SQLUSMALLINT hour;
    SQLUSMALLINT minute;
    SQLUSMALLINT second;
    SQLUINTEGER  fraction;
} SQL_SS_TIME2_STRUCT;
```
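To make the mapping concrete, here is a minimal sketch (not odbc2parquet code; the function name is illustrative) of turning the struct's fields into a parquet `TIME(MICROS)` value, assuming the `fraction` field holds nanoseconds, as it does for `SQL_TIMESTAMP_STRUCT`:

```python
# Illustrative sketch: map SQL_SS_TIME2_STRUCT fields to a parquet
# TIME(MICROS) value, i.e. microseconds since midnight. Assumes `fraction`
# is in nanoseconds; MSSQL time(7) has 100 ns precision, so truncating to
# microseconds drops the seventh fractional digit.
def time2_to_micros(hour: int, minute: int, second: int, fraction_ns: int) -> int:
    seconds_since_midnight = hour * 3600 + minute * 60 + second
    return seconds_since_midnight * 1_000_000 + fraction_ns // 1_000

# 13:45:30.1234567 -> fraction = 123_456_700 ns -> truncated to 123456 us
assert time2_to_micros(13, 45, 30, 123_456_700) == 49_530_123_456
```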
@leo-schick Does …
Closing this for now.
Hi @pacman82, sorry for the late response. Was quite busy with some other tasks. Unfortunately, this seems to not work as it should. I am not 100% sure how this should be solved, though. Here are my validation results:

I have a table with a SQL `time` column. After upgrading, parquet-tools now shows that the `logical_type` is time.

Reading with Apache Spark

When I use Apache Spark 3.3.0 to read it as a SQL type, the values come out wrong. I just wonder how this gets messed up.

Reading with Microsoft Synapse

When I use Microsoft Synapse, I get the following error message:

Probably because the logical type is not supported there. I think we should find another way to solve this. I mean, Apache Spark is quite famous, and it should at least work there correctly.
p.s. I just noted something about the SQL data type; the remaining question, then, is why we cannot read it correctly with Apache Spark.
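One plausible explanation for the mismatch: a reader that does not honor the parquet `TIME` logical annotation only sees the underlying physical column, which per the parquet format is `INT64` for `TIME(MICROS)`. A minimal sketch (illustrative function name) of what a logical-type-aware reader does with that raw integer:

```python
# Illustrative sketch: a parquet TIME(MICROS) column is physically an INT64
# counting microseconds since midnight. A reader that honors the logical
# type converts that integer back into a time of day; a reader that ignores
# the annotation just surfaces the raw number.
def micros_to_time_str(micros: int) -> str:
    seconds, us = divmod(micros, 1_000_000)
    hours, rem = divmod(seconds, 3600)
    minutes, secs = divmod(rem, 60)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{us:06d}"

assert micros_to_time_str(49_530_123_456) == "13:45:30.123456"
```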
Hi @leo-schick, thanks for the response.

Do you mean 'why' or 'how'?
I wonder if …
I am not yet sure about the flag for microseconds precision. I would propose a flag which always tries to convert to a `converted_type` if possible. I think I can build a way around it on my side as long as it works in Apache Spark. IMO, converting data should not be part of the odbc2parquet tool; maybe as an option if you would like to implement it, but I do not think I will use it.
Currently, the data type `time` from MSSQL is exported as BYTE_ARRAY, UTF8, String:
Column description from parquet-tools:
I would have expected a parquet TIME type in `logical_type`/`converted_type`.
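The difference between the two exports can be sketched as follows: the current BYTE_ARRAY/UTF8 output stores the time as text, while a `TIME(MICROS)` column would hold an INT64 counting microseconds since midnight. A minimal illustrative parser (assuming MSSQL's default `HH:MM:SS.fffffff` text rendering; the function name is hypothetical):

```python
# Illustrative sketch: parse the textual form currently written as
# BYTE_ARRAY/UTF8 into the microseconds-since-midnight integer that a
# parquet TIME(MICROS) column would hold instead.
def parse_time_text(text: str) -> int:
    hms, _, frac = text.partition(".")
    h, m, s = (int(part) for part in hms.split(":"))
    # pad/truncate the fractional digits to microsecond precision
    us = int(frac.ljust(6, "0")[:6]) if frac else 0
    return (h * 3600 + m * 60 + s) * 1_000_000 + us

assert parse_time_text("13:45:30.1234567") == 49_530_123_456
```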