Cannot write to table with computed column #25
Comments
Does this work with the default JDBC connector?
Yes, it works with the default JDBC connector.
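For context, a minimal sketch of what a write through Spark's built-in JDBC source looks like. It generates INSERT statements that name only the DataFrame's columns, so a computed column is simply never referenced. The URL, credentials, and data below are placeholders, not taken from this issue:

```scala
// Sketch: writing with Spark's built-in JDBC source. The generated INSERT
// lists only the DataFrame's columns, so the computed Date column is skipped.
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("jdbc-write").getOrCreate()
import spark.implicits._

val df = Seq((1, "2020", "01"), (2, "2020", "02"))
  .toDF("Id", "Year", "Month")

df.write
  .format("jdbc")
  .option("url", "jdbc:sqlserver://localhost:1433;databaseName=TestDb") // placeholder
  .option("dbtable", "Test")
  .option("user", "sa")               // placeholder
  .option("password", "<password>")   // placeholder
  .mode(SaveMode.Append)
  .save()
```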
A fix would be required to support a non-strict option, and to utilize SqlBulkCopyColumnMapping during bulk copy to map specific columns; a sketch of the mapping approach follows.
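A minimal sketch of what explicit column mapping could look like with the JVM driver's bulk copy API (SQLServerBulkCopy.addColumnMapping in mssql-jdbc, the counterpart of .NET's SqlBulkCopyColumnMapping). This is not the connector's actual implementation; the connection string and the StagingTest source table are illustrative assumptions:

```scala
// Sketch: map only the writable columns during bulk copy, leaving the
// computed Date column for SQL Server to populate.
import java.sql.DriverManager
import com.microsoft.sqlserver.jdbc.SQLServerBulkCopy

val conn = DriverManager.getConnection(
  "jdbc:sqlserver://localhost:1433;databaseName=TestDb;user=sa;password=<password>") // placeholder

// Hypothetical staging table used as the bulk copy source.
val source = conn.createStatement()
  .executeQuery("SELECT Id, Year, Month FROM StagingTest")

val bulkCopy = new SQLServerBulkCopy(conn)
bulkCopy.setDestinationTableName("Test")
// Map only the columns present in the source; Date is computed and skipped.
bulkCopy.addColumnMapping("Id", "Id")
bulkCopy.addColumnMapping("Year", "Year")
bulkCopy.addColumnMapping("Month", "Month")
bulkCopy.writeToServer(source)
bulkCopy.close()
conn.close()
```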
Looks the same as #14.
Indeed, these two features are needed to deal with Identity, Computed, and Default constraints.
Solved in #52.
I have a table like this:

```sql
Create table Test
(
    Id int,
    Year nvarchar(4),
    Month nvarchar(2),
    Date As (Year + '-' + Month)
)
```
Because Date is a computed column, my DataFrame doesn't have this column, so I get the exception 'Spark Dataframe and SQL Server table have different numbers of columns'.
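A minimal sketch of the write that triggers this error, assuming the connector is registered under the com.microsoft.sqlserver.jdbc.spark format and the Test table above exists; connection details and data are placeholders:

```scala
// Sketch: the DataFrame carries only the three writable columns; the
// computed Date column does not (and cannot) appear in it.
import org.apache.spark.sql.{SaveMode, SparkSession}

val spark = SparkSession.builder().appName("bulk-write").getOrCreate()
import spark.implicits._

val df = Seq((1, "2020", "01")).toDF("Id", "Year", "Month")

df.write
  .format("com.microsoft.sqlserver.jdbc.spark")
  .option("url", "jdbc:sqlserver://localhost:1433;databaseName=TestDb") // placeholder
  .option("dbtable", "Test")
  .option("user", "sa")               // placeholder
  .option("password", "<password>")   // placeholder
  .mode(SaveMode.Append)
  .save()
// Fails with: 'Spark Dataframe and SQL Server table have different numbers of columns'
```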