Writes for Dataframes with more than 10000 partitions fail #248

When trying to write DataFrames with more than 10,000 partitions, the load job fails with a "Too many sources provided: XXXX. Limit is 10000" error. This should be fixed by providing a better URI to the load job.
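For context, a minimal sketch of the kind of write that hits this limit, assuming the spark-bigquery connector's DataFrame writer with the GCS-staged (indirect) write path; the table and bucket names are placeholders, not taken from the original report. The connector stages roughly one file per partition in GCS and then submits a single BigQuery load job over those files.

```scala
import org.apache.spark.sql.SparkSession

// Minimal sketch (placeholders, not from the original report): a DataFrame
// with more than 10,000 partitions written through the bigquery connector.
val spark = SparkSession.builder().appName("bq-many-partitions").getOrCreate()

// 12,000 output partitions -> roughly 12,000 staged files in GCS.
val df = spark.range(0L, 100000000L).repartition(12000)

df.write
  .format("bigquery")
  .option("table", "my_dataset.my_table")          // placeholder table
  .option("temporaryGcsBucket", "my-temp-bucket")  // placeholder bucket
  .save()
// The load job is handed one source URI per staged file, so with this many
// files it fails with "Too many sources provided: ... Limit is 10000".
```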
Comments
Hello, I just stumbled upon this problem. Do you know a good workaround?
Fixed in version 0.18.1.
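Assuming the comment above refers to the Google spark-bigquery connector, a sketch of picking up that release by pinning the package coordinate when building the session (shown for a Scala 2.12 build; the app name is a placeholder):

```scala
import org.apache.spark.sql.SparkSession

// Pull connector release 0.18.1 through Spark's package resolution.
// The coordinate assumes a Scala 2.12 build of Spark.
val spark = SparkSession.builder()
  .appName("bq-write")
  .config("spark.jars.packages",
    "com.google.cloud.spark:spark-bigquery-with-dependencies_2.12:0.18.1")
  .getOrCreate()
```

The same coordinate can equally be passed on the command line via --packages.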
Thanks. I've tested it - the fix works.
The solution in #295 may not work in all cases, e.g. when there are many skipped partitions (usually Spark doesn't write empty parts). The easiest workaround is to reduce the number of partitions, for example with coalesce, before writing (see the sketch below).
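A sketch of that workaround, assuming the intent is simply to cap the number of output partitions below the 10,000-source limit before handing the DataFrame to the connector (table and bucket names are placeholders):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().getOrCreate()
val df = spark.range(0L, 100000000L).repartition(12000)

// Collapse to well under 10,000 partitions so the number of staged files
// stays below BigQuery's per-load-job source limit.
df.coalesce(5000)
  .write
  .format("bigquery")
  .option("table", "my_dataset.my_table")          // placeholder table
  .option("temporaryGcsBucket", "my-temp-bucket")  // placeholder bucket
  .save()
```

coalesce avoids a full shuffle at the cost of larger output files; repartition(n) with n below 10,000 would also work if a shuffle is acceptable.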
I still see this error with the latest version of the bigquery-connector.