Dataflow job throws exception when reading from Bigtable #1921
Comments
Could you please create a Cloud support ticket? Thanks!
This exception comes from the […].
You should open a support ticket so that the support team can investigate issue 3. @igorbernstein2, FYI.
The Bigtable cluster and the Dataflow job are both in us-central1-f, and I don't have a sparse filter set. I have increased the gRPC timeout to 10 minutes as suggested and will update once the job succeeds or fails. I have also opened a support ticket for this: https://issuetracker.google.com/113559508
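For context, raising the read timeout in this kind of setup is typically done by passing client settings through the scan configuration. Below is a minimal sketch, assuming the bigtable-hbase Beam connector (CloudBigtableIO); the project, instance, and table names are placeholders, and the `google.bigtable.*` keys shown are the client settings such a timeout increase would likely go through, not values taken from this issue:

```java
import com.google.cloud.bigtable.beam.CloudBigtableIO;
import com.google.cloud.bigtable.beam.CloudBigtableScanConfiguration;
import org.apache.beam.sdk.io.Read;

// Sketch: raise Bigtable read deadlines to 10 minutes (600,000 ms)
// for a Beam/Dataflow pipeline. All names are placeholders.
CloudBigtableScanConfiguration readConfig =
    new CloudBigtableScanConfiguration.Builder()
        .withProjectId("my-project")     // placeholder
        .withInstanceId("my-instance")   // placeholder
        .withTableId("my-table")         // placeholder
        // Time allowed for a single row to finish streaming back.
        .withConfiguration("google.bigtable.grpc.read.partial.row.timeout.ms", "600000")
        // Overall per-RPC deadline.
        .withConfiguration("google.bigtable.rpc.timeout.ms", "600000")
        .build();

// `pipeline` is an existing Beam Pipeline (creation not shown here).
pipeline.apply("ReadFromBigtable", Read.from(CloudBigtableIO.read(readConfig)));
```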
As per the issue, you're running a large Dataflow job on a development cluster. There's not much more the client can do, so I'm closing this bug. Please feel free to reopen if you have new information.
Hi,
I have a Dataflow job that reads from Bigtable (table size approx. 70 GB, 18 million rows) and writes back to Bigtable. The job always fails after processing a few million records. I have checked that mutations per write stay under the 100,000 limit, and I read in only the latest version of the required cells. I am using high-memory VMs with 8 CPUs and 52 GB of memory.
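For reference, a pipeline of the shape described here (a scan limited to the latest cell version, with mutations written back to Bigtable) might look roughly like the sketch below. It assumes the bigtable-hbase Beam connector; the project, instance, and table names and the ResultToMutationFn transform are hypothetical stand-ins, not the actual job:

```java
import com.google.cloud.bigtable.beam.CloudBigtableIO;
import com.google.cloud.bigtable.beam.CloudBigtableScanConfiguration;
import com.google.cloud.bigtable.beam.CloudBigtableTableConfiguration;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.Read;
import org.apache.beam.sdk.transforms.ParDo;
import org.apache.hadoop.hbase.client.Scan;

// Read only the newest version of each cell, as described above.
Scan scan = new Scan();
scan.setMaxVersions(1);

CloudBigtableScanConfiguration readConfig =
    new CloudBigtableScanConfiguration.Builder()
        .withProjectId("my-project")     // placeholder
        .withInstanceId("my-instance")   // placeholder
        .withTableId("source-table")     // placeholder
        .withScan(scan)
        .build();

CloudBigtableTableConfiguration writeConfig =
    new CloudBigtableTableConfiguration.Builder()
        .withProjectId("my-project")     // placeholder
        .withInstanceId("my-instance")   // placeholder
        .withTableId("sink-table")       // placeholder
        .build();

// `options` are existing PipelineOptions; ResultToMutationFn is a
// hypothetical DoFn<Result, Mutation> that builds the write mutations
// (keeping each write under the 100,000-mutation limit mentioned above).
Pipeline pipeline = Pipeline.create(options);
pipeline
    .apply("ReadFromBigtable", Read.from(CloudBigtableIO.read(readConfig)))
    .apply("BuildMutations", ParDo.of(new ResultToMutationFn()))
    .apply("WriteToBigtable", CloudBigtableIO.writeToTable(writeConfig));
pipeline.run();
```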
I get multiple exceptions like the ones below:
[exception stack traces omitted]
The final failure message I get is:
[final failure message omitted]
Appreciate your help.
Thanks,
Padmini