BigTable: 'Table.mutate_rows' deadline exceeded for large mutations #7
Comments
@crwilcox, @kolea2, @igorbernstein2: this is a problem in the GAPIC config. Currently, the default timeout is set to 60 seconds. Someone may need to increase the timeout for …
@sduskis Sending data in smaller batches might be a best practice anyway, since one part of a large transfer failing could fail the whole transfer (vs. perhaps retrying only the failed portion?).
@cwbeitel, I agree on best practices. In theory, you can use a … I also believe that we ought to review the default timeout value to make sure that it makes sense for this type of RPC.
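The batching approach discussed above can be sketched as follows. This is a hypothetical illustration, not library code: `chunked` and `mutate_in_batches` are helper names invented here, and `table` is assumed to be a `google.cloud.bigtable` `Table`, whose `mutate_rows` returns one status per row.

```python
def chunked(items, size):
    """Yield successive slices of `items` with at most `size` elements each."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


def mutate_in_batches(table, rows, batch_size=1000):
    """Apply `Table.mutate_rows` one batch at a time.

    If a batch partially fails, only the failed rows of that batch are
    retried, instead of resending the entire transfer.
    """
    for batch in chunked(rows, batch_size):
        statuses = table.mutate_rows(batch)
        failed = [row for row, status in zip(batch, statuses) if status.code != 0]
        if failed:
            # Retry only the rows whose status was non-OK.
            table.mutate_rows(failed)
```

Smaller batches also keep each RPC well under the default deadline, which sidesteps the timeout problem for most workloads.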
Relevant commits:
An odd side effect of how …
@tseaver Ah, well done; the ability to configure a timeout seems like a good feature.
Also, call the data client's 'mutate_rows' directly -- do *not* scribble on its internal API wrappers. See: #7 (comment). Closes #7
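With the `timeout` argument added to `Table.mutate_rows` (per the 1.6.0 release notes below), callers can raise the per-call deadline above the 60-second default for large mutations. A minimal sketch, in which the wrapper name and the 600-second value are illustrative choices, not part of the library:

```python
def mutate_rows_with_deadline(table, rows, timeout=600):
    """Apply mutations with an explicit per-call deadline in seconds.

    `table` is assumed to be a google.cloud.bigtable Table on a version
    (>= 1.6.0) where `mutate_rows` accepts a `timeout` keyword.
    """
    return table.mutate_rows(rows, timeout=timeout)
```

In application code, `table` would be obtained via `bigtable.Client(...).instance(...).table(...)`; the wrapper simply forwards the new keyword.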
This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [google-cloud-bigtable](https://togithub.com/googleapis/python-bigtable) | minor | `==1.5.1` -> `==1.6.0` |

---

### Release Notes

<details>
<summary>googleapis/python-bigtable</summary>

### [`v1.6.0`](https://togithub.com/googleapis/python-bigtable/blob/master/CHANGELOG.md#​160-httpswwwgithubcomgoogleapispython-bigtablecomparev151v160-2020-11-16)

[Compare Source](https://togithub.com/googleapis/python-bigtable/compare/v1.5.1...v1.6.0)

##### Features

- add 'timeout' arg to 'Table.mutate_rows' ([#​157](https://www.github.com/googleapis/python-bigtable/issues/157)) ([6d597a1](https://www.github.com/googleapis/python-bigtable/commit/6d597a1e5be05c993c9f86beca4c1486342caf94)), closes [#​7](https://www.github.com/googleapis/python-bigtable/issues/7)
- Backup Level IAM ([#​160](https://www.github.com/googleapis/python-bigtable/issues/160)) ([44932cb](https://www.github.com/googleapis/python-bigtable/commit/44932cb8710e12279dbd4e9271577f8bee238980))

##### [1.5.1](https://www.github.com/googleapis/python-bigtable/compare/v1.5.0...v1.5.1) (2020-10-06)

##### Bug Fixes

- harden version data gathering against DistributionNotFound ([#​150](https://www.github.com/googleapis/python-bigtable/issues/150)) ([c815421](https://www.github.com/googleapis/python-bigtable/commit/c815421422f1c845983e174651a5292767cfe2e7))

</details>

---

### Renovate configuration

:date: **Schedule**: At any time (no schedule defined).

:vertical_traffic_light: **Automerge**: Disabled by config. Please merge this manually once you are satisfied.

:recycle: **Rebasing**: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

:no_bell: **Ignore**: Close this PR and you won't be reminded about this update again.

---

- [ ] <!-- rebase-check -->If you want to rebase/retry this PR, check this box

---

This PR has been generated by [WhiteSource Renovate](https://renovate.whitesourcesoftware.com). View repository job log [here](https://app.renovatebot.com/dashboard#github/googleapis/python-bigtable).
Deadline exceeded for a large mutation of a Bigtable table; not necessarily a bug. It might be helpful to others to add this to the docs, automatically batch mutations by size, make the timeout configurable, or document large data transfers as a potential cause of exceeding the deadline.
Environment details
Steps to reproduce and code example
Setting `mutation_batch_size` too large in the following causes the deadline to be exceeded; tested with tf.Example-serialized video examples with 4 frames of size 224x224. Not necessarily a bug if this is the expected behavior and users are meant to handle this kind of batching themselves.

Stack trace
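The original snippet referenced under "Steps to reproduce" was not preserved in this thread. The following is a hypothetical reconstruction of the kind of write loop described: serialized examples written to Bigtable in batches of `mutation_batch_size` rows, where a large batch size can push a single `mutate_rows` call past the 60 s deadline. All names (function, row-key scheme, column family) are illustrative.

```python
def write_examples(table, serialized_examples, mutation_batch_size):
    """Write serialized examples to a Bigtable table in fixed-size batches.

    `table` is assumed to be a google.cloud.bigtable Table. With large
    payloads (e.g. serialized video examples), a large `mutation_batch_size`
    can make a single `mutate_rows` call exceed the default deadline.
    """
    batch = []
    for i, payload in enumerate(serialized_examples):
        # One row per example, keyed by a zero-padded index (illustrative).
        row = table.direct_row("example-{:08d}".format(i).encode())
        row.set_cell("features", b"serialized", payload)
        batch.append(row)
        if len(batch) >= mutation_batch_size:
            table.mutate_rows(batch)
            batch = []
    if batch:
        # Flush any remaining rows.
        table.mutate_rows(batch)
```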