bigtable: retry partially failed reads and writes #1595
Comments
We do use an exponential backoff retry strategy before calling it a failure and returning an error. What errors are you getting? How often are you calling the API, and are you waiting for a response before calling again? You can pass a "maxRetries" number in the Bigtable constructor; the default is 2: bigtable({ projectId: '...', maxRetries: 5 })
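For context, a minimal sketch of what passing that option looks like (the package name, project ID, and require-and-call style are assumptions for illustration, not taken from this thread):

```js
// Minimal sketch of the maxRetries option mentioned above.
// Package name and projectId are assumptions for illustration.
const bigtable = require('@google-cloud/bigtable')({
  projectId: 'my-project', // hypothetical project ID
  maxRetries: 5            // the library defaults to 2
});
```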
Thanks, I'll try that out. Here is the stacktrace:
Interesting. Not sure what that error is. We only retry after certain error types, and I'm not sure this is one we would retry. @lesv @murgatroid99 have you heard of this one?
I found @murgatroid99's comment on this issue, which says that this is a 503, so we do in fact retry on this error.
Hi, I'm still seeing this error message come up with both. Is there any more data that I could collect which would help identify the problem?
Can you either show code or estimate how many requests you're making at once? Since this is a 503, it's an issue of either too many requests at once (so the server needs a break), or the upstream API actually being broken in some way.
Absolutely, here is a snippet of the relevant code:
There could potentially be ~1000 rows in any given key range. We only have one server node making these requests at any given moment in time. Perhaps there is a better way to handle this?
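Since the snippet itself didn't make it into this thread, here is a hypothetical sketch of a key-range read of roughly the shape being described (the table handle, key names, handlers, and the exact getRows option names are assumptions):

```js
// Hypothetical key-range read of the shape described above.
// Row keys, handlers, and the exact getRows options are assumptions.
table.getRows({
  start: 'prefix#0000', // hypothetical start of the key range
  end: 'prefix#9999'    // hypothetical end of the key range (~1000 rows)
}, function (err, rows) {
  if (err) {
    // This is where the intermittent 503s surface; today the caller
    // has to retry the whole range.
    return handleError(err); // hypothetical error handler
  }
  rows.forEach(processRow);  // hypothetical per-row handler
});
```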
@callmehiphop when you have a chance, would you mind trying to recreate this scenario?
FWIW, this behavior (partial failure in batch operations) is expected from the Bigtable perspective. A bulk read or write operation can affect many rows, and some of the reads or writes may succeed while others fail, because different parts of the bulk request may go to different backing Bigtable servers, some of which may be busy, unavailable, or simply time out. Bigtable does not provide atomicity guarantees across multiple rows, so any single operation within the batch can succeed or fail independently of the others. However, these are typically not permanent errors, so they should be retried. As an optimization, rather than retrying the entire batch request, the client library needs to iterate over the response statuses and only retry the entries that were marked as failed or timed out. This is precisely what we do in other Bigtable client libraries. The upside is that even with the occasional retries, the overall performance is much higher than with a single read or write operation per API call.
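To make the "iterate over the response statuses" idea concrete, here is a rough application-level sketch, assuming a per-entry status list like the one the MutateRows RPC returns; the table.mutate call, the shape of the statuses, and the helper names are assumptions, not the current behavior of this library:

```js
// Conceptual sketch of selective retry: re-send only the entries whose
// per-entry status is a retryable failure. The shape of `statuses` and
// the table.mutate signature are assumptions for illustration.
function retryFailedEntries(table, entries, statuses) {
  const failed = entries.filter(function (entry, i) {
    const code = statuses[i].code;          // gRPC status code for this entry
    return code !== 0 && isRetryable(code); // 0 === OK
  });

  if (failed.length === 0) {
    return Promise.resolve();
  }

  // In real code, apply exponential backoff before re-sending.
  return table.mutate(failed);
}

// Hypothetical helper: treat DEADLINE_EXCEEDED (4), ABORTED (10) and
// UNAVAILABLE (14) as retryable, matching the 503-style errors above.
function isRetryable(code) {
  return code === 4 || code === 10 || code === 14;
}
```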
@mbrukman @stephenplusplus given this, what is the recommended approach here? Is the user responsible for handling this retry logic?
Java and golang both have automated retries. Retries are nuanced for long-running scans and bulk writes.
@sduskis I see, so for now we might have to include retry logic in our calls with this nodejs library? Is this expected for both bulk read calls and streaming reads?
You are free to implement this in your application, but it's something we will eventually support in this library.
For getRows, how does this work for a streaming application? Should we restart the stream at the failed point?
@arbesfeld @stephenplusplus Yes, for streaming reads it's best to restart the stream after the last successfully received row. For multi-row mutations that call mutate_rows under the hood, only mutations that received an error should be retried. As @stephenplusplus said, "smart" retries should definitely be handled in the library (should I create a separate issue to track that?). To make that effort a bit easier for node and other languages, I'm putting together a little server that can be used to validate client retry behavior. I still need to push that out to a public place but, in the meantime, you can look at the test script to get some idea of what it will be testing: https://gist.github.com/garye/e7f4fa9694dd5b04580aa7cdd6adf16f You can also consult the java or go client retry logic, such as:
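As a rough illustration of "restart the stream after the last successfully received row" at the application level, here is a sketch; createReadStream's start option, row.id, and the helper functions are assumptions:

```js
// Sketch of restarting a streaming read after the last received row.
// The start option, row.id, and the helpers here are assumptions.
function readFrom(table, startKey) {
  let lastKey = startKey;

  table.createReadStream({ start: startKey })
    .on('data', function (row) {
      lastKey = row.id;  // remember the last row we actually received
      processRow(row);   // hypothetical per-row handler
    })
    .on('error', function (err) {
      if (isRetryable(err.code)) {
        // Resume from the last successfully received key. A real
        // implementation should exclude lastKey itself and back off
        // before reconnecting.
        readFrom(table, lastKey);
      } else {
        handleError(err); // hypothetical terminal error handler
      }
    })
    .on('end', function () {
      // all rows received
    });
}
```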
We are having a bit of difficulty implementing this at the application level, since it seems like we are just getting thrown a generic error, so we end up having to retry the entire read. @stephenplusplus happy to make a contribution here if it makes sense, though I could use a bit of direction as to where to start looking. |
Alternatively, some recommendation for how to handle this at the application level would also be greatly appreciated. We are currently doing something like this:
Would it work to just wrap this in a try/catch and then restart from the last-seen row? It's hard to reproduce the Bigtable failure, so we have no idea if our approach is working.
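One possible shape for that try/catch-and-resume approach is sketched below: read the key range in pages, remember the last-seen row key after each successful page, and retry the current page with backoff on failure. The option names, promise support, and the successor-key trick are assumptions, not verified against this library:

```js
// Sketch of a paged read with try/catch retry and resume from the
// last-seen key. Option names and promise support are assumptions.
async function readRangeWithRetry(table, startKey, endKey) {
  const rows = [];
  let start = startKey;

  while (true) {
    let page;
    for (let attempt = 0; ; attempt++) {
      try {
        [page] = await table.getRows({ start: start, end: endKey, limit: 100 });
        break;
      } catch (err) {
        if (attempt >= 4) throw err; // give up after a few tries
        await new Promise(r => setTimeout(r, 100 * 2 ** attempt)); // backoff
      }
    }
    if (page.length === 0) return rows;
    rows.push(...page);
    // Smallest key strictly greater than the last-seen key, so the next
    // page does not re-read it (row keys sort as byte strings).
    start = page[page.length - 1].id + '\x00';
  }
}
```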
Hi @callmehiphop, any updates on this issue? I would be happy to submit a PR if you wouldn't mind pointing me to where I should address the issue.
At the very least, we'd like to be able to handle this at the application level. |
@arbesfeld sorry, we've been pretty busy with other items, but I'm going to try and get on this within the next week or so. |
@callmehiphop sorry to keep bugging you. I'd be happy to take a look if you could give me a bit of direction on the implementation :-) |
This issue was moved to googleapis/nodejs-bigtable#7 |
My calls to bigtable getRows() fail intermittently, so I have had to wrap all of these methods in retry blocks. I was wondering:
Thanks!