
bigtable: retry partially failed reads and writes #1595

Closed
arbesfeld opened this issue Sep 14, 2016 · 22 comments

@arbesfeld
Contributor

My calls to the Bigtable getRows() method fail intermittently, so I have had to wrap these calls in retry blocks. I was wondering:

  1. Is this flakiness expected?
  2. Have you considered adding a retry mechanism to these RPCs?

Thanks!

@stephenplusplus
Contributor

stephenplusplus commented Sep 14, 2016

We do use an exponential backoff retry strategy before calling it a failure and returning an error. What errors are you getting? How often are you calling the API, and are you waiting for a response before calling again?

You can pass a "maxRetries" number in the Bigtable constructor. The default is 2.

bigtable({ projectId: '...', maxRetries: 5 })

@stephenplusplus stephenplusplus added the api: bigtable Issues related to the Bigtable API. label Sep 14, 2016
@arbesfeld
Contributor Author

Thanks, I'll try out maxRetries

Here is the stacktrace:

Error: Secure read failed
  File "/app/packages/@apphub:logrocket-server-storage-bigtable/node_modules/grpc/src/node/src/client.js", line 189, in ClientReadableStream._emitStatusIfDone
    var error = new Error(status.details);
  File "/app/packages/@apphub:logrocket-server-storage-bigtable/node_modules/grpc/src/node/src/client.js", line 158, in ClientReadableStream._readsDone
    this._emitStatusIfDone();
  File "/app/packages/@apphub:logrocket-server-storage-bigtable/node_modules/grpc/src/node/src/client.js", line 229, in readCallback
    self._readsDone();

@stephenplusplus
Contributor

Interesting. Not sure what that error is. We only retry after certain error types, and I'm not sure this is one we would retry. @lesv @murgatroid99 have you heard of this one?

@stephenplusplus
Contributor

I found @murgatroid99's comment on this issue, which says that this is a 503, so we do in fact retry on this error. maxRetries should work in this case in place of writing your own retry logic.

@stephenplusplus stephenplusplus added the type: question Request for information or clarification. Not an issue. label Sep 15, 2016
@arbesfeld
Contributor Author

Hi, I'm still seeing this error message come up with both maxRetries: 6 and a custom retry wrapper around the getRows method.

Is there any more data that I could collect which would help identify the problem?

@stephenplusplus
Contributor

Can you either show code or estimate how many requests you're making at once? Since this is a 503, it's either an issue of too many requests at once (so the server needs a break) or the upstream API actually being broken in some way.

@arbesfeld
Contributor Author

Absolutely, here is a snippet of the relevant code:

return new Promise((resolve, reject) => {
  this.table.getRows({
    decode: false,
    start: 'foo|',
    end: 'foo||',
    filter: [{
      column: {
        cellLimit: 1,
      },
    }],
  })
    .on('error', reject)
    .on('data', row => {
      processRow(row);
    })
    .on('end', () => {
      resolve();
    });
});

There could be potentially ~1000 rows in any given key range.

We only have one server node making these requests at any given time.

Perhaps there is a better way to handle errors here instead of immediately rejecting?

@stephenplusplus
Contributor

@callmehiphop when you have a chance, would you mind trying to recreate this scenario?

@mbrukman
Contributor

FWIW, this behavior (partial failure in batch operations) is expected from the Bigtable perspective. A bulk read or write operation can affect many rows, and some of the reads or writes may succeed while others fail, because different parts of the bulk request may go to different backing Bigtable servers, some of which may be busy, unavailable, or simply time out. Bigtable does not provide atomicity guarantees across multiple rows, so any single operation within the batch can succeed or fail independently of the others.

However, these are typically not permanent errors, so they should be retried. As an optimization, rather than retrying the entire batch request, the client library needs to iterate over the response statuses and retry only the entries that were marked as failed or timed out. This is precisely what we do in other Bigtable client libraries.
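As a rough illustration of that per-entry retry loop (not the actual client implementation), here is a minimal sketch. It assumes a hypothetical sendMutateRows(entries) function that performs a single MutateRows call and resolves with an array of per-entry gRPC status codes in input order; the set of codes treated as retryable is also an assumption for the example, not the library's real policy.

// Minimal sketch: retry only the entries of a bulk mutation that failed with a
// retryable status, instead of re-sending the whole batch.
// `sendMutateRows(entries)` is a hypothetical stand-in for one MutateRows RPC;
// it resolves with per-entry status codes (0 = OK) in the same order as the input.
// Treating DEADLINE_EXCEEDED (4), ABORTED (10), and UNAVAILABLE (14) as retryable
// is an assumption made for this example.
const RETRYABLE_CODES = new Set([4, 10, 14]);

async function mutateWithPartialRetries(sendMutateRows, entries, maxAttempts = 3) {
  // Track each entry together with its position in the original batch.
  let pending = entries.map((entry, index) => ({ entry, index }));

  for (let attempt = 1; pending.length > 0 && attempt <= maxAttempts; attempt++) {
    const statuses = await sendMutateRows(pending.map(p => p.entry));

    const stillFailing = [];
    statuses.forEach((code, i) => {
      if (code === 0) return;                        // this entry succeeded
      if (!RETRYABLE_CODES.has(code)) {              // permanent error: give up
        throw new Error(`entry ${pending[i].index} failed with code ${code}`);
      }
      stillFailing.push(pending[i]);                 // retryable: send it again
    });
    pending = stillFailing;

    if (pending.length > 0 && attempt < maxAttempts) {
      // Simple exponential backoff between attempts.
      await new Promise(resolve => setTimeout(resolve, 100 * Math.pow(2, attempt)));
    }
  }

  if (pending.length > 0) {
    throw new Error(`${pending.length} entries still failing after ${maxAttempts} attempts`);
  }
}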

The upside is that even with the occasional retries, the overall performance is much higher than with a single read or write operation per API call.

/cc: @garye, @sduskis

@arbesfeld
Contributor Author

@mbrukman @stephenplusplus given this, what is the recommended approach here? Is the user responsible for handling this retry logic?

@sduskis
Contributor

sduskis commented Dec 11, 2016

Java and Go both have automated retries. Retries are nuanced for long-running scans and bulk writes.

@arbesfeld
Contributor Author

arbesfeld commented Dec 11, 2016

@sduskis I see, so for now we might have to include retry logic in our calls with this Node.js library? Is this expected for both bulk read calls and streaming reads?

@stephenplusplus
Contributor

You are free to implement this in your application, but it's something we will eventually support in this library.

@stephenplusplus stephenplusplus added enhancement and removed type: question Request for information or clarification. Not an issue. labels Dec 12, 2016
@stephenplusplus stephenplusplus changed the title bigtable retry on getRows bigtable: retry partially failed reads and writes Dec 12, 2016
@arbesfeld
Contributor Author

How does this work for a streaming application? Should we restart the stream at the failed point?

@garye

garye commented Dec 13, 2016

@arbesfeld @stephenplusplus Yes, for streaming reads it's best to restart the stream after the last successfully received row. For multi-row mutations that call mutate_rows under the hood, only mutations that received an error should be retried.

As @stephenplusplus said, "smart" retries should definitely be handled in the library (should I create a separate issue to track that?). To make that effort a bit easier for Node and other languages, I'm putting together a little server that can be used to validate client retry behavior. I still need to push that out to a public place, but in the meantime you can look at the test script to get some idea of what it will be testing:

https://gist.github.com/garye/e7f4fa9694dd5b04580aa7cdd6adf16f

You can also consult the Java or Go client retry logic, such as:
https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/bigtable/bigtable.go#L149
https://github.com/GoogleCloudPlatform/google-cloud-go/blob/master/bigtable/bigtable.go#L556

@arbesfeld
Contributor Author

We are having a bit of difficulty implementing this at the application level, since it seems like we are just getting thrown a generic error, so we end up having to retry the entire read.

@stephenplusplus happy to make a contribution here if it makes sense, though I could use a bit of direction as to where to start looking.

@arbesfeld
Contributor Author

Alternatively, some recommendation for how to handle this at the application level would also be greatly appreciated.

We are currently doing something like this:

return new Promise((resolve, reject) => {
  eventsTable
    .createReadStream({
      decode: true,
      start: 'foo',
      end: 'bar',
    })
    .on('data', function handleRow(row) {

    })
    .on('error', reject)
    .on('end', () => {
      resolve();
    });
});

Would it work to just wrap this in a try/catch and then restart from the last-seen row? It's hard to reproduce the Bigtable failure, so we have no idea if our approach is working.
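For reference, here is a rough sketch of what we have in mind for restarting from the last-seen row (assumptions: each emitted row exposes its key as row.id, and appending '\0' to that key resumes the scan just past the last row already processed). Note that the stream reports failures through its 'error' event rather than a try/catch:

// Sketch only: resume a scan after a stream error by restarting from just past
// the last successfully received row key. Assumes each emitted row exposes its
// key as `row.id` (an assumption about the row objects in this library).
function readAllRows(eventsTable, options, handleRow, maxRetries = 5) {
  return new Promise((resolve, reject) => {
    let lastKey = null;
    let retries = 0;

    const start = () => {
      const opts = Object.assign({}, options);
      if (lastKey !== null) {
        // '\0' is the smallest possible suffix, so the new scan begins
        // immediately after the last row we already handled.
        opts.start = lastKey + '\0';
      }

      eventsTable.createReadStream(opts)
        .on('data', row => {
          lastKey = row.id;   // remember how far we got
          handleRow(row);
        })
        .on('error', err => {
          if (retries++ < maxRetries) {
            // Back off briefly, then resume from after lastKey.
            setTimeout(start, 100 * Math.pow(2, retries));
          } else {
            reject(err);
          }
        })
        .on('end', resolve);
    };

    start();
  });
}

Usage would then be something like readAllRows(eventsTable, { decode: true, start: 'foo', end: 'bar' }, handleRow), with handleRow doing whatever per-row processing we need.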

@arbesfeld
Contributor Author

Hi @callmehiphop, any updates on this issue? I would be happy to submit a PR if you wouldn't mind pointing me to where I should address the issue.

@arbesfeld
Contributor Author

At the very least, we'd like to be able to handle this at the application level.

@callmehiphop
Contributor

@arbesfeld sorry, we've been pretty busy with other items, but I'm going to try and get on this within the next week or so.

@arbesfeld
Contributor Author

@callmehiphop sorry to keep bugging you. I'd be happy to take a look if you could give me a bit of direction on the implementation :-)

@jmuk jmuk added priority: p2 Moderately-important priority. Fix may not be included in next release. Status: Acknowledged labels Mar 7, 2017
@lukesneeringer
Contributor

This issue was moved to googleapis/nodejs-bigtable#7
