
read_rows stream resumption not working? #469

Closed
igorbernstein2 opened this issue Nov 1, 2021 · 1 comment · Fixed by #759

@igorbernstein2 (Contributor) commented:

Running the following snippet:

from google.cloud import bigtable

client = bigtable.Client(project="google.com:cloud-bigtable-dev")
instance = client.instance("igorbernstein-dev")
table = instance.table("table1")

# Read a key range with an overall retry deadline of 2 seconds.
rows = table.read_rows(
    start_key="a",
    end_key="b",
    retry=bigtable.table.DEFAULT_RETRY_READ_ROWS.with_deadline(2.0),
)
for r in rows:
    print(f'row:{r}\n')

Against a Java emulator that sleeps 100 ms and then fails every ReadRows call with UNAVAILABLE:

import com.google.bigtable.v2.BigtableGrpc.BigtableImplBase;
import com.google.bigtable.v2.ReadRowsRequest;
import com.google.bigtable.v2.ReadRowsResponse;
import io.grpc.Context;
import io.grpc.Server;
import io.grpc.ServerBuilder;
import io.grpc.Status;
import io.grpc.stub.StreamObserver;

public class FlakyBigtableEmulator {
  public static void main(String[] args) throws Exception {
    Server server =
        ServerBuilder.forPort(1234)
            .addService(
                new BigtableImplBase() {
                  @Override
                  public void readRows(
                      ReadRowsRequest request, StreamObserver<ReadRowsResponse> responseObserver) {
                    // Log the per-attempt deadline the client sent.
                    System.out.println(Context.current().getDeadline());
                    try {
                      Thread.sleep(100);
                    } catch (InterruptedException e) {
                      responseObserver.onError(e);
                      return;
                    }
                    // Fail every attempt with a retryable status.
                    responseObserver.onError(Status.UNAVAILABLE.asException());
                  }
                })
            .build();

    server.start();
    server.awaitTermination();
  }
}
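
For the Python snippet above to reach this fake server, the client presumably has to be pointed at it; a minimal sketch, assuming the standard emulator environment variable is used before the Client is constructed:

import os

# Assumption: the repro connects via BIGTABLE_EMULATOR_HOST, which the
# Python client checks at Client construction time.
os.environ["BIGTABLE_EMULATOR_HOST"] = "localhost:1234"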

I would expect to see multiple attempt RPCs spaced with exponential delay, but I only see one attempt.
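
For reference, the retry policy in play should behave roughly like the following google.api_core sketch: UNAVAILABLE is retryable, so each failed attempt should be followed by a backed-off retry until the 2-second deadline expires. The numbers here are illustrative, not the library's actual defaults:

from google.api_core import retry
from google.api_core.exceptions import ServiceUnavailable

# Illustrative stand-in for DEFAULT_RETRY_READ_ROWS.with_deadline(2.0):
# retry UNAVAILABLE with exponential backoff until the overall deadline.
approx_retry = retry.Retry(
    predicate=retry.if_exception_type(ServiceUnavailable),
    initial=1.0,      # first backoff, seconds
    multiplier=2.0,   # exponential growth factor
    maximum=60.0,     # cap on a single backoff
    deadline=2.0,     # overall time budget across attempts
)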

@product-auto-label product-auto-label bot added the api: bigtable Issues related to the googleapis/python-bigtable API. label Nov 1, 2021
@yoshi-automation yoshi-automation added triage me I really want to be triaged. 🚨 This issue needs some love. labels Nov 3, 2021
@meredithslota meredithslota added priority: p2 Moderately-important priority. Fix may not be included in next release. type: bug Error or flaw in code with unintended results or allowing sub-optimal usage patterns. and removed 🚨 This issue needs some love. triage me I really want to be triaged. labels Nov 15, 2021
@yoshi-automation yoshi-automation added the 🚨 This issue needs some love. label Apr 30, 2022
@kolea2 kolea2 added the type: cleanup An internal cleanup or hygiene concern. label Feb 13, 2023
@Mariatta (Contributor) commented:
I found that the retry wasn't being passed down to the underlying gRPC read_rows call. When I added the retry parameter (shown in the PR) and ran against the emulator code you provided, I could see that a retry attempt was made.

Prior to this change, the emulator saw only one incoming request; after it, I could see a second request arrive.
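
For context, a minimal sketch of the shape of the change, assuming the internal plumbing looks roughly like this (the real diff is in #759; read_method and retry_request are hypothetical names here):

# Inside the library's read-rows plumbing (names hypothetical).
# Before: the caller's retry object stops at the wrapper and never
# reaches the GAPIC/gRPC layer, so the stream is never re-attempted.
response_iterator = read_method(retry_request)

# After: forwarding it lets the transport retry UNAVAILABLE errors
# with exponential backoff, resuming the read_rows stream.
response_iterator = read_method(retry_request, retry=retry)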

Would this be a reasonable fix? #759
