hrpc.NumberOfRows is not working #190
Am I using the scan method the wrong way?
I'm sorry, I don't understand what you said. Could you give an example, please?
Quoting Thibault's reply (Thu, Jul 14, 2022):
```go
// NumberOfRows is an option for scan requests.
// Specifies how many rows are fetched with each request to regionserver.
// Should be > 0, avoid extremely low values such as 1 because a request
// to regionserver will be made for every row.
```

NumberOfRows doesn't limit how many rows are returned by the scan in total. It limits how many rows are returned by each scan request sent to HBase.
A scan makes multiple requests to regionservers. This will do what you want (get 2 results and stop): `hrpc.NewScanRangeStr(ctx, table, startRow, stopRow, hrpc.NumberOfRows(2), hrpc.CloseScanner())`
Another option, especially if you want to follow the same pattern for a larger number of rows, or if the rows are spread across multiple regionservers, is to exit the for loop once you have fetched as many rows as you wanted, and make sure to call …
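The early-exit pattern above can be sketched as follows. Note that `Scanner`, `Result`, `takeN`, and `fakeScanner` are hypothetical stand-ins so the sketch is self-contained; a real gohbase scanner has its own `Next`/`Close` shape and error semantics, so treat this only as an illustration of "stop at N rows and close the scanner".

```go
package main

import "fmt"

// Result stands in for a row returned by a scan (hypothetical).
type Result struct{ Row string }

// Scanner is a minimal stand-in for a scan handle with Next/Close.
type Scanner interface {
	Next() (*Result, error) // returns (nil, nil) here when the scan is exhausted
	Close() error
}

// takeN drains at most n rows from sc, then closes the scanner so
// server-side resources are released even though the scan was not exhausted.
func takeN(sc Scanner, n int) ([]*Result, error) {
	defer sc.Close() // always close, even on early exit
	var out []*Result
	for len(out) < n {
		res, err := sc.Next()
		if err != nil {
			return out, err
		}
		if res == nil { // no more rows
			break
		}
		out = append(out, res)
	}
	return out, nil
}

// fakeScanner simulates a scan backed by many rows.
type fakeScanner struct{ i, total int }

func (f *fakeScanner) Next() (*Result, error) {
	if f.i >= f.total {
		return nil, nil
	}
	f.i++
	return &Result{Row: fmt.Sprintf("row-%04d", f.i)}, nil
}

func (f *fakeScanner) Close() error { return nil }

func main() {
	// Even though the underlying scan has 1600 rows, we stop after 2.
	rows, _ := takeN(&fakeScanner{total: 1600}, 2)
	fmt.Println(len(rows)) // 2
}
```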
Got it, thanks very much!
Hello, may I ask how I can pull x number of rows starting from a given start key? I am trying to implement paginated scans. I tried leaving the end key empty, but it still always fetches more rows than the specified limit.
A scan makes multiple requests to regionservers. NumberOfRows only limits the number of rows returned per request, not for the scan as a whole.
What you want is `hrpc.NewScanRangeStr(ctx, table, startRow, stopRow, hrpc.NumberOfRows(2), hrpc.CloseScanner())`. CloseScanner closes the scanner immediately after the first request returns. This will do what you want: get 2 results and stop.
Hmmm, yeah, I get that part. But when I did this, it still returned more rows than the specified limit.
I made it work like this:
Caller:
Note that despite setting …
1 cell = 1 column. If you ask for 1 row but that row has 10,000,000 columns, you will get 10,000,000 cells. You can change that by asking only for specific columns using the …
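The arithmetic behind this explanation is simple: the number of cells returned is roughly rows times columns per row, so a tiny row limit can still yield a large cell count. The figure of 800 columns per row below is a made-up illustration, not a value taken from the issue.

```go
package main

import "fmt"

// cellCount shows why limiting rows does not bound the number of cells:
// each (row, column) pair is one cell, so a single wide row can return
// thousands of cells on its own.
func cellCount(rows, columnsPerRow int) int {
	return rows * columnsPerRow
}

func main() {
	// If each of the 2 returned rows had 800 columns, the result would
	// contain 1600 cells, which could explain a "length of 1600".
	fmt.Println(cellCount(2, 800)) // 1600
}
```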
I set `hrpc.NumberOfRows` to 2, but I got a result of length 1600.