IndexOutOfBounds for DataProvider #6072
Hi, unfortunately I cannot use your code to reproduce the issue: the beans it references are not included. So: please simplify the code so that it doesn't use unknown beans, and so that the repository can be left out of the example (and replaced by an in-memory collection).
OK, that type of response is fine, but it is out of scope for us to provide you a fully functioning web application to demonstrate your method back to you. We cannot be expected to use in-memory lists in production; even if lists appear to work, they do not work in production due to data sizes. Here are basics on Spring Data JPA to get you started, so you can create a quick bean and then a Spring Data repository:

It populates a grid via grid.setDataProvider(), but this is an issue with DataProvider, not with Grid, ComboBox, or some other component. The same off-by-one error occurs. It looks like this is binned as "feature work". We cannot use Grid in a production environment while this bug exists, so please let us know if we need to start migrating away from Vaadin.
Make sure you check the values you use. What value does your count method return? Is it 50? Here is simple example code that works with Vaadin 13:
The only line this prints with the default page size (50) is:
The immediate problem in this case is indeed as indicated by @alump. This causes the data provider to return a stream of size 0 (starting from index 50 and ending at index 50), which indirectly causes the indexing error because the logic expected 50 items to be available from the stream. You can observe this by breaking up the example code to also print the size of the intermediate sub list:
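The snippet being discussed isn't preserved in this thread, but the failure mode can be demonstrated with a plain list, independently of Vaadin (all names here are illustrative):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class EmptySubListDemo {
    // Buggy fetch: treats "limit" as an absolute end index rather than a count.
    static List<Integer> buggyFetch(List<Integer> items, int offset, int limit) {
        return items.subList(offset, limit); // wrong: the second argument is an end index
    }

    public static void main(String[] args) {
        List<Integer> items = IntStream.range(0, 100).boxed().collect(Collectors.toList());
        // First page: offset 0, limit 50 -> subList(0, 50) happens to be correct.
        System.out.println(buggyFetch(items, 0, 50).size());
        // Second page: offset 50, limit 50 -> subList(50, 50) is an empty list,
        // but the caller was promised 50 items, hence the downstream indexing error.
        System.out.println(buggyFetch(items, 50, 50).size());
    }
}
```

The two calls print 50 and 0: the bug only surfaces once the component requests the second page.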
This can be fixed by changing the example code to calculate the end index for the sub list from both the offset and the limit. At the same time, the commented-out lines in the example indicate that this is only an intermediate step towards lazily loading items from the database. That won't work as implemented in the commented-out code because of how the (deprecated) paging API it uses maps pages to offsets. It might be tempting to compensate for this simply by dividing the requested offset by the limit in order to get a corresponding page index. This will seem to work in simple cases, but it breaks down when the component requests multiple logical pages at the same time. Instead, you need slightly more logic to align the requested item range to a page boundary that Spring Data can process. There's already code for doing this freely available in https://github.com/Artur-/spring-data-provider. The add-on also provides additional helper API that makes it easier to use Spring Data repositories together with Vaadin.
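Both pieces of this advice can be sketched in plain Java. The backing-store page fetch is simulated with a list here; in real code it would be something like `repository.findAll(PageRequest.of(page, pageSize)).getContent()`. This is a sketch of the alignment idea, not the add-on's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class AlignedFetchSketch {
    // Correct in-memory fetch: the end index is offset + limit, capped at the size.
    static List<Integer> fetch(List<Integer> items, int offset, int limit) {
        return items.subList(offset, Math.min(offset + limit, items.size()));
    }

    // Page alignment: cover the range [offset, offset + limit) with fixed-size
    // pages, fetch those whole pages, then slice out exactly the requested range.
    // This handles offsets that are not multiples of the page size, which simple
    // "page = offset / limit" arithmetic does not.
    static List<Integer> fetchAligned(List<Integer> backend, int offset, int limit, int pageSize) {
        int firstPage = offset / pageSize;
        int lastPage = (offset + limit - 1) / pageSize;
        List<Integer> buffer = new ArrayList<>();
        for (int page = firstPage; page <= lastPage; page++) {
            // Simulated page-based backend call.
            buffer.addAll(fetch(backend, page * pageSize, pageSize));
        }
        int start = offset - firstPage * pageSize;
        return buffer.subList(start, Math.min(start + limit, buffer.size()));
    }

    public static void main(String[] args) {
        List<Integer> backend = IntStream.range(0, 100).boxed().collect(Collectors.toList());
        // An unaligned request: offset 30, limit 50 spans pages 1-3 with page size 25.
        List<Integer> result = fetchAligned(backend, 30, 50, 25);
        System.out.println(result.size() + " items starting at " + result.get(0));
    }
}
```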
On a completely different note, I'd also like to address this comment:
In order to determine that the problem was in the example code and not in Vaadin itself, I had to make assumptions about the implementation of the missing classes and about how the data provider was used. This was a relatively straightforward case, but as an example, the symptoms would have been different if I'd used the data provider with a different component. There have also been cases where we have made wrong assumptions without noticing, and then ended up only implementing a partial fix, or fixing a completely different related issue instead of the issue faced by the reporter.

We receive hundreds of reports through open source issue trackers every week. We want to dedicate our time to fixing actual issues or implementing new features rather than filling in the blanks in reported issues or making changes that do not help the reporter. By providing a fully functional example that can be run as-is, you significantly increase the chances of receiving help quickly and efficiently. In this case, a functionally equivalent self-contained example could have been implemented like this:
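The example itself was not preserved in this thread, but a self-contained reproduction of the reported symptom can be sketched without any web or database dependencies (all names are illustrative, and the exact exception message differs from the one in the report):

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class BuggyProviderRepro {
    // In-memory stand-in for the database: 100 rows.
    static final List<String> DB = IntStream.range(0, 100)
            .mapToObj(i -> "row-" + i)
            .collect(Collectors.toList());

    // Fetch callback with the reported bug: the limit is used as an end index.
    static List<String> fetchPage(int offset, int limit) {
        return DB.subList(offset, limit);
    }

    public static void main(String[] args) {
        // The component requests two pages of 50 and reads the items it was
        // promised by the count callback (100 in total).
        for (int offset = 0; offset < 100; offset += 50) {
            List<String> page = fetchPage(offset, 50);
            // The second page is empty, so even page.get(0) throws an
            // IndexOutOfBoundsException here.
            System.out.println(page.get(0));
        }
    }
}
```

Running this prints the first row and then fails on the second iteration, mirroring the "first page works, second load crashes" behavior from the report.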
But it indicates that our contract is not obvious/well-defined.
Hi everyone, thank you for the responses; we'll look at this in a bit, over the weekend. We've moved on to other parts of our product (machine learning automation, etc. on different servers) while some type of decision is made internally. It appears there is a potential way forward with Vaadin customer-facing then, which is good.

To Denis's point, I now understand the length issue. Could we rename the attributes and/or make it more obvious in the documentation which length you are looking at? While we confirmed the location we saw in our stack traces in Vaadin's GitHub repo, we did not have time in the near term to trace how the attributes are used in your code, given the other pieces of the product. The commented-out portions were just left in to show some of our other attempts at figuring out what was going on, rather than deleting them. Sorry to be confusing there.
Cheers,
Dennis Underwood
CEO
Cyber Crucible, Inc.
…On Wed, Jul 17, 2019 at 11:23 AM Denis ***@***.***> wrote:
This causes the data provider to return a stream of size 0 (starting from
index 50 and ending at index 50) which indirectly causes the indexing error
because the logic expected 50 items to be available from the stream.
But it indicates that our contract is not obvious/well defined.
So an additional question is : should we add some logic which verifies the
contract like we did in this case:
https://github.com/vaadin/flow/blob/master/flow-data/src/main/java/com/vaadin/flow/data/provider/DataCommunicator.java#L374
https://github.com/vaadin/flow/blob/master/flow-data/src/main/java/com/vaadin/flow/data/provider/DataCommunicator.java#L378
And whether it's feasible ?
With the original phrasing, "limit" could potentially also be understood as the index of the last item to fetch. Related to #6072
Seems like it's not completely trivial to improve the error message, since we also have logic added by #4889 that tries to deal with situations where there is a size mismatch because rows have been removed from the database since the size was last checked. I suspect that logic gets triggered in this case, which makes it issue a follow-up call. This in turn might imply that the logic from #4889 is actually also broken if the database contents change again in between those calls.

My conclusion for now is thus that the only thing we could reasonably do is to clarify the documentation. I wouldn't consider changing any method names, since that would cause a mess with backwards compatibility. I'm quite blind to the ways in which the documentation might be misunderstood, since I already know exactly what it wants to say. I only spotted one immediately obvious case, which is addressed in #6099.

Please write a comment here or create a new issue if you encounter something else in the documentation that could be clarified, or if you have an idea on how error reporting could be improved in a way that also makes sense when the database size has actually changed.
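As a rough illustration of the contract verification raised earlier in the thread ("should we add some logic which verifies the contract"), a check on the fetch side could take a shape like the following. This is a sketch, not actual Vaadin code, and as noted above the real framework would also have to tolerate the backend size changing between the count and fetch calls:

```java
import java.util.List;

public class FetchContractCheck {
    /**
     * Verifies the fetch contract: for a query with the given offset and limit
     * against a backend that reported {@code total} items, the fetch callback
     * must return exactly min(limit, total - offset) items.
     */
    static void checkFetchedSize(List<?> fetched, int offset, int limit, int total) {
        int expected = Math.max(0, Math.min(limit, total - offset));
        if (fetched.size() != expected) {
            throw new IllegalStateException(
                    "Fetch callback returned " + fetched.size() + " items for offset "
                    + offset + " and limit " + limit + ", but " + expected
                    + " were expected based on a reported size of " + total
                    + ". Check that the callback treats the limit as an item count,"
                    + " not as an end index, and that the backend size is stable.");
        }
    }
}
```

With such a check, the buggy `subList(offset, limit)` example would fail fast with a descriptive message on the second page instead of an opaque IndexOutOfBoundsException deeper in the stack.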
This seems like it may be related to #3830. The self-contained app (copied into this comment from the comment above) errors out for me once I scroll past the first 49 results.
Stack trace:
@jhult I found a solution... we had moved on shortly afterwards.
Created a DataProvider.fromCallbacks object. The first page of paging works correctly, but the second load of data causes an IndexOutOfBoundsException:
query.getOffset(): 0
query.getLimit(): 50
query.getOffset(): 50
query.getLimit(): 50
java.lang.IndexOutOfBoundsException: Index: 50, Size: 50
This is about as stripped down as I can make it quickly while keeping it verbose for you...
DataProvider lazy loading population to a Grid.
Crash upon scroll.
Vaadin: 13.0.10
Java: 1.8.212 (Zulu)
Spring: Spring Boot 2.1.6.RELEASE
--N/A