about Collection.offset() performance #683
Yes, that is true and also documented here. Paging can generally be done more efficiently by adapting the query to the last result instead of using offset/limit.

const PAGE_SIZE = 10;
const page1 = await db.friends
  .where('age').above(25) // keyrange query (affects result order)
  .filter(friend => /nice/.test(friend.notes)) // Some custom filter...
  .limit(PAGE_SIZE)
  .toArray();

// Now, prepare next page by picking last entry
const lastEntry = page1[page1.length - 1];
const page2 = await db.friends
  .where('age').above(lastEntry.age)
  .filter(friend => /nice/.test(friend.notes)) // Some custom filter...
  .limit(PAGE_SIZE)
  .toArray();
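One caveat worth noting (my own observation, not from the thread): `above(lastEntry.age)` skips any remaining friends that share the last entry's age, so pages can silently drop rows when the paging key is not unique. A sketch of a workaround, in the spirit of Dexie's paging docs, is to query `aboveOrEqual(lastEntry.age)` and replay past the last-seen primary key with a filter. The `fastForward` helper below is hypothetical (names are mine); the array stands in for the records a key-range scan would emit:

```javascript
// Hypothetical helper: returns a filter that suppresses rows up to and
// including the last row of the previous page, then applies the real
// criterion. Pairs with where('age').aboveOrEqual(lastEntry.age) so that
// duplicate ages are neither skipped nor re-emitted.
function fastForward(lastRow, idProp, otherCriterion) {
  let fastForwardComplete = false;
  return (item) => {
    if (fastForwardComplete) return otherCriterion(item);
    if (item[idProp] === lastRow[idProp]) {
      fastForwardComplete = true; // resume normal filtering after the last row
    }
    return false; // still replaying rows already seen on the previous page
  };
}

// Simulated output of a where('age').aboveOrEqual(lastEntry.age) scan:
const rows = [
  { id: 1, age: 30, notes: 'nice' },
  { id: 2, age: 30, notes: 'nice' },   // lastEntry of page 1
  { id: 3, age: 30, notes: 'nice' },   // same age: above() would have skipped it
  { id: 4, age: 31, notes: 'grumpy' },
  { id: 5, age: 32, notes: 'nice' },
];
const lastEntry = rows[1];
const page2 = rows.filter(fastForward(lastEntry, 'id', f => /nice/.test(f.notes)));
console.log(page2.map(r => r.id)); // → [3, 5]
```

With Dexie, the same predicate would be passed to `.filter(...)` on the `aboveOrEqual` collection in place of the plain notes filter.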
I updated the docs for offset() with hints for paging. Some more samples there: http://dexie.org/docs/Collection/Collection.offset()#a-better-paging-approach
I'm seeing some behavior with offset() that I can't explain. I've put a demo on codepen to illustrate this: it creates a table with 100k entries, just two numbers in each, both indexed. Is it possible that there's some criterion for a fast offset() that my query doesn't meet?
Well, I dug in a bit more and it turns out this is actually a limitation of IndexedDB itself, at least on Chrome and Firefox on macOS. Personally I think it would still be worth noting in the documentation that offset() is subject to this limitation. Either way, thanks for Dexie! I'm really enjoying using it.
Thanks for pointing this out. I think describing offset as "fast" is misleading, even though it is faster than the other offset cases where Dexie needs to do some non-native work to fulfill the offset. You are more than welcome to update the docs the way you find more accurate 👍
If there are 6000 records, the query gets slower as the offset N grows.
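That linear slowdown is expected: to honor offset(N), an IndexedDB cursor still has to travel past N index entries before the page starts (and `IDBCursor.advance(N)` is typically executed as a walk, too), whereas keyset paging seeks straight to the key range. A toy model of the two strategies, with a plain sorted array standing in for the index (not Dexie API, just an illustration):

```javascript
// Toy model: count how many index entries must be visited to produce one
// page of results, comparing offset-based and keyset paging.
const PAGE_SIZE = 10;
const index = Array.from({ length: 6000 }, (_, i) => ({ id: i }));

// offset(N) paging: the cursor walks past N entries before collecting.
function pageByOffset(n) {
  let visited = 0;
  const page = [];
  for (const row of index) {
    visited++;
    if (visited > n) page.push(row);
    if (page.length === PAGE_SIZE) break;
  }
  return { page, visited };
}

// Keyset paging: jump straight to the key range (a B-tree seek in a real
// index, modeled here as a binary search) and read just one page.
function pageByKey(lastId) {
  let lo = 0, hi = index.length;
  while (lo < hi) { // find first id > lastId
    const mid = (lo + hi) >> 1;
    if (index[mid].id > lastId) hi = mid; else lo = mid + 1;
  }
  return { page: index.slice(lo, lo + PAGE_SIZE), visited: PAGE_SIZE };
}

console.log(pageByOffset(5000).visited); // → 5010 entries walked
console.log(pageByKey(4999).visited);    // → 10 entries read
```

Both calls return the same page, but the work done to get there grows with N only in the offset case, which matches the slowdown reported above.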