Implement startkey_docid and endkey_docid in allDocs/mapreduce #1397
Comments
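For context, a hedged sketch of the semantics this issue requests (this is not PouchDB code; `queryRows` and its rows are hypothetical): `startkey_docid`/`endkey_docid` disambiguate where a range scan starts or ends when several view rows share the same key, since rows are sorted by (key, id).

```javascript
// Sketch only: how startkey_docid/endkey_docid narrow a range query
// when several rows share a key. Rows are assumed already sorted by
// (key, id), as in a CouchDB view b-tree.
function queryRows(rows, opts) {
  return rows.filter(function (row) {
    if (opts.startkey !== undefined) {
      if (row.key < opts.startkey) return false;
      if (row.key === opts.startkey &&
          opts.startkey_docid !== undefined &&
          row.id < opts.startkey_docid) return false;
    }
    if (opts.endkey !== undefined) {
      if (row.key > opts.endkey) return false;
      if (row.key === opts.endkey &&
          opts.endkey_docid !== undefined &&
          row.id > opts.endkey_docid) return false;
    }
    return true;
  });
}

var rows = [
  {key: 'a', id: 'doc1'},
  {key: 'a', id: 'doc2'},
  {key: 'a', id: 'doc3'},
  {key: 'b', id: 'doc4'}
];

// Start mid-way through the run of equal keys:
queryRows(rows, {startkey: 'a', startkey_docid: 'doc2', endkey: 'b'});
// → [{key:'a',id:'doc2'}, {key:'a',id:'doc3'}, {key:'b',id:'doc4'}]
```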
I think this is fairly complicated, and I think we want to try to reduce some redundant logic here first. We do want it, but I don't think it's a goodfirstpatch
Yeah, you're right, since it involves a secondary index, meaning it will definitely need @neojski's |
I started working on this, but I am pretty sure there is a bug in CouchDB 1.5.0. The behavior for |
What's wrong with zero offset when you don't set skip?
-Calvin W. Metcalf
Isn't the offset always supposed to `=== skip`?
Ah okay, wow. The offset is supposed to be how many docs are before the first result. We totally implemented that wrong in both |
It would probably be good to put a note somewhere saying that PouchDB doesn't return the same offset as CouchDB, since it's infeasible for us to do that in anything but WebSQL.
Really? Wow. I was just thinking that if you don't specify skip, then it would be zero by default...
Nah, I'm testing it now in CouchDB 1.5.0, and it's definitely the number of documents before the first result returned. Definitely not feasible unless we want to implement a b-tree on top of level/idb.
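To make the distinction concrete, here's a hedged sketch of the CouchDB semantics described above (`allDocsOffset` and its inputs are hypothetical, not PouchDB internals): `offset` counts all index rows before the first row returned, which includes rows excluded by `startkey`, so it is not simply the `skip` parameter.

```javascript
// Sketch only: offset = rows before the first returned row, per the
// CouchDB behavior observed in this thread. `sortedIds` stands in for
// an index sorted by doc ID.
function allDocsOffset(sortedIds, opts) {
  var start = 0;
  if (opts.startkey !== undefined) {
    while (start < sortedIds.length && sortedIds[start] < opts.startkey) {
      start++;
    }
  }
  var skip = opts.skip || 0;
  return {
    offset: start + skip, // not just `skip`: startkey rows count too
    rows: sortedIds.slice(
      start + skip,
      opts.limit !== undefined ? start + skip + opts.limit : undefined
    )
  };
}

var ids = ['a', 'b', 'c', 'd', 'e'];
allDocsOffset(ids, {startkey: 'c'});          // → {offset: 2, rows: ['c','d','e']}
allDocsOffset(ids, {startkey: 'c', skip: 1}); // → {offset: 3, rows: ['d','e']}
```

Note that computing `start` is cheap here only because the sketch scans an in-memory array; doing it against LevelDB or IndexedDB would require counting every row before the range, which is the infeasibility mentioned above.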
Okay, so I noted in that commit all the places where I was surprised by the functionality. In particular:
In general, if this is the desired functionality, then it's going to be tough for us to implement because we index everything by |
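One common workaround for this kind of indexing constraint (a sketch of a general technique, not what PouchDB actually does) is to store each view row under a composite index key, so that a single lexicographic range scan respects both key order and doc-ID order:

```javascript
// Sketch only: a composite index key whose lexicographic order matches
// (key, docId) order. The '\u0000' separator assumes neither part
// contains that character.
function indexKey(key, docId) {
  return key + '\u0000' + docId;
}

// A scan for startkey='a', startkey_docid='doc2' then becomes a plain
// range scan starting at indexKey('a', 'doc2'), with no per-row filtering.
```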
Okay, for the first thing, Kxepal told me what I was misunderstanding:
This was fixed a while ago.
These are just some of the CouchDB params we don't support yet (see also group_level in #967, and #1340 for some keys fixes).