If you run a fairly complex query on a really large data set, the count query can take a long time to finish.
It would be great if there were an option that sets paging to infinite, which basically means:
What do you think?
You can find instructions for infinite scrolling in the wiki. I don't think any count queries are called in that example (I just looked through the code, though). This is not an option in Kaminari itself, but you can still build infinite scrolling very easily.
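For reference, the usual trick behind count-free infinite scrolling is to fetch one more record than you display: if the extra record exists, there is a next page, so no `COUNT(*)` is ever issued. A minimal sketch of that logic in plain Ruby (the `records` array stands in for an ActiveRecord relation, and `fetch_page` is an illustrative helper, not Kaminari API):

```ruby
# Sketch: paginate without ever issuing COUNT(*).
# We fetch per_page + 1 rows; the extra row only tells us
# whether a "load more" link should be rendered.
def fetch_page(records, page:, per_page:)
  offset = (page - 1) * per_page
  window = records[offset, per_page + 1] || []  # stands in for LIMIT/OFFSET
  { items: window.first(per_page), next_page: window.size > per_page }
end

data = (1..25).to_a
page = fetch_page(data, page: 3, per_page: 10)
page[:items]      # => [21, 22, 23, 24, 25]
page[:next_page]  # => false
```

With a real relation you'd do the same with `limit(per_page + 1).offset(offset)` and render only a next/previous link instead of a full pager.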
In my opinion, using infinite scrolling to implement an infinite paging feature is just a workaround.
I just thought it would be a valuable feature extension to improve support for really large datasets.
Anyway, thanks a lot for your quick answer and keep up your great work.
@vhochstein Yeah, I like the idea!
Any update on this issue? I'm also looking for a way to skip/jump pages, e.g. jump 20 pages ahead from the current page so I can go to the middle of the list. Thanks!
I think you've misunderstood this issue. It's not about skipping, e.g., 20 pages using links in the page navigation.
It's about omitting the SQL count query, which can take quite a long time on a large dataset with complex join and where conditions.
I would recommend opening another issue that describes your problem.
Hey, I also need this
Currently, MongoDB has a problem with count queries, they are very slow due to how it internally works.
This will be fixed only in the upcoming versions.
A count on one of my collections (ranging from 4 to 20 million entries) sometimes takes up to 40 seconds, and the counts are unfortunately only needed for pagination.
I tried to figure out how to skip the counts in Kaminari when using the paginate view helper, but it doesn't seem to be possible... it would be enough to just have a made-up number of pages that I could specify, or a "more" link... anyone got an idea?
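The "made-up number of pages" idea is straightforward to fake yourself: everything the pager needs can be derived from a supplied total instead of a real `COUNT(*)`. A sketch of that arithmetic in plain Ruby (the `FakeCountPage` class is illustrative, not Kaminari's internals; note that Kaminari's `paginate_array` does accept a `total_count:` option, and newer releases added a `without_count` scope for exactly this use case):

```ruby
# Sketch: derive pagination state from a made-up total,
# so no COUNT(*) ever touches the database.
class FakeCountPage
  attr_reader :current_page, :per_page, :total_count

  def initialize(current_page:, per_page:, fake_total:)
    @current_page = current_page
    @per_page     = per_page
    @total_count  = fake_total
  end

  # Number of pages implied by the fake total
  def total_pages
    (total_count.to_f / per_page).ceil
  end

  def last_page?
    current_page >= total_pages
  end
end

page = FakeCountPage.new(current_page: 2, per_page: 25, fake_total: 1_000)
page.total_pages  # => 40
page.last_page?   # => false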
I'd love to see an option for PostgreSQL to switch to estimating the row count:
http://wiki.postgresql.org/wiki/Slow_Counting (2nd section)
Doing the actual count makes pagination completely useless for tables larger than 500k rows.
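For context, the technique from that wiki section reads the planner's statistics out of the `pg_class` catalog instead of scanning the table. The estimate is approximate (it's refreshed by `ANALYZE` / autovacuum, so it can be slightly stale), but it returns instantly regardless of table size:

```sql
-- Exact count: scans the whole table (slow on large tables)
SELECT count(*) FROM users;

-- Estimated count: reads planner statistics from the catalog;
-- kept roughly current by autovacuum / ANALYZE
SELECT reltuples::bigint AS estimate
FROM pg_class
WHERE relname = 'users';
```

For pagination UI, a stale estimate is usually fine, since the exact page count rarely matters past the first few pages.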
The infinite scroll solution works, as for the other digressions: without a patch I must close this issue.
Closing an issue which isn't resolved...
I don't understand either why this has been closed.
@vhochstein We like your idea, but you must submit a patch: then we can talk.
Well, I'd say you could extend your library with such a simple extension in about an hour.
I cannot even get your library tests up and running:
```
`initialize': SQLite3::SQLException: near "TRUNCATE": syntax error: TRUNCATE TABLE "gem_defined_models", "users", "books", "readerships", "authorships", "user_addresses" ; (ActiveRecord::StatementInvalid)
```
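For what it's worth, that failure happens because SQLite simply has no `TRUNCATE TABLE` statement; the cleanup SQL there is Postgres/MySQL-flavoured. The SQLite equivalent is a plain `DELETE` per table:

```sql
-- Postgres/MySQL-style cleanup (fails on SQLite):
TRUNCATE TABLE "users", "books";

-- SQLite equivalent: one DELETE per table
DELETE FROM "users";
DELETE FROM "books";
```

So the test suite presumably expects a database adapter that supports `TRUNCATE`, not SQLite.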
Please don't close it; there is no solution yet.
This gist should make things faster with Rails and Postgres
OK, you can find a commit which adds an `infinite_pages` option to Kaminari, please take a look:
Here's a very good fix, which works with Postgres: http://engineering.nulogy.com/posts/avoiding-redundant-counting/
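As I understand it, the core idea in that post is that the count is often redundant: if a fetched page comes back with fewer rows than `per_page`, you already know the exact total without any `COUNT(*)`. A sketch of that observation in plain Ruby (`total_count_for` is an illustrative helper, not the post's actual code):

```ruby
# Sketch: skip COUNT(*) whenever the fetched page already reveals the total.
# A short (not-full) page means we've hit the end of the result set,
# so the total is simply offset + rows fetched.
def total_count_for(page_items, page:, per_page:)
  offset = (page - 1) * per_page
  if page_items.size < per_page
    offset + page_items.size  # short page => total is known for free
  else
    nil                       # full page => a real count would still be needed
  end
end

total_count_for([1, 2, 3], page: 4, per_page: 25)  # => 78 (3 * 25 + 3)
```

Combined with the limit-plus-one trick for "is there a next page?", this avoids the count query entirely for typical browsing.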
@zzak would you still accept a PR for this issue?
@joseluistorres created a PR to work around this in Active Admin, but I'd much rather see this added to Kaminari.
Hi @zzak @amatsuda, how can I run the tests locally? I forked the repo, ran bundle install, and then just ran "rake", but a lot of the tests failed. Any ideas?
NVM I found the way http://stackoverflow.com/questions/10937805/run-kaminari-specs
removing the count(*) when no column assigned or :all #240