Search result page count changes half-way, "ERROR: Stuck on page number 3466 of 3466" #355
OMG, are you serious? Fetching 69,000 caches - how long did it take to get this far? You must have reached sleep times of the order of minutes between queries? (There's a progressive sleep, 1 second per 250 connections on average, for a reason.)

What you're seeing seems to happen from time to time; the exact details aren't known and are hard to reproduce (it's sufficient that new caches get published while the query is running - and the longer the query takes, the more probable such an event becomes).

To get you out of this misery, though: please use the file manager of your choice and go into your cache directory. Note that the lifetime of those files is less than one day, so reproducing the error may become difficult. If you're willing to find out more, you may run geotoad with the vvv switch. It may already be too late, though, and your cached search results may have expired, making the issue disappear. It is not recommended in general to "touch" cached files, but it might make sense in this particular case (only the "nearest.aspx" ones, of course).

Having found fewer than 6,000 caches over a period of more than 8 years, I'm curious why you'd be interested in 70 k of them (is this one state? a whole country?)... This seems to be a new record; the previous champ was a guy from Utah who wanted to search all 13 k caches in his state! There's a reason why official PQs are limited to 1 k - please reconsider your approach. It's for your own good.

Cheers, S
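To give a feel for why such a huge query takes so long: the progressive sleep described above (one extra second per 250 connections, on average) could be sketched like this. This is a hypothetical illustration, not geotoad's actual code; the function name and the linear growth model are assumptions.

```ruby
# Hypothetical sketch (NOT geotoad's real implementation): a progressive
# delay that adds, on average, one extra second for every 250 requests
# already made, on top of a base delay.
def progressive_delay(request_count, base = 1.0)
  base + request_count / 250.0  # grows linearly with the request count
end

# One result page per request: by page 3466 the per-page delay is ~15 s.
progressive_delay(0)     # => 1.0
progressive_delay(3466)  # => 14.864
```

Under this model, the delays alone for a 3466-page query sum to hours, which is plenty of time for the result set to change underneath the crawler.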
Um, thinking about it, the root cause may not be an addition of caches but quite the opposite. Since the number of result pages is provided right at the start of the query, with page 1, if the last page becomes unavailable (because all caches now fit into fewer pages than what was communicated initially), the outcome may be what you saw.
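The pagination race just described can be sketched in a few lines. This is an illustration only: the page size of 20 results per page and the cache counts are assumptions, not values taken from the actual query.

```ruby
# Hypothetical sketch of the race: the total page count is read once from
# page 1 and never re-checked. If caches get archived mid-run, the last
# page(s) vanish, and the fetch loop keeps asking for a page that no
# longer exists ("Stuck on page number 3466 of 3466").
PER_PAGE = 20  # assumed results per page

def pages_for(cache_count)
  (cache_count.to_f / PER_PAGE).ceil
end

total_pages   = pages_for(69_311)  # read once, at the start => 3466
# ... meanwhile, enough caches are archived to drop below the boundary ...
current_pages = pages_for(69_300)  # what the site now serves => 3465

total_pages > current_pages  # => true: page 3466 can never be fetched
```

A fix along these lines would re-read the reported page count on every page and stop early when it shrinks below the page about to be requested.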
Thank you for your detailed answers! I will try adding the vvv switch if I run into that problem again. I run geotoad on a Linux box which happens to be online 24/7 for other purposes. The area covered by my query is about 2% the size of Utah and is centered in northern Germany. I am especially interested in "night caches" (which I try to filter by cache attribute). I am aware that this is a rather large query and that I am running it at my own risk. Although it surely isn't the best idea to run it on a regular basis. ;)
Good morning, 2% the size of Utah, but 5 times the number of caches, it takes Germany to achieve that ;)
Can you make use of that? E.g. estimate the publish rate (per day, per 4 hours, ...) in your search area(s), limit the query to a corresponding number of pages (with a safety margin, of course), and only process the caches you haven't seen before to perform the selection. This way, keep a list of GC IDs (or corresponding GUIDs) already filtered (by keywords, "torch" and "night" attributes, etc.) but kept updated, and only check that list for caches getting archived or modified.

This takes some heavy scripting, but I've been doing something similar to get alerted if a new cache gets published in my home zone (not immediately though - my last FTF happened years ago).

If you're a BM (basic member) only, you will get fooled by caches which are set to PMO (premium members only) later, and it's pretty tricky to find out whether a PMO cache has been archived. You've been warned.

Do you maintain a bookmark list with your results?

Cheers, S
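The "keep a list of GC IDs already filtered" idea above could look roughly like this. This is a sketch under assumptions: the file name, function names, and plain-text storage format are all made up for illustration, not part of geotoad.

```ruby
require 'set'

# Hypothetical sketch of the incremental approach: persist the set of
# GC IDs already examined, and on each run apply the expensive filters
# (keywords, "night"/"torch" attributes, ...) only to unseen caches.
SEEN_FILE = 'seen_gcids.txt'  # assumed storage location

def load_seen
  return Set.new unless File.exist?(SEEN_FILE)
  Set.new(File.readlines(SEEN_FILE, chomp: true))
end

# Returns only the GC IDs not seen before, and records them as seen.
def process_new(gcids, seen)
  fresh = gcids.reject { |gcid| seen.include?(gcid) }
  # ... run the attribute/keyword filters on `fresh` only ...
  seen.merge(fresh)
  File.write(SEEN_FILE, seen.to_a.join("\n"))
  fresh
end

# Usage: fresh = process_new(todays_results, load_seen)
```

Archived or PMO-converted caches would still need a separate pass over the stored list, as the comment above warns.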
This issue is believed to be fixed (or at least, sufficiently addressed) by release 3.28.0 |
Hi,
I get the following error:
Please let me know if I can provide additional information.
Cheers, Jan