
Unique Digests dashboard not filtering the full set of results #529

Closed
wasabigeek opened this issue Jul 6, 2020 · 5 comments

@wasabigeek

Describe the bug
I've had some instances where filtering returns no results, but going to the direct URL shows the unique digest.
(Two screenshots: the dashboard filter returning no results, and the same digest visible at its direct URL.)

In the console:

irb(main):003:0> SidekiqUniqueJobs::Digests.all(pattern: 'uniquejobs:b8f6e7a7d359ce5de0c89b4948a64ae9', count: 1)
[
    [0] "uniquejobs:b8f6e7a7d359ce5de0c89b4948a64ae9"
]
irb(main):004:0> SidekiqUniqueJobs::Digests.page(pattern: 'uniquejobs:b8f6e7a7d359ce5de0c89b4948a64ae9')
[
    [0] 996,
    [1] "88",
    [2] []
]

# for reference on the set size
irb(main):019:0> SidekiqUniqueJobs::Digests.count
996

I think it's because we don't page through all the results in the SSCAN; only when the returned cursor is 0 can we assume a search through the set is complete:
https://github.com/mhenrixon/sidekiq-unique-jobs/blob/v6.0.21/lib/sidekiq_unique_jobs/digests.rb#L34
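
For illustration, a minimal sketch of what a complete SSCAN iteration looks like with redis-rb; the redis connection, the set key "unique:keys", and pattern are assumptions for the example, not the gem's actual internals:

# Sketch: an SSCAN match is only conclusive once the cursor wraps back to "0".
cursor = "0"
matches = []
loop do
  cursor, page = redis.sscan("unique:keys", cursor, match: pattern, count: 100)
  matches.concat(page)
  break if cursor == "0" # any non-zero cursor means more pages remain
end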

Additional context
sidekiq-unique-jobs (6.0.21)

@mhenrixon
Owner

There are two completely different mechanisms in play here, which is why you see this.

The keys listed here are indeed keys; the keys listed on the index page are not in fact a list or set, if I recall correctly.

It was always a bad idea to list all keys just to extract the digest because there are multiple keys per digest.

When you set a key to expire, it is removed from the list/set immediately, because I cannot expire just one entry in the collection, and that's the reason you are seeing this problem.
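
(A rough sketch of that expiry behavior, for illustration only; the set name "unique:keys" and the exact key layout are assumptions, not the gem's actual code:)

# Redis EXPIRE works on whole keys, not on individual set members,
# so the digest has to be removed from the tracking set eagerly:
redis.expire("uniquejobs:#{digest}:EXISTS", ttl) # the standalone key expires on its own
redis.srem("unique:keys", digest)                # the set entry is deleted immediately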

In previous versions these keys were never removed, which caused some high-intensity Redis servers to run out of memory.

It is not a bug per se, just an old "feature" that I haven't been able to improve on.

@wasabigeek
Author

wasabigeek commented Jul 11, 2020

@mhenrixon I didn't quite understand. Did you mean there is a set which maintains the root keys of the digests, and that they are removed from the set when their "child" keys (e.g. those with the GRABBED/EXISTS and other suffixes, which exist outside the set) are set to expire?

We had an issue in prod where, after we rebooted our worker, we weren't able to re-queue a job, and only after finding and deleting the digest in the dashboard via its direct URL (it wasn't searchable) was the job able to re-queue. (It wasn't this particular key though; I chanced upon this one in staging.)

the keys listed on the index page is not n fact a list or set if I recall correctly.

(Disclaimer: I may have misunderstood what you meant.) I would have thought the keys on the index come from a set, since the index page looks to be calling SidekiqUniqueJobs::Digests.page, which uses sscan_...: https://github.com/mhenrixon/sidekiq-unique-jobs/blob/v6.0.21/lib/sidekiq_unique_jobs/web.rb#L31

FWIW, the key in the example above was accessible via the page method; I just had to loop through the cursors to find it.
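
Roughly like this (a hypothetical loop, assuming page accepts a cursor: keyword and returns [total, next_cursor, digests], as its output above suggests):

cursor = 0
found = []
loop do
  _total, cursor, digests = SidekiqUniqueJobs::Digests.page(pattern: pattern, cursor: cursor)
  found.concat(digests)
  break if cursor.to_i.zero? # a zero cursor means the scan has wrapped around
end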

@mhenrixon
Owner

@wasabigeek maybe I misunderstood then. Do you mean that the bug is in the pagination? That the key was in the set but not displayed on the page?

mhenrixon reopened this Jul 11, 2020
@wasabigeek
Author

wasabigeek commented Jul 11, 2020

@mhenrixon yes, I think so.

My guess is that it's because of how SSCAN works:

SCAN family functions do not guarantee that the number of elements returned per call are in a given range. The commands are also allowed to return zero elements, and the client should not consider the iteration complete as long as the returned cursor is not zero.

https://redis.io/commands/scan

i.e. if the first iteration of sscan returns an empty list, but the cursor is not zero, we can’t assume that no match exists in the entire set. We would still need to page through the entire set before making that conclusion.

I’d be happy to try and contribute a fix if this is the correct interpretation, but I'm not familiar enough with Redis to determine the ideal solution. Maybe attempt an sscan until X results are returned, then use the cursor to start the next search? It seems like that will have some edge cases, but it might be a good enough solution without having to change the underlying data structures. Alternatively, we could use sscan_each as in the .all call; that seems to do the paging under the hood (see the sketch below).
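
A minimal sketch of the sscan_each approach with redis-rb (the set key "unique:keys" is an assumption for illustration):

matches = []
redis.sscan_each("unique:keys", match: pattern, count: 100) do |digest|
  matches << digest # sscan_each loops over the cursor internally until it wraps to 0
end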

@mhenrixon
Owner

#571 v7.0.1 is highly recommended. Let me know if there is still a problem in v7 and I will get to it immediately.
