After 1 comment constantly receiving RATE_LIMIT_EXCEEDED #2675

Open
sgtram opened this issue Oct 28, 2019 · 12 comments

@sgtram (Contributor) commented Oct 28, 2019

Hi guys,
After one comment, I constantly receive RATE_LIMIT_EXCEEDED.
When I look into the Redis DB with redis-cli, I do see some entries there. Removing them one by one doesn't help; I still see RATE_LIMIT_EXCEEDED. When I run FLUSHDB, the error disappears, and after one comment it appears again.
Any ideas?

@sgtram added the bug label Oct 28, 2019
@kgardnr added the troubleshooting and help labels and removed the bug label Oct 28, 2019

@wyattjoh (Member) commented Oct 28, 2019

Hi @sgtram! We implemented rate limiting for commenting in v5.2.0 to help curb bot-style behaviour. It's currently set to 3 seconds (not configurable at the moment), so if you keep trying to send a comment every 2 seconds, you'll never be able to comment. The window resets after the 3 second timeout. Is this not the behaviour you're experiencing?
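
For context, a rate limit like this is typically implemented as a single Redis key with a TTL. Below is a minimal sketch of that pattern in TypeScript, assuming ioredis; the key pattern matches the one shared later in this thread, but canComment and COMMENT_WINDOW_SECONDS are illustrative names rather than Coral's actual implementation:

    // Minimal fixed-window rate limiter sketch, assuming ioredis.
    import Redis from "ioredis";

    const redis = new Redis();

    // Hypothetical constant matching the 3 second window described above.
    const COMMENT_WINDOW_SECONDS = 3;

    async function canComment(tenantID: string, userID: string): Promise<boolean> {
      const key = `${tenantID}:lastCommentTimestamp:${userID}`;

      // SET ... EX 3 NX atomically creates the key with a 3 second TTL only
      // if it does not already exist. A null reply means the key is already
      // present, i.e. the user commented within the last 3 seconds.
      const reply = await redis.set(key, Date.now(), "EX", COMMENT_WINDOW_SECONDS, "NX");
      return reply === "OK";
    }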

@sgtram (Contributor, Author) commented Oct 29, 2019

I'm receiving RATE_LIMIT_EXCEEDED constantly, whether after 3 seconds or after 3 hours.

@sgtram (Contributor, Author) commented Nov 5, 2019

Hi guys, any updates on this issue/bug?

@kgardnr (Member) commented Nov 7, 2019

Hey @sgtram, we're unable to reproduce the issue. Could you send us a link to your staging environment or post some additional information?

@sgtram (Contributor, Author) commented Nov 8, 2019

It's happening in only one of the staging environments; I can't reproduce it on the other one either. Maybe it's something with Redis?

@kgardnr (Member) commented Nov 8, 2019

Yeah, I'm sorry @sgtram, I really can't say. Is there someone else on your team you could double-check the SSO and setup with? Unfortunately, that's the best advice we can give at this point.

@wyattjoh self-assigned this Nov 9, 2019

@wyattjoh (Member) commented Nov 9, 2019

If you are able to reproduce the issue on an existing instance, @sgtram, can you run the following via redis-cli:

TTL {{ tenantID }}:lastCommentTimestamp:{{ userID }}
GET {{ tenantID }}:lastCommentTimestamp:{{ userID }}

where you replace {{ tenantID }} and {{ userID }} with the Tenant and User IDs for the affected Tenant/User. If it's an issue with Redis, we should see something interesting there.
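
For reference, the TTL reply distinguishes three cases: a positive number is the seconds remaining, -1 means the key exists but has no expiry set (a window that would never reset), and -2 means the key does not exist. Here is a small sketch of reading those replies, assuming ioredis; inspectRateLimitKey is a hypothetical helper using the same key pattern as above:

    // Hypothetical helper for reading the rate limit key, assuming ioredis.
    import Redis from "ioredis";

    const redis = new Redis();

    async function inspectRateLimitKey(tenantID: string, userID: string): Promise<void> {
      const key = `${tenantID}:lastCommentTimestamp:${userID}`;
      const ttl = await redis.ttl(key);

      if (ttl === -2) {
        // The key does not exist, so the limiter should allow a comment.
        console.log("no rate limit key present");
      } else if (ttl === -1) {
        // The key exists but has no expiry: the window would never reset,
        // which would match the behaviour reported in this issue.
        console.log("key present without a TTL:", await redis.get(key));
      } else {
        // Normal case: the key expires after the remaining seconds.
        console.log(`key expires in ${ttl}s:`, await redis.get(key));
      }
    }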

@sgtram (Contributor, Author) commented Nov 11, 2019

Hi @wyattjoh, I don't have any such entries in Redis.
Here is what I have:

  1. "{queue}:scraper:1167"
  2. "{queue}:scraper:1148"
  3. "ce3b8a94-3480-4683-ba4f-293223801739:commentCounts:moderationQueue:queues"
  4. "jtir:c4cf1b5d-f460-4325-98ca-87e674909141"
  5. "{queue}:scraper:1162"
  6. "{queue}:scraper:id"
  7. "{queue}:scraper:1166"
  8. "{queue}:scraper:1163"
  9. "{queue}:scraper:1165"
  10. "{queue}:scraper:1147"
  11. "{queue}:scraper:1164"
  12. "{queue}:scraper:1161"
  13. "{queue}:scraper:failed"
  14. "{queue}:scraper:1150"
  15. "ce3b8a94-3480-4683-ba4f-293223801739:commentCounts:status"
  16. "ce3b8a94-3480-4683-ba4f-293223801739:commentCounts:moderationQueue:total"
  17. "{queue}:scraper:1149"

@wyattjoh (Member) commented Nov 12, 2019

Currently the timeout is set at 3 seconds, which means those keys should only live in Redis for a total of 3 seconds. Are you able to confirm that the keys do not appear even within that 3 second window? We can't seem to replicate this without there being some issue with the keys in Redis.
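
One way to rule out Redis key expiry itself is a throwaway self-test: set a key with a 3 second TTL and confirm it disappears. A hypothetical sketch, assuming ioredis; the key name expiry:selftest is arbitrary and means nothing to Coral:

    // Hypothetical self-test for Redis key expiry, assuming ioredis.
    import Redis from "ioredis";

    const redis = new Redis();

    async function testExpiry(): Promise<void> {
      // Arbitrary throwaway key with a 3 second TTL.
      await redis.set("expiry:selftest", "1", "EX", 3);

      // Immediately after the SET the key should exist (reply is 1)...
      console.log("exists now:", await redis.exists("expiry:selftest"));

      // ...and 4 seconds later it should have expired (reply is 0).
      await new Promise((resolve) => setTimeout(resolve, 4000));
      console.log("exists after 4s:", await redis.exists("expiry:selftest"));
    }

    testExpiry().then(() => redis.quit());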

@sgtram (Contributor, Author) commented Nov 13, 2019

I can confirm that within 3 seconds the keys are still there. Could it be something with the TTL?

@wyattjoh (Member) commented Nov 13, 2019

What version of Redis are you working with? Unfortunately, I am unable to reproduce the mentioned error.

@sgtram (Contributor, Author) commented Nov 14, 2019

We're running Redis version 3.2.6.
