New error with v3.17.4 and how to do read-only #4464
After upgrading to 3.17.4 I have a new error in my logs that I had not seen before:

(error log omitted)

We use Google Cloud Memorystore for Redis with read replicas. For performance reasons we run two Kubernetes deployments: one writes to the primary endpoint, and the other reads from the read replica endpoint.

The error above comes from the deployment that is only supposed to read, never write.

It is starting to occur to me that I should perhaps configure these deployments differently with regard to Redisson. Right now both use the same single server config, just pointing at different endpoints, as sketched below.

Why am I only getting this error after upgrading to 3.17.4, and can you point me towards the proper way to configure this setup?

Thanks in advance!
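For reference, a single server Redisson configuration of the shape described above might look like the following minimal sketch (the host name is a placeholder; the actual configuration was not posted in the issue):

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class SingleServerSetup {
    public static void main(String[] args) {
        Config config = new Config();
        // Both deployments use this same shape; only the address differs:
        // the writer points at the primary, the reader at the read replica.
        config.useSingleServer()
              .setAddress("redis://primary-or-replica-host:6379"); // placeholder

        RedissonClient redisson = Redisson.create(config);
        redisson.shutdown();
    }
}
```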
Comments

Maintainer: Can you share logs?

Reporter: Definitely! I didn't know how much to collect, but here's a sample:

(log sample omitted)
Maintainer: Is it a Redis cluster?

Reporter: No, this is running in Google Cloud Memorystore, which does not support clustering.
Reporter: So it seems that a client connected to the read replica endpoint is trying to evict cache entries. Given that I'm using a single server Redisson config, this could be intentional (i.e. any Redisson client is expected to issue eviction commands), and I should probably be configuring Redisson differently to avoid it, but I'm still curious why the error only surfaces with 3.17.4.
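One likely source of such eviction traffic, offered here as an assumption rather than something confirmed in the thread, is a TTL-backed structure such as RMapCache: every Redisson client that uses one runs a background eviction task that issues write commands, which a read-only replica rejects. A minimal sketch:

```java
import java.util.concurrent.TimeUnit;

import org.redisson.Redisson;
import org.redisson.api.RMapCache;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class MapCacheEvictionExample {
    public static void main(String[] args) {
        Config config = new Config();
        config.useSingleServer()
              .setAddress("redis://replica-host:6379"); // placeholder

        RedissonClient redisson = Redisson.create(config);

        // RMapCache supports per-entry TTLs. Expired entries are cleaned up by a
        // client-side eviction task that issues write commands against the server --
        // which fails when this client is pointed at a read-only replica.
        RMapCache<String, String> cache = redisson.getMapCache("sessionCache");
        cache.put("user:42", "payload", 10, TimeUnit.MINUTES);

        redisson.shutdown();
    }
}
```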
Reporter: Is the MasterSlaveServerConfiguration the appropriate one to use in this scenario?
Reporter: I have now rolled out MasterSlaveServerConfiguration with the newest version of Redisson and I am no longer seeing this error, so I'm closing this issue 👍
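For anyone landing here later: Redisson's master/slave mode is configured through Config.useMasterSlaveServers(). A minimal sketch assuming one primary and one read replica (host names are placeholders, not taken from this issue):

```java
import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;
import org.redisson.config.ReadMode;

public class MasterSlaveSetup {
    public static void main(String[] args) {
        Config config = new Config();
        config.useMasterSlaveServers()
              // Primary (write) endpoint -- placeholder host
              .setMasterAddress("redis://primary-host:6379")
              // Read replica endpoint -- placeholder host
              .addSlaveAddress("redis://replica-host:6379")
              // Route plain reads to the replica; write commands (including
              // cache eviction) are sent to the primary.
              .setReadMode(ReadMode.SLAVE);

        RedissonClient redisson = Redisson.create(config);
        redisson.shutdown();
    }
}
```

With ReadMode.SLAVE, reads are served from the replica while writes go to the primary, so no client ends up issuing write commands against the read-only endpoint.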