
Add cluster connection support to redis (using jedis client) #122

Closed
yuriyz opened this issue Mar 19, 2019 · 3 comments


yuriyz commented Mar 19, 2019

Add cluster connection support to redis (using jedis client)

yuriyz added this to the 4.0 milestone Mar 19, 2019
yuriyz self-assigned this Mar 19, 2019

yuriyz commented Apr 10, 2019

[two screenshots attached showing the cluster connection working]

This works in 4.0 and in 3.1.6. If Jedis gets a redirection exception (e.g., the master goes down and a slave is promoted to master), Jedis automatically renews the cluster slots. All of this works out of the box (see the screenshots attached above).
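
For reference, a minimal sketch of the pattern involved (an illustration against the plain Jedis API, not the actual oxCore provider code; hostnames are the ones discussed below):

```java
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.JedisCluster;

public class JedisClusterExample {

    public static void main(String[] args) throws Exception {
        // Seed nodes: JedisCluster discovers the remaining cluster members
        // and the slot-to-node mapping from any reachable node.
        Set<HostAndPort> nodes = new HashSet<>();
        nodes.add(new HostAndPort("redis1.gluu.org", 6379));
        nodes.add(new HostAndPort("redis2.gluu.org", 6379));
        nodes.add(new HostAndPort("redis3.gluu.org", 6379));

        try (JedisCluster cluster = new JedisCluster(nodes)) {
            cluster.setex("cacheKey", 60, "cacheValue");
            // On a MOVED/ASK redirection (e.g. after a slave is promoted to
            // master), JedisCluster renews its slot cache and retries, which
            // is the out-of-the-box behavior described above.
            System.out.println(cluster.get("cacheKey"));
        }
    }
}
```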

cc @mogluu

yuriyz closed this as completed Apr 10, 2019

moabu commented Apr 13, 2019

If all three servers are configured, redis1.gluu.org:6379,redis2.gluu.org:6379,redis3.gluu.org:6379, we get the error below. However, we can reach all the servers via Consul, so the issue is not in Redis itself.

This is the oxCacheConfiguration used while testing the three Redis servers:

{"cacheProviderType":"REDIS","memcachedConfiguration":{"servers":"localhost:11211","maxOperationQueueLength":100000,"bufferSize":32768,"defaultPutExpiration":60,"connectionFactoryType":"DEFAULT"},"inMemoryConfiguration":{"defaultPutExpiration":60},"redisConfiguration":{"redisProviderType":"CLUSTER","servers":"redis1.gluu.org:6379,redis2.gluu.org:6379,redis3.gluu.org:6379","defaultPutExpiration":60,"password":null,"decryptedPassword":null,"useSSL":false,"sslTrustStoreFilePath":""},"nativePersistenceConfiguration":{"defaultPutExpiration":60,"defaultCleanupBatchSize":25}}
root@localhost:~# tail -f /opt/gluu/jetty/identity/logs/oxtrust.log
2019-04-13 11:16:50,054 INFO  [main] [org.gluu.oxtrust.ldap.service.TemplateService] (TemplateService.java:102) - file.resource.loader.path = /opt/gluu/jetty/identity/conf/shibboleth3/idp, /opt/gluu/jetty/identity/conf/shibboleth3/sp, /opt/gluu/jetty/identity/conf/ldif, /opt/gluu/jetty/identity/conf/shibboleth3/idp/MetadataFilter, /opt/gluu/jetty/identity/conf/shibboleth3/idp/ProfileConfiguration, /opt/gluu/jetty/identity/conf/template/conf, /opt/gluu/jetty/identity/conf/template/shibboleth3
2019-04-13 11:16:50,156 WARN  [main] [org.gluu.oxtrust.ldap.service.SubversionService] (SubversionService.java:191) - The service which commit configuration files into SVN was disabled
2019-04-13 11:16:51,486 INFO  [main] [org.gluu.oxtrust.ldap.service.ShibbolethInitializer] (ShibbolethInitializer.java:41) - IDP config generation is set to true
2019-04-13 11:16:51,524 INFO  [main] [org.gluu.oxtrust.ldap.service.ShibbolethInitializer] (ShibbolethInitializer.java:87) - ########## shibbolethVersion = v3
2019-04-13 11:16:51,524 INFO  [main] [org.gluu.oxtrust.ldap.service.Shibboleth3ConfService] (Shibboleth3ConfService.java:1325) - >>>>>>>>>> IN Shibboleth3ConfService.generateMetadataFiles()...
2019-04-13 11:16:51,568 INFO  [main] [org.gluu.oxtrust.ldap.service.Shibboleth3ConfService] (Shibboleth3ConfService.java:1391) - >>>>>>>>>> LEAVING Shibboleth3ConfService.generateMetadataFiles()...
2019-04-13 11:16:51,569 INFO  [main] [org.gluu.oxtrust.ldap.service.Shibboleth3ConfService] (Shibboleth3ConfService.java:180) - >>>>>>>>>> IN Shibboleth3ConfService.generateConfigurationFiles()...
2019-04-13 11:16:51,585 INFO  [main] [org.gluu.oxtrust.ldap.service.ApplicationFactory] (ApplicationFactory.java:58) - Cache configuration: CacheConfiguration{cacheProviderType=REDIS, memcachedConfiguration=MemcachedConfiguration{servers='localhost:11211', maxOperationQueueLength=100000, bufferSize=32768, defaultPutExpiration=60, connectionFactoryType=DEFAULT}, redisConfiguration=RedisConfiguration{servers='redis1.gluu.org:6379,redis2.gluu.org:6379,redis3.gluu.org:6379', defaultPutExpiration=60, redisProviderType=CLUSTER, useSSL=false, sslTrustStoreFilePath=}, inMemoryConfiguration=InMemoryConfiguration{defaultPutExpiration=60}, nativePersistenceConfiguration=NativePersistenceConfiguration [defaultPutExpiration=60, defaultCleanupBatchSize=25, baseDn=o=gluu]}
2019-04-13 11:16:51,617 ERROR [main] [org.xdi.service.cache.RedisClusterProvider] (RedisClusterProvider.java:44) - Failed to start RedisClusterProvider.
2019-04-13 11:16:51,617 ERROR [main] [org.xdi.service.cache.RedisProvider] (RedisProvider.java:50) - Failed to start RedisProvider.
^C
root@localhost:~# redis-cli -h redis1.gluu.org
redis1.gluu.org:6379> ping
PONG
redis1.gluu.org:6379>

What's more, if I set the provider to STANDALONE with the single Redis URL redis1.gluu.org:6379, we get a connection and the cache works through Redis.

oxCacheConfiguration:

{"cacheProviderType":"REDIS","memcachedConfiguration":{"servers":"localhost:11211","maxOperationQueueLength":100000,"bufferSize":32768,"defaultPutExpiration":60,"connectionFactoryType":"DEFAULT"},"inMemoryConfiguration":{"defaultPutExpiration":60},"redisConfiguration":{"redisProviderType":"STANDALONE","servers":"redis1.gluu.org:6379","defaultPutExpiration":60,"password":null,"decryptedPassword":null,"useSSL":false,"sslTrustStoreFilePath":""},"nativePersistenceConfiguration":{"defaultPutExpiration":60,"defaultCleanupBatchSize":25}}


moabu commented Apr 13, 2019

This issue can be closed. Tested in 3.1.6 and 4.0 with Redis Cluster. Sentinel was causing a conflict when Jedis was trying to connect to the Redis cluster. I will update the proposed UMA for the HA Redis cache configuration to accommodate the change.
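
For context, Sentinel and Cluster are two separate HA mechanisms in Redis, and Jedis uses a different entry point for each, which is why pointing one at the other fails at startup. A minimal sketch of the two side by side, assuming the conventional Sentinel port 26379 and the common default master name "mymaster" (both illustrative, not taken from this deployment):

```java
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.HostAndPort;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisCluster;
import redis.clients.jedis.JedisSentinelPool;

public class RedisHaModes {

    public static void main(String[] args) throws Exception {
        // Redis Cluster: seed with the data nodes themselves (port 6379).
        Set<HostAndPort> clusterNodes = new HashSet<>();
        clusterNodes.add(new HostAndPort("redis1.gluu.org", 6379));
        try (JedisCluster cluster = new JedisCluster(clusterNodes)) {
            cluster.setex("k", 60, "v");
        }

        // Redis Sentinel: seed with the Sentinel processes (conventionally on
        // port 26379) plus the monitored master's name; "mymaster" is an
        // assumed default, not taken from this deployment.
        Set<String> sentinels = new HashSet<>();
        sentinels.add("redis1.gluu.org:26379");
        try (JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
             Jedis jedis = pool.getResource()) {
            jedis.setex("k", 60, "v");
        }
    }
}
```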
