
Override reconnectOnError in tsed ioredis module #2668

Closed
ygpark80 opened this issue Apr 23, 2024 · 3 comments · Fixed by #2671

Comments


ygpark80 commented Apr 23, 2024

Is your feature request related to a problem? Please describe.

There is no way to override `reconnectOnError`, because the internal `reconnectOnError` always overwrites the one supplied in `redisOptions`.

Describe the solution you'd like

I've been experiencing issues with ioredis in our AWS Lambda-based Ts.ED deployment, specifically intermittent "connection is closed" errors. Many comments, including this one, suggest using `reconnectOnError`, but the Ts.ED ioredis module hard-codes this function.

Describe alternatives you've considered

Perhaps we could change the order to something like the following?

```ts
connection = new Redis({
  lazyConnect: true,
  reconnectOnError,
  ...redisOptions
} as RedisOptions);
```
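The proposed fix relies on spread semantics: when two spread/literal properties share a key, the later one wins. A minimal standalone sketch (not the actual @tsed/ioredis source; `internalDefault` and `userOverride` are illustrative names) showing that listing `...redisOptions` last lets a user-supplied `reconnectOnError` take precedence:

```typescript
type ReconnectOnError = (err: Error) => boolean;

interface Options {
  lazyConnect?: boolean;
  reconnectOnError?: ReconnectOnError;
}

// The module's hard-coded default (illustrative).
const internalDefault: ReconnectOnError = () => false;

// What a user would pass in via redisOptions (illustrative).
const userOverride: ReconnectOnError = (err) => err.message.includes("READONLY");

const redisOptions: Options = { reconnectOnError: userOverride };

// Proposed order: internal defaults first, user options spread last,
// so the user's reconnectOnError overwrites the internal one.
const merged: Options = {
  lazyConnect: true,
  reconnectOnError: internalDefault,
  ...redisOptions
};

console.log(merged.reconnectOnError === userOverride); // true
```

With the original order reversed (user options first, internal `reconnectOnError` last), `merged.reconnectOnError` would always be `internalDefault`, which is exactly the problem the issue describes.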

Additional context

No response

Acceptance criteria

No response

@ygpark80
Author

Oh, one more thing. I'm not sure if this makes sense, but I copied `registerConnectionProvider` into my codebase and tested `reconnectOnError`, and nothing changed. Examining the error, I found that because you force `lazyConnect: true` and establish the connection manually, an error thrown from `connect()` leaves the connection unusable. Since our Lambda reuses the instance, and therefore the broken connection, the error recurs every time ioredis is used. So I wrapped the `connect()` call in a try-catch, given that ioredis itself uses `connect().catch(noop)` when `lazyConnect` is not set. Since then, an ETIMEDOUT error still occurs once in a while, but the reconnection logic kicks in, and I haven't seen any 5xx errors.

i.e.

```ts
try {
  await connection.connect()
  logger.info("Connected to redis database...")
} catch (error) {
  Sentry.captureException(error)
}
```
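For completeness, a hedged sketch of the kind of `reconnectOnError` predicate being tested above. The marker strings are illustrative assumptions, not taken from the issue; per the ioredis documentation, the function receives the error from a failed command and ioredis reconnects when it returns a truthy value:

```typescript
// Illustrative predicate: treat certain error messages as transient
// and worth a reconnect attempt. The marker list is an assumption.
function reconnectOnError(err: Error): boolean {
  const transientMarkers = ["READONLY", "ETIMEDOUT", "Connection is closed"];
  return transientMarkers.some((marker) => err.message.includes(marker));
}

console.log(reconnectOnError(new Error("ETIMEDOUT")));          // true
console.log(reconnectOnError(new Error("WRONGTYPE Operation"))); // false
```

A predicate like this only helps if the ioredis options merge actually honors it, which is why the spread-order fix above matters.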


🎉 Are you happy?

If you appreciated the support, know that it is free and provided on personal time ;)

Any support, even a small one, makes a difference and keeps the answers coming!

github opencollective

@Romakita
Collaborator

🎉 This issue has been resolved in version 7.67.7 🎉

The release is available on:

Your semantic-release bot 📦🚀
