Managing Failovers in the cluster #131

Closed
hugrave opened this issue Sep 3, 2021 · 3 comments
Comments

hugrave commented Sep 3, 2021

Hello guys!

First of all, I want to congratulate you on the amazing job you've done with this application.

In our use case, we deploy the application on Kubernetes and use a rolling strategy for managing new versions. The main idea is that every time a new version is released, old instances are shut down and new instances are spun up.

How can failover be managed in this case? If one of the instances holds state (some entries in the cache), all of that data is lost because it is not transferred to the other nodes of the cluster. This is not a major problem, since the data is still kept in the source of truth, but it means the cache is reset every time a new version of the software is rolled out.

Do you believe this feature can actually be needed by the community? Do you already have it in the pipeline?

cabol (Owner) commented Sep 3, 2021

Hey 👋 !

First of all, thank you so much 👍

On the other hand, regarding the scenario you describe: using k8s with that deployment strategy is pretty common, and it is true that it should not be a big issue since you still have the SoR, but I agree the downside is that the cached data is lost every time you deploy.

Do you believe this feature can actually be needed by the community?

I think it may be useful because, as I mentioned before, it is a very common use case.

Do you already have it in the pipeline?

No, it is not on the roadmap. But this kind of feature could be implemented as a separate project, like an add-on.

There are multiple ways to handle this, but let me start with something simple (not sure if it works for you):

  1. Define/configure a multi-level cache, where L1 is what you currently have and L2 could be Redis (using NebulexRedisAdapter) deployed on separate nodes (independent cache servers), acting as a kind of online/on-the-fly backup. That way, when you re-deploy your app, only L1 is reset and you still have the data in L2. Of course, this means L2 will be hit to retrieve the data and re-cache it in L1, but that may still be better than going to the DB, for example. See the configuration sketch after this list.

  2. Another option is implementing a process/task/job that loads the data from your SoR into the cache. The tricky part is deciding which data or entries to load, or maybe all of them? Depending on the case it may be a heavy process, but it is another option that I think should be straightforward to implement. A rough sketch follows the configuration example below.
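
To make option 1 more concrete, here is a minimal sketch of a two-level cache, assuming Nebulex v2 and NebulexRedisAdapter; the module names, the `:my_app` OTP app, and the Redis host are placeholders for your own setup:

```elixir
# lib/my_app/cache.ex
defmodule MyApp.Cache do
  use Nebulex.Cache,
    otp_app: :my_app,
    adapter: Nebulex.Adapters.Multilevel

  # L1: in-memory cache local to each pod; wiped on every re-deploy.
  defmodule L1 do
    use Nebulex.Cache,
      otp_app: :my_app,
      adapter: Nebulex.Adapters.Local
  end

  # L2: Redis running on separate nodes; survives app re-deploys.
  defmodule L2 do
    use Nebulex.Cache,
      otp_app: :my_app,
      adapter: NebulexRedisAdapter
  end
end
```

```elixir
# config/config.exs
config :my_app, MyApp.Cache,
  model: :inclusive,
  levels: [
    {MyApp.Cache.L1, gc_interval: :timer.hours(12)},
    {MyApp.Cache.L2, conn_opts: [host: "redis.internal", port: 6379]}
  ]
```

With the `:inclusive` model, a hit in L2 is duplicated back into L1, which matches the "retrieve from L2 and cache it in L1" behaviour described above.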
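
And for option 2, a rough sketch of a warm-up task that runs on boot, assuming an Ecto-based SoR; `MyApp.Repo`, `MyApp.Product`, and the "hot entries" query are hypothetical and would need to be adapted to your data:

```elixir
defmodule MyApp.CacheWarmer do
  @moduledoc """
  One-off task run from the supervision tree: loads a bounded set of
  recently-updated rows from the source of record into the cache so that
  a fresh deployment does not start completely cold.
  """
  use Task, restart: :transient

  import Ecto.Query, only: [from: 2]

  def start_link(_opts), do: Task.start_link(__MODULE__, :run, [])

  def run do
    # Hypothetical "hot entries" query: the most recently updated products.
    from(p in MyApp.Product, order_by: [desc: p.updated_at], limit: 1_000)
    |> MyApp.Repo.all()
    |> Enum.each(fn p -> MyApp.Cache.put(p.id, p) end)
  end
end

# In the application supervisor, after the repo and the cache:
#   children = [MyApp.Repo, MyApp.Cache, MyApp.CacheWarmer]
```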

There are other options, but I think they are more complicated and, honestly, I'm not sure they are worth it. For example, maybe a snapshot manager using S3? But I don't think it is that simple, especially when you have multiple instances. It could be an interesting add-on for Nebulex, but again, I'm not sure it's worth it. Perhaps we can continue the brainstorming, but in the meantime, see if the two simpler options above work for you.

Stay tuned! Thanks!

hugrave (Author) commented Sep 6, 2021

Hi @cabol!

Thank you for your quick answer. I believe there are several possible routes to address this issue. I will think it over with my team in the coming days and come back with some other alternatives to keep the brainstorming going.

Talk to you soon!

cabol (Owner) commented Oct 15, 2022

I will close this issue for now since it has been open for a long time, but feel free to reopen it if you have any feedback or new thoughts. Thanks!

cabol closed this as completed Oct 15, 2022