Updated configuration - fix #37 #41
Conversation
This is the configuration I have in mind regarding the problem exposed by merging this PR.
Well, IMHO we should have a solution to configure cache services and then simply connect them to EMs. This is what I partially started with LiipDoctrineCacheBundle.
OK, that I agree with, but meanwhile maybe this should be merged and a new issue opened on GH/Jira to track the effects of this PR and what should be done to fix them?
What should I do in order for this to get merged?
@stof Can you provide some feedback on this? I'm not sure who or where to ask about this PR, but I do believe it should either be merged or, if possible, explained how the PR should be improved in order to fix the issue it raises once merged. What do you think? Kind regards.
ping @stof Can you please take a look at this and tell me what's missing before it can be merged? Thanks!
@dlsniper this will need a rebase before merging. But I will not have time to review it until Wednesday, as I only have my internet connection this evening and I'm leaving then. We may need to move this PR to Symfony itself if symfony/symfony#4153 is merged, btw.
@stof thanks. I'll rebase this and study the PR in Symfony. If it gets merged then I'll try and come up with something for it.
…iver on different cache type.
@dlsniper: symfony/symfony#4153 has been merged, so you're good for a PR to Symfony ;)
Closing this PR as it should be sent to the bridge |
Updated configuration to support different settings for the same cache driver on different cache types.
This PR fixes #37 BUT, and this is the real catch, it will also decimate system resources in terms of connections to memcache(d) servers, as currently there's no way to tell whether the memcache(d) connection from one EM matches another existing one. This means that if you have multiple EMs, like me, and want to configure them to use memcache(d) for all cache types, it will spawn as many as three connections per EM (one for each cache type: metadata, query, and result) to the memcache(d) server, if my math is right.
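To illustrate the connection blow-up, here is a hypothetical configuration sketch. The `metadata_cache_driver`/`query_cache_driver`/`result_cache_driver` keys follow DoctrineBundle's ORM configuration; the entity manager names and host/port values are made up. With no connection sharing, each driver entry below would open its own memcached connection:

```yaml
# Hypothetical sketch: two EMs, each using memcached for all three
# cache types. Without connection reuse this spawns 3 x 2 = 6
# separate connections to the same memcached server.
doctrine:
    orm:
        entity_managers:
            default:
                metadata_cache_driver: { type: memcached, host: localhost, port: 11211 }
                query_cache_driver:    { type: memcached, host: localhost, port: 11211 }
                result_cache_driver:   { type: memcached, host: localhost, port: 11211 }
            second:
                metadata_cache_driver: { type: memcached, host: localhost, port: 11211 }
                query_cache_driver:    { type: memcached, host: localhost, port: 11211 }
                result_cache_driver:   { type: memcached, host: localhost, port: 11211 }
```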
I believe that this configuration should be done differently, but I'm not sure about the right direction for it. It should also support a pool of servers, as currently you can only use one server per cache type, and that's not good for a production/fail-over setup.
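A sketch of the direction hinted at above: define the cache service once, with a pool of servers, and reference it from every EM so the connection can be shared. Note this is a proposal, not an existing feature; the `doctrine_cache` section, its provider name, and the `servers` list are assumptions, while the `type: service` reference style is, to my knowledge, what DoctrineBundle already supports:

```yaml
# Proposed (hypothetical) shape: one shared cache provider with a
# server pool, referenced by id from each entity manager.
doctrine_cache:
    providers:
        shared_memcached:
            type: memcached
            servers:
                - { host: memcached1.example.com, port: 11211 }
                - { host: memcached2.example.com, port: 11211 }

doctrine:
    orm:
        entity_managers:
            default:
                metadata_cache_driver: { type: service, id: shared_memcached }
                query_cache_driver:    { type: service, id: shared_memcached }
                result_cache_driver:   { type: service, id: shared_memcached }
```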
So, while merging this will solve the current bug, it will also open up a new set of issues.
Please advise on how to proceed with this; it should be fixed, and I'm willing to make the required changes if I get input from the dev team.
Thank you.