
[config] [dynamic-config] Map configurations inconsistency between nodes #10860

Closed
aarnaout opened this issue Jun 29, 2017 · 5 comments

Comments

@aarnaout commented Jun 29, 2017

This is similar to, but not exactly the same as, the following issues:
#851
#592

I am using the following code to pre-configure a few maps when starting a Hazelcast instance:

// Apply the "default" map configuration to every running instance
for (HazelcastInstance instance : Hazelcast.getAllHazelcastInstances()) {
    Config config = instance.getConfig();
    MapConfig mapConfig = config.getMapConfig("default");
    mapConfig.setMaxSizeConfig(new MaxSizeConfig(defaultMaxSize, MaxSizeConfig.MaxSizePolicy.PER_NODE))
            .setEvictionPolicy(EvictionPolicy.LRU)  // Least Recently Used
            .setTimeToLiveSeconds(defaultTtl);
}

When I run my two nodes, everything is good on both of them. However, when one node gets restarted, its configuration goes back to the defaults despite the code above! Mancenter still shows the correct configuration from the other node. Once that other node (the one with the correct configuration) is also restarted, all nodes end up with the wrong configuration, and that is reflected in Mancenter when clicking Map Config.

@jerrinot any idea? I know you were working on #10814 which could be related

@mmedenjak (Contributor) commented Jul 10, 2017

@aarnaout yes, you are right: currently there is no support for dynamically adding config. There is a new "dynamic config" feature coming up in 3.9; the configuration should be copied to newly joining nodes. Can you take a look and see if it fits your use case?

@mmedenjak added this to the 3.9 milestone Jul 10, 2017
@mmedenjak changed the title from "Map configurations inconsistency between nodes" to "[config] [dynamic-config] Map configurations inconsistency between nodes" Jul 11, 2017
@jerrinot (Contributor) commented Jul 13, 2017

@aarnaout: can you try this with 3.9 EA? You will need to slightly change your code; something like this should do:

Config config = instance.getConfig();
MapConfig mapConfig = new MapConfig("myMap");
mapConfig.setFoo().setBar();  // placeholders: chain whichever MapConfig setters you actually need
config.addMapConfig(mapConfig);

This should work on 3.9 EA. It's OK to call it on just a single cluster member; the cluster will automatically distribute it to all members, including the ones joining later.

Caveat: you can use this to submit new configurations, but you cannot change an existing configuration. At least for now.
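
For illustration, a minimal self-contained sketch of that propagation, assuming the 3.9 EA behavior described above holds (the map name "orders", the TTL value, and the class name are made up for this example):

import com.hazelcast.config.EvictionPolicy;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class DynamicMapConfigDemo {
    public static void main(String[] args) {
        // Two members forming a cluster
        HazelcastInstance member1 = Hazelcast.newHazelcastInstance();
        HazelcastInstance member2 = Hazelcast.newHazelcastInstance();

        // Submit a brand-new map config on one member only
        MapConfig ordersConfig = new MapConfig("orders")
                .setEvictionPolicy(EvictionPolicy.LRU)
                .setTimeToLiveSeconds(60);
        member1.getConfig().addMapConfig(ordersConfig);

        // The cluster distributes the config, so the other member should see the same values
        System.out.println(member2.getConfig().getMapConfig("orders").getTimeToLiveSeconds()); // expected: 60

        // A member joining later should pick it up as well
        HazelcastInstance member3 = Hazelcast.newHazelcastInstance();
        System.out.println(member3.getConfig().getMapConfig("orders").getTimeToLiveSeconds()); // expected: 60

        Hazelcast.shutdownAll();
    }
}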

@ahmetmircik (Member) commented Aug 1, 2017

Hi @aarnaout, any update on this?

@aarnaout (Author) commented Aug 1, 2017

So there were indeed two issues.

  1. Setting up the "default" map configuration the way I had it above was not working for the second node after a restart. I moved that into the Spring bean XML config, which took care of this part (a programmatic equivalent is sketched after this list):
    <hz:map name="default" eviction-policy="LRU" max-size="1000" time-to-live-seconds="10000"></hz:map>
    This works fine in both 3.8 and 3.9-EA.

  2. The other maps, for which I have specific eviction configurations (different from the default), were reverting to the default in this scenario: in a two-node cluster, node 2 is restarted at some point, and a while later node 1 stops. All the maps that were copied to node 2 after it restarted picked up the "default" configuration instead of their specific configs.
    This issue seems to be resolved in 3.9-EA (using the code provided by jerrinot), so all good now.
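
For reference, a programmatic sketch equivalent to that <hz:map> line, applied before the instance is created so it is in effect from startup (the max-size policy is assumed to be PER_NODE, the XML default):

Config config = new Config();
config.getMapConfig("default")
        .setEvictionPolicy(EvictionPolicy.LRU)
        .setMaxSizeConfig(new MaxSizeConfig(1000, MaxSizeConfig.MaxSizePolicy.PER_NODE))
        .setTimeToLiveSeconds(10000);
HazelcastInstance instance = Hazelcast.newHazelcastInstance(config);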

@vbekiaris (Contributor) commented Aug 2, 2017

Thanks for the heads up @aarnaout, closing this issue now.
