
Data loss in map when adding a key a second time after a new member joined the cluster #117

Closed
AlexDus opened this issue Apr 9, 2012 · 16 comments


AlexDus commented Apr 9, 2012

The distributed map seems to lose data when you put an already-mapped key into the map after another member has been started. I found this behaviour in version 2.0.2 and the 2.1 snapshot build; I didn't try any 1.x version. I am using two Windows 7 64-bit machines with JDK 1.7u3.

Steps to reproduce the problem:

  • Start one Hazelcast server
  • Fill a map with some simple data:
public static void main(String[] args) {
        int startKey = 0;
        int numberOfKeys = 10;
        Random rand = new Random();
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getGroupConfig().setName("dev").setPassword("dev-pass");
        clientConfig.addAddress(Globals.serverNames);
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

        IMap<Long, Long> myMap = client.getMap("mydata");
        for (int i = startKey; i < (startKey+numberOfKeys); i++) {
            Long key = new Long(i);
            myMap.lock(key);
            myMap.put(key, new Long(rand.nextInt()));
            myMap.unlock(key);
            System.out.println(new Long(i).toString());
        }
        myMap.flush();
        System.out.println(myMap.size());
        client.getLifecycleService().shutdown();

    }
  • List the keys of the map:
public static void main(String[] args) {

        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getGroupConfig().setName("dev").setPassword("dev-pass");
        clientConfig.addAddress(Globals.serverNames);
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        PartitionService partitionService = client.getPartitionService();

        Map<Long, Long> myMap = client.getMap("mydata");
        Set keys = myMap.keySet();

        Iterator<Long> iter = keys.iterator();
        int counter = 0;
        while (iter.hasNext()) {
            Long t  = iter.next();
            Partition partition = partitionService.getPartition(t);
            Member ownerMember = partition.getOwner();
            System.out.println(t + ";" + ownerMember.toString());
            counter++;
        }
        System.out.println("Number of keys: " + counter);
        client.getLifecycleService().shutdown();
    }

Output:

0;Member [192.168.0.4]:5701
1;Member [192.168.0.4]:5701
2;Member [192.168.0.4]:5701
3;Member [192.168.0.4]:5701
4;Member [192.168.0.4]:5701
5;Member [192.168.0.4]:5701
6;Member [192.168.0.4]:5701
7;Member [192.168.0.4]:5701
8;Member [192.168.0.4]:5701
9;Member [192.168.0.4]:5701
Number of keys: 10
  • Start a second Hazelcast server
  • List the keys again:
    Output:
0;Member [192.168.0.4]:5701
1;Member [192.168.0.4]:5701
2;Member [192.168.0.5]:5701
3;Member [192.168.0.4]:5701
4;Member [192.168.0.4]:5701
5;Member [192.168.0.5]:5701
6;Member [192.168.0.4]:5701
7;Member [192.168.0.4]:5701
8;Member [192.168.0.5]:5701
9;Member [192.168.0.5]:5701
Number of keys: 10

Some of the keys have changed ownership - just as expected.

  • Now run the code to fill the map again.
  • List the keys again:

Output:

0;Member [192.168.0.4]:5701
1;Member [192.168.0.4]:5701
3;Member [192.168.0.4]:5701
4;Member [192.168.0.4]:5701
6;Member [192.168.0.4]:5701
7;Member [192.168.0.4]:5701
Number of keys: 6

All the migrated keys are missing. With the log level set to finest, the second Hazelcast server reports that it is evicting 4 entries. If you run the code to fill the map once more, everything is fine again. I tried a lot of different map properties, but none of them solve this problem. Changes to the code (for example setting the TTL on put, locking the map, or using replace instead of put for the second mapping) didn't help either.
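
One way to see these evictions directly would be to register an entry listener on the map before starting the second server. The following is only a diagnostic sketch (assuming the Hazelcast 2.x client API used in the snippets above; the class name and the hard-coded addresses are illustrative, not part of the original test code):

import com.hazelcast.client.ClientConfig;
import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.EntryListener;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

public class EvictionWatcher {
    public static void main(String[] args) throws InterruptedException {
        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getGroupConfig().setName("dev").setPassword("dev-pass");
        clientConfig.addAddress("192.168.0.4:5701", "192.168.0.5:5701");
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);

        IMap<Long, Long> myMap = client.getMap("mydata");
        // Print every removal/eviction together with the member that reported it.
        myMap.addEntryListener(new EntryListener<Long, Long>() {
            public void entryAdded(EntryEvent<Long, Long> event) { }
            public void entryUpdated(EntryEvent<Long, Long> event) { }
            public void entryRemoved(EntryEvent<Long, Long> event) {
                System.out.println("REMOVED " + event.getKey() + " on " + event.getMember());
            }
            public void entryEvicted(EntryEvent<Long, Long> event) {
                System.out.println("EVICTED " + event.getKey() + " on " + event.getMember());
            }
        }, true);

        // Keep the client alive while the second server joins and the puts are repeated.
        Thread.sleep(Long.MAX_VALUE);
    }
}

If the missing keys really are evicted on the new member, this should show which keys disappear and on which node.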


mdogan commented Apr 9, 2012

I could not reproduce the issue. Are you using the default map configuration for the "mydata" map?



AlexDus commented Apr 9, 2012

I tried some different settings. Currently the map configuration looks like this:

<map name="mydata">
        <backup-count>1</backup-count>
        <time-to-live-seconds>0</time-to-live-seconds>
        <max-size>100000</max-size>
        <merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
        <eviction-policy>NONE</eviction-policy>
    </map>
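
For comparison, the same map could also be configured programmatically on the server side. This is only a sketch; the setter names mirror the XML elements and are based on the 2.x Config API, so treat them as assumptions (the max-size setting is omitted here):

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;

public class ProgrammaticConfig {
    public static void main(String[] args) {
        MapConfig mapConfig = new MapConfig();
        mapConfig.setName("mydata");
        mapConfig.setBackupCount(1);                  // <backup-count>1</backup-count>
        mapConfig.setTimeToLiveSeconds(0);            // <time-to-live-seconds>0</time-to-live-seconds>
        mapConfig.setEvictionPolicy("NONE");          // <eviction-policy>NONE</eviction-policy>
        mapConfig.setMergePolicy("hz.ADD_NEW_ENTRY"); // <merge-policy>hz.ADD_NEW_ENTRY</merge-policy>

        Config config = new Config();
        config.getGroupConfig().setName("dev").setPassword("dev-pass");
        config.addMapConfig(mapConfig);
        Hazelcast.newHazelcastInstance(config);
    }
}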


mdogan commented Apr 9, 2012

And still having the issue?



AlexDus commented Apr 9, 2012

Yes, and the settings are the same on both machines. I keep them in sync with my private git repository.


mdogan commented Apr 9, 2012

Could you send your whole configuration and those debug logs (mentioning eviction)?


AlexDus commented Apr 9, 2012

I just tried it with a default config file again; same result.

Here is the custom config I am testing with:

<?xml version="1.0" encoding="UTF-8"?>
<!--
To change this template, choose Tools | Templates
and open the template in the editor.
-->

<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config hazelcast-config-2.0.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">

    <queue name="tasks">
        <max-size-per-jvm>10000</max-size-per-jvm>
        <time-to-live-seconds>0</time-to-live-seconds>
    </queue>


    <group>
        <name>dev</name>
        <password>dev-pass</password>
    </group>
    <network>
        <port auto-increment="true">5701</port>
        <join>
            <multicast enabled="false">
                <multicast-group>224.2.2.3</multicast-group>
                <multicast-port>54327</multicast-port>
            </multicast>
            <tcp-ip enabled="true">
                <interface>192.168.0.4</interface>  
                <interface>192.168.0.5</interface> 
            </tcp-ip>
            <aws enabled="false">
                <access-key>my-access-key</access-key>
                <secret-key>my-secret-key</secret-key>
                <!--optional, default is us-east-1 -->
                <region>us-west-1</region>
                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
                <security-group-name>hazelcast-sg</security-group-name>
                <tag-key>type</tag-key>
                <tag-value>hz-nodes</tag-value>
            </aws>
        </join>
        <interfaces enabled="false">
            <interface>10.10.1.*</interface>
        </interfaces>
        <ssl enabled="false" />
        <socket-interceptor enabled="false" />
        <symmetric-encryption enabled="false">
            <!--
               encryption algorithm such as
               DES/ECB/PKCS5Padding,
               PBEWithMD5AndDES,
               AES/CBC/PKCS5Padding,
               Blowfish,
               DESede
            -->
            <algorithm>PBEWithMD5AndDES</algorithm>
            <!-- salt value to use when generating the secret key -->
            <salt>thesalt</salt>
            <!-- pass phrase to use when generating the secret key -->
            <password>thepass</password>
            <!-- iteration count to use when generating the secret key -->
            <iteration-count>19</iteration-count>
        </symmetric-encryption>
        <asymmetric-encryption enabled="false">
            <!-- encryption algorithm -->
            <algorithm>RSA/NONE/PKCS1PADDING</algorithm>
            <!-- private key password -->
            <keyPassword>thekeypass</keyPassword>
            <!-- private key alias -->
            <keyAlias>local</keyAlias>
            <!-- key store type -->
            <storeType>JKS</storeType>
            <!-- key store password -->
            <storePassword>thestorepass</storePassword>
            <!-- path to the key store -->
            <storePath>keystore</storePath>
        </asymmetric-encryption>
    </network>
    <partition-group enabled="false"/>
    <executor-service>
        <core-pool-size>16</core-pool-size>
        <max-pool-size>64</max-pool-size>
        <keep-alive-seconds>60</keep-alive-seconds>
    </executor-service>
    <queue name="default">
        <!--
            Maximum size of the queue. When a JVM's local queue size reaches the maximum,
            all put/offer operations will get blocked until the queue size
            of the JVM goes down below the maximum.
            Any integer between 0 and Integer.MAX_VALUE. 0 means
            Integer.MAX_VALUE. Default is 0.
        -->
        <max-size-per-jvm>0</max-size-per-jvm>
        <!--
            Name of the map configuration that will be used for the backing distributed
            map for this queue.
        -->
        <backing-map-ref>default</backing-map-ref>
    </queue>


    <map name="default">
        <!--
            Number of backups. If 1 is set as the backup-count for example,
            then all entries of the map will be copied to another JVM for
            fail-safety. 0 means no backup.
        -->
        <backup-count>1</backup-count>
        <!--
            Maximum number of seconds for each entry to stay in the map. Entries that are
            older than <time-to-live-seconds> and not updated for <time-to-live-seconds>
            will get automatically evicted from the map.
            Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
        -->
        <time-to-live-seconds>0</time-to-live-seconds>
        <!--
            Maximum number of seconds for each entry to stay idle in the map. Entries that are
            idle(not touched) for more than <max-idle-seconds> will get
            automatically evicted from the map. Entry is touched if get, put or containsKey is called.
            Any integer between 0 and Integer.MAX_VALUE. 0 means infinite. Default is 0.
        -->
        <max-idle-seconds>0</max-idle-seconds>
        <!--
            Valid values are:
            NONE (no eviction),
            LRU (Least Recently Used),
            LFU (Least Frequently Used).
            NONE is the default.
        -->
        <eviction-policy>NONE</eviction-policy>
        <!--
            Maximum size of the map. When max size is reached,
            map is evicted based on the policy defined.
            Any integer between 0 and Integer.MAX_VALUE. 0 means
            Integer.MAX_VALUE. Default is 0.
        -->
        <max-size policy="cluster_wide_map_size">0</max-size>
        <!--
            When max. size is reached, specified percentage of
            the map will be evicted. Any integer between 0 and 100.
            If 25 is set for example, 25% of the entries will
            get evicted.
        -->
        <eviction-percentage>25</eviction-percentage>
        <!--
            While recovering from split-brain (network partitioning),
            map entries in the small cluster will merge into the bigger cluster
            based on the policy set here. When an entry merge into the
            cluster, there might an existing entry with the same key already.
            Values of these entries might be different for that same key.
            Which value should be set for the key? Conflict is resolved by
            the policy set here. Default policy is hz.ADD_NEW_ENTRY

            There are built-in merge policies such as
            hz.NO_MERGE      ; no entry will merge.
            hz.ADD_NEW_ENTRY ; entry will be added if the merging entry's key
                               doesn't exist in the cluster.
            hz.HIGHER_HITS   ; entry with the higher hits wins.
            hz.LATEST_UPDATE ; entry with the latest update wins.
        -->
        <merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
    </map>
    <!-- Add your own semaphore configurations here:
        <semaphore name="default">
            <initial-permits>10</initial-permits>
            <semaphore-factory enabled="true">
                <class-name>com.acme.MySemaphoreFactory</class-name>
            </semaphore-factory>
        </semaphore>
    -->

    <!-- Add your own map merge policy implementations here:
        <merge-policies>
            <map-merge-policy name="MY_MERGE_POLICY">
                <class-name>com.acme.MyOwnMergePolicy</class-name>
            </map-merge-policy>
        </merge-policies>
    -->
    <map name="mydata">
        <backup-count>1</backup-count>
        <time-to-live-seconds>0</time-to-live-seconds>
        <max-size>100000</max-size>
        <merge-policy>hz.ADD_NEW_ENTRY</merge-policy>
        <eviction-policy>NONE</eviction-policy>
        <eviction-percentage>0</eviction-percentage>
    </map>

</hazelcast>
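
As a side note, one cheap way to rule out the two members silently picking up different hazelcast.xml files from their classpaths is to load the file from an explicit path with XmlConfigBuilder (the same class that appears in the logs below). A minimal sketch, with the file path as an assumption:

import com.hazelcast.config.Config;
import com.hazelcast.config.XmlConfigBuilder;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

import java.io.FileNotFoundException;

public class StartMember {
    public static void main(String[] args) throws FileNotFoundException {
        // Load the hazelcast.xml shown above from an explicit path instead of the classpath,
        // so both members are guaranteed to start from the same file.
        Config config = new XmlConfigBuilder("hazelcast.xml").build();
        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
        System.out.println("Started " + member.getCluster().getLocalMember());
    }
}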

Log file from the first server:


run:
Apr 09, 2012 11:38:12 AM com.hazelcast.config.XmlConfigBuilder
INFO: Looking for hazelcast.xml config file in classpath.
Apr 09, 2012 11:38:12 AM com.hazelcast.config.XmlConfigBuilder
INFO: Using configuration file /C:/Users/alex/Documents/NetBeansProjects/HazelTest/build/classes/hazelcast.xml in the classpath.
Apr 09, 2012 11:38:12 AM com.hazelcast.system
INFO: /192.168.0.4:5701 [dev] Hazelcast Community Edition 2.0.2 (20120321) starting at Address[192.168.0.4:5701]
Apr 09, 2012 11:38:12 AM com.hazelcast.system
INFO: /192.168.0.4:5701 [dev] Copyright (C) 2008-2012 Hazelcast.com
Apr 09, 2012 11:38:12 AM com.hazelcast.impl.LifecycleServiceImpl
INFO: /192.168.0.4:5701 [dev] Address[192.168.0.4:5701] is STARTING
Apr 09, 2012 11:38:12 AM com.hazelcast.impl.TcpIpJoiner
INFO: /192.168.0.4:5701 [dev] connecting to Address[192.168.0.5:5702]
Apr 09, 2012 11:38:12 AM com.hazelcast.impl.TcpIpJoiner
INFO: /192.168.0.4:5701 [dev] connecting to Address[192.168.0.5:5701]
Apr 09, 2012 11:38:12 AM com.hazelcast.impl.TcpIpJoiner
INFO: /192.168.0.4:5701 [dev] connecting to Address[192.168.0.4:5702]
Apr 09, 2012 11:38:12 AM com.hazelcast.impl.TcpIpJoiner
INFO: /192.168.0.4:5701 [dev] connecting to Address[192.168.0.5:5703]
Apr 09, 2012 11:38:12 AM com.hazelcast.impl.TcpIpJoiner
INFO: /192.168.0.4:5701 [dev] connecting to Address[192.168.0.4:5703]
Apr 09, 2012 11:38:17 AM com.hazelcast.impl.TcpIpJoiner
INFO: /192.168.0.4:5701 [dev] 


Members [1] {
    Member [192.168.0.4:5701] this
}

Apr 09, 2012 11:38:17 AM com.hazelcast.impl.management.ManagementCenterService
INFO: /192.168.0.4:5701 [dev] Hazelcast Management Center started at port 5801.
Apr 09, 2012 11:38:17 AM com.hazelcast.impl.LifecycleServiceImpl
INFO: /192.168.0.4:5701 [dev] Address[192.168.0.4:5701] is STARTED
/192.168.0.4:5701 [dev] Initializing cluster partition table first arrangement...
Apr 09, 2012 11:38:22 AM com.hazelcast.impl.PartitionManager
INFO: /192.168.0.4:5701 [dev] Initializing cluster partition table first arrangement...
/192.168.0.4:5701 [dev] Connection timed out: connect
/192.168.0.4:5701 [dev] An error occurred on connection to Address[192.168.0.5:5702] Cause => java.net.ConnectException {Connection timed out: connect}, Error-Count: 0
/192.168.0.4:5701 [dev] Connection timed out: connect
/192.168.0.4:5701 [dev] An error occurred on connection to Address[192.168.0.5:5703] Cause => java.net.ConnectException {Connection timed out: connect}, Error-Count: 0
/192.168.0.4:5701 [dev] Connection timed out: connect
/192.168.0.4:5701 [dev] An error occurred on connection to Address[192.168.0.5:5701] Cause => java.net.ConnectException {Connection timed out: connect}, Error-Count: 0
/192.168.0.4:5701 [dev] 5701 is accepting socket connection from /192.168.0.4:61463
/192.168.0.4:5701 [dev] 5701 accepted socket connection from /192.168.0.4:61463
Apr 09, 2012 11:38:46 AM com.hazelcast.nio.SocketAcceptor
INFO: /192.168.0.4:5701 [dev] 5701 is accepting socket connection from /192.168.0.4:61463
Apr 09, 2012 11:38:46 AM com.hazelcast.nio.ConnectionManager
INFO: /192.168.0.4:5701 [dev] 5701 accepted socket connection from /192.168.0.4:61463
/192.168.0.4:5701 [dev] received auth from Connection [/192.168.0.4:61463 -> null] live=true, client=true, type=CLIENT, successfully authenticated
Apr 09, 2012 11:38:46 AM com.hazelcast.impl.ClientHandlerService
INFO: /192.168.0.4:5701 [dev] received auth from Connection [/192.168.0.4:61463 -> null] live=true, client=true, type=CLIENT, successfully authenticated
/192.168.0.4:5701 [dev] Instance created ProxyKey {name='c:mydata', key=null}
Created MAP,c:mydata
/192.168.0.4:5701 [dev] Connection [Address[192.168.56.1:61463]] lost. Reason: java.io.IOException[An existing connection was forcibly closed by the remote host]
/192.168.0.4:5701 [dev] Connection is removed Address[192.168.56.1:61463]
Apr 09, 2012 11:38:46 AM com.hazelcast.nio.Connection
/192.168.0.4:5701 [dev] hz._hzInstance_1_dev.IO.thread-1 Closing socket to endpoint Address[192.168.56.1:61463], Cause:java.io.IOException: An existing connection was forcibly closed by the remote host
INFO: /192.168.0.4:5701 [dev] Connection [Address[192.168.56.1:61463]] lost. Reason: java.io.IOException[An existing connection was forcibly closed by the remote host]
Apr 09, 2012 11:38:46 AM com.hazelcast.nio.ReadHandler
WARNING: /192.168.0.4:5701 [dev] hz._hzInstance_1_dev.IO.thread-1 Closing socket to endpoint Address[192.168.56.1:61463], Cause:java.io.IOException: An existing connection was forcibly closed by the remote host
/192.168.0.4:5701 [dev] 5701 is accepting socket connection from /192.168.0.5:61611
Apr 09, 2012 11:39:10 AM com.hazelcast.nio.SocketAcceptor
INFO: /192.168.0.4:5701 [dev] 5701 is accepting socket connection from /192.168.0.5:61611
/192.168.0.4:5701 [dev] 5701 accepted socket connection from /192.168.0.5:61611
Apr 09, 2012 11:39:10 AM com.hazelcast.nio.ConnectionManager
INFO: /192.168.0.4:5701 [dev] 5701 accepted socket connection from /192.168.0.5:61611
/192.168.0.4:5701 [dev] Resetting connection monitor for endpoint Address[192.168.0.5:5701]
/192.168.0.4:5701 [dev] false Handling join from Address[192.168.0.5:5701] timeToStart: -1333964351197
/192.168.0.4:5701 [dev] false Handling join from Address[192.168.0.5:5701] timeToStart: 4985
/192.168.0.4:5701 [dev] false Handling join from Address[192.168.0.5:5701] timeToStart: -7
/192.168.0.4:5701 [dev] Starting Join.
/192.168.0.4:5701 [dev] Updating Members
/192.168.0.4:5701 [dev] ClusterManager adding Member [192.168.0.4:5701] this
/192.168.0.4:5701 [dev] ClusterManager adding Member [192.168.0.5:5701]
/192.168.0.4:5701 [dev] 

Members [2] {
    Member [192.168.0.4:5701] this
    Member [192.168.0.5:5701]
}

Apr 09, 2012 11:39:16 AM com.hazelcast.cluster.ClusterManager
INFO: /192.168.0.4:5701 [dev] 

Members [2] {
    Member [192.168.0.4:5701] this
    Member [192.168.0.5:5701]
}

/192.168.0.4:5701 [dev] Re-partitioning cluster data... Immediate-Tasks: 271, Scheduled-Tasks: 0
Apr 09, 2012 11:39:16 AM com.hazelcast.impl.PartitionManager
INFO: /192.168.0.4:5701 [dev] Re-partitioning cluster data... Immediate-Tasks: 271, Scheduled-Tasks: 0
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=0, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=0, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=1, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=1, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=2, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=2, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=3, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=3, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=4, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=4, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=5, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=5, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=6, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=6, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=7, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=7, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=8, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=8, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=9, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=9, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=10, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=10, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=11, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=11, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=12, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=12, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=13, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=13, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=14, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=14, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=15, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=15, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=16, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=16, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=17, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=17, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=18, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=18, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=19, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=19, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=20, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=20, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=21, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=21, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=22, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=22, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=23, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=23, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=24, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=24, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=25, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=25, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=26, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=26, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=27, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=27, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=28, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=28, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=29, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=29, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=30, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=30, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=31, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=31, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=32, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=32, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=33, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=33, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=34, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=34, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=35, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=35, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=36, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=36, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=37, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=37, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=38, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=38, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=39, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=39, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=40, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=40, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=41, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=41, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=42, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=42, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=43, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=43, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=44, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=44, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=45, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=45, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=46, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=46, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=47, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=47, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=48, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=48, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=49, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=49, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=50, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] false Handling join from Address[192.168.0.5:5701] timeToStart: 4017
/192.168.0.4:5701 [dev] Ignoring join request, member already exists.. => JoinInfo{request=true, memberCount=2  JoinRequest{nodeType=MEMBER, address=Address[192.168.0.5:5701], buildNumber='20120321', packetVersion='7', config='Config [groupConfig=GroupConfig [name=dev, password=********], port=5701, liteMember=false, reuseAddress=false, checkCompatibility=true, portAutoIncrement=true, properties={}, networkConfig=NetworkConfig [join=Join [multicastConfig=MulticastConfig [enabled=false, multicastGroup=224.2.2.3, multicastPort=54327, multicastTimeoutSeconds=2], tcpIpConfig=TcpIpConfig [enabled=true, connectionTimeoutSeconds=5, members=[192.168.0.4, 192.168.0.5], requiredMember=null, addresses=[]], awsConfig=AwsConfig{enabled=false, accessKey='null', secretKey='null', region='null', securityGroupName=null}], interfaces=com.hazelcast.config.Interfaces@6cf143b4, symmetricEncryptionConfig=SymmetricEncryptionConfig{enabled=false, salt='thesalt', password='thepassword', iterationCount=19, algorithm='PBEWithMD5AndDES'}, asymmetricEncryptionConfig=AsymmetricEncryptionConfig{enabled=false, algorithm='RSA/NONE/PKCS1PADDING', keyPassword='thekeypass', keyAlias='local', storeType='JKS', storePassword='thestorepass', storePath='keystore'}], mapConfigs={mydata=MapConfig{name='mydata', backupCount=1, mergePolicy=hz.ADD_NEW_ENTRY, evictionPercentage=0, timeToLiveSeconds=0, maxIdleSeconds=0, evictionDelaySeconds=3, maxSizeConfig=MaxSizeConfig{maxSizePolicy='cluster_wide_map_size', size=100000}, evictionPolicy='NONE', mapStoreConfig=null, nearCacheConfig=null, readBackupData=false, wanReplicationRef=null}, default=MapConfig{name='default', backupCount=1, mergePolicy=hz.ADD_NEW_ENTRY, evictionPercentage=25, timeToLiveSeconds=0, maxIdleSeconds=0, evictionDelaySeconds=3, maxSizeConfig=MaxSizeConfig{maxSizePolicy='cluster_wide_map_size', size=2147483647}, evictionPolicy='NONE', mapStoreConfig=null, nearCacheConfig=null, readBackupData=false, wanReplicationRef=null}}, mapMergePolicyConfigs={hz.ADD_NEW_ENTRY=MergePolicyConfig{name='hz.ADD_NEW_ENTRY', className='null'}, hz.HIGHER_HITS=MergePolicyConfig{name='hz.HIGHER_HITS', className='null'}, hz.LATEST_UPDATE=MergePolicyConfig{name='hz.LATEST_UPDATE', className='null'}, hz.PASS_THROUGH=MergePolicyConfig{name='hz.PASS_THROUGH', className='null'}}, executorConfig=com.hazelcast.config.ExecutorConfig@6fcf5ba, mapExecutors={default=com.hazelcast.config.ExecutorConfig@496889eb}, mapTopicConfigs={}, mapQueueConfigs={tasks=QueueConfig [name=tasks, backingMapRef=q:tasks, maxSizePerJVM=10000], default=QueueConfig [name=default, backingMapRef=default, maxSizePerJVM=0]}]'}}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=50, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=51, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=51, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=52, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=52, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=53, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=53, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=54, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=54, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=55, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=55, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=56, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=56, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=57, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=57, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=58, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=58, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=59, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=59, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=60, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=60, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=61, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=61, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=62, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=62, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Started Migration : MigrationRequestTask{partitionId=63, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
/192.168.0.4:5701 [dev] Finished Migration : MigrationRequestTask{partitionId=63, from=Address[192.168.0.4:5701], to=Address[192.168.0.5:5701], replicaIndex=0, migration=true, diffOnly=false, selfCopyReplicaIndex=1}
...

Log file from the second server:

run:
Apr 09, 2012 11:38:59 AM com.hazelcast.config.XmlConfigBuilder
Information: Looking for hazelcast.xml config file in classpath.
Apr 09, 2012 11:38:59 AM com.hazelcast.config.XmlConfigBuilder
Information: Using configuration file /C:/Users/alex/Documents/NetBeansProjects/HazelTest/build/classes/hazelcast.xml in the classpath.
Apr 09, 2012 11:39:00 AM com.hazelcast.system
Information: /192.168.0.5:5701 [dev] Hazelcast Community Edition 2.0.2 (20120321) starting at Address[192.168.0.5:5701]
Apr 09, 2012 11:39:00 AM com.hazelcast.system
Information: /192.168.0.5:5701 [dev] Copyright (C) 2008-2012 Hazelcast.com
Apr 09, 2012 11:39:00 AM com.hazelcast.impl.LifecycleServiceImpl
Information: /192.168.0.5:5701 [dev] Address[192.168.0.5:5701] is STARTING
Apr 09, 2012 11:39:00 AM com.hazelcast.impl.TcpIpJoiner
Information: /192.168.0.5:5701 [dev] connecting to Address[192.168.0.5:5702]
Apr 09, 2012 11:39:00 AM com.hazelcast.impl.TcpIpJoiner
Information: /192.168.0.5:5701 [dev] connecting to Address[192.168.0.4:5702]
Apr 09, 2012 11:39:00 AM com.hazelcast.impl.TcpIpJoiner
Information: /192.168.0.5:5701 [dev] connecting to Address[192.168.0.5:5703]
Apr 09, 2012 11:39:00 AM com.hazelcast.impl.TcpIpJoiner
Information: /192.168.0.5:5701 [dev] connecting to Address[192.168.0.4:5703]
Apr 09, 2012 11:39:00 AM com.hazelcast.impl.TcpIpJoiner
Information: /192.168.0.5:5701 [dev] connecting to Address[192.168.0.4:5701]
Apr 09, 2012 11:39:00 AM com.hazelcast.nio.ConnectionManager
Information: /192.168.0.5:5701 [dev] 61611 accepted socket connection from /192.168.0.4:5701
Apr 09, 2012 11:39:06 AM com.hazelcast.cluster.ClusterManager
Information: /192.168.0.5:5701 [dev] 

Members [2] {
    Member [192.168.0.4:5701]
    Member [192.168.0.5:5701] this
}

Apr 09, 2012 11:39:08 AM com.hazelcast.impl.management.ManagementCenterService
Information: /192.168.0.5:5701 [dev] Hazelcast Management Center started at port 5801.
Apr 09, 2012 11:39:08 AM com.hazelcast.impl.LifecycleServiceImpl
Information: /192.168.0.5:5701 [dev] Address[192.168.0.5:5701] is STARTED
MAP,c:mydata
Apr 09, 2012 11:39:20 AM com.hazelcast.nio.SocketAcceptor
/192.168.0.5:5701 [dev] 5701 is accepting socket connection from /192.168.0.4:61472
Information: /192.168.0.5:5701 [dev] 5701 is accepting socket connection from /192.168.0.4:61472
/192.168.0.5:5701 [dev] 5701 accepted socket connection from /192.168.0.4:61472
Apr 09, 2012 11:39:20 AM com.hazelcast.nio.ConnectionManager
Information: /192.168.0.5:5701 [dev] 5701 accepted socket connection from /192.168.0.4:61472
/192.168.0.5:5701 [dev] received auth from Connection [/192.168.0.4:61472 -> null] live=true, client=true, type=CLIENT, successfully authenticated
Apr 09, 2012 11:39:20 AM com.hazelcast.impl.ClientHandlerService
Information: /192.168.0.5:5701 [dev] received auth from Connection [/192.168.0.4:61472 -> null] live=true, client=true, type=CLIENT, successfully authenticated
Apr 09, 2012 11:39:21 AM com.hazelcast.nio.Connection
/192.168.0.5:5701 [dev] Connection [Address[192.168.0.5:61472]] lost. Reason: java.io.IOException[An existing connection was forcibly closed by the remote host]
/192.168.0.5:5701 [dev] Connection is removed Address[192.168.0.5:61472]
Information: /192.168.0.5:5701 [dev] Connection [Address[192.168.0.5:61472]] lost. Reason: java.io.IOException[An existing connection was forcibly closed by the remote host]
Apr 09, 2012 11:39:21 AM com.hazelcast.nio.ReadHandler
/192.168.0.5:5701 [dev] hz._hzInstance_1_dev.IO.thread-2 Closing socket to endpoint Address[192.168.0.5:61472], Cause:java.io.IOException: An existing connection was forcibly closed by the remote host
Warning: /192.168.0.5:5701 [dev] hz._hzInstance_1_dev.IO.thread-2 Closing socket to endpoint Address[192.168.0.5:61472], Cause:java.io.IOException: An existing connection was forcibly closed by the remote host
/192.168.0.5:5701 [dev] Connection timed out: connect
/192.168.0.5:5701 [dev] Connection timed out: connect
/192.168.0.5:5701 [dev] An error occurred on connection to Address[192.168.0.4:5702] Cause => java.net.ConnectException {Connection timed out: connect}, Error-Count: 0
/192.168.0.5:5701 [dev] An error occurred on connection to Address[192.168.0.4:5703] Cause => java.net.ConnectException {Connection timed out: connect}, Error-Count: 0
/192.168.0.5:5701 [dev] 4 evicting

@AlexDus
Author

AlexDus commented Apr 9, 2012

Here is something weird: I just tested the whole procedure with two Hazelcast servers on one physical machine and the problem is gone. If I try it again with two Hazelcast servers on two physical machines, the problem is still there.

@AlexDus
Author

AlexDus commented Apr 9, 2012

I added some code to the key listing. It now also shows the TTL (expiration time) and the valid state of each record:

public static void main(String[] args) {

        ClientConfig clientConfig = new ClientConfig();
        clientConfig.getGroupConfig().setName("dev").setPassword("dev-pass");
        clientConfig.addAddress(Globals.serverNames);
        HazelcastInstance client = HazelcastClient.newHazelcastClient(clientConfig);
        PartitionService partitionService = client.getPartitionService();

        IMap<Long, Long> myMap = client.getMap("mydata");
        Set<Long> keys = myMap.keySet();

        Iterator<Long> iter = keys.iterator();
        int counter = 0;
        while (iter.hasNext()) {
            Long t = iter.next();
            Partition partition = partitionService.getPartition(t);
            Member ownerMember = partition.getOwner();
            // getMapEntry exposes record metadata such as the expiration time
            // and whether the record is still considered valid.
            MapEntry rec = myMap.getMapEntry(t);
            System.out.println(t + "; " + ownerMember.toString() + "; " + rec.getExpirationTime() + "; " + rec.isValid());
            counter++;
        }

        System.out.println("Number of keys: " + counter);
        client.getLifecycleService().shutdown();
    }

Looks like something is messing with the TTLs:

Output before the second hazelcast server starts up:

0; Member [192.168.0.4:5701]; 9223372036854775807; true
1; Member [192.168.0.4:5701]; 9223372036854775807; true
2; Member [192.168.0.4:5701]; 9223372036854775807; true
3; Member [192.168.0.4:5701]; 9223372036854775807; true
4; Member [192.168.0.4:5701]; 9223372036854775807; true
5; Member [192.168.0.4:5701]; 9223372036854775807; true
6; Member [192.168.0.4:5701]; 9223372036854775807; true
7; Member [192.168.0.4:5701]; 9223372036854775807; true
8; Member [192.168.0.4:5701]; 9223372036854775807; true
9; Member [192.168.0.4:5701]; 9223372036854775807; true
Number of keys: 10

Output after the second hazelcast server started up and before putting the keys into the map for the second time:

0; Member [192.168.0.4:5701]; 9223372036854775807; true
1; Member [192.168.0.4:5701]; 9223372036854775807; true
2; Member [192.168.0.7:5701]; 9223372036854775399; true
3; Member [192.168.0.4:5701]; 9223372036854775807; true
4; Member [192.168.0.4:5701]; 9223372036854775807; true
5; Member [192.168.0.7:5701]; 9223372036854775393; true
6; Member [192.168.0.4:5701]; 9223372036854775807; true
7; Member [192.168.0.4:5701]; 9223372036854775807; true
8; Member [192.168.0.7:5701]; 9223372036854775390; true
9; Member [192.168.0.7:5701]; 9223372036854775386; true
Number of keys: 10

Taking a look at the isValid(long now) method in AbstractRecord, I think this might be part of the problem.
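For readers following along: an expiration check like isValid(now) usually boils down to comparing a stored expiration timestamp against the current clock. The sketch below is only an assumption about the general shape of such a check; the class and field names are made up and are not Hazelcast's actual AbstractRecord code.

// Rough sketch of a generic expiration check; names are illustrative only,
// not Hazelcast's actual AbstractRecord implementation.
class RecordSketch {

    // Long.MAX_VALUE plays the role of "never expires", matching the
    // 9223372036854775807 values in the output above.
    volatile long expirationTime = Long.MAX_VALUE;

    boolean isValid(long now) {
        // The record counts as live only while the clock has not passed
        // its expiration time.
        return expirationTime > now;
    }
}

With a check of this shape, any corruption of expirationTime during migration (for example an overflow into the negative range) would make the record look expired, and it could be dropped on the next operation.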

@AlexDus
Author

AlexDus commented Apr 9, 2012

Setting time-to-live-seconds to a large value, for example 10000, works around the problem.
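For anyone hitting the same problem before a fixed build is out: the workaround corresponds to the time-to-live-seconds setting in hazelcast.xml. A programmatic sketch of the same idea is below; the map name is the one from this report, and I am assuming the standard Hazelcast 2.x Config/MapConfig API - treat it as a sketch, not a verified fix.

import com.hazelcast.config.Config;
import com.hazelcast.config.MapConfig;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class ServerWithTtl {
    public static void main(String[] args) {
        Config config = new Config();

        // Give entries a finite TTL instead of the default "never expire";
        // this only works around the migration problem described above.
        MapConfig mapConfig = new MapConfig();
        mapConfig.setName("mydata");
        mapConfig.setTimeToLiveSeconds(10000);
        config.addMapConfig(mapConfig);

        HazelcastInstance member = Hazelcast.newHazelcastInstance(config);
    }
}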

@AlexDus
Author

AlexDus commented Apr 9, 2012

Here is another interesting thing. The system time of the second machine was about a second behind the first one. When I set it 5 seconds ahead of the first machine's system time, the problem is gone - almost: now it happens the other way around, so if the first machine joins the already running second machine, the problem occurs. The whole process seems to be very sensitive to the system times. My guess is that it has something to do with the writeData and readData functions in the DataRecordEntry class: in writeData you subtract the current time, and in readData you add the current time back.
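To illustrate that suspicion: if the sending member serializes the remaining time relative to its own clock and the receiving member rebuilds an absolute time from its clock, the reconstructed expiration time shifts by exactly the clock difference between the two machines, which matches the roughly 400 ms differences in the listing above. The sketch below shows that arithmetic; the class and variable names are made up, and this is not Hazelcast's actual DataRecordEntry code.

// Illustrative sketch of relative-time (de)serialization with skewed clocks;
// names are made up, this is not Hazelcast's DataRecordEntry implementation.
public class ClockSkewSketch {
    public static void main(String[] args) {
        long expirationTime = Long.MAX_VALUE;          // "never expires"
        long senderNow = System.currentTimeMillis();
        long receiverNow = senderNow - 408;            // receiver clock ~400 ms behind

        // writeData-style step: store the lifetime relative to the sender's clock.
        long serialized = expirationTime - senderNow;

        // readData-style step: rebuild an absolute time from the receiver's clock.
        long restored = serialized + receiverNow;

        // The restored value is shifted by exactly the clock skew (prints 408),
        // matching the changed expiration times in the listing above.
        System.out.println(expirationTime - restored);

        // If the receiving clock were ahead instead, serialized + receiverNow
        // would overflow past Long.MAX_VALUE and wrap to a negative value,
        // so a check like isValid(now) would treat the record as expired.
    }
}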

@mdogan
Contributor

mdogan commented Apr 10, 2012

Thanks for your findings, you did an excellent job. I could reproduce the issue.

@AlexDus
Author

AlexDus commented Apr 11, 2012

You're welcome. Sorry I didn't come up with a fix, but I'm too unfamiliar with your code to start messing with it, and I haven't set up a debug environment for Hazelcast (yet). This is a great framework - I'm thinking about using it in one of my next projects.

@mdogan
Contributor

mdogan commented Apr 11, 2012

I have uploaded two builds for you, 2.0.3 and 2.1-SNAPSHOT, including a fix for this issue, to the downloads section.
Could you please try them and give us your feedback?

@AlexDus
Author

AlexDus commented Apr 11, 2012

I'm going to test it this evening. I currently don't have the environment at hand.

@AlexDus
Author

AlexDus commented Apr 11, 2012

The fixes seem to work just fine. I noticed two things:

  • In the test I used two machines, one with JDK 1.7u2 and one with JDK 1.7u3. The machine with JDK 1.7u3 came up with an IPv6 address when I was using the 2.1-SNAPSHOT. After setting the Java parameter -Djava.net.preferIPv4Stack=true it came up with the IPv4 address (see the snippet after this comment). That's a behaviour I didn't notice when I was using the 2.1-SNAPSHOT a couple of days ago - but I guess it is intended, since I read somewhere that you're working on IPv6 support.
  • When the second machine joins the cluster, the TTLs of the moved records still change; they shift by the difference between the machines' system times. I think this is an intended effect if you're using machines in different timezones.

As I said before, putting the keys in a second time worked just fine and no entries were lost!
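Regarding the -Djava.net.preferIPv4Stack=true flag from the first point: that is a standard JVM system property and is normally passed on the java command line. A programmatic sketch of the same idea is below; setting the property in code only has an effect if it runs before any networking classes are loaded, so the command-line flag stays the safer option. The class name is made up for illustration.

import com.hazelcast.config.Config;
import com.hazelcast.core.Hazelcast;
import com.hazelcast.core.HazelcastInstance;

public class Ipv4PreferredMember {
    public static void main(String[] args) {
        // Same effect as -Djava.net.preferIPv4Stack=true on the command line;
        // must run before any networking code touches the IP stack.
        System.setProperty("java.net.preferIPv4Stack", "true");

        HazelcastInstance member = Hazelcast.newHazelcastInstance(new Config());
    }
}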

@mdogan
Contributor

mdogan commented Apr 12, 2012

Thanks for testing. IPv6 support is being introduced in v2.1 and Hazelcast can pick an IPv6 interface if it is available.

@mdogan mdogan closed this as completed in 4895af4 Apr 12, 2012