
Ownerless client - client side #15859

Merged
merged 1 commit into hazelcast:master on Nov 22, 2019

Conversation

@sancar sancar commented Oct 24, 2019

Related protocol changes
hazelcast/hazelcast-client-protocol#254
==EDIT==
fixes #12841
fixes #15556

@sancar sancar added this to the 4.0 milestone Oct 24, 2019
@sancar sancar self-assigned this Oct 24, 2019
@sancar sancar force-pushed the enh/ownerless_clientSide/master branch 4 times, most recently from 2b47680 to 469a87a Compare October 31, 2019 09:36
@sancar sancar marked this pull request as ready for review October 31, 2019 11:55
@sancar sancar requested a review from a team as a code owner October 31, 2019 11:55
sancar commented Nov 1, 2019

run-lab-run

1 similar comment

@sancar sancar force-pushed the enh/ownerless_clientSide/master branch from a3febc2 to b8fd5e1 Compare November 1, 2019 14:46
@sancar sancar force-pushed the enh/ownerless_clientSide/master branch 8 times, most recently from c6fe274 to 955bd18 Compare November 12, 2019 08:01
sancar pushed a commit to sancar/hazelcast that referenced this pull request Nov 14, 2019
Invocations on cluster restart, such as registering listeners and
creating proxies, are all urgent, and urgent invocations are not
checked against the max invocation count.
With the changes in this PR, recreateCachesOnCluster is also no
longer checked.

The fix is backported from
hazelcast#15859

fixes hazelcast#15556
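The commit message above describes urgent invocations bypassing the client's max-invocation-count backpressure on cluster restart. A minimal sketch of that idea, using hypothetical names (`InvocationGuard` is not an actual Hazelcast class):

```java
// Hypothetical sketch, not Hazelcast's actual implementation: on cluster
// restart, urgent invocations (re-registering listeners, re-creating
// proxies) bypass the max-invocation-count check that throttles ordinary
// invocations.
class InvocationGuard {
    private final int maxInvocations;
    private int pending;

    InvocationGuard(int maxInvocations) {
        this.maxInvocations = maxInvocations;
    }

    /** Returns true if the invocation may start; urgent invocations always may. */
    synchronized boolean tryStart(boolean urgent) {
        if (!urgent && pending >= maxInvocations) {
            return false; // ordinary invocation rejected: limit reached
        }
        pending++;
        return true;
    }

    /** Called when an invocation completes, freeing a slot. */
    synchronized void complete() {
        pending--;
    }
}
```

With this shape, listener re-registration and proxy re-creation would call `tryStart(true)` and therefore never be rejected by backpressure during a restart.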
@Danny-Hazelcast

#15556

@Danny-Hazelcast

Is this PR a possible fix for #15556?

sancar pushed a commit to sancar/hazelcast that referenced this pull request Nov 14, 2019
sancar commented Nov 14, 2019

@Danny-Hazelcast Yes, this fixes #15556. I have updated the PR description accordingly.

sancar pushed a commit to sancar/hazelcast that referenced this pull request Nov 15, 2019
sancar pushed a commit to sancar/hazelcast that referenced this pull request Nov 15, 2019
sancar pushed a commit to sancar/hazelcast that referenced this pull request Nov 15, 2019
sancar pushed a commit that referenced this pull request Nov 18, 2019
@sancar sancar force-pushed the enh/ownerless_clientSide/master branch from 955bd18 to 56a389b Compare November 21, 2019 12:23
@sancar sancar force-pushed the enh/ownerless_clientSide/master branch 3 times, most recently from d978c48 to 4bea997 Compare November 22, 2019 12:14
@sancar sancar force-pushed the enh/ownerless_clientSide/master branch from 4bea997 to 1d108a1 Compare November 22, 2019 12:55
@sancar sancar merged commit 634dd99 into hazelcast:master Nov 22, 2019
@sancar sancar deleted the enh/ownerless_clientSide/master branch November 22, 2019 14:03
sancar pushed a commit to sancar/hazelcast that referenced this pull request Dec 4, 2019
Follow up to hazelcast#15859

The client failover behaviour broke after the ownerless client
changes. This PR restores the previous behaviour.

related to https://github.com/hazelcast/hazelcast-enterprise/issues/3385

- Reset the internal client state explicitly when trying the next
cluster. For tests it is too late to wait for a connection to be
established and its UUID to be checked. Tests expect to see an
empty member list when the initial cluster is gone and a connection
to the next cluster has not been established.

- We wait for the initial membership events after a cluster change is
detected, before firing CLIENT_CONNECTED or CLIENT_CHANGED_CLUSTER.

- Don't fire member-removed events and don't print the member list
when the initial membership has been reset.

- Only the first connection after a disconnection should check whether
the cluster changed. Otherwise there was a race and a second
connection could prevent the first one from firing the
connected/changed events.
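The last point, that only the first connection after a disconnect should perform the cluster-change check, is a classic first-wins race, which a compare-and-set guard resolves. A minimal sketch under assumed names (`ReconnectGuard` is illustrative, not a Hazelcast class):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch: only the single connection that wins the CAS after a
// disconnect performs the cluster-change check and fires the resulting
// CONNECTED / CHANGED_CLUSTER event; racing later connections are filtered.
class ReconnectGuard {
    private final AtomicBoolean clusterCheckPending = new AtomicBoolean(false);

    /** Called when the client loses its last connection to the cluster. */
    void onDisconnected() {
        clusterCheckPending.set(true);
    }

    /**
     * Called for every newly established connection. Returns true only for
     * the first connection after a disconnect, which must then check whether
     * the cluster changed or restarted.
     */
    boolean shouldCheckClusterChange() {
        return clusterCheckPending.compareAndSet(true, false);
    }
}
```

Because `compareAndSet` succeeds for exactly one caller, a second connection established concurrently can no longer swallow the event meant for the first.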
sancar pushed a commit to sancar/hazelcast that referenced this pull request Dec 4, 2019
sancar pushed a commit to sancar/hazelcast that referenced this pull request Dec 4, 2019
sancar pushed a commit to sancar/hazelcast that referenced this pull request Dec 5, 2019
sancar pushed a commit to sancar/hazelcast that referenced this pull request Dec 9, 2019
sancar pushed a commit to sancar/hazelcast that referenced this pull request Dec 10, 2019
sancar pushed a commit to sancar/hazelcast that referenced this pull request Dec 10, 2019
sancar pushed a commit that referenced this pull request Dec 11, 2019
sancar pushed a commit that referenced this pull request Dec 11, 2019
Follow up to #15859

related to https://github.com/hazelcast/hazelcast-enterprise/issues/3385

- Add the cluster listener to a single connection, as before.

- Reset the internal client state explicitly when trying the next
cluster. For tests it is too late to wait for a connection to be
established and its UUID to be checked. Tests expect to see an
empty member list when the initial cluster is gone and a connection
to the next cluster has not been established.

- We wait for the initial membership events after a cluster change is
detected, before firing CLIENT_CONNECTED or CLIENT_CHANGED_CLUSTER.

- Separate the reset behaviour for cluster restart and cluster change.
On a cluster change, we get an initial membership event.
On a cluster restart, we don't get an initial membership event;
instead, we get member added and removed events for each action.

- Only the first connection after a disconnection should check whether
the cluster restarted. Otherwise there was a race and a second
connection could prevent the first one from firing the
connected/changed events.

- Test duration fixes: use setClusterConnectTimeoutMillis instead of
setMaxBackoffMillis.

- Event handlers are moved to ClientConnection to avoid races when
tracking the last correlation id used to clean up removed event
handlers.

- ConnectionManager.getSize(MemberSelector) is removed since no usage
is left.

- ClientClusterViewListener handles a failed registration attempt by
trying to re-register on a random connection.
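The restart-vs-change distinction above can be sketched as a small member-view state machine. Names here are illustrative (`ClusterViewState` is not a Hazelcast class): a cluster change replaces the view wholesale via an initial membership snapshot, while a cluster restart is surfaced as per-member added/removed events derived by diffing the views.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch of the two reset behaviours: cluster change delivers a
// fresh initial-membership snapshot; cluster restart yields member
// added/removed events computed against the previous view.
class ClusterViewState {
    private Set<String> members = new LinkedHashSet<>();

    /** Cluster change: replace the view wholesale (one initial membership event). */
    List<String> onInitialMembership(Set<String> snapshot) {
        members = new LinkedHashSet<>(snapshot);
        return new ArrayList<>(members);
    }

    /** Cluster restart: diff the old and new views into added/removed events. */
    List<String> onRestartView(Set<String> newView) {
        List<String> events = new ArrayList<>();
        for (String m : newView) {
            if (!members.contains(m)) events.add("added:" + m);
        }
        for (String m : members) {
            if (!newView.contains(m)) events.add("removed:" + m);
        }
        members = new LinkedHashSet<>(newView);
        return events;
    }
}
```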
@mmedenjak added the Source: Internal label (PR or issue was opened by an employee) Apr 13, 2020