
Support auto refresh a list of cluster nodes #136

Merged
merged 2 commits into master from nicktorwald/gh-34-cluster-support on Apr 18, 2019

Conversation

nicktorwald

@nicktorwald nicktorwald commented Mar 18, 2019

Support auto refresh a list of cluster nodes

Refactor SocketChannelProvider implementations. There are now two of them,
SingleSocketChannelProviderImpl and RoundRobinSocketProviderImpl, used by
the simple and cluster clients respectively. To achieve this, a common
BaseSocketChannelProvider was extracted.

Add a service discovery implementation based on a Tarantool stored
procedure that is called to obtain a fresh list of cluster nodes.

Integrate service discovery with the current cluster client. The client
is now aware of potential node changes via a configurable background job
that periodically checks whether the node addresses have changed. If so,
the client refreshes the RoundRobinSocketProviderImpl and replaces the old
nodes with the new ones. Extra care is needed when the current node is
missing from the new list: we need to reconnect to another node as soon as
possible with a minimal delay between client requests. To achieve this, we
currently try to catch the moment when all in-progress requests have
completed and the reconnection process can finish.

Closes: #34
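
For illustration, here is a minimal sketch of how a cluster client with background discovery might be configured. The discovery-related option names and the constructor signature follow the README changes of this PR as I understand them; treat them as assumptions rather than the authoritative API.

```java
import org.tarantool.TarantoolClusterClient;
import org.tarantool.TarantoolClusterClientConfig;

public class ClusterClientExample {
    public static void main(String[] args) {
        TarantoolClusterClientConfig config = new TarantoolClusterClientConfig();
        config.username = "guest";
        config.password = "";

        // Discovery settings (assumed option names): a stored procedure that
        // returns a list of "host:port" strings, polled on a fixed delay.
        config.clusterDiscoveryEntryFunction = "get_cluster_nodes";
        config.clusterDiscoveryDelayMillis = 60_000;

        // Bootstrap with several predefined addresses in case one node is down.
        TarantoolClusterClient client = new TarantoolClusterClient(
                config, "node1.example.org:3301", "node2.example.org:3301");

        client.syncOps().ping();
        client.close();
    }
}
```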

@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch from 3b56867 to a6193e1 on March 20, 2019 14:37
@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch 2 times, most recently from 04d1ed0 to b7f43ac on March 21, 2019 19:02
@coveralls

coveralls commented Mar 21, 2019

Coverage Status

Coverage increased (+2.3%) to 71.485% when pulling 7d00b2a on nicktorwald/gh-34-cluster-support into 2415808 on master.

Member

@Totktonada Totktonada left a comment


Thank you!

  1. Please change the commit header (and the PR title as well) to reflect the actual changes you made. Like so:

Support auto refresh a list of cluster nodes

I also changed the #34 header so as not to confuse anybody.

  2. You forgot a hash symbol in 'Closes' at the end of the commit message.

  3. From the commit message:

Fix a regression in TarantoolClientImpl: a wrong comparison between the
response result code and the original request operation code. To do the
right thing, a TarantoolOp class was created to wrap the original
future (see TarantoolClientImpl.complete(packet, feature)).

Can you file an issue for this regression and fix it within a separate PR? Please also write a test case. AFAIU, the problem is that we check a response code against EXECUTE rather than against the request code? And they differ in case of an error? We need to clearly understand the issue and which releases are affected (I guess it starts from 9471340, so 1.9.0 and 1.9.1). Maybe it is worth fixing it and making a new bugfix release.

I think it is worth providing a minimal description of the feature in the README. It is also worth mentioning the behaviour when:

  • No predefined addresses are provided, but a discovery instance plus a function name is.
  • Both are provided: will a cluster client connect to the first instance from the predefined list and fetch a new list in the background (or only after a delay)?
  • Is a reconnection a trigger for updating the instance list, or only the initial connect and the periodic delay?
  • Other tricky cases you know about?

@Totktonada
Member

One Travis-CI job hangs for an unknown reason: https://travis-ci.org/tarantool/tarantool-java/jobs/509586308 . Are you able to reproduce it locally (maybe with many test runs, restricting the CPU count for the processes, or running background CPU-intensive tasks to change timings)?

@nicktorwald
Author

Can you file an issue for this regression and fix it within a separate PR? Please also write a test case. AFAIU, the problem is that we check a response code against EXECUTE rather than against the request code? And they differ in case of an error? We need to clearly understand the issue and which releases are affected (I guess it starts from 9471340, so 1.9.0 and 1.9.1). Maybe it is worth fixing it and making a new bugfix release.

Transferred to #141

@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch from b7f43ac to ed4a00f on March 22, 2019 21:10
@nicktorwald nicktorwald changed the title from "WIP: Add support for tarantool clusters" to "Support auto refresh a list of cluster nodes" on Mar 22, 2019
@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch from ed4a00f to 7382d08 on March 22, 2019 21:14
@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch from 7382d08 to 9f842e7 on March 24, 2019 15:40
@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch 7 times, most recently from b747e0a to b293cf4 on March 26, 2019 14:48
@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch 8 times, most recently from cfec8b8 to d1d844d on March 27, 2019 11:26
nicktorwald pushed a commit that referenced this pull request Mar 29, 2019
- Avoid a possible race between the reading, writing and reconnecting
threads when a reconnection process is started.
It could happen that a lagging thread (reader or writer) reset the state
to RECONNECT after the reconnecting thread had already started and set
the state to 0. As a result, all subsequent reconnection attempts would
never happen. Now the reconnect thread holds the state as long as it is
required.

- Avoid another possible race between the reading and writing threads
when they are started during the reconnection process.
It could happen that one of the threads crashed while starting and
another, slightly lagging thread still set its flag. The reconnecting
thread could then see a RECONNECT + R/W state instead of a pure
RECONNECT. Again, this broke all subsequent reconnection attempts. Now
the reading and writing threads take into account whether the RECONNECT
state is already set.

- Avoid using the LockSupport class to suspend and wake up a thread.
LockSupport is really an internal component for building high-level
blocking primitives, and using this class directly is not recommended.
It was replaced with a ReentrantLock Condition, which is based on
LockSupport but uses it properly inside.

Fixes: #142
Affects: #34, #136
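
As an illustration of the replacement described above, here is a minimal sketch (not the library's actual code) of flag-guarded signalling with ReentrantLock + Condition: unlike bare park()/unpark(), a signal sent before the wait starts is not lost, because the flag is checked under the lock.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical helper: readers/writers request a reconnect, the connector waits for it.
class ReconnectSignal {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition reconnectNeeded = lock.newCondition();
    private boolean reconnectRequested = false;

    // Called by a reader or writer thread that hit an I/O error.
    void requestReconnect() {
        lock.lock();
        try {
            reconnectRequested = true;
            reconnectNeeded.signalAll();
        } finally {
            lock.unlock();
        }
    }

    // Called by the connector thread; returns once a reconnect has been requested.
    void awaitReconnectRequest() throws InterruptedException {
        lock.lock();
        try {
            while (!reconnectRequested) {
                reconnectNeeded.await();
            }
            reconnectRequested = false;
        } finally {
            lock.unlock();
        }
    }
}
```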
nicktorwald added a commit that referenced this pull request Mar 29, 2019
nicktorwald added a commit that referenced this pull request Apr 1, 2019
- Avoid a possible race between the reading, writing and reconnecting
threads when a reconnection process is started.
It could happen that a lagging thread (reader or writer) reset the state
to RECONNECT after the reconnecting thread had already started and set
the state to 0. As a result, all subsequent reconnection attempts would
never happen. Now the reconnect thread holds the state as long as it is
required.

- Avoid another possible race between the reading and writing threads
when they are started during the reconnection process.
It could happen that one of the threads crashed while starting and
another, slightly lagging thread still set its flag. The reconnecting
thread could then see a RECONNECT + R/W state instead of a pure
RECONNECT. Again, this broke all subsequent reconnection attempts. Now
the reading and writing threads take into account whether the RECONNECT
state is already set.

- Replace LockSupport with a ReentrantLock Condition for suspending and
waking up a thread. Our cluster tests and the standalone demo app show
that LockSupport does not give the memory-visibility guarantee we relied
on: the reconnect thread depends on visibility between park/unpark
invocations, which sometimes does not hold. Also, according to the
Javadoc, LockSupport is really an internal component for building
high-level blocking primitives, and using this class directly is not
recommended. It was replaced with a ReentrantLock Condition, which is
based on LockSupport but uses it properly inside.

Fixes: #142
Affects: #34, #136
@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch 2 times, most recently from 3fde64f to c6c402e on April 2, 2019 13:54
@nicktorwald
Author

Now I realize that I missed the case of the initial connection of a cluster client. At this point it is worth having several addresses to bootstrap the cluster client, because one of the nodes may be down at that moment.
Also, there is not much need to split discovery nodes and DQL/DML nodes when both are lists.

Done. The service discovery task uses the same active connection that is established by the client socket provider.
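
For illustration, a hypothetical sketch of a discovery task that reuses the client's active connection rather than opening a dedicated one. The function name and the result layout (a flat list of "host:port" strings) are assumptions; the PR's actual discovery code may differ.

```java
import java.util.ArrayList;
import java.util.List;

import org.tarantool.TarantoolClient;

// Hypothetical periodic task: call the discovery stored procedure over the
// connection the client already holds and collect the returned addresses.
class ClusterDiscoveryTask implements Runnable {
    private final TarantoolClient client;
    private final String entryFunction;

    ClusterDiscoveryTask(TarantoolClient client, String entryFunction) {
        this.client = client;
        this.entryFunction = entryFunction;
    }

    @Override
    public void run() {
        List<?> result = client.syncOps().call(entryFunction);
        List<String> addresses = new ArrayList<>();
        for (Object item : result) {
            addresses.add(String.valueOf(item));
        }
        // ...compare with the current list and refresh the socket provider if it changed.
    }
}
```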

nicktorwald added a commit that referenced this pull request Apr 2, 2019
@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch from c6c402e to d355210 on April 3, 2019 03:36
nicktorwald added a commit that referenced this pull request Apr 3, 2019
nicktorwald added a commit that referenced this pull request Apr 3, 2019
nicktorwald added a commit that referenced this pull request Apr 4, 2019
nicktorwald added a commit that referenced this pull request Apr 4, 2019
Member

@Totktonada Totktonada left a comment


Thanks! Almost everything looks okay to me; just a few questions about behaviour, some corner cases, and one or two about the code.

@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch 2 times, most recently from f8b153c to 42227e7 on April 13, 2019 20:06
Member

@Totktonada Totktonada left a comment


I have no more questions and it looks good to me. I'll push the changes after PR #145.

@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch from 42227e7 to 0ad6766 on April 14, 2019 14:51
nicktorwald added a commit that referenced this pull request Apr 15, 2019
The state of a client is a set of the following flags: {READING, WRITING,
RECONNECT, CLOSED}. Let's call the state where no flags are set
UNINITIALIZED.

A reader thread sets READING, performs reading until an error or an
interruption occurs, drops READING and tries to trigger a reconnection
(if the state allows it, see below). A writer does much the same, but
with the WRITING flag. The key point here is that a reconnection is
triggered from a reader/writer thread and only when certain conditions
are met.

The prerequisite for setting RECONNECT and signalling (unparking) the
connector thread is that the client is in the UNINITIALIZED state.

There are several problems here:

- Say a reader stalls a bit after dropping READING, then a writer drops
WRITING and triggers a reconnection. Then the reader wakes up and sets
RECONNECT again.

- Calling unpark() N times on a live connector thread does not make it
skip the next N park() calls, so the problem above is not just an extra
reconnection: it can leave the connector thread stuck.

- Say a reader stalls just before setting READING. A writer is hit by
an I/O error and triggers a reconnection (sets RECONNECT, unparks the
connector). Then the reader wakes up and sets a READING + RECONNECT
state, which prevents the connector thread from proceeding (it expects a
pure RECONNECT). Even when the reader drops READING, it will not wake up
(unpark) the connector thread, because RECONNECT was already set (the
state is not UNINITIALIZED).

This commit introduces several changes that eliminate the problems
above:

- Use ReentrantLock + Condition instead of park() / unpark() so that
signals to reconnect are never missed, no matter whether the connector
is parked.

- Ensure that the reader and writer threads of one generation (created
on the same reconnection iteration) trigger a reconnection only once.

- Hold the RECONNECT state for most of the time the connector works
(while acquiring a new socket, connecting and reading the Tarantool
greeting) and prevent READING/WRITING from being set while RECONNECT is
set.

Fixes: #142
Affects: #34, #136
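
For illustration, a hypothetical sketch of the flag-based state described above (the real TarantoolClientImpl state handling differs in detail): RECONNECT can only be entered from the UNINITIALIZED state, so lagging readers and writers cannot re-trigger an in-progress reconnection, and READING/WRITING cannot be set while RECONNECT is held.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical bit-flag client state, illustrating the transition rules above.
class ClientState {
    static final int UNINITIALIZED = 0;
    static final int READING = 1;
    static final int WRITING = 2;
    static final int RECONNECT = 4;
    static final int CLOSED = 8;

    private final AtomicInteger state = new AtomicInteger(UNINITIALIZED);

    // A reader/writer may set its flag only while neither RECONNECT nor CLOSED is set.
    boolean acquire(int flag) {
        while (true) {
            int current = state.get();
            if ((current & (RECONNECT | CLOSED)) != 0) {
                return false;
            }
            if (state.compareAndSet(current, current | flag)) {
                return true;
            }
        }
    }

    void release(int flag) {
        state.getAndUpdate(current -> current & ~flag);
    }

    // Reconnection can only start from UNINITIALIZED, so it is triggered at most
    // once per generation and never on top of a live READING/WRITING flag.
    boolean trySetReconnect() {
        return state.compareAndSet(UNINITIALIZED, RECONNECT);
    }
}
```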
nicktorwald added a commit that referenced this pull request Apr 15, 2019
The state of a client is a set of the following flags: {READING, WRITING,
RECONNECT, CLOSED}. Let's call the state where no flags are set
UNINITIALIZED.

A reader thread sets READING, performs reading until an error or an
interruption occurs, drops READING and tries to trigger a reconnection
(if the state allows it, see below). A writer does much the same, but
with the WRITING flag. The key point here is that a reconnection is
triggered from a reader/writer thread and only when certain conditions
are met.

The prerequisite for setting RECONNECT and signalling (unparking) the
connector thread is that the client is in the UNINITIALIZED state.

There are several problems here:

- Say a reader stalls a bit after dropping READING, then a writer drops
WRITING and triggers a reconnection. Then the reader wakes up and sets
RECONNECT again.

- Calling unpark() N times on a live connector thread does not make it
skip the next N park() calls, so the problem above is not just an extra
reconnection: it can leave the connector thread stuck.

- Say a reader stalls just before setting READING. A writer is hit by
an I/O error and triggers a reconnection (sets RECONNECT, unparks the
connector). Then the reader wakes up and sets a READING + RECONNECT
state, which prevents the connector thread from proceeding (it expects a
pure RECONNECT). Even when the reader drops READING, it will not wake up
(unpark) the connector thread, because RECONNECT was already set (the
state is not UNINITIALIZED).

This commit introduces several changes that eliminate the problems
above:

- Use ReentrantLock + Condition instead of park() / unpark() so that
signals to reconnect are never missed, no matter whether the connector
is parked.

- Ensure that the reader and writer threads of one generation (created
on the same reconnection iteration) trigger a reconnection only once.

- Hold the RECONNECT state for most of the time the connector works
(while acquiring a new socket, connecting and reading the Tarantool
greeting) and prevent READING/WRITING from being set while RECONNECT is
set.

- Ensure a new reconnection iteration starts only after the old reader
and writer threads have exited (because we cannot receive a reconnection
signal until we send it).

Fixes: #142
Affects: #34, #136
nicktorwald added a commit that referenced this pull request Apr 18, 2019
nicktorwald added a commit that referenced this pull request Apr 18, 2019
Refactor SocketChannelProvider implementations. There are now two of them,
SingleSocketChannelProviderImpl and RoundRobinSocketProviderImpl, used by
the simple and cluster clients respectively. To achieve this, a common
BaseSocketChannelProvider was extracted.

Add a service discovery implementation based on a Tarantool stored
procedure that is called to obtain a fresh list of cluster nodes.

Integrate service discovery with the current cluster client. The client
is now aware of potential node changes via a configurable background job
that periodically checks whether the node addresses have changed. If so,
the client refreshes the RoundRobinSocketProviderImpl and replaces the old
nodes with the new ones. Extra care is needed when the current node is
missing from the new list: we need to reconnect to another node as soon as
possible with a minimal delay between client requests. To achieve this, we
currently try to catch the moment when all in-progress requests have
completed and the reconnection process can finish.

Closes: #34
See also: #142
@nicktorwald nicktorwald force-pushed the nicktorwald/gh-34-cluster-support branch from 0ad6766 to 7d00b2a on April 18, 2019 11:45
@nicktorwald nicktorwald merged commit 9e26d8a into master Apr 18, 2019
@Totktonada Totktonada deleted the nicktorwald/gh-34-cluster-support branch May 29, 2019 01:10