
Replication sessions can loop evaluating the same updates #2608

Open
389-ds-bot opened this issue Sep 13, 2020 · 1 comment

Cloned from Pagure issue: https://pagure.io/389-ds-base/issue/49549


Issue Description

Here are two scenarios:

We have M1 <--> M2 with full replication, and M1 --> C with fractional replication (and no M2 --> C agreement). This is a scenario where C is in a DMZ and only one supplier updates it.
Let the RUVs be:

M1
    {replica 1} csn_1_1000
    {replica 2} csn_2_1000

M2
    {replica 1} csn_1_1000
    {replica 2} csn_2_1000

C
    {replica 1} csn_1_1000
    {replica 2} csn_2_1

C is very late regarding all the updates generated on replica '2' because most of them are skipped (fractional replication).

On a replication session M1 -> C, the anchor CSN will be csn_2_1, and all the updates csn_2_1..csn_2_1000 will be evaluated (likely including all the updates csn_1_1..csn_1_1000).
At the end, M1 has nothing to send to C because everything is skipped, so it updates its keepAlive_1 entry.

The next session will start with

M1
    {replica 1} csn_1_1001
    {replica 2} csn_2_1000

M2
    {replica 1} csn_1_1001
    {replica 2} csn_2_1000

C
    {replica 1} csn_1_1000
    {replica 2} csn_2_1

So the next session will start again from csn_2_1, indefinitely, until an update on M2 is propagated to C.
The root cause of the issue is that the keepAlive update should have been done on M2.
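To make the loop concrete, here is a deliberately simplified model of how a supplier might pick the anchor CSN for a session: the oldest replica-ID entry where the consumer's RUV lags behind the supplier's. The function name and integer CSNs are illustrative assumptions, not the actual 389-ds-base implementation.

```python
def pick_anchor(supplier_ruv, consumer_ruv):
    """Return (replica_id, csn) of the oldest lagging RUV element, or None."""
    lagging = [(rid, consumer_ruv.get(rid, 0))
               for rid, max_csn in supplier_ruv.items()
               if consumer_ruv.get(rid, 0) < max_csn]
    if not lagging:
        return None
    # Start from the smallest consumer CSN so no update is missed.
    return min(lagging, key=lambda item: item[1])

# Scenario 1 RUVs (CSNs abstracted to integers):
m1 = {1: 1000, 2: 1000}
c = {1: 1000, 2: 1}
print(pick_anchor(m1, c))  # (2, 1): the session starts at csn_2_1

# Updating keepAlive_1 on M1 bumps {replica 1} on both sides but leaves
# {replica 2} untouched on C, so the next session picks the same anchor.
m1[1] += 1
c[1] = m1[1]
print(pick_anchor(m1, c))  # still (2, 1): the session loops
```

This illustrates why touching keepAlive_1 on M1 cannot break the loop: only an update with a replica-'2' CSN reaching C can advance C's {replica 2} element.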

There is a quite similar issue when a supplier is rarely updated. It is less severe because it resolves by itself.

The initial state is:

M1
    {replica 1} csn_1_1000
    {replica 2} csn_2_1

M2
    {replica 1} csn_1_1000
    {replica 2} csn_2_1

Then M2 is updated, generating csn_2_2. M2 will update M1 starting from csn_2_1, so it will evaluate csn_1_1 to csn_1_1000 (skipping them, as M1 already knows them); finally it will send csn_2_2.
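A rough way to see the cost of that second scenario is to count evaluated versus actually sent changelog entries. This is an illustrative sketch only; the changelog layout and helper are assumptions, not the real code.

```python
def session_cost(changelog, consumer_ruv):
    """Count changelog entries evaluated vs. actually sent to the consumer."""
    evaluated = sent = 0
    for rid, csn in changelog:
        evaluated += 1
        if csn > consumer_ruv.get(rid, 0):
            sent += 1
    return evaluated, sent

# M2's changelog since the anchor: csn_1_1..csn_1_1000 plus the new csn_2_2.
changelog = [(1, n) for n in range(1, 1001)] + [(2, 2)]
m1_ruv = {1: 1000, 2: 1}

print(session_cost(changelog, m1_ruv))  # (1001, 1): ~1000 skips for 1 send
```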

I think both situations could be addressed by implementing a periodic update of the KeepAlive entry. By default the periodicity would be infinite (no update). This would require a new replica configuration attribute.
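The proposal could be sketched as follows: each supplier periodically touches its own keepalive entry so its CSN in every RUV keeps advancing even without real traffic. The class, method, and config semantics below are hypothetical, written only to illustrate the suggested default of infinite periodicity.

```python
import time

class Replica:
    def __init__(self, replica_id, keepalive_interval=None):
        self.replica_id = replica_id
        # None mirrors the proposed default: infinite periodicity (no update).
        self.keepalive_interval = keepalive_interval
        self.last_keepalive = time.monotonic()

    def maybe_update_keepalive(self, generate_update):
        """Touch keepAlive_<rid> if the configured interval has elapsed."""
        if self.keepalive_interval is None:
            return False
        now = time.monotonic()
        if now - self.last_keepalive >= self.keepalive_interval:
            generate_update(f"keepAlive_{self.replica_id}")
            self.last_keepalive = now
            return True
        return False

# With interval=0 the keepalive fires on every check (for illustration):
r = Replica(2, keepalive_interval=0)
touched = []
r.maybe_update_keepalive(touched.append)
print(touched)  # ['keepAlive_2']
```

The key point is that the update is generated on the replica that owns the lagging CSN (here replica 2), so the consumer's RUV element for that replica advances and later sessions get a fresher anchor.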

Package Version and Platform

All

Steps to reproduce

see description

Actual results

The backlog of updates to evaluate is high, which slows down replication.

Expected results

Reduce the backlog of updates to evaluate

@389-ds-bot 389-ds-bot added this to the 1.4 backlog milestone Sep 13, 2020

Comment from tbordaz (@tbordaz) at 2018-01-25 11:30:41

Metadata Update from @tbordaz:

  • Custom field component adjusted to None
  • Custom field origin adjusted to None
  • Custom field reviewstatus adjusted to None
  • Custom field type adjusted to None
  • Custom field version adjusted to None
  • Issue set to the milestone: 1.4 backlog
