CleanAllRUV test suite fails to restore master4 in the topology #2922
Comments
Comment from mreynolds (@mreynolds389) at 2018-07-21 18:21:51 Simon, try this patch out. It seems to work for me. The main part of the fix is the change to m4rid().
Comment from mreynolds (@mreynolds389) at 2018-07-21 18:21:52 Metadata Update from @mreynolds389:
Comment from vashirov (@vashirov) at 2018-07-24 14:16:54 @mreynolds389, I've run the cleanallruv test with your patch several times on a machine where it previously failed; it now passes all the time. Thanks!
Comment from spichugi (@droideck) at 2018-07-24 23:27:32 It still fails for me on the internal tool virtual machine.

```
[root@host-172-16-36-17 ds]# py.test -v dirsrvtests/tests/suites/replication/cleanallruv_test.py
rootdir: /mnt/tests/rhds/tests/upstream/ds, inifile:
dirsrvtests/tests/suites/replication/cleanallruv_test.py::test_clean PASSED [ 12%]
============== ERRORS ================
```

It has the same ERROR as before:
Comment from spichugi (@droideck) at 2018-07-25 15:37:15 Okay, the test does pass on a faster machine but still fails on a slower one. We can increase the timeouts then.
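For reference, "increase the timeouts" here means allowing a longer polling window when waiting for replicated data on slow machines. A minimal sketch of that idea, assuming a plain python-ldap connection (`wait_for_entry` is an illustrative helper, not the actual test code):

```python
# Hypothetical sketch: poll for a replicated entry with a configurable
# timeout instead of failing after a fixed short wait.
import time
import ldap

def wait_for_entry(conn, dn, timeout=120, interval=2):
    """Poll `conn` until `dn` exists or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            conn.search_s(dn, ldap.SCOPE_BASE)
            return
        except ldap.NO_SUCH_OBJECT:
            time.sleep(interval)
    raise AssertionError("entry %s did not replicate within %ss" % (dn, timeout))
```

Raising the `timeout` default on a slow machine trades a longer worst-case wait for fewer false failures.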
Comment from mreynolds (@mreynolds389) at 2018-07-25 18:52:13 How is it failing? What are the errors being reported?
Comment from spichugi (@droideck) at 2018-07-26 00:41:16 The same errors I reported in the issue description.
Comment from spichugi (@droideck) at 2018-07-27 16:16:24 Mark, your diff was applied to #2905
Comment from mreynolds (@mreynolds389) at 2019-01-10 18:03:16 @droideck - if this is fixed, can you close this ticket?
Comment from mreynolds (@mreynolds389) at 2019-01-10 18:03:32 Metadata Update from @mreynolds389:
Comment from spichugi (@droideck) at 2019-05-30 15:23:45 The issue is no longer present.
Comment from spichugi (@droideck) at 2019-05-30 15:23:50 Metadata Update from @droideck:
Cloned from Pagure issue: https://pagure.io/389-ds-base/issue/49863
Issue Description
Sometimes, after a test function finishes successfully, we try to restore master4
in the topology (the restore_master4 function). But then the tests start to fail in the
test_replication function (after master4 has been successfully restored) -
https://pagure.io/389-ds-base/blob/master/f/dirsrvtests/tests/suites/replication/cleanallruv_test.py#_160
This mostly happens after test_abort or test_abort_restart,
but I've also seen the failure happen after test_clean_restart.
It looks like a timing issue, but it also happens on fast machines.
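As a rough illustration of the failing step, a test_replication-style check boils down to writing a marker entry on master1 and waiting for it to appear on the other masters. A simplified sketch; `check_replication` and the DN below are illustrative, and `wait_for_entry` is the helper sketched earlier:

```python
# Simplified sketch of the post-restore replication check; all names are
# illustrative, not the exact suite code.
def check_replication(masters, timeout=60):
    m1 = masters[0]
    dn = "cn=repl-check,dc=example,dc=com"  # hypothetical marker entry
    m1.add_s(dn, [("objectClass", [b"top", b"nsContainer"]),
                  ("cn", [b"repl-check"])])
    for replica in masters[1:]:
        # On a slow machine, this wait is where the test times out.
        wait_for_entry(replica, dn, timeout=timeout)
    m1.delete_s(dn)
```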
The error logs end with these lines:
On master1:
On master4:
The replicaID from dse.ldif:
Package Version and Platform
389-ds-base built on master with https://pagure.io/389-ds-base/pull-request/49846
Steps to reproduce
The issue is not 100% reproducible.
It can be reproduced by running suites/replication/cleanallruv_test.py many times.
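One way to hunt for the intermittent failure is to drive the suite in a loop and stop at the first failing run. A small sketch using the command from this report (the iteration count is arbitrary):

```python
# Run the suite repeatedly and stop at the first non-zero exit status.
import subprocess
import sys

for run in range(1, 21):
    print("=== run %d ===" % run)
    rc = subprocess.call(
        ["py.test", "-v",
         "dirsrvtests/tests/suites/replication/cleanallruv_test.py"])
    if rc != 0:
        sys.exit("failed on run %d (exit status %d)" % (run, rc))
```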