
integration/aof test unit fails on fedora 24 docker image #3810

Open
melissop opened this issue Feb 16, 2017 · 2 comments

Comments

@melissop

Hi,

I tried to run the test suite on a Fedora 24 Docker image, and the integration/aof suite consistently fails. To be more specific, all `assert_equal 1 [is_alive $srv]` statements in tests/integration/aof.tcl fail.
I tried to debug the start_server_aof function, but my Tcl knowledge is limited.

I also experimented with Ubuntu and CentOS 7 images, without problems.
Tests were conducted on commit 6712bce.

               The End

Execution time of different units:
2 seconds - integration/aof

!!! WARNING The following tests failed:

*** [err]: Unfinished MULTI: Server should start if load-truncated is yes in tests/integration/aof.tcl
Expected '0' to be equal to '1'
*** [err]: Short read: Server should start if load-truncated is yes in tests/integration/aof.tcl
Expected '0' to be equal to '1'
*** [err]: Short read + command: Server should start in tests/integration/aof.tcl
Expected '0' to be equal to '1'
*** [err]: Fixed AOF: Server should have been started in tests/integration/aof.tcl
Expected '0' to be equal to '1'
*** [err]: AOF+SPOP: Server should have been started in tests/integration/aof.tcl
Expected '0' to be equal to '1'
*** [err]: AOF+SPOP: Server should have been started in tests/integration/aof.tcl
Expected '0' to be equal to '1'
*** [err]: AOF+EXPIRE: Server should have been started in tests/integration/aof.tcl
Expected '0' to be equal to '1'
Cleanup: may take some time... OK

@stevelipinski
Contributor

stevelipinski commented May 8, 2020

Having the same problem on an RH8 Docker image when building 6.0.1. Did you get anywhere with this?
As best I can tell, it's related to the container environment (it happens on one K8s build cluster but not another). Maybe timing or security? Or the backing volume? Just tossing out ideas.

@stevelipinski
Contributor

stevelipinski commented May 10, 2020

Discovered that my issue was that there was no `ps` command in my Docker image: the `is_alive` proc uses `ps -p <pid>`.
My solution was to `yum install procps` beforehand.
In case that helps anyone...
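To make the failure mode concrete: if the image lacks a `ps` binary, any liveness check built on `ps -p <pid>` fails for every server, which matches the `Expected '0' to be equal to '1'` assertions above. A minimal sketch of a pre-flight check (the actual `is_alive` proc is Tcl inside the Redis test suite; this shell version is only illustrative, and the `procps` package name assumes an RH/Fedora-style image):

```shell
# Check whether `ps` exists before running the test suite; without it,
# a `ps -p <pid>`-based liveness check can never report a live server.
if command -v ps >/dev/null 2>&1; then
    echo "ps present"
else
    # On RH/Fedora-based images, `ps` is provided by the procps package.
    echo "ps missing: install procps (e.g. 'yum install -y procps')"
fi
```

In a Dockerfile for a build/test image, the equivalent fix is adding the procps install step before invoking the tests.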
