
Two tests failing in unstable #2715

Closed
mariano-perez-rodriguez opened this Issue Aug 5, 2015 · 101 comments

@mariano-perez-rodriguez
Contributor

mariano-perez-rodriguez commented Aug 5, 2015

Hello there!
Every day I re-clone Redis' repo and run make; make test as a matter of habit.
Today, the following errors appeared:

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
*** [err]: Connect multiple slaves at the same time (issue #141), diskless=yes in tests/integration/replication.tcl
Slaves not correctly synchronized

I ran the tests 3 times already, and this consistently happens. Just thought you guys may want to hear about it ;)

Regards!
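
(Editor's note, for context: the `[s -1 sync_partial_ok]` condition reads the `sync_partial_ok` counter from the master's `INFO stats` output, which counts accepted partial resynchronizations. A rough sketch of pulling that counter out of an INFO dump; the sample values below are illustrative, not taken from this failing run:)

```shell
# Sample INFO stats fragment (illustrative values, not from the failing run)
info='sync_full:1
sync_partial_ok:0
sync_partial_err:0'

# The test asserts this counter is > 0 on the master; extract it:
printf '%s\n' "$info" | awk -F: '/^sync_partial_ok:/ {print $2}'
```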

@antirez

Owner

antirez commented Aug 5, 2015

Hello, please try again, I was in the middle of making changes.

@mariano-perez-rodriguez

Contributor Author

mariano-perez-rodriguez commented Aug 5, 2015

Hmmm... did so, I'm still getting:

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)

Are you still tinkering with it? Would you like me to test again in a couple of hours?

@open2


open2 commented Sep 11, 2015

3.0.4 (Centos 6.7) same error

!!! WARNING The following tests failed:

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK
make[1]: *** [test] Error 1
make[1]: Leaving directory `/root/redis-3.0.4/src'
make: *** [test] Error 2

I do not use replication.
Redis works well.

@antirez

Owner

antirez commented Sep 11, 2015

Probably due to slow instances or other issues I can't reproduce easily. If you have access to a system where the test fails consistently we can try a few things.

@open2


open2 commented Sep 11, 2015

I mailed antirez the system access info.

@antirez

Owner

antirez commented Sep 11, 2015

Thanks @open2, I may not be able to do this in the next two days (sat/sun), but monday morning I'll be on it.

@AlixBarbosa


AlixBarbosa commented Sep 18, 2015

Hello
I can confirm the error:

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK

On a fresh CentOS Linux release 7.1.1503 (Core) using today's redis-stable.tar.gz

@koizo


koizo commented Sep 21, 2015

Hello
I can also confirm the error on CentOS Linux release 7.1.1503 (Core)
!!! WARNING The following tests failed:

*** [err]: Test replication partial resync: ok psync (diskless: no, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK

@JoeReelio


JoeReelio commented Sep 22, 2015

Hi,
I encountered this error on Ubuntu (kernel 3.13.0-55-generic) after running make && make test on redis-3.0.4.tar.gz today:

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)

Best!

@ukr15


ukr15 commented Sep 25, 2015

We tried to install redis 3.0.4 on different machines. On two bare-metal machines running SMP Debian 3.2.60-1+deb7u3 x86_64 GNU/Linux and SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64 GNU/Linux, respectively, test 35 of make test fails with

Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK

On a VM running SMP Debian 3.16.7-ckt11-1+deb8u3 (2015-08-04) x86_64 GNU/Linux and under MacOS 10.10.5, all tests work fine.

@DrEnter


DrEnter commented Sep 28, 2015

I can confirm seeing what appears to be a similar error on Mac OS X Yosemite (10.10.5) after building 3.0.4, but not every time I run "make test". Also, I saw it on test 36, not 35...

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)

@Dowwie


Dowwie commented Sep 30, 2015

Just ran into the same issue in debian 8

@scottstensland


scottstensland commented Oct 3, 2015

seeing same issue on both redis-3.0.4 as well as fresh github clone

ubuntu 14.04
cc --version
cc (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4

CVTJNII added a commit to CVTJNII/redis that referenced this issue Oct 7, 2015

@ifor


ifor commented Oct 15, 2015

Same.. built from redis-3.0.5 -- repeatedly and consistently fails - on 3 different machines. Got the tests to pass only once.
Ubuntu 14.04.3 LTS,
cc --version
cc (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4

@antirez

Owner

antirez commented Oct 15, 2015

Sorry, but I can't replicate it. @open2 provided me SSH access that did not work when I tried (a temporary network issue, I believe), and then I left for one week and forgot about the whole thing. So here we are again: how can I replicate this easily? Is there a simple way in EC2 to spin up an instance and see the tests failing?

@CVTJNII


CVTJNII commented Oct 16, 2015

I'm getting fairly consistent failures with the following Dockerfile:

FROM debian:jessie

RUN apt-get update && apt-get install -y wget gcc make tcl git ruby
RUN cd /tmp; git clone https://github.com/antirez/redis.git && cd redis && git checkout 3.0
RUN cd /tmp/redis && make
RUN cd /tmp/redis && make test

Can you see if that fails to build? I don't have an EC2 instance handy but if it works for you I can make one.

@antirez

Owner

antirez commented Oct 16, 2015

I think it's just a matter of VM speed, not much of the distribution used. Let's try with a very slow VM...

@antirez

Owner

antirez commented Oct 16, 2015

Can't reproduce with Ubuntu 14.04 (it's my main dev Linux box!), nor with a slow VM. Doing a 10.04 upgrade in order to see if it was introduced by something more recent. I think it's just a matter of timing issues on slow boxes. I tried a slow box, but apparently it was not slow enough, and got nothing. If you could give me instructions about what kind of EC2 instance to spin up, with what image, in order to trigger it, that's the simplest path...

@ukr15


ukr15 commented Oct 16, 2015

We could replicate the failure found with redis-3.0.4 for redis-3.0.5:
All tests went through on MacOS 10.11 (El Capitan) and on a virtualized Debian 3.2.68-1+deb7u2, but failed on a bare-metal machine with an Intel® Core™ i7-4770 quad-core Haswell and DDR3 RAM (3 tries) and Debian 3.16.7-ckt11-1 (2015-05-24)

@antirez

Owner

antirez commented Oct 16, 2015

Does it happen with:

cd /my/redis/source/dir
./runtest --clients 1

? Thanks.

@ukr15


ukr15 commented Oct 16, 2015

Yes,

*** [err]: Test replication partial resync: ok psync (diskless: no, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)

@antirez

Owner

antirez commented Oct 16, 2015

Ok, it fails with a single client, and your box is very fast, so it must be the exact environment then. I need to test with Debian 3.16.7-ckt11-1 specifically. I hope to find an image in EC2.

@ningappa


ningappa commented Oct 19, 2015

Ran into the same problem with the stable version, so I just installed this particular version http://download.redis.io/releases/redis-3.0.0.tar.gz and it's working fine now.

@enoch85


enoch85 commented Oct 25, 2015

Same issue here with a fresh 3.0.5 installation. Tried make test 10 times; 1 run passed, the others failed with

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)

I'm on a server with a 6-core 3.5 GHz CPU and 16 GB RAM. Ran the test remotely via SSH.

@fedot1325


fedot1325 commented Oct 26, 2015

I have the same error with an E5-1620v2 (3.7/3.9 GHz) and 64 GB of RAM

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)

@navytux


navytux commented Oct 27, 2015

Some more info on the bug:

  • it reproduces reliably, with ~90% probability, on 3 separate machines with 8 CPUs (i7-3770S CPU @ 3.10GHz):
$ ./runtest --clients 1 --single integration/replication-psync
Cleanup: may take some time... OK
Starting test server at port 11111
[ready]: 3092
Testing integration/replication-psync
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[err]: Test replication partial resync: ok psync in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no backlog
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok after delay
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: backlog expired
[1/1 done]: integration/replication-psync (42 seconds)

                   The End

Execution time of different units:
  42 seconds - integration/replication-psync

!!! WARNING The following tests failed:

*** [err]: Test replication partial resync: ok psync in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK

Two of the machines are Debian 7 with kernel

$ uname -a
Linux COMP-1926 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux

and one is openSUSE 12.1 with kernel

Linux linux-pk4c-1 3.1.10-1.10-default #1 SMP Mon May 28 14:19:15 UTC 2012 (94036a4) x86_64 x86_64 x86_64 GNU/Linux

  • The bug is reproducible with all Redis versions: I can reliably trigger it with 3.0.5, 3.0.2, 3.0.0, 2.8.21 (other versions not tried)
  • The bug reliably goes away if I pin the test processes to only one CPU:
$ schedtool -v -a 0 -e ./runtest --clients 1 --single integration/replication-psync
PID  8101: PRIO   0, POLICY N: SCHED_NORMAL  , NICE   0, AFFINITY 0x1
Cleanup: may take some time... OK
Starting test server at port 11111
[ready]: 8106
Testing integration/replication-psync
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no reconnection, just sync (diskless: no, reconnect: 0)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok psync (diskless: no, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no backlog (diskless: no, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok after delay (diskless: no, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: backlog expired (diskless: no, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no reconnection, just sync (diskless: yes, reconnect: 0)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no backlog (diskless: yes, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok after delay (diskless: yes, reconnect: 1)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: backlog expired (diskless: yes, reconnect: 1)
[1/1 done]: integration/replication-psync (96 seconds)

                   The End

Execution time of different units:
  96 seconds - integration/replication-psync

\o/ All tests passed without errors!
  • inside a KVM with Debian 8 installed:
    • the bug does not show itself if VM has 2 CPUs
    • the bug starts to show itself if VM has 4 CPUs
    • if I pin process to only one CPU, the bug does not show itself, even in VM with 4 CPUs

(the kernel inside VM is

$ uname -a
Linux gitlab-test 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1+deb8u5 (2015-10-09) x86_64 GNU/Linux

)

So IMHO it looks like some kind of race condition, which triggers when there is a non-trivial amount of parallelism on the host machine.

@ddo


ddo commented Mar 27, 2017

redis 3.2.8
ubuntu 16.04 LTS
64 bit
Intel® Core™ i7-4790 CPU @ 3.60GHz × 8

I got the same issue

!!! WARNING The following tests failed:

*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK
Makefile:225: recipe for target 'test' failed
make[1]: *** [test] Error 1
make[1]: Leaving directory '/tmp/redis-3.2.8/src'
Makefile:6: recipe for target 'test' failed
make: *** [test] Error 2

all tests passed when run with

taskset -c 0 make test
@sant123


sant123 commented Apr 4, 2017

redis 3.2.8
ubuntu 16.10
64 bit
Intel® Core™ i7-6700 CPU @ 3.40GHz × 8

I got the same 😞


@chilejiang1024


chilejiang1024 commented Apr 13, 2017

*** [err]: Test replication partial resync: ok psync (diskless: no, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
*** [err]: Test replication partial resync: ok psync (diskless: yes, reconnect: 1) in tests/integration/replication-psync.tcl
Expected condition '[s -1 sync_partial_ok] > 0' to be true ([s -1 sync_partial_ok] > 0)
Cleanup: may take some time... OK

Got this too...
I want to know whether it will affect a running Redis...

CPU : E7-4850v4 * 2
RAM : 128GB

@antirez

Owner

antirez commented Apr 13, 2017

Can anyone give me instructions on how to reproduce this systematically with EC2?

  1. Instance type.
  2. Exact Linux distro to install.

I can fix it easily given that info.

@antirez

Owner

antirez commented Apr 13, 2017

@chilejiang1024 no, this is just a false positive. Nothing will be affected.

@kerneljake


kerneljake commented Apr 13, 2017

@antirez see #2715 (comment) to reproduce the symptom.

@antirez

Owner

antirez commented Apr 13, 2017

@kerneljake The bug is time dependent; I already tested with different Ubuntu versions locally and it does not happen easily, so I also need the right instance combination to test on the cloud. Thanks.

@kerneljake


kerneljake commented Apr 13, 2017

@antirez I was able to reproduce it consistently in version 3.2.5 on c4.8xlarge running Ubuntu 16.04.1 LTS in EC2.

@0xmohit


0xmohit commented Apr 13, 2017

@antirez It happens consistently on AWS ec2 c3.8xlarge (Ubuntu 15.04) which is a 36 vCPU system.

@antirez

Owner

antirez commented Apr 14, 2017

@0xmohit good! Thanks.

antirez added a commit that referenced this issue Apr 14, 2017

Test: fix, hopefully, false PSYNC failure like in issue #2715.
And many other related GitHub issues... all reporting the same problem.
There was probably just not enough backlog in certain unlucky runs.
I'll ask people that can reproduce it whether they now see this as fixed
as well.
@antirez

Owner

antirez commented Apr 14, 2017

Hopefully fixed; please try again with the latest 3.2 branch commit, or cherry-pick commit 6a33952 into your branch.

antirez added a commit that referenced this issue Apr 14, 2017

Test: fix, hopefully, false PSYNC failure like in issue #2715.
@sant123


sant123 commented Apr 14, 2017

Tests working @antirez

Thank you!

@antirez

Owner

antirez commented Apr 14, 2017

Cool 💃 can't believe it's finally over with this bug. Since I was not able to reproduce it first-hand (even now!), I always just checked the logic, and it looked fine. Now, opening the file again after NOT being able to reproduce with the same setup, I noticed there were a few fewer zeroes in the backlog size of the first test. I searched this issue to see if it was always the first to fail and... bingo 👯
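
(Editor's note, to make the failure mode concrete; this is a hedged sketch with illustrative numbers, not the test's actual values: a partial resync is only possible while the master's replication backlog still holds every byte written since the slave disconnected. If the write load outruns a too-small backlog, PSYNC falls back to a full resync and sync_partial_ok stays at 0.)

```shell
# Illustrative numbers only: if the bytes written during the disconnect
# exceed the replication backlog, the slave's offset is gone and PSYNC must
# fall back to a full resync, so sync_partial_ok never increments.
backlog_bytes=$((100 * 1024))   # a too-small repl-backlog-size, e.g. 100kb
written_bytes=$((200 * 1024))   # write load generated while the slave was away

if [ "$written_bytes" -gt "$backlog_bytes" ]; then
  echo "full resync (sync_partial_ok stays 0)"
else
  echo "partial resync (sync_partial_ok increments)"
fi
```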

@antirez

Owner

antirez commented Apr 14, 2017

Note: waiting for a few more acknowledgements before closing, since otherwise I will be too unhappy having to close and reopen if it's actually not fixed :-)

@sant123


sant123 commented Apr 14, 2017

Sure is working @antirez 😄 ok ok!

antirez added a commit that referenced this issue Apr 18, 2017

Test: fix, hopefully, false PSYNC failure like in issue #2715.

@antirez antirez closed this Apr 18, 2017

@sant123


sant123 commented May 2, 2017

@antirez, will this change be reflected in version 3.2.8 at redis.io?

@antirez

Owner

antirez commented May 2, 2017

@sant123 not in 3.2.8 but in 3.2.9, the next version. 3.2.8 was already out before this fix was created.

@antirez

Owner

antirez commented May 2, 2017

@sant123 you can simply apply the commit to the 3.2.8 source code if you can't wait, but I suggest using the latest 3.2 commit directly, since there are other fixes. Anyway 3.2.9 will be out in a couple of days hopefully.

@sant123


sant123 commented May 2, 2017

Thank you @antirez 😄

JackieXie168 pushed a commit to JackieXie168/redis that referenced this issue May 16, 2017

Test: fix, hopefully, false PSYNC failure like in issue antirez#2715.

GitHubMota added a commit to GitHubMota/redis that referenced this issue Jul 25, 2017

Test: fix, hopefully, false PSYNC failure like in issue antirez#2715.

JackieXie168 pushed a commit to JackieXie168/redis that referenced this issue Aug 20, 2017

Test: fix, hopefully, false PSYNC failure like in issue antirez#2715.
@vvmspace


vvmspace commented Sep 1, 2017

!!! WARNING The following tests failed:

*** [err]: Slave should be able to synchronize with the master in tests/integration/replication-psync.tcl
Replication not started.

I am a newbie in Redis, so what is "replication" in this context? Is it necessary?

JackieXie168 pushed a commit to JackieXie168/redis that referenced this issue Jan 13, 2018

Test: fix, hopefully, false PSYNC failure like in issue antirez#2715.
@cryptozeny


cryptozeny commented Apr 13, 2019

I found a solution: taskset -c 0 make test is OK.

============================

Install Redis (source)

  • use version 3.0.6 and verify the sha256sum.
cd && \
sudo apt-get install -y build-essential tcl && \
wget http://download.redis.io/releases/redis-3.0.6.tar.gz && \
echo "6f1e1523194558480c3782d84d88c2decf08a8e4b930c56d4df038e565b75624  redis-3.0.6.tar.gz" | sha256sum -c - && \
tar xvzf redis-3.0.6.tar.gz && \
cd redis-3.0.6 && \
make -j$(nproc) && \
taskset -c 0 make test
  • install redis globally - check port [6379] - sudo required
sudo make install && \
cd ./utils && \
sudo ./install_server.sh  # accept all defaults
  • check installed version and build id
redis-cli ping && \
redis-server --version

Redis server v=3.0.6 sha=00000000:0 malloc=jemalloc-3.6.0 bits=64 build=c44c1162d0f94dca

  • run redis (needed only the first time)
sudo service redis_6379 start && \
sudo service redis_6379 status