[BUG] several tests fail while building redis 6.2.4 #9035

Open
ss2 opened this issue Jun 2, 2021 · 11 comments

ss2 commented Jun 2, 2021

The following tests fail while building redis 6.2.4:

  • integration/failover
  • integration/replication-4
  • integration/replication-psync
  • integration/replication[^-]

I'm pasting only part of the build log. The relevant part is as follows:

!!! WARNING The following tests failed:

*** [err]: Test replication partial resync: no reconnection, just sync (diskless: no, disabled, reconnect: 0) in tests/integration/replication-psync.tcl
Expected 'b7b5ae9e36e3a810959128fd50f16c332a21a696' to be equal to '621e9919c7d7eb998dad8056d2cc9a704dba9b98' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication with parallel clients writing in different DBs in tests/integration/replication-4.tcl
Expected 'e54a6d6d55780f7945233b753db3e7a951862529' to be equal to '1a8cde924fbeb3314004295a8f05c1399d944a84' (context: type eval line 26 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: ok psync (diskless: no, disabled, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '33fdbe26839b826ad34241d3222fd8103b2a154a' to be equal to '13d267b3582a6f1e0b8ae65c6efbfeed82092551' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: failover to a replica with force works in tests/integration/failover.tcl
Expected '1' to be equal to '2' (context: type eval line 43 cmd {assert_equal [expr [s 0 sync_partial_ok] - $initial_psyncs] 2} proc ::test)
*** [err]: Test replication partial resync: no backlog (diskless: no, disabled, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '04b2ca93a9835ba3df6d5ebe4f38e2f96bf85974' to be equal to '3c9f05c74b27ac14f52a461dc60b67cc21001e01' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: failover with timeout aborts if replica never catches up in tests/integration/failover.tcl
Expected '69e86ff1a6a2152ef49579b907c9e4331521000e' to be equal to 'e9dd21aa2a01057a15a0346c6ca4c4f0dbba25f4' (context: type proc line 3 cmd {assert_equal [$n2 debug digest] [$n3 debug digest]} proc ::assert_digests_match level 2)
*** [err]: Test replication partial resync: ok after delay (diskless: no, disabled, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '6a09614983cd0b8eadd0a9329262a7cb548bebfd' to be equal to '293a04b4e43ab995ed986a834a2cadbb57083ab1' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Connect multiple replicas at the same time (issue #141), master diskless=no, replica diskless=disabled in tests/integration/replication.tcl
Expected 59502f971a42b01cde4dad4962a0b2788824b204 eq e77e29da0e7690d61feab7f2866d3faf5604ee82 (context: type eval line 69 cmd {assert {$digest eq $digest1}} proc ::test)
*** [err]: Test replication partial resync: backlog expired (diskless: no, disabled, reconnect: 1) in tests/integration/replication-psync.tcl
Expected 'b492f48281a1e62312d80a29251287b81047a5d1' to be equal to 'c0753ee36935360664bdc08d1a044de069e5e3e7' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: no reconnection, just sync (diskless: no, swapdb, reconnect: 0) in tests/integration/replication-psync.tcl
Expected '4c1f3de4626a5f256fd783891deb6f6f66901aae' to be equal to 'eec22b810e7318f4fa09cbcc3d769333df4237b2' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: ok psync (diskless: no, swapdb, reconnect: 1) in tests/integration/replication-psync.tcl
Expected 'af32798e46f1203d664646e4bfc015cabede5d89' to be equal to 'ea14a2606c703717fbf3d2ff127affba9b8f5e6d' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: no backlog (diskless: no, swapdb, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '6f29880099ffc1bbb95916a28c4158a975f87739' to be equal to '666b05bf6560ba783975c3b9eaf750c827e9ab0f' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Connect multiple replicas at the same time (issue #141), master diskless=no, replica diskless=swapdb in tests/integration/replication.tcl
Expected 5490397edf819e2f23d4a74fe9d7f7bde7203f02 eq 3c0cd0fe3d711745aefc1418dd8196236fa76e70 (context: type eval line 69 cmd {assert {$digest eq $digest1}} proc ::test)
*** [err]: Test replication partial resync: ok after delay (diskless: no, swapdb, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '356a981646e4bdfefe66a0c938e24be8fbd2284e' to be equal to '61a196930bfed1e98738a5dd48be02f70bc6f10c' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: backlog expired (diskless: no, swapdb, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '0d2561866c4e4ab12e4ee01b76460a63b77a0afe' to be equal to 'b60ec12766139dbfc1a62bf09c8e9b6c02f279f8' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: no reconnection, just sync (diskless: yes, disabled, reconnect: 0) in tests/integration/replication-psync.tcl
Expected '53f598f15b8b4dbcd3598663d99de5f608ceb061' to be equal to '5d90f73ccff21afb58d063386f8a7d6b3ec4448f' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Connect multiple replicas at the same time (issue #141), master diskless=yes, replica diskless=disabled in tests/integration/replication.tcl
Expected 5c3a68736089987689d19184b61f11f9e3e4e02e eq db911991d306e98e8a0399323e725330bea138df (context: type eval line 69 cmd {assert {$digest eq $digest1}} proc ::test)
*** [err]: Test replication partial resync: ok psync (diskless: yes, disabled, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '1a5feefc76ffbaf33524369f6f2bd167052bb6ea' to be equal to 'b09a01ec07db9b44f969a6614d0b83b5c361aae7' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: no backlog (diskless: yes, disabled, reconnect: 1) in tests/integration/replication-psync.tcl
Expected 'fca03251ecd25ae64864b42a35f18294e885b446' to be equal to 'd2e663610b273f5582a7ef15a668b33ead382314' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: ok after delay (diskless: yes, disabled, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '7c41c86fbee680b20018f24b0a624155884862e1' to be equal to '6e27ad48a6e6defea62f0114316f9df588973aa4' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Connect multiple replicas at the same time (issue #141), master diskless=yes, replica diskless=swapdb in tests/integration/replication.tcl
Expected af5fbca24c01606edf276643a3c50a986ed0a463 eq b2775714fcd6ddb14d1daddb2ba2f7e9ae9c10c9 (context: type eval line 69 cmd {assert {$digest eq $digest1}} proc ::test)
*** [err]: Test replication partial resync: backlog expired (diskless: yes, disabled, reconnect: 1) in tests/integration/replication-psync.tcl
Expected 'f25d28cfe0462d8d3f32f84c78d18219d31e4592' to be equal to '6007fa808b861ab2e8751f4568b44e69301d290e' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: no reconnection, just sync (diskless: yes, swapdb, reconnect: 0) in tests/integration/replication-psync.tcl
Expected '268acd2ef443f235a7697f7987d4f1565242e601' to be equal to '1a95b8602767df34b0fd63f272e588766eecbb8e' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: ok psync (diskless: yes, swapdb, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '564ac0584436233bec9cf19ab7ea0a31ab11c507' to be equal to '8d8caed9bf1d341a50812247162ef90ce2336da1' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: no backlog (diskless: yes, swapdb, reconnect: 1) in tests/integration/replication-psync.tcl
Expected 'fe40a7b5206c84ef4e9a493ba8ca3de767b2fdb5' to be equal to '5a216137172711ece93e093bc3e1ecdb3e1cd37a' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: ok after delay (diskless: yes, swapdb, reconnect: 1) in tests/integration/replication-psync.tcl
Expected 'f416f9be9f7f164b17ccde9408850a1e8c463c85' to be equal to 'a3b406609f9f096a364c167387c443d24a2e7cc3' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
*** [err]: Test replication partial resync: backlog expired (diskless: yes, swapdb, reconnect: 1) in tests/integration/replication-psync.tcl
Expected '1f9ed0029a9708ec7bc1c576b684b583d9965215' to be equal to '177d70442823f607a5e143e53c4dd3e574b82f22' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)
Cleanup: may take some time... OK
make[1]: *** [Makefile:383: test] Error 1
make[1]: Leaving directory '/tmp/guix-build-redis-6.2.4.drv-0/redis-6.2.4/src'
make: *** [Makefile:6: check] Error 2

Test suite failed, dumping logs.
command "make" "check" "-j" "8" "CC=gcc" "MALLOC=libc" "LDFLAGS=-ldl" "PREFIX=/gnu/store/0qb1v832ncn1grhbrs7s80jdmnlwsixz-redis-6.2.4" failed with status 2

To reproduce this, the following package declaration is used in Guix:

(define-public redis
  (package
    (name "redis")
    (version "6.2.4")
    (source (origin
              (method url-fetch)
              (uri (string-append "http://download.redis.io/releases/redis-"
                                  version ".tar.gz"))
              (sha256
               (base32
                "0vp1d9mlfsppry3nsj9f7bmh9wjgsy3jggp24sac1hhgl43c8cms"))
              (modules '((guix build utils)))
              (snippet
               ;; Delete bundled jemalloc, as the package will use the libc one
               '(begin (delete-file-recursively "deps/jemalloc")
                       #t))))
    (build-system gnu-build-system)
    (native-inputs
     `(("procps" ,procps)               ; for tests
       ("tcl" ,tcl)))                   ; for tests
    (arguments
     '(#:phases
       (modify-phases %standard-phases
         (delete 'configure)
         (add-after 'unpack 'use-correct-tclsh
           (lambda* (#:key inputs #:allow-other-keys)
             (substitute* "runtest"
               (("^TCLSH=.*")
                (string-append "TCLSH="
                               (assoc-ref inputs "tcl")
                               "/bin/tclsh")))
             #t))
         (add-after 'unpack 'adjust-tests
           (lambda _
             ;; Disable failing tests (substitutions left commented out here
             ;; so that the failures reproduce)
             (substitute* "tests/test_helper.tcl"
               ;; (("integration/failover") "")
               ;; (("integration/replication-4") "")
               ;; (("integration/replication-psync") "")
               ;; (("integration/replication[^-]") "")
               )
             #t)))
       #:make-flags `("CC=gcc"
                      "MALLOC=libc"
                      "LDFLAGS=-ldl"
                      ,(string-append "PREFIX="
                                      (assoc-ref %outputs "out")))))
    (synopsis "Key-value cache and store")
    (description "Redis is an advanced key-value cache and store.  Redis
supports many data structures including strings, hashes, lists, sets, sorted
sets, bitmaps and hyperloglogs.")
    (home-page "https://redis.io/")
    (license license:bsd-3)))

Meanwhile, the package has been pushed to Guix with the relevant tests kept disabled [1].

[1] https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages/databases.scm#n2137
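
For reference, a variant that skips these tests (roughly what the pushed Guix package does, judging by the commented-out substitutions in the declaration above; treat this as a sketch rather than the exact code behind [1]) would look like:

(add-after 'unpack 'adjust-tests
  (lambda _
    ;; Blank out the failing integration tests in the Tcl test driver's
    ;; list so they are skipped.
    (substitute* "tests/test_helper.tcl"
      (("integration/failover") "")
      (("integration/replication-4") "")
      (("integration/replication-psync") "")
      (("integration/replication[^-]") ""))
    #t))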


oranagra commented Jun 2, 2021

@ss2 did the previous releases of 6.2 pass the tests cleanly?
does it happen on any specific platform (e.g. slow one or bigendian)?


ss2 commented Jun 7, 2021

I've only been building it on amd64 so far, and have just compiled all the versions down to 6.0.9, where failover finally passes.

The tests were only introduced to the package declaration [1] when redis was at 6.0.9, hence the replication tests in question would have been failing at least since version 6.0.9. I didn't package it back then, and don't know more details yet.

[1] http://git.savannah.gnu.org/cgit/guix.git/commit/?id=3a1cb921c9209d77ae1af38ef5bfa1620fd99899


oranagra commented Jun 7, 2021

@ss2 so you're saying that 6.2.0 - 6.2.3, and also 6.0.9 - 6.0.13 all fail on many of these tests.
can you try to apply this fix: #8967
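
For a Guix-based reproduction, one way to try such a fix is to carry it as a patch on the package's origin. A minimal sketch, assuming the change from #8967 has been exported to a local patch file (the file name here is hypothetical, and (guix gexp) is needed for local-file); only the fields relevant to the patch are shown, the jemalloc-deleting snippet etc. would stay as in the declaration above:

(source (origin
          (method url-fetch)
          (uri (string-append "http://download.redis.io/releases/redis-"
                              version ".tar.gz"))
          (sha256
           (base32
            "0vp1d9mlfsppry3nsj9f7bmh9wjgsy3jggp24sac1hhgl43c8cms"))
          ;; Hypothetical local copy of the test fix from #8967; patches are
          ;; applied after the hash check, so sha256 stays unchanged.
          (patches (list (local-file "redis-8967-test-fix.patch")))))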


ss2 commented Jun 7, 2021

Sorry, I just realised that I had only tested 6.2.0 through 6.2.3, and 6.0.9; I missed 6.0.10 through 6.0.13.

The Fix from #8967 passes.


oranagra commented Jun 7, 2021

The Fix from #8967 passes.

ok great. so we know what the problem is.
i'll consider taking that test fix to the next version of 6.2.
@YaacovHazan FYI

oranagra closed this as completed Jun 7, 2021

ss2 commented Jun 7, 2021

Just a follow-up: only the failover test passes.

Removing integration/replication[^-] will let the build pass. But if I leave replication[^-] in and remove replication-psync or replication-4, the test suite will hang at:
[62/63 done]: integration/replication (218 seconds)

Just want to be clear about this in case I gave the impression that all tests had passed.


oranagra commented Jun 7, 2021

@ss2 so the fix in 8967 only fixed one test, and all the rest still fail?
Is it with the same errors that are shown at the top?
From your text above it's not clear to me which tests still fail.

Can you please post the failures (errors) you're getting after applying the mentioned patch.

oranagra reopened this Jun 7, 2021

ss2 commented Jun 8, 2021

I'm sorry about the confusion; it may well be that I got mixed up about what is working and what isn't.

It looks like the build will pass if the four tests are disabled. If I enable the failover test, the build sometimes passes and sometimes doesn't. Here's the tail end of a build log:

[33/61 done]: unit/introspection (4 seconds)
Testing unit/bitops
[ok]: BITCOUNT returns 0 against non existing key
[ok]: BITCOUNT returns 0 with out of range indexes
[ok]: BITCOUNT returns 0 with negative indexes where start > end
[ok]: BITCOUNT against test vector #1
[ok]: BITCOUNT against test vector #2
[ok]: BITCOUNT against test vector #3
[ok]: BITCOUNT against test vector #4
[ok]: BITCOUNT against test vector #5
[ok]: test various edge cases of repl topology changes with missing pings at the end
[ok]: Check if maxclients works refusing connections
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (allkeys-random)
[err]: failover command to any replica works in tests/integration/failover.tcl
Failover from node 1 to node 2 did not finish
[34/61 done]: unit/limits (1 seconds)
Testing unit/bitfield
[ok]: BITFIELD signed SET and GET basics
[ok]: BITFIELD unsigned SET and GET basics
[ok]: BITFIELD #<idx> form
[ok]: BITFIELD basic INCRBY form
[ok]: BITFIELD chaining of multiple commands
[ok]: BITFIELD unsigned overflow wrap
[ok]: BITFIELD unsigned overflow sat
[ok]: BITFIELD signed overflow wrap
[ok]: BITFIELD signed overflow sat
[ok]: PSYNC2 #3899 regression: kill first replica
[ok]: BITCOUNT fuzzing without start/end
[ok]: BITFIELD overflow detection fuzzing
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (allkeys-lru)
[ok]: Test replication with blocking lists and sorted sets operations
[ok]: EVAL timeout from AOF
[ok]: We can call scripts rewriting client->argv from Lua
[ok]: Call Redis command with many args from Lua (issue #1764)
[ok]: Number conversion precision test (issue #1118)
[ok]: String containing number precision test (regression of issue #1118)
[ok]: Verify negative arg count is error instead of crash (issue #1842)
[ok]: Correct handling of reused argv (issue #1939)
[ok]: Functions in the Redis namespace are able to report errors
[ok]: Script with RESP3 map
[ok]: BITFIELD overflow wrap fuzzing
[ok]: BITFIELD regression for #3221
[ok]: BITFIELD regression for #3564
I/O error reading reply
    while executing
"$r bzpopmin $k 2"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
            $r zadd $k [randomInt 10000] $v
        } {
            $r zadd $k [randomInt 10000] $v [randomInt 10000] $v2
        } {
     ..."
    (procedure "bg_block_op" line 31)
    invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_block_op.tcl" line 55)
I/O error reading reply
    while executing
"$r blpop $k $k2 2"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
            randpath {
                $r rpush $k $v
            } {
                $r lpush $k $v
            }
        } {
            ..."
    (procedure "bg_block_op" line 13)
    invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_block_op.tcl" line 55)
I/O error reading reply
    while executing
"$r blpop $k 2"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
            randpath {
                $r rpush $k $v
            } {
                $r lpush $k $v
            }
        } {
            ..."
    (procedure "bg_block_op" line 13)
    invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_block_op.tcl" line 55)
[ok]: BITCOUNT fuzzing with start/end
[ok]: BITCOUNT with start, end
[ok]: BITCOUNT syntax error #1
[ok]: BITCOUNT regression test for github issue #582
[ok]: BITCOUNT misaligned prefix
[ok]: BITCOUNT misaligned prefix + full words + remainder
[ok]: BITOP NOT (empty string)
[ok]: BITOP NOT (known string)
[ok]: BITOP where dest and target are the same key
[ok]: BITOP AND|OR|XOR don't change the string with single input key
[ok]: BITOP missing key is considered a stream of zero
[ok]: BITOP shorter keys are zero-padded to the key with max length
[35/61 done]: integration/block-repl (27 seconds)
Testing unit/geo
[ok]: GEOADD create
[ok]: GEOADD update
[ok]: GEOADD update with CH option
[ok]: GEOADD update with NX option
[ok]: GEOADD update with XX option
[ok]: GEOADD update with CH NX option
[ok]: GEOADD update with CH XX option
[ok]: GEOADD update with XX NX option will return syntax error
[ok]: GEOADD update with invalid option
[ok]: GEOADD invalid coordinates
[ok]: GEOADD multi add
[ok]: Check geoset values
[ok]: GEORADIUS simple (sorted)
[ok]: GEOSEARCH simple (sorted)
[ok]: GEOSEARCH FROMLONLAT and FROMMEMBER cannot exist at the same time
[ok]: GEOSEARCH FROMLONLAT and FROMMEMBER one must exist
[ok]: GEOSEARCH BYRADIUS and BYBOX cannot exist at the same time
[ok]: GEOSEARCH BYRADIUS and BYBOX one must exist
[ok]: GEOSEARCH with STOREDIST option
[ok]: GEORADIUS withdist (sorted)
[ok]: GEOSEARCH withdist (sorted)
[ok]: GEORADIUS with COUNT
[ok]: Piping raw protocol
[ok]: GEORADIUS with ANY not sorted by default
[ok]: GEORADIUS with ANY sorted by ASC
[ok]: GEORADIUS with ANY but no COUNT
[ok]: GEORADIUS with COUNT but missing integer argument
[ok]: GEORADIUS with COUNT DESC
[ok]: GEORADIUS HUGE, issue #2767
[ok]: GEORADIUSBYMEMBER simple (sorted)
[ok]: GEOSEARCH FROMMEMBER simple (sorted)
[ok]: GEOSEARCH vs GEORADIUS
[ok]: GEOSEARCH non square, long and narrow
[ok]: GEOSEARCH corner point test
[ok]: GEORADIUSBYMEMBER withdist (sorted)
[ok]: GEOHASH is able to return geohash strings
[ok]: GEOPOS simple
[ok]: GEOPOS missing element
[ok]: GEODIST simple & unit
[ok]: GEODIST missing elements
[ok]: GEORADIUS STORE option: syntax error
[ok]: GEOSEARCHSTORE STORE option: syntax error
[ok]: GEORANGE STORE option: incompatible options
[ok]: GEORANGE STORE option: plain usage
[ok]: GEOSEARCHSTORE STORE option: plain usage
[ok]: GEORANGE STOREDIST option: plain usage
[ok]: GEOSEARCHSTORE STOREDIST option: plain usage
[ok]: GEORANGE STOREDIST option: COUNT ASC and DESC
[ok]: GEOSEARCH the box spans -180° or 180°
[ok]: PSYNC2 #3899 regression: kill first replica
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-lru)
[ok]: Timedout read-only scripts can be killed by SCRIPT KILL
[36/61 done]: integration/redis-cli (11 seconds)
Testing unit/memefficiency
[ok]: Timedout read-only scripts can be killed by SCRIPT KILL even when use pcall
[ok]: BITFIELD: setup slave
[ok]: BITFIELD: write on master, read on slave
[ok]: Timedout script does not cause a false dead client
[ok]: BITFIELD_RO fails when write option is used
[ok]: TOUCH alters the last access time of a key
[ok]: TOUCH returns the number of existing keys specified
[ok]: command stats for GEOADD
[ok]: command stats for EXPIRE
[ok]: command stats for BRPOP
[ok]: command stats for MULTI
[ok]: command stats for scripts
[ok]: BITOP and fuzzing
[ok]: Timedout script link is still usable after Lua returns
[ok]: Fuzzer corrupt restore payloads - sanitize_dump: yes
[37/61 done]: integration/corrupt-dump-fuzzer (20 seconds)
Testing unit/hyperloglog
[38/61 done]: unit/bitfield (3 seconds)
Testing unit/lazyfree
[ok]: Timedout scripts that modified data can't be killed by SCRIPT KILL
[39/61 done]: unit/introspection-2 (6 seconds)
Testing unit/wait
[ok]: SHUTDOWN NOSAVE can kill a timedout script anyway
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 78410)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 4 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #4 as master
[ok]: PSYNC2: Set #0 to replicate from #4
[ok]: PSYNC2: Set #1 to replicate from #4
[ok]: PSYNC2: Set #3 to replicate from #0
[ok]: PSYNC2: Set #2 to replicate from #1
[ok]: PSYNC2: cluster is consistent after failover
[ok]: PSYNC2 #3899 regression: kill first replica
[ok]: Before the replica connects we issue two EVAL commands (scripts replication)
[ok]: Setup slave
[ok]: WAIT should acknowledge 1 additional copy of the data
[ok]: Connect a replica to the master instance (scripts replication)
[ok]: Now use EVALSHA against the master, with both SHAs (scripts replication)
[ok]: If EVALSHA was replicated as EVAL, 'x' should be '4' (scripts replication)
[ok]: Replication of script multiple pushes to list with BLPOP (scripts replication)
[ok]: EVALSHA replication when first call is readonly (scripts replication)
[ok]: Lua scripts using SELECT are replicated correctly (scripts replication)
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-random)
[ok]: BITOP or fuzzing
[ok]: Chained replicas disconnect when replica re-connect with the same master
[ok]: UNLINK can reclaim memory in background
[ok]: Memory efficiency with values in range 32
[ok]: FLUSHDB ASYNC can reclaim memory in background
[ok]: WAIT should not acknowledge 2 additional copies of the data
[40/61 done]: integration/psync2-pingoff (18 seconds)
Testing unit/pendingquerybuf
[ok]: lazy free a stream with all types of metadata
[ok]: Before the replica connects we issue two EVAL commands (commands replication)
[ok]: lazy free a stream with deleted cgroup
[ok]: Connect a replica to the master instance (commands replication)
[ok]: Now use EVALSHA against the master, with both SHAs (commands replication)
[ok]: If EVALSHA was replicated as EVAL, 'x' should be '4' (commands replication)
[ok]: Replication of script multiple pushes to list with BLPOP (commands replication)
[ok]: EVALSHA replication when first call is readonly (commands replication)
[ok]: Lua scripts using SELECT are replicated correctly (commands replication)
[ok]: BITOP xor fuzzing
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-ttl)
[ok]: PSYNC2 #3899 regression: kill chained replica
[41/61 done]: unit/lazyfree (2 seconds)
Testing unit/tls
[ok]: BITOP NOT fuzzing
[ok]: BITOP with integer encoded source objects
[ok]: BITOP with non string source key
[ok]: BITOP with empty string after non empty string (issue #529)
[ok]: BITPOS bit=0 with empty key returns 0
[ok]: BITPOS bit=1 with empty key returns -1
[ok]: BITPOS bit=0 with string less than 1 word works
[ok]: BITPOS bit=1 with string less than 1 word works
[ok]: BITPOS bit=0 starting at unaligned address
[ok]: BITPOS bit=1 starting at unaligned address
[ok]: BITPOS bit=0 unaligned+full word+reminder
[ok]: BITPOS bit=1 unaligned+full word+reminder
[ok]: BITPOS bit=1 returns -1 if string is all 0 bits
[ok]: BITPOS bit=0 works with intervals
[ok]: BITPOS bit=1 works with intervals
[ok]: BITPOS bit=0 changes behavior if end is given
[ok]: PSYNC2 #3899 regression: kill first replica
[42/61 done]: unit/tls (0 seconds)
Testing unit/tracking
[ok]: BITPOS bit=1 fuzzy testing using SETBIT
[ok]: XRANGE fuzzing
[ok]: XREVRANGE regression test for issue #5006
[ok]: XREAD streamID edge (no-blocking)
[ok]: XREAD streamID edge (blocking)
[ok]: XADD streamID edge
[ok]: Clients are able to enable tracking and redirect it
[ok]: The other connection is able to get invalidations
[ok]: The client is now able to disable tracking
[ok]: Clients can enable the BCAST mode with the empty prefix
[ok]: The connection gets invalidation messages about all the keys
[ok]: Clients can enable the BCAST mode with prefixes
[ok]: Adding prefixes to BCAST mode works
[ok]: Tracking NOLOOP mode in standard mode works
[ok]: Tracking NOLOOP mode in BCAST mode works
[ok]: WAIT should not acknowledge 1 additional copy if slave is blocked
[ok]: Memory efficiency with values in range 64
[ok]: XTRIM with MAXLEN option basic test
[ok]: XADD with LIMIT consecutive calls
[ok]: BITPOS bit=0 fuzzy testing using SETBIT
[ok]: XTRIM with ~ is limited
[ok]: XTRIM without ~ is not limited
[ok]: XTRIM without ~ and with LIMIT
[ok]: maxmemory - policy volatile-lru should only remove volatile keys.
[ok]: Connect a replica to the master instance
[ok]: Redis.replicate_commands() must be issued before any write
[ok]: Redis.replicate_commands() must be issued before any write (2)
[ok]: Redis.set_repl() must be issued after replicate_commands()
[ok]: Redis.set_repl() don't accept invalid values
[ok]: Test selective replication of certain Redis commands from Lua
[ok]: PRNG is seeded randomly for command replication
[ok]: Using side effects is not a problem with command replication
[43/61 done]: unit/bitops (6 seconds)
Testing unit/oom-score-adj
[ok]: CONFIG SET oom-score-adj works as expected
[ok]: CONFIG SET oom-score-adj handles configuration failures
[ok]: XADD with MAXLEN > xlen can propagate correctly
[ok]: HyperLogLog self test passes
[ok]: PFADD without arguments creates an HLL value
[ok]: Approximated cardinality after creation is zero
[ok]: PFADD returns 1 when at least 1 reg was modified
[ok]: PFADD returns 0 when no reg was modified
[ok]: PFADD works with empty string (regression)
[ok]: PFCOUNT returns approximated cardinality of set
[ok]: Tracking gets notification of expired keys
[ok]: HELLO 3 reply is correct
[ok]: HELLO without protover
[ok]: RESP3 based basic invalidation
[ok]: RESP3 tracking redirection
[ok]: Invalidations of previous keys can be redirected after switching to RESP3
[ok]: Invalidations of new keys can be redirected after switching to RESP3
[ok]: RESP3 Client gets tracking-redir-broken push message after cached key changed when rediretion client is terminated
[ok]: Different clients can redirect to the same connection
[ok]: Different clients using different protocols can track the same key
[ok]: No invalidation message when using OPTIN option
[ok]: Invalidation message sent when using OPTIN option with CLIENT CACHING yes
[ok]: Invalidation message sent when using OPTOUT option
[ok]: No invalidation message when using OPTOUT option with CLIENT CACHING no
[ok]: Able to redirect to a RESP3 client
[ok]: After switching from normal tracking to BCAST mode, no invalidation message is produced for pre-BCAST keys
[ok]: BCAST with prefix collisions throw errors
[ok]: Tracking gets notification on tracking table key eviction
[ok]: Invalidation message received for flushall
[ok]: Invalidation message received for flushdb
[ok]: maxmemory - policy volatile-lfu should only remove volatile keys.
[ok]: Test ASYNC flushall
[ok]: WAIT implicitly blocks on client pause since ACKs aren't sent
[ok]: XADD with MINID > lastid can propagate correctly
[44/61 done]: unit/scripting (13 seconds)
Testing unit/shutdown
[ok]: PSYNC2 #3899 regression: verify consistency
[ok]: Server is able to evacuate enough keys when num of keys surpasses limit by more than defined initial effort
[ok]: Tracking info is correct
[ok]: CLIENT GETREDIR provides correct client id
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking off
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking on
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking on with options
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking optin
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking optout
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking bcast mode
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking redir broken
[ok]: XADD with ~ MAXLEN can propagate correctly
[ok]: Memory efficiency with values in range 128
[ok]: Temp rdb will be deleted if we use bg_unlink when shutdown
[45/61 done]: unit/tracking (2 seconds)
Testing unit/networking
[46/61 done]: unit/wait (4 seconds)
[ok]: maxmemory - policy volatile-random should only remove volatile keys.
[47/61 done]: unit/oom-score-adj (1 seconds)
[ok]: Temp rdb will be deleted in signal handle
[48/61 done]: integration/psync2-reg (23 seconds)
[49/61 done]: unit/shutdown (1 seconds)
[ok]: XADD with ~ MAXLEN and LIMIT can propagate correctly
[ok]: CONFIG SET port number
[ok]: HyperLogLogs are promote from sparse to dense
[ok]: maxmemory - policy volatile-ttl should only remove volatile keys.
[ok]: XADD with ~ MINID can propagate correctly
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 106949)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: CONFIG SET bind address
[50/61 done]: unit/networking (1 seconds)
[ok]: PSYNC2: --- CYCLE 5 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #0 as master
[ok]: PSYNC2: Set #3 to replicate from #0
[ok]: PSYNC2: Set #4 to replicate from #3
[ok]: PSYNC2: Set #2 to replicate from #0
[ok]: PSYNC2: Set #1 to replicate from #0
[ok]: XADD with ~ MINID and LIMIT can propagate correctly
[ok]: XTRIM with ~ MAXLEN can propagate correctly
[ok]: Memory efficiency with values in range 1024
[ok]: XADD can CREATE an empty stream
[ok]: XSETID can set a specific ID
[ok]: XSETID cannot SETID with smaller ID
[ok]: XSETID cannot SETID on non-existent key
[ok]: PSYNC2: cluster is consistent after failover
[ok]: HyperLogLog sparse encoding stress test
[ok]: Corrupted sparse HyperLogLogs are detected: Additional at tail
[ok]: Corrupted sparse HyperLogLogs are detected: Broken magic
[ok]: Corrupted sparse HyperLogLogs are detected: Invalid encoding
[ok]: Corrupted dense HyperLogLogs are detected: Wrong length
[ok]: Client output buffer hard limit is enforced
[ok]: Empty stream can be rewrite into AOF correctly
[ok]: Memory efficiency with values in range 16384
[51/61 done]: unit/memefficiency (9 seconds)
[ok]: Stream can be rewrite into AOF correctly after XDEL lastid
[ok]: XGROUP HELP should not have unexpected options
[52/61 done]: unit/type/stream (45 seconds)
[ok]: pending querybuf: check size of pending_querybuf after set a big value
[ok]: Client output buffer soft limit is enforced if time is overreached
[53/61 done]: unit/pendingquerybuf (9 seconds)
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 141608)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 6 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #3 as master
[ok]: PSYNC2: Set #2 to replicate from #3
[ok]: PSYNC2: Set #0 to replicate from #3
[ok]: PSYNC2: Set #4 to replicate from #3
[ok]: PSYNC2: Set #1 to replicate from #4
[ok]: PSYNC2: cluster is consistent after failover
[ok]: MASTER and SLAVE consistency with EVALSHA replication
[ok]: AOF rewrite during write load: RDB preamble=yes
[ok]: Client output buffer soft limit is not enforced too early and is enforced when no traffic
[ok]: No response for single command if client output buffer hard limit is enforced
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 179678)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 7 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #4 as master
[ok]: PSYNC2: Set #2 to replicate from #4
[ok]: PSYNC2: Set #3 to replicate from #2
[ok]: PSYNC2: Set #1 to replicate from #4
[ok]: PSYNC2: Set #0 to replicate from #1
[ok]: SLAVE can reload "lua" AUX RDB fields of duplicated scripts
[54/61 done]: integration/replication-3 (44 seconds)
[ok]: No response for multi commands in pipeline if client output buffer limit is enforced
[ok]: Execute transactions completely even if client output buffer limit is enforced
[55/61 done]: unit/obuf-limits (20 seconds)
[ok]: PSYNC2: cluster is consistent after failover
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 212895)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 8 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #3 as master
[ok]: PSYNC2: Set #1 to replicate from #3
[ok]: PSYNC2: Set #0 to replicate from #3
[ok]: PSYNC2: Set #2 to replicate from #0
[ok]: PSYNC2: Set #4 to replicate from #0
[ok]: PSYNC2: cluster is consistent after failover
[ok]: Fuzzing dense/sparse encoding: Redis should always detect errors
[ok]: PFADD, PFCOUNT, PFMERGE type checking works
[ok]: PFMERGE results on the cardinality of union of sets
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 255632)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: GEOSEARCH fuzzy test - byradius
[ok]: PSYNC2: Bring the master back again for next test
[ok]: PSYNC2: Partial resync after restart using RDB aux fields
[ok]: PSYNC2: Replica RDB restart with EVALSHA in backlog issue #4483
[56/61 done]: integration/psync2 (50 seconds)
[ok]: PFCOUNT multiple-keys merge returns cardinality of union #1
[ok]: slave buffer are counted correctly
[ok]: PFCOUNT multiple-keys merge returns cardinality of union #2
[ok]: PFDEBUG GETREG returns the HyperLogLog raw registers
[ok]: PFADD / PFCOUNT cache invalidation works
[57/61 done]: unit/hyperloglog (35 seconds)
[ok]: replica buffer don't induce eviction
[ok]: Don't rehash if used memory exceeds maxmemory after rehash
[ok]: client tracking don't cause eviction feedback loop
[58/61 done]: unit/maxmemory (48 seconds)
[ok]: GEOSEARCH fuzzy test - bybox
[ok]: GEOSEARCH box edges fuzzy test
[59/61 done]: unit/geo (41 seconds)
[ok]: AOF rewrite during write load: RDB preamble=no
Waiting for process 1452 to exit...
[ok]: Turning off AOF kills the background writing child if any
[ok]: AOF rewrite of list with quicklist encoding, string data
[ok]: AOF rewrite of list with quicklist encoding, int data
[ok]: AOF rewrite of set with intset encoding, string data
[ok]: AOF rewrite of set with hashtable encoding, string data
[ok]: AOF rewrite of set with intset encoding, int data
[ok]: AOF rewrite of set with hashtable encoding, int data
[ok]: AOF rewrite of hash with ziplist encoding, string data
[ok]: AOF rewrite of hash with hashtable encoding, string data
[ok]: AOF rewrite of hash with ziplist encoding, int data
[ok]: AOF rewrite of hash with hashtable encoding, int data
[ok]: AOF rewrite of zset with ziplist encoding, string data
[ok]: AOF rewrite of zset with skiplist encoding, string data
[ok]: AOF rewrite of zset with ziplist encoding, int data
[ok]: AOF rewrite of zset with skiplist encoding, int data
[ok]: BGREWRITEAOF is delayed if BGSAVE is in progress
[ok]: BGREWRITEAOF is refused if already in progress
[60/61 done]: unit/aofrw (104 seconds)


oranagra commented Jun 8, 2021

@ss2 so just to be clear, this run was with the patch from #8967 and you say that the replication tests are commented?
i wanna see the failures when running the full test suite (no tests are skipped), on a version that includes that fix.
i.e. in your opening post you mentioned things like:

*** [err]: Test replication partial resync: no reconnection, just sync (diskless: no, disabled, reconnect: 0) in tests/integration/replication-psync.tcl
Expected 'b7b5ae9e36e3a810959128fd50f16c332a21a696' to be equal to '621e9919c7d7eb998dad8056d2cc9a704dba9b98' (context: type eval line 72 cmd {assert_equal [r debug digest] [r -1 debug digest]} proc ::test)

i wanna see the exact failures, and the full list of failures (if there are any) on the version that has that fix.


ss2 commented Jun 8, 2021

There you go:

Cleanup: may take some time... OK
Starting test server at port 21079
[ready]: 962
Testing unit/printver
[ready]: 963
Testing unit/dump
[ready]: 964
Testing unit/auth
[ready]: 965
Testing unit/protocol
[ready]: 966
Testing unit/keyspace
[ready]: 967
Testing unit/scan
[ready]: 968
Testing unit/info
[ready]: 969
Testing unit/type/string
[ready]: 970
Testing unit/type/incr
[ready]: 971
Testing unit/type/list
[ready]: 972
Testing unit/type/list-2
[ready]: 973
Testing unit/type/list-3
[ready]: 974
Testing unit/type/set
[ready]: 975
Testing unit/type/zset
[ready]: 976
Testing unit/type/hash
[ready]: 977
Testing unit/type/stream
Testing Redis version 255.255.255 (00000000)
[ok]: Handle an empty query
[ok]: Negative multibulk length
[ok]: Out of range multibulk length
[ok]: Wrong multibulk payload header
[ok]: Negative multibulk payload length
[ok]: Out of range multibulk payload length
[ok]: Non-number multibulk payload length
[ok]: Multi bulk request not followed by bulk arguments
[ok]: Generic wrong number of args
[ok]: DUMP / RESTORE are able to serialize / unserialize a simple key
[ok]: Explicit regression for a list bug
[ok]: RESTORE can set an arbitrary expire to the materialized key
[ok]: Unbalanced number of quotes
[ok]: RESTORE can set an expire that overflows a 32 bit integer
[ok]: RESTORE can set an absolute expire
[ok]: AUTH fails if there is no password configured server side
[ok]: RESTORE with ABSTTL in the past
[ok]: RESTORE can set LRU
[ok]: RESTORE can set LFU
[ok]: RESTORE returns an error of the key already exists
[ok]: RESTORE can overwrite an existing key with REPLACE
[ok]: RESTORE can detect a syntax error for unrecongized options
[ok]: DUMP of non existing key returns nil
[ok]: LPOS basic usage
[ok]: LPOS RANK (positive and negative rank) option
[ok]: LPOS COUNT option
[ok]: LPOS COUNT + RANK option
[ok]: LPOS non existing key
[ok]: LPOS no match
[ok]: LPOS MAXLEN
[ok]: LPOS when RANK is greater than matches
[ok]: LPUSH, RPUSH, LLENGTH, LINDEX, LPOP - ziplist
[ok]: LPUSH, RPUSH, LLENGTH, LINDEX, LPOP - regular list
[ok]: XADD can add entries into a stream that XRANGE can fetch
[ok]: R/LPOP against empty list
[ok]: SADD, SCARD, SISMEMBER, SMISMEMBER, SMEMBERS basics - regular set
[ok]: XADD IDs are incremental
[ok]: DEL against a single item
[ok]: Vararg DEL
[ok]: XADD IDs are incremental when ms is the same as well
[ok]: SADD, SCARD, SISMEMBER, SMISMEMBER, SMEMBERS basics - intset
[ok]: KEYS with pattern
[ok]: XADD IDs correctly report an error when overflowing
[ok]: SMISMEMBER against non set
[ok]: KEYS to get all keys
[ok]: SMISMEMBER non existing key
[ok]: DBSIZE
[ok]: SMISMEMBER requires one or more members
[ok]: R/LPOP with the optional count argument
[ok]: DEL all keys
[ok]: SADD against non set
[ok]: Variadic RPUSH/LPUSH
[ok]: DEL a list
[ok]: SADD a non-integer against an intset
[ok]: SADD an integer larger than 64 bits
[ok]: BLPOP, BRPOP: single existing list - linkedlist
[ok]: BLPOP, BRPOP: multiple existing lists - linkedlist
[ok]: BLPOP, BRPOP: second list has an entry - linkedlist
[ok]: BRPOPLPUSH - linkedlist
[ok]: BLMOVE left left - linkedlist
[ok]: BLMOVE left right - linkedlist
[ok]: BLMOVE right left - linkedlist
[ok]: BLMOVE right right - linkedlist
[ok]: BLPOP, BRPOP: single existing list - ziplist
[ok]: BLPOP, BRPOP: multiple existing lists - ziplist
[ok]: BLPOP, BRPOP: second list has an entry - ziplist
[ok]: BRPOPLPUSH - ziplist
[ok]: BLMOVE left left - ziplist
[ok]: BLMOVE left right - ziplist
[ok]: BLMOVE right left - ziplist
[ok]: BLMOVE right right - ziplist
[ok]: BLPOP, LPUSH + DEL should not awake blocked client
[ok]: SADD overflows the maximum allowed integers in an intset
[ok]: Variadic SADD
[ok]: SCAN basic
[ok]: HSET/HLEN - Small hash creation
[ok]: Is the small hash encoded with a ziplist?
[ok]: HRANDFIELD - ziplist
[ok]: HRANDFIELD - hashtable
[ok]: HRANDFIELD with RESP3
[ok]: HRANDFIELD count of 0 is handled correctly
[ok]: HRANDFIELD with <count> against non existing key
[ok]: SCAN COUNT
[ok]: XADD with MAXLEN option
[ok]: Protocol desync regression test #1
[ok]: SCAN MATCH
[ok]: Regression for quicklist #3343 bug
[ok]: BLPOP, LPUSH + DEL + SET should not awake blocked client
[ok]: BLPOP with same key multiple times should work (issue #801)
[ok]: MULTI/EXEC is isolated from the point of view of BLPOP
[ok]: BLPOP with variadic LPUSH
[ok]: BRPOPLPUSH with zero timeout should block indefinitely
[ok]: BLMOVE left left with zero timeout should block indefinitely
[ok]: BLMOVE left right with zero timeout should block indefinitely
[ok]: BLMOVE right left with zero timeout should block indefinitely
[ok]: BLMOVE right right with zero timeout should block indefinitely
[ok]: BLMOVE (left, left) with a client BLPOPing the target list
[ok]: BLMOVE (left, right) with a client BLPOPing the target list
[ok]: BLMOVE (right, left) with a client BLPOPing the target list
[ok]: BLMOVE (right, right) with a client BLPOPing the target list
[ok]: BRPOPLPUSH with wrong source type
[ok]: BRPOPLPUSH with wrong destination type
[ok]: BRPOPLPUSH maintains order of elements after failure
[ok]: SET and GET an item
[ok]: BRPOPLPUSH with multiple blocked clients
[ok]: SET and GET an empty item
[1/64 done]: unit/printver (0 seconds)
Testing unit/type/stream-cgroups
[ok]: Linked LMOVEs
[ok]: Circular BRPOPLPUSH
[ok]: Self-referential BRPOPLPUSH
[ok]: BRPOPLPUSH inside a transaction
[ok]: PUSH resulting from BRPOPLPUSH affect WATCH
[ok]: BRPOPLPUSH does not affect WATCH while still blocked
[ok]: INCR against non existing key
[ok]: INCR against key created by incr itself
[ok]: INCR against key originally set with SET
[ok]: INCR over 32bit value
[ok]: INCRBY over 32bit value with over 32bit increment
[ok]: INCR fails against key with spaces (left)
[ok]: INCR fails against key with spaces (right)
[ok]: INCR fails against key with spaces (both)
[ok]: INCR fails against a key holding a list
[ok]: DECRBY over 32bit value with over 32bit increment, negative res
[ok]: INCR uses shared objects in the 0-9999 range
[ok]: INCR can modify objects in-place
[ok]: INCRBYFLOAT against non existing key
[ok]: INCRBYFLOAT against key originally set with SET
[ok]: INCRBYFLOAT over 32bit value
[ok]: INCRBYFLOAT over 32bit value with over 32bit increment
[ok]: INCRBYFLOAT fails against key with spaces (left)
[ok]: errorstats: failed call authentication error
[ok]: INCRBYFLOAT fails against key with spaces (right)
[ok]: INCRBYFLOAT fails against key with spaces (both)
[ok]: INCRBYFLOAT fails against a key holding a list
[ok]: INCRBYFLOAT does not allow NaN or Infinity
[ok]: INCRBYFLOAT decrement
[ok]: string to double with null terminator
[ok]: No negative zero
[ok]: errorstats: failed call within MULTI/EXEC
[ok]: errorstats: failed call within LUA
[ok]: errorstats: failed call NOSCRIPT error
[ok]: errorstats: failed call NOGROUP error
[ok]: errorstats: rejected call unknown command
[ok]: errorstats: rejected call within MULTI/EXEC
[ok]: errorstats: rejected call due to wrong arity
[ok]: errorstats: rejected call by OOM error
[ok]: errorstats: rejected call by authorization error
[ok]: Check encoding - ziplist
[ok]: ZSET basic ZADD and score update - ziplist
[ok]: ZSET element can't be set to NaN with ZADD - ziplist
[ok]: ZSET element can't be set to NaN with ZINCRBY - ziplist
[ok]: ZADD with options syntax error with incomplete pair - ziplist
[ok]: ZADD XX option without key - ziplist
[ok]: ZADD XX existing key - ziplist
[ok]: ZADD XX returns the number of elements actually added - ziplist
[ok]: SCAN TYPE
[ok]: ZADD XX updates existing elements score - ziplist
[ok]: ZADD GT updates existing elements when new scores are greater - ziplist
[ok]: ZADD LT updates existing elements when new scores are lower - ziplist
[ok]: ZADD GT XX updates existing elements when new scores are greater and skips new elements - ziplist
[ok]: SSCAN with encoding intset
[ok]: ZADD LT XX updates existing elements when new scores are lower and skips new elements - ziplist
[ok]: ZADD XX and NX are not compatible - ziplist
[ok]: ZADD NX with non existing key - ziplist
[ok]: ZADD NX only add new elements without updating old ones - ziplist
[ok]: ZADD GT and NX are not compatible - ziplist
[ok]: ZADD LT and NX are not compatible - ziplist
[ok]: ZADD LT and GT are not compatible - ziplist
[ok]: ZADD INCR LT/GT replies with nill if score not updated - ziplist
[ok]: SSCAN with encoding hashtable
[ok]: HSCAN with encoding ziplist
[ok]: ZADD INCR LT/GT with inf - ziplist
[ok]: ZADD INCR works like ZINCRBY - ziplist
[ok]: ZADD INCR works with a single score-elemenet pair - ziplist
[ok]: ZADD CH option changes return value to all changed elements - ziplist
[ok]: ZINCRBY calls leading to NaN result in error - ziplist
[ok]: ZADD - Variadic version base case - $encoding
[ok]: Set encoding after DEBUG RELOAD
[ok]: ZADD - Return value is the number of actually added items - $encoding
[ok]: SREM basics - regular set
[ok]: ZADD - Variadic version does not add nothing on single parsing err - $encoding
[ok]: SREM basics - intset
[ok]: ZADD - Variadic version will raise error on missing arg - $encoding
[ok]: SREM with multiple arguments
[ok]: ZINCRBY does not work variadic even if shares ZADD implementation - $encoding
[ok]: SREM variadic version with more args needed to destroy the key
[ok]: ZCARD basics - ziplist
[ok]: ZREM removes key after last element is removed - ziplist
[ok]: ZREM variadic version - ziplist
[ok]: ZREM variadic version -- remove elements after key deletion - ziplist
[ok]: MIGRATE is caching connections
[ok]: ZRANGE basics - ziplist
[ok]: ZREVRANGE basics - ziplist
[ok]: ZRANK/ZREVRANK basics - ziplist
[ok]: ZRANK - after deletion - ziplist
[ok]: ZINCRBY - can create a new sorted set - ziplist
[ok]: XADD with MAXLEN option and the '=' argument
[ok]: ZINCRBY - increment and decrement - ziplist
[ok]: ZINCRBY return value - ziplist
[ok]: ZRANGEBYSCORE/ZREVRANGEBYSCORE/ZCOUNT basics - ziplist
[ok]: ZRANGEBYSCORE with WITHSCORES - ziplist
[ok]: ZRANGEBYSCORE with LIMIT - ziplist
[ok]: ZRANGEBYSCORE with LIMIT and WITHSCORES - ziplist
[ok]: ZRANGEBYSCORE with non-value min or max - ziplist
[ok]: ZRANGEBYLEX/ZREVRANGEBYLEX/ZLEXCOUNT basics - ziplist
[ok]: ZLEXCOUNT advanced - ziplist
[ok]: ZRANGEBYSLEX with LIMIT - ziplist
[ok]: ZRANGEBYLEX with invalid lex range specifiers - ziplist
[ok]: Generated sets must be encoded as hashtable
[ok]: SINTER with two sets - hashtable
[ok]: SINTERSTORE with two sets - hashtable
[ok]: Very big payload in GET/SET
[ok]: SINTERSTORE with two sets, after a DEBUG RELOAD - hashtable
[ok]: ZREMRANGEBYSCORE basics - ziplist
[ok]: ZREMRANGEBYSCORE with non-value min or max - ziplist
[ok]: SUNION with two sets - hashtable
[ok]: ZREMRANGEBYRANK basics - ziplist
[ok]: ZUNIONSTORE against non-existing key doesn't set destination - ziplist
[ok]: ZUNION/ZINTER/ZDIFF against non-existing key - ziplist
[ok]: HSCAN with encoding hashtable
[ok]: SUNIONSTORE with two sets - hashtable
[ok]: ZUNIONSTORE with empty set - ziplist
[ok]: ZUNION/ZINTER/ZDIFF with empty set - ziplist
[ok]: SINTER against three sets - hashtable
[ok]: SINTERSTORE with three sets - hashtable
[ok]: ZSCAN with encoding ziplist
[ok]: ZUNIONSTORE basics - ziplist
[ok]: AUTH fails when a wrong password is given
[ok]: Arbitrary command gives an error when AUTH is required
[ok]: ZUNION/ZINTER/ZDIFF with integer members - ziplist
[ok]: AUTH succeeds when the right password is given
[ok]: Once AUTH succeeded we can actually send commands to the server
[ok]: ZUNIONSTORE with weights - ziplist
[ok]: ZUNION with weights - ziplist
[ok]: ZUNIONSTORE with a regular set and weights - ziplist
[ok]: ZUNIONSTORE with AGGREGATE MIN - ziplist
[ok]: ZUNION/ZINTER with AGGREGATE MIN - ziplist
[ok]: SUNION with non existing keys - hashtable
[ok]: ZUNIONSTORE with AGGREGATE MAX - ziplist
[ok]: ZUNION/ZINTER with AGGREGATE MAX - ziplist
[ok]: ZINTERSTORE basics - ziplist
[ok]: ZINTER basics - ziplist
[ok]: SDIFF with two sets - hashtable
[ok]: SDIFF with three sets - hashtable
[ok]: ZINTER RESP3 - ziplist
[ok]: SDIFFSTORE with three sets - hashtable
[ok]: ZINTERSTORE with weights - ziplist
[ok]: ZINTER with weights - ziplist
[ok]: ZINTERSTORE with a regular set and weights - ziplist
[ok]: ZINTERSTORE with AGGREGATE MIN - ziplist
[ok]: ZINTERSTORE with AGGREGATE MAX - ziplist
[ok]: ZUNIONSTORE with +inf/-inf scores - ziplist
[ok]: ZUNIONSTORE with NaN weights - ziplist
[ok]: ZINTERSTORE with +inf/-inf scores - ziplist
[ok]: ZINTERSTORE with NaN weights - ziplist
[ok]: XADD with MAXLEN option and the '~' argument
[ok]: ZDIFFSTORE basics - ziplist
[ok]: ZDIFF basics - ziplist
[ok]: XADD with NOMKSTREAM option
[ok]: ZDIFFSTORE with a regular set - ziplist
[ok]: ZDIFF subtracting set from itself - ziplist
[ok]: ZDIFF algorithm 1 - ziplist
[ok]: XGROUP CREATE: creation and duplicate group name detection
[ok]: XGROUP CREATE: automatic stream creation fails without MKSTREAM
[ok]: XGROUP CREATE: automatic stream creation works with MKSTREAM
[ok]: XREADGROUP will return only new elements
[ok]: ZDIFF algorithm 2 - ziplist
[ok]: XREADGROUP can read the history of the elements we own
[ok]: XPENDING is able to return pending items
[ok]: XPENDING can return single consumer items
[ok]: XPENDING only group
[ok]: HRANDFIELD with <count> - hashtable
[ok]: XPENDING with IDLE
[ok]: XPENDING with exclusive range intervals works as expected
[ok]: XACK is able to remove items from the consumer/group PEL
[ok]: XACK can't remove the same item multiple times
[ok]: XACK is able to accept multiple arguments
[ok]: XACK should fail if got at least one invalid ID
[ok]: PEL NACK reassignment after XGROUP SETID event
[ok]: XREADGROUP will not report data on empty history. Bug #5577
[ok]: XREADGROUP history reporting of deleted entries. Bug #5570
[ok]: Protocol desync regression test #2
[ok]: Generated sets must be encoded as intset
[ok]: SINTER with two sets - intset
[ok]: SINTERSTORE with two sets - intset
[ok]: SINTERSTORE with two sets, after a DEBUG RELOAD - intset
[ok]: ZSCAN with encoding skiplist
[ok]: SUNION with two sets - intset
[2/64 done]: unit/type/incr (1 seconds)
Testing unit/sort
[ok]: SUNIONSTORE with two sets - intset
[ok]: SCAN guarantees check under write load
[ok]: SSCAN with integer encoded object (issue #1345)
[ok]: SSCAN with PATTERN
[ok]: HSCAN with PATTERN
[ok]: ZSCAN with PATTERN
[ok]: SINTER against three sets - intset
[ok]: SINTERSTORE with three sets - intset
[ok]: SUNION with non existing keys - intset
[ok]: SDIFF with two sets - intset
[ok]: SDIFF with three sets - intset
[ok]: SDIFFSTORE with three sets - intset
[ok]: SDIFF with first set empty
[ok]: SDIFF with same set two times
[ok]: XADD with MINID option
[ok]: XTRIM with MINID option
[ok]: Blocking XREADGROUP will not reply with an empty array
[ok]: XGROUP DESTROY should unblock XREADGROUP with -NOGROUP
[ok]: RENAME can unblock XREADGROUP with data
[ok]: RENAME can unblock XREADGROUP with -NOGROUP
[ok]: ZSCAN scores: regression test for issue #2175
[ok]: Unsafe command names are sanitized in INFO output
[ok]: HRANDFIELD with <count> - ziplist
[ok]: AUTH fails when binary password is wrong
[ok]: AUTH succeeds when binary password is correct
[ok]: Old Ziplist: SORT BY key
[ok]: Old Ziplist: SORT BY key with limit
[ok]: Old Ziplist: SORT BY hash field
[ok]: Protocol desync regression test #3
[ok]: HSET/HLEN - Big hash creation
[ok]: Is the big hash encoded with an hash table?
[ok]: HGET against the small hash
[ok]: HGET against the big hash
[ok]: HGET against non existing key
[ok]: HSET in update and insert mode
[ok]: HSETNX target key missing - small hash
[ok]: HSETNX target key exists - small hash
[ok]: HSETNX target key missing - big hash
[ok]: HSETNX target key exists - big hash
[ok]: HMSET wrong number of args
[ok]: HMSET - small hash
[3/64 done]: unit/info (1 seconds)
Testing unit/expire
[ok]: HMSET - big hash
[ok]: HMGET against non existing key and fields
[ok]: HMGET against wrong type
[ok]: HMGET - small hash
[ok]: EXPIRE - set timeouts multiple times
[ok]: EXPIRE - It should be still possible to read 'x'
[ok]: HMGET - big hash
[ok]: HKEYS - small hash
[ok]: HKEYS - big hash
[ok]: HVALS - small hash
[ok]: HVALS - big hash
[ok]: HGETALL - small hash
[ok]: Old Linked list: SORT BY key
[ok]: Old Linked list: SORT BY key with limit
[ok]: HGETALL - big hash
[ok]: HDEL and return value
[ok]: HDEL - more than a single value
[ok]: HDEL - hash becomes empty before deleting all specified fields
[ok]: HEXISTS
[ok]: Is a ziplist encoded Hash promoted on big payload?
[ok]: HINCRBY against non existing database key
[ok]: HINCRBY against non existing hash key
[ok]: Regression for a crash with blocking ops and pipelining
[ok]: HINCRBY against hash key created by hincrby itself
[ok]: HINCRBY against hash key originally set with HSET
[ok]: HINCRBY over 32bit value
[ok]: HINCRBY over 32bit value with over 32bit increment
[ok]: HINCRBY fails against hash value with spaces (left)
[ok]: HINCRBY fails against hash value with spaces (right)
[ok]: HINCRBY can detect overflows
[ok]: HINCRBYFLOAT against non existing database key
[ok]: HINCRBYFLOAT against non existing hash key
[ok]: Old Linked list: SORT BY hash field
[ok]: HINCRBYFLOAT against hash key created by hincrby itself
[ok]: HINCRBYFLOAT against hash key originally set with HSET
[ok]: HINCRBYFLOAT over 32bit value
[ok]: HINCRBYFLOAT over 32bit value with over 32bit increment
[ok]: HINCRBYFLOAT fails against hash value with spaces (left)
[ok]: HINCRBYFLOAT fails against hash value with spaces (right)
[ok]: HINCRBYFLOAT fails against hash value that contains a null-terminator in the middle
[ok]: HSTRLEN against the small hash
[ok]: XCLAIM can claim PEL items from another consumer
[ok]: HSTRLEN against the big hash
[ok]: HSTRLEN against non existing field
[ok]: HSTRLEN corner cases
[ok]: Hash ziplist regression test for large keys
[ok]: DEL against expired key
[ok]: EXISTS
[ok]: Zero length value in key. SET/GET/EXISTS
[ok]: Commands pipelining
[ok]: Non existing command
[ok]: RENAME basic usage
[ok]: RENAME source key should no longer exist
[ok]: RENAME against already existing key
[ok]: RENAMENX basic usage
[ok]: RENAMENX against already existing key
[ok]: RENAMENX against already existing key (2)
[ok]: RENAME against non existing source key
[ok]: RENAME where source and dest key are the same (existing)
[ok]: RENAMENX where source and dest key are the same (existing)
[ok]: RENAME where source and dest key are the same (non existing)
[ok]: RENAME with volatile key, should move the TTL as well
[ok]: RENAME with volatile key, should not inherit TTL of target key
[ok]: DEL all keys again (DB 0)
[ok]: DEL all keys again (DB 1)
[ok]: COPY basic usage for string
[ok]: COPY for string does not replace an existing key without REPLACE option
[ok]: COPY for string can replace an existing key with REPLACE option
[ok]: COPY for string ensures that copied data is independent of copying data
[ok]: COPY for string does not copy data to no-integer DB
[ok]: COPY can copy key expire metadata as well
[ok]: COPY does not create an expire if it does not exist
[ok]: COPY basic usage for list
[ok]: COPY basic usage for intset set
[ok]: COPY basic usage for hashtable set
[ok]: COPY basic usage for ziplist sorted set
[ok]: Hash fuzzing #1 - 10 fields
[ok]: COPY basic usage for skiplist sorted set
[ok]: COPY basic usage for ziplist hash
[ok]: BRPOPLPUSH timeout
[ok]: BLPOP when new key is moved into place
[ok]: BLPOP when result key is created by SORT..STORE
[ok]: Hash fuzzing #2 - 10 fields
[ok]: COPY basic usage for hashtable hash
[ok]: BLPOP: with single empty list argument
[ok]: BLPOP: with negative timeout
[ok]: BLPOP: with non-integer timeout
[4/64 done]: unit/protocol (1 seconds)
Testing unit/other
[ok]: SAVE - make sure there are all the types as values
[ok]: COPY basic usage for stream
[ok]: COPY basic usage for stream-cgroups
[ok]: MOVE basic usage
[ok]: MOVE against key existing in the target DB
[ok]: MOVE against non-integer DB (#1428)
[ok]: MOVE can move key expire metadata as well
[ok]: MOVE does not create an expire if it does not exist
[ok]: SET/GET keys in different DBs
[ok]: XCLAIM without JUSTID increments delivery count
[ok]: RANDOMKEY
[ok]: RANDOMKEY against empty DB
[ok]: RANDOMKEY regression 1
[ok]: KEYS * two times with long key, Github issue #1208
[5/64 done]: unit/keyspace (2 seconds)
Testing unit/multi
[ok]: XCLAIM same consumer
[ok]: MULTI / EXEC basics
[ok]: DISCARD
[ok]: Nested MULTI are not allowed
[ok]: MULTI where commands alter argc/argv
[ok]: WATCH inside MULTI is not allowed
[ok]: EXEC fails if there are errors while queueing commands #1
[ok]: EXEC fails if there are errors while queueing commands #2
[ok]: If EXEC aborts, the client MULTI state is cleared
[ok]: EXEC works on WATCHed key not modified
[ok]: EXEC fail on WATCHed key modified (1 key of 1 watched)
[ok]: EXEC fail on WATCHed key modified (1 key of 5 watched)
[ok]: EXEC fail on WATCHed key modified by SORT with STORE even if the result is empty
[ok]: After successful EXEC key is no longer watched
[ok]: After failed EXEC key is no longer watched
[ok]: It is possible to UNWATCH
[ok]: UNWATCH when there is nothing watched works as expected
[ok]: FLUSHALL is able to touch the watched keys
[ok]: MASTERAUTH test with binary password
[ok]: FLUSHALL does not touch non affected keys
[ok]: FLUSHDB is able to touch the watched keys
[ok]: FLUSHDB does not touch non affected keys
[ok]: SWAPDB is able to touch the watched keys that exist
[ok]: SWAPDB is able to touch the watched keys that do not exist
[ok]: WATCH is able to remember the DB a key belongs to
[ok]: WATCH will consider touched keys target of EXPIRE
[6/64 done]: unit/auth (2 seconds)
Testing unit/quit
[ok]: BLPOP: with zero timeout should block indefinitely
[ok]: BLPOP: second argument is not a list
[ok]: QUIT returns OK
[ok]: Pipelined commands after QUIT must not be executed
[ok]: Pipelined commands after QUIT that exceed read buffer size
[ok]: XAUTOCLAIM can claim PEL items from another consumer
[ok]: Very big payload random access
[7/64 done]: unit/quit (1 seconds)
Testing unit/aofrw
[ok]: XAUTOCLAIM as an iterator
[ok]: XAUTOCLAIM COUNT must be > 0
[ok]: XINFO FULL output
[ok]: XGROUP CREATECONSUMER: create consumer if does not exist
[ok]: XGROUP CREATECONSUMER: group must exist
[ok]: FUZZ stresser with data model binary
[ok]: XREADGROUP with NOACK creates consumer
[ok]: WATCH will consider touched expired keys
[ok]: DISCARD should clear the WATCH dirty flag on the client
[ok]: DISCARD should UNWATCH all the keys
[ok]: MULTI / EXEC is propagated correctly (single write command)
[ok]: EXPIRE - After 2.1 seconds the key should no longer be here
[ok]: EXPIRE - write on expire should work
[ok]: EXPIREAT - Check for EXPIRE alike behavior
[ok]: SETEX - Set + Expire combo operation. Check for TTL
[ok]: SETEX - Check value
[ok]: SETEX - Overwrite old key
[ok]: MULTI / EXEC is propagated correctly (empty transaction)
[ok]: MULTI / EXEC is propagated correctly (read-only commands)
[ok]: BLPOP: timeout
[ok]: BLPOP: arguments are empty
[ok]: BRPOP: with single empty list argument
[ok]: BRPOP: with negative timeout
[ok]: BRPOP: with non-integer timeout
[ok]: MULTI / EXEC is propagated correctly (write command, no effect)
[ok]: DISCARD should not fail during OOM
[ok]: Hash fuzzing #1 - 512 fields
[ok]: Consumer without PEL is present in AOF after AOFRW
[ok]: FUZZ stresser with data model alpha
[ok]: MULTI and script timeout
[ok]: Consumer group last ID propagation to slave (NOACK=0)
[ok]: SETEX - Wait for the key to expire
[ok]: SETEX - Wrong time parameter
[ok]: PERSIST can undo an EXPIRE
[ok]: PERSIST returns 0 against non existing or non volatile keys
[ok]: EXEC and script timeout
[ok]: Consumer group last ID propagation to slave (NOACK=1)
[ok]: SET 10000 numeric keys and access all them in reverse order
[ok]: DBSIZE should be 10000 now
[ok]: SETNX target key missing
[ok]: SETNX target key exists
[ok]: SETNX against not-expired volatile key
[ok]: BRPOP: with zero timeout should block indefinitely
[ok]: BRPOP: second argument is not a list
[ok]: MULTI-EXEC body and script timeout
[ok]: XADD mass insertion and XLEN
[ok]: XADD with ID 0-0
[ok]: XRANGE COUNT works as expected
[ok]: XREVRANGE COUNT works as expected
[ok]: just EXEC and script timeout
[ok]: exec with write commands and state change
[ok]: exec with read commands and stale replica state change
[ok]: EXEC with only read commands should not be rejected when OOM
[ok]: EXEC with at least one use-memory command should fail
[ok]: Blocking commands ignores the timeout
[ok]: MULTI propagation of PUBLISH
[ok]: FUZZ stresser with data model compr
[ok]: MULTI propagation of SCRIPT LOAD
[ok]: MULTI propagation of SCRIPT LOAD
[ok]: MULTI propagation of XREADGROUP
[ok]: BRPOP: timeout
[ok]: BRPOP: arguments are empty
[ok]: BLPOP inside a transaction
[ok]: LPUSHX, RPUSHX - generic
[ok]: LPUSHX, RPUSHX - linkedlist
[ok]: LINSERT - linkedlist
[ok]: LPUSHX, RPUSHX - ziplist
[ok]: LINSERT - ziplist
[ok]: LINSERT raise error on bad syntax
[ok]: LINDEX consistency test - quicklist
[ok]: Empty stream with no lastid can be rewrite into AOF correctly
[8/64 done]: unit/multi (4 seconds)
Testing unit/acl
[ok]: Old Big Linked list: SORT BY key
[ok]: Old Big Linked list: SORT BY key with limit
[ok]: LINDEX random access - quicklist
[ok]: XRANGE can be used to iterate the whole stream
[ok]: Connections start with the default user
[ok]: It is possible to create new users
[ok]: New users start disabled
[ok]: Enabling the user allows the login
[ok]: Only the set of correct passwords work
[ok]: It is possible to remove passwords from the set of valid ones
[ok]: Test password hashes can be added
[ok]: Test password hashes validate input
[ok]: ACL GETUSER returns the password hash instead of the actual password
[ok]: Test hashed passwords removal
[ok]: By default users are not able to access any command
[ok]: By default users are not able to access any key
[ok]: It's possible to allow the access of a subset of keys
[ok]: By default users are able to publish to any channel
[ok]: By default users are able to subscribe to any channel
[ok]: By default users are able to subscribe to any pattern
[ok]: It's possible to allow publishing to a subset of channels
[ok]: Validate subset of channels is prefixed with resetchannels flag
[ok]: In transaction queue publish/subscribe/psubscribe to unauthorized channel will fail
[ok]: It's possible to allow subscribing to a subset of channels
[ok]: It's possible to allow subscribing to a subset of channel patterns
[ok]: Subscribers are killed when revoked of channel permission
[ok]: Subscribers are killed when revoked of pattern permission
[ok]: Subscribers are pardoned if literal permissions are retained and/or gaining allchannels
[ok]: Users can be configured to authenticate with any password
[ok]: ACLs can exclude single commands
[ok]: ACLs can include or exclude whole classes of commands
[ok]: ACLs can include single subcommands
[ok]: ACLs set can include subcommands, if already full command exists
[ok]: ACL GETUSER is able to translate back command permissions
[ok]: ACL GETUSER provides reasonable results
[ok]: ACL #5998 regression: memory leaks adding / removing subcommands
[ok]: ACL LOG shows failed command executions at toplevel
[ok]: ACL LOG is able to test similar events
[ok]: ACL LOG is able to log keys access violations and key name
[ok]: ACL LOG is able to log channel access violations and channel name
[ok]: ACL LOG RESET is able to flush the entries in the log
[ok]: ACL LOG can distinguish the transaction context (1)
[ok]: ACL LOG can distinguish the transaction context (2)
[ok]: ACL can log errors in the context of Lua scripting
[ok]: ACL LOG can accept a numerical argument to show less entries
[ok]: ACL LOG can log failed auth attempts
[ok]: ACL LOG entries are limited to a maximum amount
[ok]: When default user is off, new connections are not authenticated
[ok]: When default user has no command permission, hello command still works for other users
[ok]: ACL HELP should not have unexpected options
[ok]: Delete a user that the client doesn't use
[ok]: Delete a user that the client is using
[ok]: Hash fuzzing #2 - 512 fields
[ok]: Check if list is still ok after a DEBUG RELOAD - quicklist
[9/64 done]: unit/type/stream-cgroups (6 seconds)
Testing unit/latency-monitor
[ok]: LINDEX consistency test - quicklist
[ok]: EXPIRE precision is now the millisecond
[ok]: BGSAVE
[ok]: SELECT an out of range DB
[ok]: default: load from include file, can access any channels
[ok]: default: with config acl-pubsub-default allchannels after reset, can access any channels
[ok]: default: with config acl-pubsub-default resetchannels after reset, can not access any channels
[ok]: Alice: can execute all commands
[ok]: Bob: can only execute @set and acl commands
[ok]: ACL load and save
[ok]: ACL load and save with restricted channels
[ok]: LINDEX random access - quicklist
[ok]: Check if list is still ok after a DEBUG RELOAD - quicklist
[ok]: LLEN against non-list value error
[ok]: LLEN against non existing key
[ok]: LINDEX against non-list value error
[ok]: LINDEX against non existing key
[ok]: LPUSH against non-list value error
[ok]: RPUSH against non-list value error
[ok]: RPOPLPUSH base case - linkedlist
[ok]: LMOVE left left base case - linkedlist
[ok]: LMOVE left right base case - linkedlist
[ok]: Default user has access to all channels irrespective of flag
[ok]: Update acl-pubsub-default, existing users shouldn't get affected
[ok]: Single channel is valid
[ok]: Single channel is not valid with allchannels
[ok]: LMOVE right left base case - linkedlist
[ok]: LMOVE right right base case - linkedlist
[ok]: RPOPLPUSH with the same list as src and dst - linkedlist
[ok]: LMOVE left left with the same list as src and dst - linkedlist
[ok]: LMOVE left right with the same list as src and dst - linkedlist
[ok]: LMOVE right left with the same list as src and dst - linkedlist
[ok]: LMOVE right right with the same list as src and dst - linkedlist
[ok]: RPOPLPUSH with linkedlist source and existing target linkedlist
[ok]: LMOVE left left with linkedlist source and existing target linkedlist
[ok]: LMOVE left right with linkedlist source and existing target linkedlist
[ok]: LMOVE right left with linkedlist source and existing target linkedlist
[ok]: LMOVE right right with linkedlist source and existing target linkedlist
[ok]: RPOPLPUSH with linkedlist source and existing target ziplist
[ok]: LMOVE left left with linkedlist source and existing target ziplist
[ok]: LMOVE left right with linkedlist source and existing target ziplist
[ok]: LMOVE right left with linkedlist source and existing target ziplist
[ok]: LMOVE right right with linkedlist source and existing target ziplist
[ok]: RPOPLPUSH base case - ziplist
[ok]: LMOVE left left base case - ziplist
[ok]: LMOVE left right base case - ziplist
[ok]: LMOVE right left base case - ziplist
[ok]: LMOVE right right base case - ziplist
[ok]: RPOPLPUSH with the same list as src and dst - ziplist
[ok]: LMOVE left left with the same list as src and dst - ziplist
[ok]: LMOVE left right with the same list as src and dst - ziplist
[ok]: LMOVE right left with the same list as src and dst - ziplist
[ok]: LMOVE right right with the same list as src and dst - ziplist
[ok]: RPOPLPUSH with ziplist source and existing target linkedlist
[ok]: LMOVE left left with ziplist source and existing target linkedlist
[ok]: LMOVE left right with ziplist source and existing target linkedlist
[ok]: LMOVE right left with ziplist source and existing target linkedlist
[ok]: LMOVE right right with ziplist source and existing target linkedlist
[ok]: RPOPLPUSH with ziplist source and existing target ziplist
[ok]: LMOVE left left with ziplist source and existing target ziplist
[ok]: LMOVE left right with ziplist source and existing target ziplist
[ok]: LMOVE right left with ziplist source and existing target ziplist
[ok]: LMOVE right right with ziplist source and existing target ziplist
[ok]: RPOPLPUSH against non existing key
[ok]: RPOPLPUSH against non list src key
[ok]: RPOPLPUSH against non list dst key
[ok]: RPOPLPUSH against non existing src key
[ok]: Basic LPOP/RPOP - linkedlist
[ok]: Basic LPOP/RPOP - ziplist
[ok]: LPOP/RPOP against non list value
[ok]: PEXPIRE/PSETEX/PEXPIREAT can set sub-second expires
[ok]: TTL returns time to live in seconds
[ok]: PTTL returns time to live in milliseconds
[ok]: TTL / PTTL return -1 if key has no expire
[ok]: TTL / PTTL return -2 if key does not exit
[ok]: Mass RPOP/LPOP - quicklist
[ok]: Mass RPOP/LPOP - quicklist
[ok]: LRANGE basics - linkedlist
[ok]: LRANGE inverted indexes - linkedlist
[ok]: LRANGE out of range indexes including the full list - linkedlist
[ok]: LRANGE out of range negative end index - linkedlist
[ok]: LRANGE basics - ziplist
[ok]: LRANGE inverted indexes - ziplist
[ok]: LRANGE out of range indexes including the full list - ziplist
[ok]: LRANGE out of range negative end index - ziplist
[ok]: LRANGE against non existing key
[ok]: LRANGE with start > end yields an empty array for backward compatibility
[ok]: LTRIM basics - linkedlist
[ok]: LTRIM out of range negative end index - linkedlist
[ok]: LTRIM basics - ziplist
[ok]: Old Big Linked list: SORT BY hash field
[ok]: LTRIM out of range negative end index - ziplist
[ok]: LSET - linkedlist
[ok]: LSET out of range index - linkedlist
[ok]: LSET - ziplist
[ok]: LSET out of range index - ziplist
[ok]: LSET against non existing key
[ok]: LSET against non list value
[ok]: LREM remove all the occurrences - linkedlist
[ok]: LREM remove the first occurrence - linkedlist
[ok]: LREM remove non existing element - linkedlist
[ok]: Intset: SORT BY key
[ok]: Intset: SORT BY key with limit
[ok]: Intset: SORT BY hash field
[ok]: LREM starting from tail with negative count - linkedlist
[ok]: LREM starting from tail with negative count (2) - linkedlist
[ok]: LREM deleting objects that may be int encoded - linkedlist
[ok]: LREM remove all the occurrences - ziplist
[ok]: LREM remove the first occurrence - ziplist
[ok]: LREM remove non existing element - ziplist
[ok]: LREM starting from tail with negative count - ziplist
[ok]: LREM starting from tail with negative count (2) - ziplist
[ok]: LREM deleting objects that may be int encoded - ziplist
[ok]: Only default user has access to all channels irrespective of flag
[ok]: SETNX against expired volatile key
[ok]: GETEX EX option
[ok]: GETEX PX option
[ok]: GETEX EXAT option
[ok]: GETEX PXAT option
[ok]: GETEX PERSIST option
[ok]: GETEX no option
[ok]: GETEX syntax errors
[ok]: GETEX no arguments
[ok]: GETDEL command
[ok]: default: load from config file, can access any channels
[ok]: Hash table: SORT BY key
[ok]: GETDEL propagate as DEL command to replica
[ok]: Hash table: SORT BY key with limit
[ok]: Hash table: SORT BY hash field
[10/64 done]: unit/acl (2 seconds)
Testing integration/block-repl
[ok]: Check consistency of different data types after a reload
[ok]: Redis should actively expire keys incrementally
[ok]: ZDIFF fuzzing - ziplist
[ok]: Basic ZPOP with a single key - ziplist
[ok]: ZPOP with count - ziplist
[ok]: BZPOP with a single existing sorted set - ziplist
[ok]: BZPOP with multiple existing sorted sets - ziplist
[ok]: BZPOP second sorted set has members - ziplist
[ok]: Check encoding - skiplist
[ok]: ZSET basic ZADD and score update - skiplist
[ok]: ZSET element can't be set to NaN with ZADD - skiplist
[ok]: ZSET element can't be set to NaN with ZINCRBY - skiplist
[ok]: ZADD with options syntax error with incomplete pair - skiplist
[ok]: ZADD XX option without key - skiplist
[ok]: ZADD XX existing key - skiplist
[ok]: ZADD XX returns the number of elements actually added - skiplist
[ok]: ZADD XX updates existing elements score - skiplist
[ok]: ZADD GT updates existing elements when new scores are greater - skiplist
[ok]: ZADD LT updates existing elements when new scores are lower - skiplist
[ok]: ZADD GT XX updates existing elements when new scores are greater and skips new elements - skiplist
[ok]: ZADD LT XX updates existing elements when new scores are lower and skips new elements - skiplist
[ok]: ZADD XX and NX are not compatible - skiplist
[ok]: ZADD NX with non existing key - skiplist
[ok]: ZADD NX only add new elements without updating old ones - skiplist
[ok]: ZADD GT and NX are not compatible - skiplist
[ok]: ZADD LT and NX are not compatible - skiplist
[ok]: ZADD LT and GT are not compatible - skiplist
[ok]: ZADD INCR LT/GT replies with nil if score not updated - skiplist
[ok]: ZADD INCR LT/GT with inf - skiplist
[ok]: ZADD INCR works like ZINCRBY - skiplist
[ok]: ZADD INCR works with a single score-elemenet pair - skiplist
[ok]: ZADD CH option changes return value to all changed elements - skiplist
[ok]: ZINCRBY calls leading to NaN result in error - skiplist
[ok]: ZADD - Variadic version base case - $encoding
[ok]: ZADD - Return value is the number of actually added items - $encoding
[ok]: ZADD - Variadic version does not add anything on single parsing err - $encoding
[ok]: ZADD - Variadic version will raise error on missing arg - $encoding
[ok]: ZINCRBY does not work variadic even if shares ZADD implementation - $encoding
[ok]: ZCARD basics - skiplist
[ok]: ZREM removes key after last element is removed - skiplist
[ok]: ZREM variadic version - skiplist
[ok]: ZREM variadic version -- remove elements after key deletion - skiplist
[ok]: ZRANGE basics - skiplist
[ok]: ZREVRANGE basics - skiplist
[ok]: ZRANK/ZREVRANK basics - skiplist
[ok]: ZRANK - after deletion - skiplist
[ok]: ZINCRBY - can create a new sorted set - skiplist
[ok]: ZINCRBY - increment and decrement - skiplist
[ok]: ZINCRBY return value - skiplist
[ok]: ZRANGEBYSCORE/ZREVRANGEBYSCORE/ZCOUNT basics - skiplist
[ok]: ZRANGEBYSCORE with WITHSCORES - skiplist
[ok]: ZRANGEBYSCORE with LIMIT - skiplist
[ok]: ZRANGEBYSCORE with LIMIT and WITHSCORES - skiplist
[ok]: ZRANGEBYSCORE with non-value min or max - skiplist
[ok]: ZRANGEBYLEX/ZREVRANGEBYLEX/ZLEXCOUNT basics - skiplist
[ok]: ZLEXCOUNT advanced - skiplist
[ok]: ZRANGEBYLEX with LIMIT - skiplist
[ok]: ZRANGEBYLEX with invalid lex range specifiers - skiplist
[ok]: ZREMRANGEBYSCORE basics - skiplist
[ok]: ZREMRANGEBYSCORE with non-value min or max - skiplist
[ok]: ZREMRANGEBYRANK basics - skiplist
[ok]: ZUNIONSTORE against non-existing key doesn't set destination - skiplist
[ok]: ZUNION/ZINTER/ZDIFF against non-existing key - skiplist
[ok]: ZUNIONSTORE with empty set - skiplist
[ok]: ZUNION/ZINTER/ZDIFF with empty set - skiplist
[ok]: ZUNIONSTORE basics - skiplist
[ok]: ZUNION/ZINTER/ZDIFF with integer members - skiplist
[ok]: ZUNIONSTORE with weights - skiplist
[ok]: ZUNION with weights - skiplist
[ok]: ZUNIONSTORE with a regular set and weights - skiplist
[ok]: ZUNIONSTORE with AGGREGATE MIN - skiplist
[ok]: ZUNION/ZINTER with AGGREGATE MIN - skiplist
[ok]: ZUNIONSTORE with AGGREGATE MAX - skiplist
[ok]: ZUNION/ZINTER with AGGREGATE MAX - skiplist
[ok]: ZINTERSTORE basics - skiplist
[ok]: ZINTER basics - skiplist
[ok]: ZINTER RESP3 - skiplist
[ok]: ZINTERSTORE with weights - skiplist
[ok]: ZINTER with weights - skiplist
[ok]: ZINTERSTORE with a regular set and weights - skiplist
[ok]: ZINTERSTORE with AGGREGATE MIN - skiplist
[ok]: ZINTERSTORE with AGGREGATE MAX - skiplist
[ok]: ZUNIONSTORE with +inf/-inf scores - skiplist
[ok]: ZUNIONSTORE with NaN weights - skiplist
[ok]: ZINTERSTORE with +inf/-inf scores - skiplist
[ok]: ZINTERSTORE with NaN weights - skiplist
[ok]: ZDIFFSTORE basics - skiplist
[ok]: ZDIFF basics - skiplist
[ok]: ZDIFFSTORE with a regular set - skiplist
[ok]: ZDIFF subtracting set from itself - skiplist
[ok]: ZDIFF algorithm 1 - skiplist
[ok]: ZDIFF algorithm 2 - skiplist
[ok]: Regression for bug 593 - chaining BRPOPLPUSH with other blocking cmds
[ok]: client unblock tests
[ok]: List ziplist of various encodings
[ok]: List ziplist of various encodings - sanitize dump
[ok]: SDIFF fuzzing
[ok]: SINTER against non-set should throw error
[ok]: SUNION against non-set should throw error
[ok]: SINTER should handle non existing key as empty
[ok]: SINTER with same integer elements but different encoding
[ok]: SINTERSTORE against non existing keys should delete dstkey
[ok]: SUNIONSTORE against non existing keys should delete dstkey
[ok]: SPOP basics - hashtable
[ok]: SPOP with <count>=1 - hashtable
[ok]: SRANDMEMBER - hashtable
[ok]: SPOP basics - intset
[ok]: SPOP with <count>=1 - intset
[ok]: SRANDMEMBER - intset
[ok]: SPOP with <count>
[ok]: SPOP with <count>
[ok]: SPOP using integers, testing Knuth's and Floyd's algorithm
[ok]: SPOP using integers with Knuth's algorithm
[ok]: SPOP new implementation: code path #1
[ok]: SPOP new implementation: code path #2
[ok]: SPOP new implementation: code path #3
[ok]: SRANDMEMBER with <count> against non existing key
[ok]: GETEX without argument does not propagate to replica
[ok]: MGET
[ok]: MGET against non existing key
[ok]: MGET against non-string key
[ok]: GETSET (set new value)
[ok]: GETSET (replace old value)
[ok]: MSET base case
[ok]: MSET wrong number of args
[ok]: MSETNX with already existent key
[ok]: MSETNX with not existing keys
[ok]: STRLEN against non-existing key
[ok]: STRLEN against integer-encoded value
[ok]: STRLEN against plain string
[ok]: SETBIT against non-existing key
[ok]: SETBIT against string-encoded key
[ok]: SETBIT against integer-encoded key
[ok]: SETBIT against key with wrong type
[ok]: SETBIT with out of range bit offset
[ok]: SETBIT with non-bit argument
[ok]: SRANDMEMBER with <count> - hashtable
[ok]: SRANDMEMBER with <count> - intset
[11/64 done]: unit/type/list (9 seconds)
Testing integration/replication
[ok]: SRANDMEMBER histogram distribution - hashtable
[ok]: Stress test the hash ziplist -> hashtable encoding conversion
[ok]: Test HINCRBYFLOAT for correct float representation (issue #2846)
[ok]: Same dataset digest if saving/reloading as AOF?
[ok]: Redis should lazy expire keys
[ok]: Hash ziplist of various encodings
[ok]: Hash ziplist of various encodings - sanitize dump
[ok]: SRANDMEMBER histogram distribution - intset
[ok]: SMOVE basics - from regular set to intset
[ok]: SMOVE basics - from intset to regular set
[ok]: Slave enters handshake
[ok]: SMOVE non existing key
[ok]: SMOVE non existing src set
[ok]: SMOVE from regular set to non existing destination set
[ok]: SMOVE from intset to non existing destination set
[ok]: SMOVE wrong src key type
[ok]: SMOVE wrong dst key type
[ok]: SMOVE with identical source and destination
[ok]: First server should have role slave after SLAVEOF
[ok]: SETBIT fuzzing
[ok]: GETBIT against non-existing key
[ok]: GETBIT against string-encoded key
[ok]: GETBIT against integer-encoded key
[ok]: SETRANGE against non-existing key
[ok]: SETRANGE against string-encoded key
[ok]: SETRANGE against integer-encoded key
[ok]: SETRANGE against key with wrong type
[ok]: SETRANGE with out of range offset
[ok]: GETRANGE against non-existing key
[ok]: GETRANGE against string value
[ok]: GETRANGE against integer-encoded value
[12/64 done]: unit/type/hash (9 seconds)
Testing integration/replication-2
[ok]: XREVRANGE returns the reverse of XRANGE
[ok]: XRANGE exclusive ranges
[ok]: XREAD with non empty stream
[ok]: Non blocking XREAD with empty streams
[ok]: XREAD with non empty second stream
[ok]: Blocking XREAD waiting new data
[ok]: Blocking XREAD waiting old data
[ok]: Blocking XREAD will not reply with an empty array
[ok]: XREAD: XADD + DEL should not awake client
[ok]: XREAD: XADD + DEL + LPUSH should not awake client
[ok]: XREAD with same stream name multiple times should work
[ok]: XREAD + multiple XADD inside transaction
[ok]: XDEL basic test
[ok]: Test latency events logging
[ok]: LATENCY HISTORY output is ok
[ok]: LATENCY LATEST output is ok
[ok]: LATENCY HISTORY / RESET with wrong event name is fine
[ok]: LATENCY DOCTOR produces some output
[ok]: LATENCY RESET is able to reset events
[ok]: First server should have role slave after SLAVEOF
[ok]: If min-slaves-to-write is honored, write is accepted
[ok]: No write if min-slaves-to-write is < attached slaves
[ok]: If min-slaves-to-write is honored, write is accepted (again)
[ok]: EXPIRE should not resurrect keys (issue #1026)
[ok]: 5 keys in, 5 keys out
[ok]: EXPIRE with empty string as TTL should report an error
[ok]: SET with EX with big integer should report an error
[ok]: SET with EX with smallest integer should report an error
[ok]: GETEX with big integer should report an error
[ok]: GETEX with smallest integer should report an error
[ok]: EXPIRE with big integer overflows when converted to milliseconds
[ok]: PEXPIRE with big integer overflow when basetime is added
[ok]: EXPIRE with big negative integer
[ok]: PEXPIREAT with big integer works
[ok]: PEXPIREAT with big negative integer works
[ok]: EXPIRES after a reload (snapshot + append only file rewrite)
[ok]: Stress tester for #3343-alike bugs
[ok]: SCAN regression test for issue #4906
[13/64 done]: unit/scan (11 seconds)
Testing integration/replication-3
[ok]: GETRANGE fuzzing
[ok]: Extended SET can detect syntax errors
[ok]: Extended SET NX option
[ok]: Extended SET XX option
[ok]: Extended SET GET option
[ok]: Extended SET GET option with no previous value
[ok]: Extended SET GET with NX option should result in syntax err
[ok]: Extended SET GET with incorrect type should result in wrong type error
[ok]: Extended SET EX option
[ok]: Extended SET PX option
[ok]: Extended SET EXAT option
[ok]: Extended SET PXAT option
[ok]: Extended SET using multiple options at once
[ok]: GETRANGE with huge ranges, Github issue #1844
[ok]: STRALGO LCS string output with STRINGS option
[ok]: STRALGO LCS len
[ok]: LCS with KEYS option
[ok]: LCS indexes
[ok]: LCS indexes with match len
[ok]: LCS indexes with match len and minimum match len
[14/64 done]: unit/type/string (12 seconds)
Testing integration/replication-4
[ok]: intsets implementation stress testing
[ok]: EXPIRE and SET/GETEX EX/PX/EXAT/PXAT option, TTL should not be reset after loadaof
[ok]: First server should have role slave after SLAVEOF
[ok]: EXPIRE relative and absolute propagation to replicas
[ok]: SET command will remove expire
[ok]: SET - use KEEPTTL option, TTL should not be removed
[15/64 done]: unit/type/set (12 seconds)
Testing integration/replication-psync
[ok]: XDEL fuzz test
[ok]: LTRIM stress testing - linkedlist
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no reconnection, just sync (diskless: no, disabled, reconnect: 0)
[ok]: First server should have role slave after SLAVEOF
[ok]: EXPIRES after AOF reload (without rewrite)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[ok]: No write if min-slaves-max-lag is > of the slave lag
[ok]: min-slaves-to-write is ignored by slaves
[ok]: Big Hash table: SORT BY key
[ok]: Big Hash table: SORT BY key with limit
[ok]: SET - use KEEPTTL option, TTL should not be removed after loadaof
[ok]: GETEX use of PERSIST option should remove TTL
[ok]: ziplist implementation: value encoding and backlink
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: MIGRATE cached connections are released after some time
[ok]: Big Hash table: SORT BY hash field
[ok]: SORT GET #
[ok]: SORT GET <const>
[ok]: SORT GET (key and hash) with sanity check
[ok]: SORT BY key STORE
[ok]: SORT BY hash field STORE
[ok]: SORT extracts STORE correctly
[ok]: SORT extracts multiple STORE correctly
[ok]: SORT DESC
[ok]: SORT ALPHA against integer encoded strings
[ok]: SORT sorted set
[ok]: SORT sorted set BY nosort should retain ordering
[ok]: SORT sorted set BY nosort + LIMIT
[ok]: SORT sorted set BY nosort works as expected from scripts
[ok]: SORT sorted set: +inf and -inf handling
[ok]: SORT regression for issue #19, sorting floats
[ok]: SORT with STORE returns zero if result is empty (github issue 224)
[ok]: SORT with STORE does not create empty lists (github issue 224)
[ok]: SORT with STORE removes key if result is empty (github issue 227)
[ok]: SORT with BY <constant> and STORE should still order output
[ok]: SORT will complain with numerical sorting and bad doubles (1)
[ok]: SORT will complain with numerical sorting and bad doubles (2)
[ok]: SORT BY sub-sorts lexicographically if score is the same
[ok]: SORT GET with pattern ending with just -> does not get hash field
[ok]: SORT by nosort retains native order for lists
[ok]: SORT by nosort plus store retains native order for lists
[ok]: SORT by nosort with limit returns based on original list order
[ok]: MIGRATE is able to migrate a key between two instances
[ok]: SORT speed, 100 element list BY key, 100 times
[ok]: SORT speed, 100 element list BY hash field, 100 times
[ok]: SORT speed, 100 element list directly, 100 times
[ok]: SORT speed, 100 element list BY <const>, 100 times
[ok]: PIPELINING stresser (also a regression for the old epoll bug)
[ok]: APPEND basics
[ok]: APPEND basics, integer encoded values
[ok]: GETEX use of PERSIST option should remove TTL after loadaof
[ok]: GETEX propagate as to replica as PERSIST, DEL, or nothing
[ok]: MIGRATE is able to copy a key between two instances
[16/64 done]: unit/sort (15 seconds)
Testing integration/aof
[17/64 done]: unit/expire (15 seconds)
Testing integration/rdb
[ok]: Unfinished MULTI: Server should start if load-truncated is yes
[ok]: APPEND fuzzing
[ok]: MIGRATE will not overwrite existing keys, unless REPLACE is used
[ok]: RDB encoding loading test
[ok]: FLUSHDB
[ok]: Short read: Server should start if load-truncated is yes
[ok]: Truncated AOF loaded: we expect foo to be equal to 5
[ok]: Append a new command after loading an incomplete AOF
[ok]: Perform a final SAVE to leave a clean DB on disk
[ok]: RESET clears client state
[ok]: RESET clears MONITOR state
[ok]: RESET clears and discards MULTI state
[ok]: RESET clears Pub/Sub state
[ok]: RESET clears authenticated state
[ok]: MIGRATE propagates TTL correctly
[ok]: Short read + command: Server should start
[ok]: Truncated AOF loaded: we expect foo to be equal to 6 now
[ok]: ZDIFF fuzzing - skiplist
[ok]: Basic ZPOP with a single key - skiplist
[ok]: ZPOP with count - skiplist
[ok]: BZPOP with a single existing sorted set - skiplist
[ok]: BZPOP with multiple existing sorted sets - skiplist
[ok]: BZPOP second sorted set has members - skiplist
[ok]: ZINTERSTORE regression with two sets, intset+hashtable
[ok]: ZUNIONSTORE regression, should not create NaN in scores
[ok]: ZINTERSTORE #516 regression, mixed sets and ziplist zsets
[ok]: Server started empty with non-existing RDB file
[ok]: ZUNIONSTORE result is sorted
[ok]: ZUNIONSTORE/ZINTERSTORE/ZDIFFSTORE error if using WITHSCORES 
[ok]: ZMSCORE retrieve
[ok]: ZMSCORE retrieve from empty set
[ok]: ZMSCORE retrieve with missing member
[ok]: ZMSCORE retrieve single member
[ok]: ZMSCORE retrieve requires one or more members
[ok]: ZSET commands don't accept the empty strings as valid score
[ok]: ZSCORE - ziplist
[ok]: ZMSCORE - ziplist
[ok]: Bad format: Server should have logged an error
[ok]: ZSCORE after a DEBUG RELOAD - ziplist
[ok]: ZSET sorting stresser - ziplist
[ok]: Unfinished MULTI: Server should have logged an error
[ok]: Server started empty with empty RDB file
[ok]: Short read: Server should have logged an error
[ok]: Short read: Utility should confirm the AOF is not valid
[ok]: Short read: Utility should show the abnormal line num in AOF
[ok]: Short read: Utility should be able to fix the AOF
[ok]: Fixed AOF: Server should have been started
[ok]: Fixed AOF: Keyspace should contain values that were parseable
[ok]: Don't rehash if redis has child process
[ok]: Test replication with parallel clients writing in different DBs
[ok]: AOF+SPOP: Server should have been started
[ok]: AOF+SPOP: Set should have 1 member
[ok]: Process title set as expected
[ok]: Test RDB stream encoding
[ok]: Test RDB stream encoding - sanitize dump
[ok]: AOF+SPOP: Server should have been started
[ok]: AOF+SPOP: Set should have 1 member
[ok]: Slave is able to detect timeout during handshake
[18/64 done]: unit/other (18 seconds)
Testing integration/corrupt-dump
I/O error reading reply
    while executing
"{*}$r zadd $k $d $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: Server should not start if RDB file can't be open
[ok]: Server should not start if RDB is corrupted
[ok]: AOF+EXPIRE: Server should have been started
[ok]: AOF+EXPIRE: List should be empty
[ok]: Test FLUSHALL aborts bgsave
[ok]: corrupt payload: #7445 - with sanitize
[ok]: First server should have role slave after SLAVEOF
[ok]: With min-slaves-to-write (1,3): master should be writable
[ok]: With min-slaves-to-write (2,3): master should not be writable
[ok]: bgsave resets the change counter
[ok]: Redis should not try to convert DEL into EXPIREAT for EXPIRE -1
[ok]: corrupt payload: #7445 - without sanitize - 1
[ok]: Set instance A as slave of B
[ok]: corrupt payload: #7445 - without sanitize - 2
[ok]: corrupt payload: hash with valid zip list header, invalid entry len
[ok]: corrupt payload: invalid zlbytes header
[ok]: corrupt payload: valid zipped hash header, dup records
[ok]: INCRBYFLOAT replication, should not remove expire
[ok]: GETSET replication
[ok]: BRPOPLPUSH replication, when blocking against empty list
[ok]: corrupt payload: quicklist big ziplist prev len
[ok]: corrupt payload: quicklist small ziplist prev len
[ok]: Test replication partial resync: ok psync (diskless: no, disabled, reconnect: 1)
[ok]: corrupt payload: quicklist ziplist wrong count
[ok]: BRPOPLPUSH replication, list exists
[ok]: BLMOVE (left, left) replication, when blocking against empty list
I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r hset $k $f $v}  {{*}$r hdel $k $f}"
    (procedure "createComplexDataset" line 80)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: ZRANGEBYSCORE fuzzy test, 100 ranges in 128 element sorted set - ziplist
[ok]: corrupt payload: #3080 - quicklist
[ok]: ZRANGEBYLEX fuzzy test, 100 ranges in 128 element sorted set - ziplist
[ok]: corrupt payload: #3080 - ziplist
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: corrupt payload: load corrupted rdb with no CRC - #3505
[ok]: ZREMRANGEBYLEX fuzzy test, 100 ranges in 128 element sorted set - ziplist
[ok]: ZSETs skiplist implementation backlink consistency test - ziplist
[ok]: BLMOVE (left, left) replication, list exists
[ok]: BLMOVE (left, right) replication, when blocking against empty list
[ok]: With min-slaves-to-write: master not writable with lagged slave
[ok]: corrupt payload: listpack invalid size header
[ok]: corrupt payload: listpack too long entry len
[ok]: corrupt payload: listpack very long entry len
[ok]: First server should have role slave after SLAVEOF
[ok]: BLMOVE (left, right) replication, list exists
[ok]: BLMOVE (right, left) replication, when blocking against empty list
[ok]: corrupt payload: listpack too long entry prev len
[ok]: LATENCY of expire events are correctly collected
[ok]: LATENCY HELP should not have unexpected options
[ok]: ZSETs ZRANK augmented skip list stress testing - ziplist
[ok]: BZPOPMIN, ZADD + DEL should not awake blocked client
[ok]: BZPOPMIN, ZADD + DEL + SET should not awake blocked client
[ok]: BZPOPMIN with same key multiple times should work
[ok]: MULTI/EXEC is isolated from the point of view of BZPOPMIN
[ok]: BZPOPMIN with variadic ZADD
[19/64 done]: unit/latency-monitor (19 seconds)
Testing integration/corrupt-dump-fuzzer
[ok]: corrupt payload: hash ziplist with duplicate records
[ok]: corrupt payload: hash ziplist uneven record count
[ok]: BLMOVE (right, left) replication, list exists
[ok]: BLMOVE (right, right) replication, when blocking against empty list
[ok]: MASTER and SLAVE consistency with expire
[ok]: client freed during loading
[ok]: corrupt payload: hash duplicate records
[ok]: corrupt payload: fuzzer findings - NPD in streamIteratorGetID
[ok]: BZPOPMIN with zero timeout should block indefinitely
[ok]: ZSCORE - skiplist
[ok]: ZMSCORE - skiplist
[ok]: ZSCORE after a DEBUG RELOAD - skiplist
[ok]: ZSET sorting stresser - skiplist
[ok]: corrupt payload: fuzzer findings - listpack NPD on invalid stream
[ok]: BLMOVE (right, right) replication, list exists
[ok]: BLPOP followed by role change, issue #2473
[ok]: corrupt payload: fuzzer findings - NPD in quicklistIndex
[ok]: corrupt payload: fuzzer findings - invalid read in ziplistFind
[ok]: AOF fsync always barrier issue
[ok]: Second server should have role master at first
[ok]: SLAVEOF should start with link status "down"
[ok]: The role should immediately be changed to "replica"
[ok]: Sync should have transferred keys from master
[ok]: The link status should be up
[ok]: SET on the master should immediately propagate
[ok]: corrupt payload: fuzzer findings - invalid ziplist encoding
[ok]: FLUSHALL should replicate
[ok]: ROLE in master reports master with a slave
[ok]: ROLE in slave reports slave in connected state
[ok]: GETEX should not append to AOF
[20/64 done]: integration/aof (11 seconds)
Testing integration/convert-zipmap-hash-on-load
[ok]: corrupt payload: fuzzer findings - hash crash
[ok]: MASTER and SLAVE dataset should be identical after complex ops
[ok]: RDB load zipmap hash: converts to ziplist
[ok]: corrupt payload: fuzzer findings - uneven entry count in hash
[ok]: RDB load zipmap hash: converts to hash table when hash-max-ziplist-entries is exceeded
[21/64 done]: integration/replication-2 (19 seconds)
Testing integration/logging
[ok]: corrupt payload: fuzzer findings - invalid read in lzf_decompress
[ok]: Test child sending info
[ok]: LTRIM stress testing - ziplist
[ok]: RDB load zipmap hash: converts to hash table when hash-max-ziplist-value is exceeded
[ok]: corrupt payload: fuzzer findings - leak in rdbloading due to dup entry in set
[22/64 done]: integration/rdb (13 seconds)
Testing integration/psync2
[ok]: Server is able to generate a stack trace on selected systems
[23/64 done]: unit/type/list-2 (29 seconds)
Testing integration/psync2-reg
[24/64 done]: integration/convert-zipmap-hash-on-load (2 seconds)
Testing integration/psync2-pingoff
[ok]: corrupt payload: fuzzer findings - empty intset div by zero
[ok]: corrupt payload: fuzzer findings - valgrind ziplist - crash report prints freed memory
[ok]: Crash report generated on SIGABRT
[ok]: corrupt payload: fuzzer findings - valgrind ziplist prevlen reaches outside the ziplist
[ok]: MIGRATE can correctly transfer large values
[25/64 done]: integration/logging (1 seconds)
Testing integration/failover
[ok]: corrupt payload: fuzzer findings - valgrind - bad rdbLoadDoubleValue
[ok]: PSYNC2: --- CYCLE 1 ---
[ok]: MIGRATE can correctly transfer hashes
[ok]: PSYNC2: [NEW LAYOUT] Set #0 as master
[ok]: PSYNC2: Set #2 to replicate from #0
[ok]: PSYNC2: Set #4 to replicate from #2
[ok]: PSYNC2: Set #1 to replicate from #2
[ok]: PSYNC2: Set #3 to replicate from #0
[ok]: corrupt payload: fuzzer findings - valgrind ziplist prev too big
[ok]: failover command fails without connected replica
[ok]: PSYNC2 pingoff: setup
[ok]: PSYNC2 pingoff: write and wait replication
[ok]: PSYNC2 #3899 regression: setup
[ok]: setup replication for following tests
[ok]: failover command fails with invalid host
[ok]: failover command fails with invalid port
[ok]: failover command fails with just force and timeout
[ok]: failover command fails when sent to a replica
[ok]: failover command fails with force without timeout
[ok]: corrupt payload: fuzzer findings - lzf decompression fails, avoid valgrind invalid read
[ok]: ZRANGEBYSCORE fuzzy test, 100 ranges in 100 element sorted set - skiplist
[ok]: Test replication partial resync: no backlog (diskless: no, disabled, reconnect: 1)
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: ZRANGEBYLEX fuzzy test, 100 ranges in 100 element sorted set - skiplist
[ok]: ziplist implementation: encoding stress testing
[ok]: MIGRATE timeout actually works
[ok]: corrupt payload: fuzzer findings - stream bad lp_count
[ok]: Slave is able to evict keys created in writable slaves
[26/64 done]: unit/type/list-3 (31 seconds)
Testing integration/redis-cli
[ok]: corrupt payload: fuzzer findings - stream bad lp_count - unsanitized
[ok]: MIGRATE can migrate multiple keys at once
[ok]: MIGRATE with multiple keys must have empty key arg
[ok]: ZREMRANGEBYLEX fuzzy test, 100 ranges in 100 element sorted set - skiplist
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "findKeyWithType" line 7)
    invoked from within
"findKeyWithType {*}$r zset"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r zadd $k $d $v}  {{*}$r zrem $k $v}  {
                            set otherzset [findKeyWithType {*}$r zset]
                         ..."
    (procedure "createComplexDataset" line 68)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: ZSETs skiplist implementation backlink consistency test - skiplist
[ok]: PSYNC2: cluster is consistent after failover
[ok]: Interactive CLI: INFO response should be printed raw
[ok]: Interactive CLI: Status reply
[ok]: Interactive CLI: Integer reply
[ok]: Interactive CLI: Bulk reply
[ok]: Interactive CLI: Multi-bulk reply
[ok]: Interactive CLI: Parsing quotes
[ok]: Non-interactive TTY CLI: Status reply
[ok]: MIGRATE with multiple keys migrate just existing ones
[ok]: Non-interactive TTY CLI: Integer reply
[ok]: Non-interactive TTY CLI: Bulk reply
[ok]: corrupt payload: fuzzer findings - stream integrity check issue
[ok]: Non-interactive TTY CLI: Multi-bulk reply
[ok]: Non-interactive TTY CLI: Read last argument from pipe
[ok]: Non-interactive TTY CLI: Read last argument from file
[ok]: Non-interactive non-TTY CLI: Status reply
[ok]: Non-interactive non-TTY CLI: Integer reply
[ok]: Non-interactive non-TTY CLI: Bulk reply
[ok]: Non-interactive non-TTY CLI: Multi-bulk reply
[ok]: Non-interactive non-TTY CLI: Quoted input arguments
[ok]: MIGRATE with multiple keys: stress command rewriting
[ok]: Non-interactive non-TTY CLI: No accidental unquoting of input arguments
[ok]: Non-interactive non-TTY CLI: Invalid quoted input arguments
[ok]: corrupt payload: fuzzer findings - infinite loop
[ok]: Non-interactive non-TTY CLI: Read last argument from pipe
[ok]: First server should have role slave after SLAVEOF
[ok]: Non-interactive non-TTY CLI: Read last argument from file
[ok]: Slave should be able to synchronize with the master
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: MIGRATE with multiple keys: delete just ack keys
[ok]: corrupt payload: fuzzer findings - hash convert asserts on RESTORE with shallow sanitization
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: corrupt payload: OOM in rdbGenericLoadStringObject
[ok]: MIGRATE AUTH: correct and wrong password cases
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: Detect write load to master
[27/64 done]: unit/dump (33 seconds)
Testing integration/redis-benchmark
[ok]: corrupt payload: fuzzer findings - OOM in dictExpand
[ok]: ZSETs ZRANK augmented skip list stress testing - skiplist
[ok]: BZPOPMIN, ZADD + DEL should not awake blocked client
[ok]: BZPOPMIN, ZADD + DEL + SET should not awake blocked client
[ok]: BZPOPMIN with same key multiple times should work
[ok]: MULTI/EXEC is isolated from the point of view of BZPOPMIN
[ok]: BZPOPMIN with variadic ZADD
[ok]: corrupt payload: fuzzer findings - invalid tail offset after removal
[ok]: benchmark: set,get
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: corrupt payload: fuzzer findings - negative reply length
[ok]: Test replication with blocking lists and sorted sets operations
[ok]: benchmark: full test suite
[ok]: PSYNC2 #3899 regression: kill chained replica
I/O error reading reply
    while executing
"$r blpop $k 2"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
            randpath {
                $r rpush $k $v
            } {
                $r lpush $k $v
            }
        } {
            ..."
    (procedure "bg_block_op" line 13)
    invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_block_op.tcl" line 55)
I/O error reading reply
    while executing
"$r blpop $k $k2 2"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
            randpath {
                $r rpush $k $v
            } {
                $r lpush $k $v
            }
        } {
            ..."
    (procedure "bg_block_op" line 13)
    invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_block_op.tcl" line 55)
I/O error reading reply
    while executing
"$r bzpopmin $k 2"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
            $r zadd $k [randomInt 10000] $v
        } {
            $r zadd $k [randomInt 10000] $v [randomInt 10000] $v2
        } {
     ..."
    (procedure "bg_block_op" line 31)
    invoked from within
"bg_block_op [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_block_op.tcl" line 55)
[ok]: corrupt payload: fuzzer findings - valgrind negative malloc
[28/64 done]: integration/block-repl (27 seconds)
Testing unit/pubsub
[ok]: BZPOPMIN with zero timeout should block indefinitely
[ok]: Fuzzer corrupt restore payloads - sanitize_dump: no
[ok]: Pub/Sub PING
[ok]: PUBLISH/SUBSCRIBE basics
[ok]: PUBLISH/SUBSCRIBE with two clients
[ok]: PUBLISH/SUBSCRIBE after UNSUBSCRIBE without arguments
[ok]: SUBSCRIBE to one channel more than once
[ok]: UNSUBSCRIBE from non-subscribed channels
[ok]: PUBLISH/PSUBSCRIBE basics
[ok]: PUBLISH/PSUBSCRIBE with two clients
[ok]: PUBLISH/PSUBSCRIBE after PUNSUBSCRIBE without arguments
[ok]: PUNSUBSCRIBE from non-subscribed channels
[ok]: NUMSUB returns numbers, not strings (#1561)
[ok]: Mix SUBSCRIBE and PSUBSCRIBE
[ok]: PUNSUBSCRIBE and UNSUBSCRIBE should always reply
[ok]: Keyspace notifications: we receive keyspace notifications
[ok]: Keyspace notifications: we receive keyevent notifications
[ok]: Keyspace notifications: we can receive both kind of events
[ok]: Keyspace notifications: we are able to mask events
[ok]: Keyspace notifications: general events test
[ok]: benchmark: multi-thread set,get
[ok]: Keyspace notifications: list events test
[ok]: Keyspace notifications: set events test
[ok]: Keyspace notifications: zset events test
[ok]: Keyspace notifications: hash events test
[ok]: PSYNC2 pingoff: pause replica and promote it
[ok]: Keyspace notifications: expired events (triggered expire)
[ok]: corrupt payload: fuzzer findings - valgrind invalid read
[ok]: Keyspace notifications: expired events (background expire)
[ok]: Keyspace notifications: evicted events
[ok]: Keyspace notifications: test CONFIG GET/SET of event flags
[29/64 done]: unit/pubsub (0 seconds)
Testing unit/slowlog
[ok]: benchmark: pipelined full set,get
[ok]: SLOWLOG - check that it starts with an empty log
[ok]: benchmark: arbitrary command
[ok]: failover command to specific replica works
[ok]: corrupt payload: fuzzer findings - HRANDFIELD on bad ziplist
[ok]: SLOWLOG - only logs commands taking more time than specified
[ok]: SLOWLOG - max entries is correctly handled
[ok]: SLOWLOG - GET optional argument to limit output len works
[ok]: SLOWLOG - RESET subcommand works
[ok]: benchmark: keyspace length
[ok]: Make the old master a replica of the new one and check conditions
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: SLOWLOG - logged entry sanity check
[ok]: SLOWLOG - Certain commands are omitted that contain sensitive information
[ok]: SLOWLOG - Some commands can redact sensitive fields
[ok]: SLOWLOG - Rewritten commands are logged as their original command
[ok]: SLOWLOG - commands with too many arguments are trimmed
[ok]: SLOWLOG - too long arguments are trimmed
[ok]: corrupt payload: fuzzer findings - stream with no records
[30/64 done]: integration/corrupt-dump (17 seconds)
Testing unit/scripting
[ok]: SLOWLOG - EXEC is not logged, just executed commands
[ok]: EVAL - Does Lua interpreter replies to our requests?
[ok]: EVAL - Lua integer -> Redis protocol type conversion
[ok]: EVAL - Lua string -> Redis protocol type conversion
[ok]: EVAL - Lua true boolean -> Redis protocol type conversion
[ok]: EVAL - Lua false boolean -> Redis protocol type conversion
[ok]: EVAL - Lua status code reply -> Redis protocol type conversion
[ok]: EVAL - Lua error reply -> Redis protocol type conversion
[ok]: EVAL - Lua table -> Redis protocol type conversion
[ok]: SLOWLOG - can clean older entries
[ok]: EVAL - Are the KEYS and ARGV arrays populated correctly?
[ok]: EVAL - is Lua able to call Redis API?
[ok]: EVALSHA - Can we call a SHA1 if already defined?
[ok]: EVALSHA - Can we call a SHA1 in uppercase?
[ok]: EVALSHA - Do we get an error on invalid SHA1?
[ok]: EVALSHA - Do we get an error on non defined SHA1?
[ok]: EVAL - Redis integer -> Lua type conversion
[ok]: EVAL - Redis bulk -> Lua type conversion
[ok]: EVAL - Redis multi bulk -> Lua type conversion
[ok]: EVAL - Redis status reply -> Lua type conversion
[ok]: EVAL - Redis error reply -> Lua type conversion
[ok]: EVAL - Redis nil bulk reply -> Lua type conversion
[ok]: EVAL - Is the Lua client using the currently selected DB?
[ok]: EVAL - SELECT inside Lua should not affect the caller
[ok]: EVAL - Scripts can't run blpop command
[ok]: EVAL - Scripts can't run brpop command
[ok]: EVAL - Scripts can't run brpoplpush command
[ok]: EVAL - Scripts can't run blmove command
[ok]: EVAL - Scripts can't run bzpopmin command
[ok]: EVAL - Scripts can't run bzpopmax command
[ok]: EVAL - Scripts can't run XREAD and XREADGROUP with BLOCK option
[ok]: EVAL - Scripts can't run certain commands
[ok]: EVAL - No arguments to redis.call/pcall is considered an error
[ok]: EVAL - redis.call variant raises a Lua error on Redis cmd error (1)
[ok]: EVAL - redis.call variant raises a Lua error on Redis cmd error (1)
[ok]: EVAL - redis.call variant raises a Lua error on Redis cmd error (1)
[ok]: EVAL - JSON numeric decoding
[ok]: EVAL - JSON string decoding
[ok]: EVAL - cmsgpack can pack double?
[ok]: EVAL - cmsgpack can pack negative int64?
[ok]: EVAL - cmsgpack can pack and unpack circular references?
[ok]: EVAL - Numerical sanity check from bitop
[ok]: EVAL - Verify minimal bitop functionality
[ok]: EVAL - Able to parse trailing comments
[ok]: EVAL_RO - Successful case
[ok]: EVAL_RO - Cannot run write commands
[ok]: SCRIPTING FLUSH - is able to clear the scripts cache?
[ok]: SCRIPTING FLUSH ASYNC
[ok]: SCRIPT EXISTS - can detect already defined scripts?
[ok]: SCRIPT LOAD - is able to register scripts in the scripting cache
[31/64 done]: integration/redis-benchmark (3 seconds)
Testing unit/maxmemory
[ok]: In the context of Lua the output of random commands gets ordered
[ok]: SORT is normally not alpha re-ordered for the scripting engine
[ok]: SORT BY <constant> output gets ordered for scripting
[ok]: SORT BY <constant> with GET gets ordered for scripting
[ok]: redis.sha1hex() implementation
[ok]: Globals protection reading an undeclared global variable
[ok]: Globals protection setting an undeclared global*
[ok]: Test an example script DECR_IF_GT
[ok]: Scripting engine resets PRNG at every script execution
[ok]: Scripting engine PRNG can be seeded correctly
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 19579)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 2 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #0 as master
[ok]: PSYNC2: Set #1 to replicate from #0
[ok]: PSYNC2: Set #2 to replicate from #1
[ok]: PSYNC2: Set #3 to replicate from #0
[ok]: PSYNC2: Set #4 to replicate from #2
[ok]: Without maxmemory small integers are shared
[ok]: With maxmemory and non-LRU policy integers are still shared
[ok]: With maxmemory and LRU policy integers are not shared
[ok]: failover command to any replica works
[ok]: PSYNC2 #3899 regression: kill first replica
[ok]: SLOWLOG - can be disabled
[ok]: failover to a replica with force works
[32/64 done]: unit/slowlog (2 seconds)
Testing unit/introspection
[ok]: ZSET skiplist order consistency when elements are moved
[ok]: ZRANGESTORE basic
[ok]: ZRANGESTORE RESP3
[ok]: ZRANGESTORE range
[ok]: ZRANGESTORE BYLEX
[ok]: ZRANGESTORE BYSCORE
[ok]: ZRANGESTORE BYSCORE LIMIT
[ok]: ZRANGESTORE BYSCORE REV LIMIT
[ok]: ZRANGE BYSCORE REV LIMIT
[ok]: ZRANGESTORE - empty range
[ok]: ZRANGESTORE BYLEX - empty range
[ok]: ZRANGESTORE BYSCORE - empty range
[ok]: ZRANGE BYLEX
[ok]: ZRANGESTORE invalid syntax
[ok]: ZRANGE invalid syntax
[ok]: ZRANDMEMBER - ziplist
[ok]: ZRANDMEMBER - skiplist
[ok]: ZRANDMEMBER with RESP3
[ok]: ZRANDMEMBER count of 0 is handled correctly
[ok]: ZRANDMEMBER with <count> against non existing key
[ok]: CLIENT LIST
[ok]: CLIENT LIST with IDs
[ok]: CLIENT INFO
[ok]: MONITOR can log executed commands
[ok]: MONITOR can log commands issued by the scripting engine
[ok]: MONITOR supports redacting command arguments
[ok]: MONITOR correctly handles multi-exec cases
[ok]: CLIENT GETNAME should return NIL if name is not assigned
[ok]: CLIENT LIST shows empty fields for unassigned names
[ok]: CLIENT SETNAME does not accept spaces
[ok]: CLIENT SETNAME can assign a name to this connection
[ok]: CLIENT SETNAME can change the name of an existing connection
[ok]: After CLIENT SETNAME, connection can still be closed
[ok]: maxmemory - is the memory limit honoured? (policy allkeys-random)
[ok]: EVAL does not leak in the Lua stack
[ok]: PSYNC2: cluster is consistent after failover
[ok]: ZRANDMEMBER with <count> - skiplist
[ok]: failover with timeout aborts if replica never catches up
[ok]: EVAL processes writes from AOF in read-only slaves
[ok]: failovers can be aborted
[ok]: ZRANDMEMBER with <count> - ziplist
[ok]: maxmemory - is the memory limit honoured? (policy allkeys-lru)
[ok]: CONFIG save params special case handled properly
[ok]: CONFIG sanity
[33/64 done]: unit/type/zset (38 seconds)
Testing unit/introspection-2
[ok]: CONFIG REWRITE sanity
[ok]: failover aborts if target rejects sync request
[ok]: maxmemory - is the memory limit honoured? (policy allkeys-lfu)
[ok]: PSYNC2 #3899 regression: kill first replica
[ok]: maxmemory - is the memory limit honoured? (policy volatile-lru)
[34/64 done]: integration/failover (10 seconds)
Testing unit/limits
[ok]: PSYNC2 #3899 regression: kill first replica
[ok]: maxmemory - is the memory limit honoured? (policy volatile-lfu)
[ok]: CONFIG REWRITE handles save properly
[35/64 done]: unit/introspection (3 seconds)
Testing unit/obuf-limits
[ok]: Dumping an RDB
[ok]: maxmemory - is the memory limit honoured? (policy volatile-random)
[ok]: PSYNC2 #3899 regression: kill first replica
[ok]: Check if maxclients works refusing connections
[ok]: Scan mode
[36/64 done]: unit/limits (2 seconds)
Testing unit/bitops
[ok]: BITCOUNT returns 0 against non existing key
[ok]: BITCOUNT returns 0 with out of range indexes
[ok]: BITCOUNT returns 0 with negative indexes where start > end
[ok]: BITCOUNT against test vector #1
[ok]: BITCOUNT against test vector #2
[ok]: BITCOUNT against test vector #3
[ok]: BITCOUNT against test vector #4
[ok]: BITCOUNT against test vector #5
[ok]: maxmemory - is the memory limit honoured? (policy volatile-ttl)
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: TTL, TYPE and EXISTS do not alter the last access time of a key
[ok]: test various edge cases of repl topology changes with missing pings at the end
[ok]: Connecting as a replica
[ok]: BITCOUNT fuzzing without start/end
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (allkeys-random)
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 43527)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 3 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #1 as master
[ok]: PSYNC2: Set #3 to replicate from #1
[ok]: PSYNC2: Set #4 to replicate from #1
[ok]: PSYNC2: Set #0 to replicate from #1
[ok]: PSYNC2: Set #2 to replicate from #0
[ok]: Test replication partial resync: ok after delay (diskless: no, disabled, reconnect: 1)
[ok]: BITCOUNT fuzzing with start/end
[ok]: BITCOUNT with start, end
[ok]: BITCOUNT syntax error #1
[ok]: BITCOUNT regression test for github issue #582
[ok]: BITCOUNT misaligned prefix
[ok]: BITCOUNT misaligned prefix + full words + remainder
[ok]: BITOP NOT (empty string)
[ok]: BITOP NOT (known string)
[ok]: BITOP where dest and target are the same key
[ok]: BITOP AND|OR|XOR don't change the string with single input key
[ok]: BITOP missing key is considered a stream of zero
[ok]: BITOP shorter keys are zero-padded to the key with max length
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (allkeys-lru)
[ok]: PSYNC2: cluster is consistent after failover
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: BITOP and fuzzing
[ok]: Slave should be able to synchronize with the master
[ok]: TOUCH alters the last access time of a key
[ok]: TOUCH returns the number of existing keys specified
[ok]: command stats for GEOADD
[ok]: command stats for EXPIRE
[ok]: command stats for BRPOP
[ok]: command stats for MULTI
[ok]: command stats for scripts
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-lru)
[37/64 done]: unit/introspection-2 (7 seconds)
Testing unit/bitfield
[ok]: Fuzzer corrupt restore payloads - sanitize_dump: yes
[38/64 done]: integration/corrupt-dump-fuzzer (20 seconds)
Testing unit/geo
[ok]: BITFIELD signed SET and GET basics
[ok]: BITFIELD unsigned SET and GET basics
[ok]: BITFIELD #<idx> form
[ok]: BITFIELD basic INCRBY form
[ok]: BITFIELD chaining of multiple commands
[ok]: BITFIELD unsigned overflow wrap
[ok]: BITFIELD unsigned overflow sat
[ok]: BITFIELD signed overflow wrap
[ok]: BITFIELD signed overflow sat
[ok]: GEOADD create
[ok]: GEOADD update
[ok]: GEOADD update with CH option
[ok]: GEOADD update with NX option
[ok]: GEOADD update with XX option
[ok]: GEOADD update with CH NX option
[ok]: GEOADD update with CH XX option
[ok]: GEOADD update with XX NX option will return syntax error
[ok]: GEOADD update with invalid option
[ok]: GEOADD invalid coordinates
[ok]: GEOADD multi add
[ok]: Check geoset values
[ok]: GEORADIUS simple (sorted)
[ok]: GEOSEARCH simple (sorted)
[ok]: GEOSEARCH FROMLONLAT and FROMMEMBER cannot exist at the same time
[ok]: GEOSEARCH FROMLONLAT and FROMMEMBER one must exist
[ok]: GEOSEARCH BYRADIUS and BYBOX cannot exist at the same time
[ok]: GEOSEARCH BYRADIUS and BYBOX one must exist
[ok]: GEOSEARCH with STOREDIST option
[ok]: GEORADIUS withdist (sorted)
[ok]: GEOSEARCH withdist (sorted)
[ok]: GEORADIUS with COUNT
[ok]: GEORADIUS with ANY not sorted by default
[ok]: GEORADIUS with ANY sorted by ASC
[ok]: GEORADIUS with ANY but no COUNT
[ok]: GEORADIUS with COUNT but missing integer argument
[ok]: GEORADIUS with COUNT DESC
[ok]: GEORADIUS HUGE, issue #2767
[ok]: GEORADIUSBYMEMBER simple (sorted)
[ok]: GEOSEARCH FROMMEMBER simple (sorted)
[ok]: GEOSEARCH vs GEORADIUS
[ok]: GEOSEARCH non square, long and narrow
[ok]: GEOSEARCH corner point test
[ok]: GEORADIUSBYMEMBER withdist (sorted)
[ok]: GEOHASH is able to return geohash strings
[ok]: GEOPOS simple
[ok]: GEOPOS missing element
[ok]: GEODIST simple & unit
[ok]: GEODIST missing elements
[ok]: GEORADIUS STORE option: syntax error
[ok]: GEOSEARCHSTORE STORE option: syntax error
[ok]: GEORANGE STORE option: incompatible options
[ok]: GEORANGE STORE option: plain usage
[ok]: GEOSEARCHSTORE STORE option: plain usage
[ok]: GEORANGE STOREDIST option: plain usage
[ok]: GEOSEARCHSTORE STOREDIST option: plain usage
[ok]: GEORANGE STOREDIST option: COUNT ASC and DESC
[ok]: GEOSEARCH the box spans -180° or 180°
[ok]: Piping raw protocol
[39/64 done]: integration/redis-cli (14 seconds)
Testing unit/memefficiency
[ok]: Detect write load to master
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: BITFIELD overflow detection fuzzing
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-random)
[ok]: Chained replicas disconnect when replica re-connect with the same master
[ok]: BITOP or fuzzing
[ok]: BITFIELD overflow wrap fuzzing
[ok]: BITFIELD regression for #3221
[ok]: BITFIELD regression for #3564
[ok]: maxmemory - only allkeys-* should remove non-volatile keys (volatile-ttl)
[40/64 done]: integration/psync2-pingoff (18 seconds)
Testing unit/hyperloglog
[ok]: BITFIELD: setup slave
[ok]: BITFIELD: write on master, read on slave
[ok]: BITFIELD_RO fails when write option is used
[ok]: maxmemory - policy volatile-lru should only remove volatile keys.
[ok]: EVAL timeout from AOF
[ok]: We can call scripts rewriting client->argv from Lua
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: Call Redis command with many args from Lua (issue #1764)
[ok]: Number conversion precision test (issue #1118)
[ok]: String containing number precision test (regression of issue #1118)
[ok]: Verify negative arg count is error instead of crash (issue #1842)
[ok]: Correct handling of reused argv (issue #1939)
[ok]: Functions in the Redis namespace are able to report errors
[ok]: Script with RESP3 map
[41/64 done]: unit/bitfield (3 seconds)
Testing unit/lazyfree
[ok]: BITOP xor fuzzing
[ok]: Memory efficiency with values in range 32
[ok]: BITOP NOT fuzzing
[ok]: BITOP with integer encoded source objects
[ok]: BITOP with non string source key
[ok]: BITOP with empty string after non empty string (issue #529)
[ok]: BITPOS bit=0 with empty key returns 0
[ok]: BITPOS bit=1 with empty key returns -1
[ok]: BITPOS bit=0 with string less than 1 word works
[ok]: BITPOS bit=1 with string less than 1 word works
[ok]: BITPOS bit=0 starting at unaligned address
[ok]: BITPOS bit=1 starting at unaligned address
[ok]: BITPOS bit=0 unaligned+full word+reminder
[ok]: BITPOS bit=1 unaligned+full word+reminder
[ok]: BITPOS bit=1 returns -1 if string is all 0 bits
[ok]: BITPOS bit=0 works with intervals
[ok]: BITPOS bit=1 works with intervals
[ok]: BITPOS bit=0 changes behavior if end is given
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 58375)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 4 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #1 as master
[ok]: PSYNC2: Set #0 to replicate from #1
[ok]: PSYNC2: Set #3 to replicate from #1
[ok]: PSYNC2: Set #4 to replicate from #3
[ok]: PSYNC2: Set #2 to replicate from #1
[ok]: maxmemory - policy volatile-lfu should only remove volatile keys.
[ok]: BITPOS bit=1 fuzzy testing using SETBIT
[ok]: Timedout read-only scripts can be killed by SCRIPT KILL
[ok]: Timedout read-only scripts can be killed by SCRIPT KILL even when use pcall
[ok]: UNLINK can reclaim memory in background
[ok]: Timedout script does not cause a false dead client
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: BITPOS bit=0 fuzzy testing using SETBIT
[ok]: Timedout script link is still usable after Lua returns
[ok]: PSYNC2: cluster is consistent after failover
[42/64 done]: unit/bitops (9 seconds)
Testing unit/wait
[ok]: Timedout scripts that modified data can't be killed by SCRIPT KILL
[ok]: SHUTDOWN NOSAVE can kill a timedout script anyway
[ok]: maxmemory - policy volatile-random should only remove volatile keys.
[ok]: PSYNC2 #3899 regression: kill chained replica
[ok]: FLUSHDB ASYNC can reclaim memory in background
[ok]: Memory efficiency with values in range 64
[ok]: PSYNC2 #3899 regression: verify consistency
[ok]: Setup slave
[ok]: WAIT should acknowledge 1 additional copy of the data
[ok]: Before the replica connects we issue two EVAL commands (scripts replication)
[ok]: lazy free a stream with all types of metadata
[ok]: lazy free a stream with deleted cgroup
[ok]: Connect a replica to the master instance (scripts replication)
[ok]: Now use EVALSHA against the master, with both SHAs (scripts replication)
[ok]: If EVALSHA was replicated as EVAL, 'x' should be '4' (scripts replication)
[ok]: Replication of script multiple pushes to list with BLPOP (scripts replication)
[ok]: EVALSHA replication when first call is readonly (scripts replication)
[ok]: Lua scripts using SELECT are replicated correctly (scripts replication)
[ok]: maxmemory - policy volatile-ttl should only remove volatile keys.
[43/64 done]: unit/lazyfree (3 seconds)
Testing unit/pendingquerybuf
[44/64 done]: integration/psync2-reg (22 seconds)
Testing unit/tls
[ok]: WAIT should not acknowledge 2 additional copies of the data
[45/64 done]: unit/tls (1 seconds)
Testing unit/tracking
[ok]: Before the replica connects we issue two EVAL commands (commands replication)
[ok]: Connect a replica to the master instance (commands replication)
[ok]: Now use EVALSHA against the master, with both SHAs (commands replication)
[ok]: If EVALSHA was replicated as EVAL, 'x' should be '4' (commands replication)
[ok]: Replication of script multiple pushes to list with BLPOP (commands replication)
[ok]: EVALSHA replication when first call is readonly (commands replication)
[ok]: Lua scripts using SELECT are replicated correctly (commands replication)
[ok]: Clients are able to enable tracking and redirect it
[ok]: The other connection is able to get invalidations
[ok]: The client is now able to disable tracking
[ok]: Clients can enable the BCAST mode with the empty prefix
[ok]: The connection gets invalidation messages about all the keys
[ok]: Clients can enable the BCAST mode with prefixes
[ok]: Adding prefixes to BCAST mode works
[ok]: Tracking NOLOOP mode in standard mode works
[ok]: Tracking NOLOOP mode in BCAST mode works
[ok]: HyperLogLog self test passes
[ok]: PFADD without arguments creates an HLL value
[ok]: Approximated cardinality after creation is zero
[ok]: PFADD returns 1 when at least 1 reg was modified
[ok]: PFADD returns 0 when no reg was modified
[ok]: PFADD works with empty string (regression)
[ok]: PFCOUNT returns approximated cardinality of set
[ok]: WAIT should not acknowledge 1 additional copy if slave is blocked
[ok]: Memory efficiency with values in range 128
[ok]: Connect a replica to the master instance
[ok]: Redis.replicate_commands() must be issued before any write
[ok]: Redis.replicate_commands() must be issued before any write (2)
[ok]: Redis.set_repl() must be issued after replicate_commands()
[ok]: Redis.set_repl() don't accept invalid values
[ok]: Test selective replication of certain Redis commands from Lua
[ok]: PRNG is seeded randomly for command replication
[ok]: Using side effects is not a problem with command replication
[ok]: Tracking gets notification of expired keys
[ok]: HELLO 3 reply is correct
[ok]: HELLO without protover
[ok]: RESP3 based basic invalidation
[ok]: RESP3 tracking redirection
[ok]: Invalidations of previous keys can be redirected after switching to RESP3
[ok]: Invalidations of new keys can be redirected after switching to RESP3
[ok]: RESP3 Client gets tracking-redir-broken push message after cached key changed when rediretion client is terminated
[ok]: Different clients can redirect to the same connection
[ok]: Different clients using different protocols can track the same key
[ok]: No invalidation message when using OPTIN option
[ok]: Invalidation message sent when using OPTIN option with CLIENT CACHING yes
[ok]: Invalidation message sent when using OPTOUT option
[ok]: No invalidation message when using OPTOUT option with CLIENT CACHING no
[ok]: Able to redirect to a RESP3 client
[ok]: After switching from normal tracking to BCAST mode, no invalidation message is produced for pre-BCAST keys
[ok]: BCAST with prefix collisions throw errors
[ok]: Tracking gets notification on tracking table key eviction
[ok]: Invalidation message received for flushall
[ok]: Invalidation message received for flushdb
[ok]: Test ASYNC flushall
[ok]: Server is able to evacuate enough keys when num of keys surpasses limit by more than defined initial effort
[ok]: Tracking info is correct
[ok]: CLIENT GETREDIR provides correct client id
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking off
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking on
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking on with options
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking optin
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking optout
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking bcast mode
[ok]: CLIENT TRACKINGINFO provides reasonable results when tracking redir broken
[46/64 done]: unit/tracking (1 seconds)
Testing unit/oom-score-adj
[ok]: WAIT implicitly blocks on client pause since ACKs aren't sent
[47/64 done]: unit/scripting (18 seconds)
Testing unit/shutdown
[ok]: CONFIG SET oom-score-adj works as expected
[ok]: CONFIG SET oom-score-adj handles configuration failures
[ok]: HyperLogLogs are promote from sparse to dense
[48/64 done]: unit/wait (4 seconds)
Testing unit/networking
[ok]: Temp rdb will be deleted if we use bg_unlink when shutdown
[ok]: CONFIG SET port number
[ok]: Temp rdb will be deleted in signal handle
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 73741)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 5 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #3 as master
[ok]: PSYNC2: Set #0 to replicate from #3
[ok]: PSYNC2: Set #2 to replicate from #0
[ok]: PSYNC2: Set #4 to replicate from #2
[49/64 done]: unit/shutdown (1 seconds)
[ok]: PSYNC2: Set #1 to replicate from #0
[ok]: Memory efficiency with values in range 1024
[ok]: Client output buffer hard limit is enforced
[ok]: Test replication partial resync: backlog expired (diskless: no, disabled, reconnect: 1)
[ok]: CONFIG SET bind address
[50/64 done]: unit/networking (1 seconds)
[51/64 done]: unit/oom-score-adj (2 seconds)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: PSYNC2: cluster is consistent after failover
[ok]: HyperLogLog sparse encoding stress test
[ok]: Corrupted sparse HyperLogLogs are detected: Additional at tail
[ok]: Corrupted sparse HyperLogLogs are detected: Broken magic
[ok]: Corrupted sparse HyperLogLogs are detected: Invalid encoding
[ok]: Corrupted dense HyperLogLogs are detected: Wrong length
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no reconnection, just sync (diskless: no, swapdb, reconnect: 0)
I/O error reading reply
    while executing
"{*}$r sadd $k $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r lrem $k 0 $v"
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r lpush $k $v}  {{*}$r rpush $k $v}  {{*}$r lrem $k 0 $v}  {{*}$r rpop $k}  {{*}$r lpop $k}"
    (procedure "createComplexDataset" line 51)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r sadd $k $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)


[ok]: Slave should be able to synchronize with the master
[ok]: Memory efficiency with values in range 16384
[ok]: Detect write load to master
[ok]: Client output buffer soft limit is enforced if time is overreached
[52/64 done]: unit/memefficiency (15 seconds)
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 96072)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 6 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #1 as master
[ok]: PSYNC2: Set #2 to replicate from #1
[ok]: PSYNC2: Set #4 to replicate from #2
[ok]: PSYNC2: Set #0 to replicate from #2
[ok]: PSYNC2: Set #3 to replicate from #2
[ok]: PSYNC2: cluster is consistent after failover
[ok]: MASTER and SLAVE consistency with EVALSHA replication
[ok]: XRANGE fuzzing
[ok]: XREVRANGE regression test for issue #5006
[ok]: XREAD streamID edge (no-blocking)
[ok]: XREAD streamID edge (blocking)
[ok]: XADD streamID edge
[err]: Connect multiple replicas at the same time (issue #141), master diskless=no, replica diskless=disabled in tests/integration/replication.tcl
Expected e7471ba79981a9e99e698f6b2196a4c287775ff6 eq c4d9e8368b5adab6a817f2a49d4ab57254f4bd10 (context: type eval line 65 cmd {assert {$digest eq $digest0}} proc ::test)
[ok]: XTRIM with MAXLEN option basic test
[ok]: XADD with LIMIT consecutive calls
[ok]: XTRIM with ~ is limited
[ok]: XTRIM without ~ is not limited
[ok]: XTRIM without ~ and with LIMIT
[ok]: XADD with MAXLEN > xlen can propagate correctly
[ok]: AOF rewrite during write load: RDB preamble=yes
[ok]: pending querybuf: check size of pending_querybuf after set a big value
[ok]: Client output buffer soft limit is not enforced too early and is enforced when no traffic
[ok]: XADD with MINID > lastid can propagate correctly
[ok]: No response for single command if client output buffer hard limit is enforced
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 121266)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: PSYNC2: --- CYCLE 7 ---
[ok]: PSYNC2: [NEW LAYOUT] Set #2 as master
[ok]: PSYNC2: Set #1 to replicate from #2
[ok]: PSYNC2: Set #0 to replicate from #2
[ok]: PSYNC2: Set #3 to replicate from #1
[ok]: PSYNC2: Set #4 to replicate from #2
[ok]: PSYNC2: cluster is consistent after failover
[ok]: XADD with ~ MAXLEN can propagate correctly
[ok]: Test replication partial resync: ok psync (diskless: no, swapdb, reconnect: 1)
[53/64 done]: unit/pendingquerybuf (15 seconds)
I/O error reading reply
    while executing
"{*}$r zadd $k $d $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r sadd $k $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[ok]: XADD with ~ MAXLEN and LIMIT can propagate correctly
[ok]: No response for multi commands in pipeline if client output buffer limit is enforced
[ok]: Execute transactions completely even if client output buffer limit is enforced
[54/64 done]: unit/obuf-limits (27 seconds)
[ok]: XADD with ~ MINID can propagate correctly
[ok]: Slave should be able to synchronize with the master
[ok]: XADD with ~ MINID and LIMIT can propagate correctly
[ok]: XTRIM with ~ MAXLEN can propagate correctly
[ok]: Detect write load to master
[ok]: SLAVE can reload "lua" AUX RDB fields of duplicated scripts
[ok]: XADD can CREATE an empty stream
[ok]: XSETID can set a specific ID
[ok]: XSETID cannot SETID with smaller ID
[ok]: XSETID cannot SETID on non-existent key
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[55/64 done]: integration/replication-3 (58 seconds)
[ok]: Empty stream can be rewrite into AOF correctly
[ok]: PSYNC2: generate load while killing replication links
[ok]: PSYNC2: cluster is consistent after load (x = 141727)
[ok]: PSYNC2: total sum of full synchronizations is exactly 4
[ok]: Replication: commands with many arguments (issue #1221)
[ok]: Stream can be rewrite into AOF correctly after XDEL lastid
[ok]: PSYNC2: Bring the master back again for next test
[ok]: XGROUP HELP should not have unexpected options
[ok]: PSYNC2: Partial resync after restart using RDB aux fields
[56/64 done]: unit/type/stream (73 seconds)
[ok]: Replication of SPOP command -- alsoPropagate() API
[ok]: PSYNC2: Replica RDB restart with EVALSHA in backlog issue #4483
[57/64 done]: integration/replication-4 (63 seconds)
[58/64 done]: integration/psync2 (46 seconds)
[ok]: Test replication partial resync: no backlog (diskless: no, swapdb, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r set $k $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: Slave should be able to synchronize with the master
[ok]: Fuzzing dense/sparse encoding: Redis should always detect errors
[ok]: PFADD, PFCOUNT, PFMERGE type checking works
[ok]: PFMERGE results on the cardinality of union of sets
[ok]: Detect write load to master
[ok]: PFCOUNT multiple-keys merge returns cardinality of union #1
[ok]: Test replication partial resync: ok after delay (diskless: no, swapdb, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r randomkey"
    (procedure "findKeyWithType" line 3)
    invoked from within
"findKeyWithType {*}$r set"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: PFCOUNT multiple-keys merge returns cardinality of union #2
[ok]: PFDEBUG GETREG returns the HyperLogLog raw registers
[ok]: PFADD / PFCOUNT cache invalidation works
[59/64 done]: unit/hyperloglog (48 seconds)
[err]: Connect multiple replicas at the same time (issue #141), master diskless=no, replica diskless=swapdb in tests/integration/replication.tcl
Expected 9d0d7de765b37cb47254f58ee5536d97f037faf0 eq e744fa1bab2ed7323e6c6cd65a78733583fd0f1f (context: type eval line 66 cmd {assert {$digest eq $digest1}} proc ::test)
[ok]: GEOSEARCH fuzzy test - byradius
[ok]: Test replication partial resync: backlog expired (diskless: no, swapdb, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r zadd $k $d $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

I/O error reading reply
    while executing
"{*}$r del $k"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r zadd $k $d $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no reconnection, just sync (diskless: yes, disabled, reconnect: 0)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r randomkey"
    (procedure "findKeyWithType" line 3)
    invoked from within
"findKeyWithType {*}$r zset"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r zadd $k $d $v}  {{*}$r zrem $k $v}  {
                            set otherzset [findKeyWithType {*}$r zset]
                         ..."
    (procedure "createComplexDataset" line 68)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: slave buffer are counted correctly
[ok]: Test replication partial resync: ok psync (diskless: yes, disabled, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r randomkey"
    (procedure "findKeyWithType" line 3)
    invoked from within
"findKeyWithType {*}$r set"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r sadd $k $v"
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: replica buffer don't induce eviction
[ok]: Don't rehash if used memory exceeds maxmemory after rehash
[ok]: AOF rewrite during write load: RDB preamble=no
[ok]: Test replication partial resync: no backlog (diskless: yes, disabled, reconnect: 1)
[ok]: client tracking don't cause eviction feedback loop
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r sadd $k $v"
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[60/64 done]: unit/maxmemory (84 seconds)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Turning off AOF kills the background writing child if any
[ok]: GEOSEARCH fuzzy test - bybox
[ok]: GEOSEARCH box edges fuzzy test
[61/64 done]: unit/geo (78 seconds)
[ok]: AOF rewrite of list with quicklist encoding, string data
[ok]: AOF rewrite of list with quicklist encoding, int data
[err]: Connect multiple replicas at the same time (issue #141), master diskless=yes, replica diskless=disabled in tests/integration/replication.tcl
Expected 8f8ca836423f6bd4d40c3e221e80a996f1b479ff eq a9361604fc77c6241abc3798aafbfa52b856ae38 (context: type eval line 66 cmd {assert {$digest eq $digest1}} proc ::test)
[ok]: AOF rewrite of set with intset encoding, string data
[ok]: AOF rewrite of set with hashtable encoding, string data
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r randomkey"
    (procedure "findKeyWithType" line 3)
    invoked from within
"findKeyWithType {*}$r set"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[ok]: AOF rewrite of set with intset encoding, int data
[ok]: AOF rewrite of set with hashtable encoding, int data
[ok]: AOF rewrite of hash with ziplist encoding, string data
[ok]: Test replication partial resync: ok after delay (diskless: yes, disabled, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r set $k $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)


[ok]: AOF rewrite of hash with hashtable encoding, string data
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: AOF rewrite of hash with ziplist encoding, int data
[ok]: AOF rewrite of hash with hashtable encoding, int data
[ok]: AOF rewrite of zset with ziplist encoding, string data
[ok]: AOF rewrite of zset with skiplist encoding, string data
[ok]: AOF rewrite of zset with ziplist encoding, int data
[ok]: AOF rewrite of zset with skiplist encoding, int data
[ok]: BGREWRITEAOF is delayed if BGSAVE is in progress
[ok]: BGREWRITEAOF is refused if already in progress
[62/64 done]: unit/aofrw (136 seconds)
[ok]: Test replication partial resync: backlog expired (diskless: yes, disabled, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r randomkey"
    (procedure "findKeyWithType" line 3)
    invoked from within
"findKeyWithType {*}$r zset"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r zadd $k $d $v}  {{*}$r zrem $k $v}  {
                            set otherzset [findKeyWithType {*}$r zset]
                         ..."
    (procedure "createComplexDataset" line 68)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: no reconnection, just sync (diskless: yes, swapdb, reconnect: 0)
I/O error reading reply
    while executing
"{*}$r srem $k $v"
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r sunionstore $k2 $k $otherset"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                                    {*}$r sunionstore $k2 $k $otherset
                                } {
                                ..."
    ("uplevel" body line 4)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok psync (diskless: yes, swapdb, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 43)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r sadd $k $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)


[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[err]: Connect multiple replicas at the same time (issue #141), master diskless=yes, replica diskless=swapdb in tests/integration/replication.tcl
Expected 2e6e05bcae9885c100f5e9c4963d37adeaef142c eq c1ba004fb31716b02fa52b1d3814e791b213f11b (context: type eval line 66 cmd {assert {$digest eq $digest1}} proc ::test)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r randomkey"
    (procedure "findKeyWithType" line 3)
    invoked from within
"findKeyWithType {*}$r set"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "findKeyWithType" line 7)
    invoked from within
"findKeyWithType {*}$r zset"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r zadd $k $d $v}  {{*}$r zrem $k $v}  {
                            set otherzset [findKeyWithType {*}$r zset]
                         ..."
    (procedure "createComplexDataset" line 68)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
[ok]: Test replication partial resync: no backlog (diskless: yes, swapdb, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r hset $k $f $v"
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r hset $k $f $v}  {{*}$r hdel $k $f}"
    (procedure "createComplexDataset" line 80)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r sadd $k $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r zadd $k $d $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Test replication partial resync: ok after delay (diskless: yes, swapdb, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r randomkey"
    (procedure "findKeyWithType" line 3)
    invoked from within
"findKeyWithType {*}$r set"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r sadd $k $v}  {{*}$r srem $k $v}  {
                            set otherset [findKeyWithType {*}$r set]
                            if..."
    (procedure "createComplexDataset" line 54)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)
I/O error reading reply
    while executing
"{*}$r type $k"
    (procedure "createComplexDataset" line 27)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)

[ok]: Slave should be able to synchronize with the master
[ok]: Detect write load to master
[ok]: Master stream is correctly processed while the replica has a script in -BUSY state
[ok]: Test replication partial resync: backlog expired (diskless: yes, swapdb, reconnect: 1)
I/O error reading reply
    while executing
"{*}$r zadd $k $d $v"
    ("uplevel" body line 1)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r zadd $k $d $v}  {{*}$r zrem $k $v}  {
                            set otherzset [findKeyWithType {*}$r zset]
                         ..."
    (procedure "createComplexDataset" line 68)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r randomkey"
    (procedure "findKeyWithType" line 3)
    invoked from within
"findKeyWithType {*}$r zset"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {{*}$r zadd $k $d $v}  {{*}$r zrem $k $v}  {
                            set otherzset [findKeyWithType {*}$r zset]
                         ..."
    (procedure "createComplexDataset" line 68)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)I/O error reading reply
    while executing
"{*}$r zadd $k $d $v"
    ("uplevel" body line 2)
    invoked from within
"uplevel 1 [lindex $args $path]"
    (procedure "randpath" line 3)
    invoked from within
"randpath {
                {*}$r set $k $v
            } {
                {*}$r lpush $k $v
            } {
                {*}$r sadd $k $v
        ..."
    (procedure "createComplexDataset" line 30)
    invoked from within
"createComplexDataset $r $ops"
    (procedure "bg_complex_data" line 5)
    invoked from within
"bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]"
    (file "tests/helpers/bg_complex_data.tcl" line 13)


[63/64 done]: integration/replication-psync (173 seconds)
[ok]: slave fails full sync and diskless load swapdb recovers it
[ok]: diskless loading short read
[ok]: diskless no replicas drop during rdb pipe
[ok]: diskless slow replicas drop during rdb pipe
[ok]: diskless fast replicas drop during rdb pipe
[ok]: diskless all replicas drop during rdb pipe
[ok]: diskless timeout replicas drop during rdb pipe
[ok]: diskless replication child being killed is collected
[ok]: replicaof right after disconnection
[ok]: Kill rdb child process if its dumping RDB is not useful
[64/64 done]: integration/replication (221 seconds)
Testing solo test
[64/64 done]: defrag (0 seconds)

                   The End

Execution time of different units:
  0 seconds - unit/printver
  1 seconds - unit/type/incr
  1 seconds - unit/info
  1 seconds - unit/protocol
  2 seconds - unit/keyspace
  2 seconds - unit/auth
  1 seconds - unit/quit
  4 seconds - unit/multi
  6 seconds - unit/type/stream-cgroups
  2 seconds - unit/acl
  9 seconds - unit/type/list
  9 seconds - unit/type/hash
  11 seconds - unit/scan
  12 seconds - unit/type/string
  12 seconds - unit/type/set
  15 seconds - unit/sort
  15 seconds - unit/expire
  18 seconds - unit/other
  19 seconds - unit/latency-monitor
  11 seconds - integration/aof
  19 seconds - integration/replication-2
  13 seconds - integration/rdb
  29 seconds - unit/type/list-2
  2 seconds - integration/convert-zipmap-hash-on-load
  1 seconds - integration/logging
  31 seconds - unit/type/list-3
  33 seconds - unit/dump
  27 seconds - integration/block-repl
  0 seconds - unit/pubsub
  17 seconds - integration/corrupt-dump
  3 seconds - integration/redis-benchmark
  2 seconds - unit/slowlog
  38 seconds - unit/type/zset
  10 seconds - integration/failover
  3 seconds - unit/introspection
  2 seconds - unit/limits
  7 seconds - unit/introspection-2
  20 seconds - integration/corrupt-dump-fuzzer
  14 seconds - integration/redis-cli
  18 seconds - integration/psync2-pingoff
  3 seconds - unit/bitfield
  9 seconds - unit/bitops
  3 seconds - unit/lazyfree
  22 seconds - integration/psync2-reg
  1 seconds - unit/tls
  1 seconds - unit/tracking
  18 seconds - unit/scripting
  4 seconds - unit/wait
  1 seconds - unit/shutdown
  1 seconds - unit/networking
  2 seconds - unit/oom-score-adj
  15 seconds - unit/memefficiency
  15 seconds - unit/pendingquerybuf
  27 seconds - unit/obuf-limits
  58 seconds - integration/replication-3
  73 seconds - unit/type/stream
  63 seconds - integration/replication-4
  46 seconds - integration/psync2
  48 seconds - unit/hyperloglog
  84 seconds - unit/maxmemory
  78 seconds - north
  136 seconds - unit/aofrw
  173 seconds - integration/replication-psync
  221 seconds - integration/replication
  0 seconds - defrag

!!! WARNING The following tests failed:

*** [err]: Connect multiple replicas at the same time (issue #141), master diskless=no, replica diskless=disabled in tests/integration/replication.tcl
Expected e7471ba79981a9e99e698f6b2196a4c287775ff6 eq c4d9e8368b5adab6a817f2a49d4ab57254f4bd10 (context: type eval line 65 cmd {assert {$digest eq $digest0}} proc ::test)
*** [err]: Connect multiple replicas at the same time (issue #141), master diskless=no, replica diskless=swapdb in tests/integration/replication.tcl
Expected 9d0d7de765b37cb47254f58ee5536d97f037faf0 eq e744fa1bab2ed7323e6c6cd65a78733583fd0f1f (context: type eval line 66 cmd {assert {$digest eq $digest1}} proc ::test)
*** [err]: Connect multiple replicas at the same time (issue #141), master diskless=yes, replica diskless=disabled in tests/integration/replication.tcl
Expected 8f8ca836423f6bd4d40c3e221e80a996f1b479ff eq a9361604fc77c6241abc3798aafbfa52b856ae38 (context: type eval line 66 cmd {assert {$digest eq $digest1}} proc ::test)
*** [err]: Connect multiple replicas at the same time (issue #141), master diskless=yes, replica diskless=swapdb in tests/integration/replication.tcl
Expected 2e6e05bcae9885c100f5e9c4963d37adeaef142c eq c1ba004fb31716b02fa52b1d3814e791b213f11b (context: type eval line 66 cmd {assert {$digest eq $digest1}} proc ::test)
Cleanup: may take some time... OK
make[1]: *** [Makefile:383: test] Error 1
make[1]: Leaving directory '/tmp/guix-build-redis-6.2.4-1.32a2584.drv-0/redis-6.2.4-1.32a2584-checkout/src'
make: *** [Makefile:6: check] Error 2

Test suite failed, dumping logs.
command "make" "check" "-j" "4" "CC=gcc" "MALLOC=libc" "LDFLAGS=-ldl" "PREFIX=/gnu/store/90dmkln2vfhdf4jvryhds1kh8x2mnc8i-redis-6.2.4-1.32a2584" failed with status 2
note: keeping build directory `/tmp/guix-build-re
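
For context, every remaining replication.tcl failure trips the same kind of check: once the replicas report being in sync, the test compares the master's and each replica's DEBUG DIGEST (a SHA1 over the whole data set) and asserts the values are equal. Below is a minimal sketch of that comparison, runnable by hand from the source checkout, assuming a master on 127.0.0.1:6379 and a replica on 127.0.0.1:6380 (both ports are hypothetical, not taken from the log):

```tcl
# Minimal sketch, not part of the test suite: compare the data-set digests of a
# master/replica pair the same way the failing assertions do.
# Assumes it is run from the redis source tree (so tests/support/redis.tcl is
# available), TLS is not in use, and the two ports below point at a real
# master and replica.
source tests/support/redis.tcl

set master  [redis 127.0.0.1 6379]
set replica [redis 127.0.0.1 6380]

set d0 [$master debug digest]
set d1 [$replica debug digest]
if {$d0 eq $d1} {
    puts "digests match: $d0"
} else {
    puts "digest mismatch: master=$d0 replica=$d1"
}
```

The failing files can also be re-run in isolation (e.g. ./runtest --single integration/replication from the source tree), which is quicker than repeating a full make check.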

@oranagra
Member

oranagra commented Jun 9, 2021

thanks.

So it looks like with that fix most of the problems are resolved, but one (replication.tcl) is still present.
Also, I see a few Tcl crash reports in the background workloads (bg_block_op and bg_complex_data); they happen but don't seem to cause any test to fail.
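
Those crash reports come from the background write-load helpers, which run as separate tclsh processes; when the test kills or restarts the server while they are mid-command, the Tcl client raises "I/O error reading reply" and the helper simply exits, which is why no test is marked as failed. Roughly, such a helper looks like the sketch below (reconstructed from the stack traces above; the exact file shipped in 6.2.4 may differ slightly):

```tcl
# Rough sketch of tests/helpers/bg_complex_data.tcl, inferred from the stack
# traces in the log above; treat it as an illustration, not the shipped helper.
# It is a standalone tclsh process that connects to the server under test and
# keeps generating a random data set until the test kills it.
source tests/support/redis.tcl
source tests/support/util.tcl

proc bg_complex_data {host port db ops tls} {
    set r [redis $host $port 0 $tls]
    $r select $db
    # createComplexDataset (tests/support/util.tcl) issues random SET/LPUSH/
    # SADD/ZADD/HSET/... commands; if the server goes away mid-run the client
    # raises "I/O error reading reply" and this whole process just exits.
    createComplexDataset $r $ops
}

bg_complex_data [lindex $argv 0] [lindex $argv 1] [lindex $argv 2] [lindex $argv 3] [lindex $argv 4]
```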

@YaacovHazan can you please look into that?
