{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":156018,"defaultBranch":"unstable","name":"redis","ownerLogin":"redis","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2009-03-21T22:32:25.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1529926?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1715508858.0","currentOid":""},"activityList":{"items":[{"before":"71676513ddd125af03926b60748bf7eae5689f67","after":"323be4d6993faf02bf89382f06f04f0d0f9a70c9","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-05-17T10:27:02.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Hfe serialization listpack (#13243)\n\nAdd RDB de/serialization for HFE\r\n\r\nThis PR adds two new RDB types: `RDB_TYPE_HASH_METADATA` and\r\n`RDB_TYPE_HASH_LISTPACK_TTL` to save HFE data.\r\nWhen the hash RAM encoding is dict, it will be saved in the former, and\r\nwhen it is listpack it will be saved in the latter.\r\nBoth formats just add the TTL value for each field after the data that\r\nwas previously saved, i.e. HASH_METADATA will save the number of entries\r\nand, for each entry, key, value and TTL, whereas listpack is saved as a\r\nblob.\r\nOn read, the usual dict <--> listpack conversion takes place if\r\nrequired.\r\nIn addition, when reading a hash that was saved as a dict, fields are\r\nactively expired if expiry is due. 
Currently this also holds for\r\nlistpack encoding, but it is supposed to be removed.\r\n\r\nTODO:\r\nRemove active expiry on load when loading from listpack format (unless\r\nwe'll decide to keep it)","shortMessageHtmlLink":"Hfe serialization listpack (#13243)"}},{"before":"80be2cc29111b31e316c47529b21c7a9efc61651","after":"71676513ddd125af03926b60748bf7eae5689f67","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-05-16T16:35:58.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"moticless","name":"Moti Cohen","path":"/moticless","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24944278?s=80&v=4"},"commit":{"message":"Fix commands H*EXPIRE* and H*TTL to include `FIELDS` constant (#13270)\n\nThe same goes for: HPEXPIRE, HEXPIREAT, HPEXPIREAT, HEXPIRETIME,\r\nHPEXPIRETIME, HPTTL, HTTL, HPERSIST","shortMessageHtmlLink":"Fix commands H*EXPIRE* and H*TTL to include FIELDS constant (#13270)"}},{"before":"ffbdf2f6f37e2d5c0d1dcc6db16b63faacffd705","after":"f1b02129173dd676feca0131dcde92d5b374dacf","ref":"refs/heads/unstable","pushedAt":"2024-05-15T06:12:22.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Revert \"Change mmap rand bits as a temporary mitigation to resolve asan bug (#13150)\" (#13266)\n\nThe kernel config `vm.mmap_rnd_bits` had been reverted in\r\nhttps://github.com/actions/runner-images/issues/9491, so we can revert\r\nthe changes from #13150.\r\n\r\nCI only with ASAN passed:\r\nhttps://github.com/sundb/redis/actions/runs/9058263634","shortMessageHtmlLink":"Revert \"Change mmap rand bits as a temporary mitigation to resolve 
as…"}},{"before":"8a05f0092b0e291498b8fdb8dd93355467ceab25","after":"ffbdf2f6f37e2d5c0d1dcc6db16b63faacffd705","ref":"refs/heads/unstable","pushedAt":"2024-05-14T12:08:32.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Fix test failure due to differing reply format of XREADGROUP under RESP3 in MULTI (#13255)\n\nThis test was introduced by #13251.\r\nNormally we auto transform the reply format of XREADGROUP to array under\r\nRESP3 (see trasformer_funcs).\r\nBut when we execute the XREADGROUP command in MULTI it doesn't work, which\r\ncaused the new test to fail.\r\nThe solution is to verify the reply of XREADGROUP in advance rather than\r\nin MULTI.\r\n\r\nFailed validate schema CI:\r\nhttps://github.com/redis/redis/actions/runs/9025128323/job/24800285684\r\n\r\n---------\r\n\r\nCo-authored-by: guybe7 ","shortMessageHtmlLink":"Fix test failure due to differing reply format of XREADGROUP under RE…"}},{"before":"5066e6e9cdd5f8dcb5451d4b5432d9ddbf3364de","after":"80be2cc29111b31e316c47529b21c7a9efc61651","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-05-14T09:32:33.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Add defragment support for HFE (#13229)\n\n## Background\r\n1. All hash objects that contain HFE are referenced by db->hexpires.\r\n2. All fields in a dict hash object with HFE are referenced by an\r\nebucket.\r\n\r\nSo when we defrag the hash object or the field in a dict with HFE, we\r\nalso need to update the references in them.\r\n\r\n## Interface\r\n1. 
Add a new interface `ebDefragItem`, which can accept a defrag\r\ncallback to defrag items in ebuckets, and simultaneously update their\r\nreferences in the ebucket.\r\n\r\n## Main changes\r\n1. The key type of dict of hash object is no longer sds, so add new\r\n`activeDefragHfieldDict()` to defrag the dict instead of\r\n`activeDefragSdsDict()`.\r\n2. When we defrag the dict of hash object by using `dictScanDefrag()`,\r\nwe always set the defrag callback `defragKey` of `dictDefragFunctions`\r\nto NULL, because we can't reallocate a field without updating its\r\nreference in ebuckets.\r\nInstead, we will defrag the field of the dict and update its reference\r\nin the callback `dictScanDefrag` of dictScanFunction().\r\n3. When we defrag the hash robj with HFE, we will use `ebDefragItem` to\r\ndefrag the robj and update the reference in db->hexpires.\r\n\r\n## TODO:\r\nDefrag the ebucket structure incrementally, which will be handled in a future\r\nPR.\r\n\r\n---------\r\n\r\nCo-authored-by: Ozan Tezcan \r\nCo-authored-by: Moti Cohen ","shortMessageHtmlLink":"Add defragment support for HFE (#13229)"}},{"before":"7010f41c9671d0a424459be33619b08bdc49838a","after":"5066e6e9cdd5f8dcb5451d4b5432d9ddbf3364de","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-05-13T08:09:49.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"tezc","name":"Ozan Tezcan","path":"/tezc","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/17865367?s=80&v=4"},"commit":{"message":"Fix hgetf/hsetf reply type by returning string (#13263)\n\nIf encoding is listpack, the hgetf and hsetf commands reply with the field value\r\nas an integer.\r\nThis PR fixes it by returning a string.\r\n\r\nProblematic cases:\r\n```\r\n127.0.0.1:6379> hset hash one 1\r\n(integer) 1\r\n127.0.0.1:6379> hgetf hash fields 1 one\r\n1) (integer) 1\r\n127.0.0.1:6379> hsetf hash GETOLD fvs 1 one 2\r\n1) (integer) 1\r\n127.0.0.1:6379> hsetf hash DOF GETNEW fvs 1 one 2\r\n1) (integer) 2\r\n```\r\n\r\nAdditional 
fixes:\r\n- hgetf/hsetf command description text\r\n\r\nFixes #13261, #13262","shortMessageHtmlLink":"Fix hgetf/hsetf reply type by returning string (#13263)"}},{"before":null,"after":"f012703e2f9d9dbe2f5ce780bff89eb0e868f686","ref":"refs/heads/LiorKogan-patch-1","pushedAt":"2024-05-12T10:14:18.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"LiorKogan","name":"Lior Kogan","path":"/LiorKogan","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7252991?s=80&v=4"},"commit":{"message":"Update 00-RELEASENOTES","shortMessageHtmlLink":"Update 00-RELEASENOTES"}},{"before":"0e1de78fca849c135fd00cd85b5b87920e46e50d","after":"8a05f0092b0e291498b8fdb8dd93355467ceab25","ref":"refs/heads/unstable","pushedAt":"2024-05-10T03:10:14.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Add reverse history search in redis-cli (linenoise) (#12543)\n\nadded reverse history search to redis-cli, use it with the following:\r\n\r\n* CTRL+R : enable search backward mode, and search next one when\r\npressing CTRL+R again until reach index 0.\r\n```\r\n127.0.0.1:6379> keys one\r\n127.0.0.1:6379> keys two\r\n(reverse-i-search): # press CTRL+R\r\n(reverse-i-search): keys two # input `keys`\r\n(reverse-i-search): keys one # press CTRL+R again\r\n(reverse-i-search): keys one # press CTRL+R again, still `keys one` due to reaching index 0\r\n(i-search): keys two # press CTRL+S, enable search forward\r\n(i-search): keys two # press CTRL+S, still `keys one` due to reaching index 1\r\n```\r\n\r\n* CTRL+S : enable search forward mode, and search next one when pressing\r\nCTRL+S again until reach index 0.\r\n```\r\n127.0.0.1:6379> keys one\r\n127.0.0.1:6379> keys two\r\n(i-search): # press CTRL+S\r\n(i-search): keys one # input `keys`\r\n(i-search): keys two # press CTRL+S again\r\n(i-search): keys two # press CTRL+R again, 
still `keys two` due to reaching index 0\r\n(reverse-i-search): keys one # press CTRL+R, enable search backward\r\n(reverse-i-search): keys one # press CTRL+S, still `keys one` due to reaching index 1\r\n```\r\n\r\n* CTRL+G : disable\r\n```\r\n127.0.0.1:6379> keys one\r\n127.0.0.1:6379> keys two\r\n(reverse-i-search): # press CTRL+R\r\n(reverse-i-search): keys two # input `keys`\r\n127.0.0.1:6379> # press CTRL+G\r\n```\r\n\r\n* CTRL+C : disable\r\n```\r\n127.0.0.1:6379> keys one\r\n127.0.0.1:6379> keys two\r\n(reverse-i-search): # press CTRL+R\r\n(reverse-i-search): keys two # input `keys`\r\n127.0.0.1:6379> # press CTRL+G\r\n```\r\n\r\n* TAB : use the current search result and exit search mode\r\n```\r\n127.0.0.1:6379> keys one\r\n127.0.0.1:6379> keys two\r\n(reverse-i-search): # press CTRL+R\r\n(reverse-i-search): keys two # input `keys`\r\n127.0.0.1:6379> keys two # press TAB\r\n```\r\n\r\n* ENTER : use the current search result and execute the command\r\n```\r\n127.0.0.1:6379> keys one\r\n127.0.0.1:6379> keys two\r\n(reverse-i-search): # press CTRL+R\r\n(reverse-i-search): keys two # input `keys`\r\n127.0.0.1:6379> keys two # press ENTER\r\n(empty array)\r\n127.0.0.1:6379>\r\n```\r\n\r\n* any arrow key will disable reverse search\r\n\r\nyour result will have the search match bolded, you can press enter to\r\nexecute the full result\r\n\r\nnote: I have _only added this for multi-line mode_, as it seems to be\r\nforced that way when `repl` is called\r\n\r\nCloses: https://github.com/redis/redis/issues/8277\r\n\r\n---------\r\n\r\nCo-authored-by: Clayton Northey \r\nCo-authored-by: Viktor Söderqvist \r\nCo-authored-by: debing.sun \r\nCo-authored-by: Bjorn Svensson \r\nCo-authored-by: Viktor Söderqvist ","shortMessageHtmlLink":"Add reverse history search in redis-cli (linenoise) 
(#12543)"}},{"before":"ca4ed48db662cbd568df4775003faabf7491043b","after":"7010f41c9671d0a424459be33619b08bdc49838a","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-05-09T14:23:00.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Add notification support for HFE (#13237)\n\n1. Add `hpersist` notification for `hpersist` command.\r\n2. Add `pexpire` notification for `hexpire`, `hexpireat` and `hpexpire`.","shortMessageHtmlLink":"Add notification support for HFE (#13237)"}},{"before":"13401f8bc1cd6ab2905f406227fb9d762d78247e","after":"ca4ed48db662cbd568df4775003faabf7491043b","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-05-08T20:11:32.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"tezc","name":"Ozan Tezcan","path":"/tezc","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/17865367?s=80&v=4"},"commit":{"message":"Add listpack support, hgetf and hsetf commands (#13209)\n\n**Changes:**\r\n- Adds listpack support to hash field expiration \r\n- Implements hgetf/hsetf commands\r\n\r\n**Listpack support for hash field expiration**\r\n\r\nWe keep field name and value pairs in listpack for the hash type. With\r\nthis PR, if one of the hash field expiration commands is called on the key\r\nfor the first time, it converts the listpack layout to triplets to hold\r\nfield name, value and ttl per field. If a field does not have a TTL, we\r\nstore zero as the ttl value. Zero is encoded as two bytes in the\r\nlistpack. So, once we convert listpack to hold triplets, for the fields\r\nthat don't have a TTL, it will be consuming those extra 2 bytes per\r\nitem. 
Fields are ordered by ttl in the listpack to find the field with\r\nminimum expiry time efficiently.\r\n\r\n**New command implementations as part of this PR:** \r\n\r\n- HGETF command\r\n\r\nFor each specified field get its value and optionally set the field's\r\nexpiration time in sec/msec /unix-sec/unix-msec:\r\n ```\r\n HGETF key \r\n [NX | XX | GT | LT]\r\n[EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT\r\nunix-time-milliseconds | PERSIST]\r\n \r\n ```\r\n\r\n- HSETF command\r\n\r\nFor each specified field value pair: set field to value and optionally\r\nset the field's expiration time in sec/msec /unix-sec/unix-msec:\r\n ```\r\n HSETF key \r\n [DC] \r\n [DCF | DOF] \r\n [NX | XX | GT | LT] \r\n [GETNEW | GETOLD] \r\n[EX seconds | PX milliseconds | EXAT unix-time-seconds | PXAT\r\nunix-time-milliseconds | KEEPTTL]\r\n \r\n ```\r\n\r\nTodo:\r\n- Performance improvement.\r\n- rdb load/save\r\n- aof\r\n- defrag","shortMessageHtmlLink":"Add listpack support, hgetf and hsetf commands (#13209)"}},{"before":"03cd525ffaa301822ebdf96e95e0ffe19474f07d","after":"13401f8bc1cd6ab2905f406227fb9d762d78247e","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-05-08T15:38:45.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"moticless","name":"Moti Cohen","path":"/moticless","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24944278?s=80&v=4"},"commit":{"message":"ebuckets: Add test for ACT_UPDATE_EXP_ITEM (#13249)\n\n- On ebExpire() verify the logic of update expired value to a new time\r\nrather than remove it.\r\n- Refine ebuckets benchmark","shortMessageHtmlLink":"ebuckets: Add test for ACT_UPDATE_EXP_ITEM 
(#13249)"}},{"before":"f95031c4733078788063de775c968b6dc85522c0","after":"0e1de78fca849c135fd00cd85b5b87920e46e50d","ref":"refs/heads/unstable","pushedAt":"2024-05-06T08:55:42.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"guybe7","name":null,"path":"/guybe7","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/19590683?s=80&v=4"},"commit":{"message":"XREADGROUP from PEL should not affect server.dirty (#13251)\n\nBecause it does not cause any propagation (arguably it should, see the\r\ncomment in the tcl file)\r\n\r\nThe motivation for this fix is that in 6.2 if dirty changed without\r\npropagation inside MULTI/EXEC it would cause propagation of EXEC only,\r\nwhich would result in the replica sending errors to its master","shortMessageHtmlLink":"XREADGROUP from PEL should not affect server.dirty (#13251)"}},{"before":"c33c91dbcea21d895bcda7ca5756c24f86664b22","after":"03cd525ffaa301822ebdf96e95e0ffe19474f07d","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-05-03T03:11:42.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Fix reply schema for hfe related commands (#13238)","shortMessageHtmlLink":"Fix reply schema for hfe related commands (#13238)"}},{"before":"c18ff05665cb195190fbbd37235e15b2b86ebb63","after":"c33c91dbcea21d895bcda7ca5756c24f86664b22","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-04-25T15:29:03.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"moticless","name":"Moti Cohen","path":"/moticless","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24944278?s=80&v=4"},"commit":{"message":"Support HSET+expire in one command, at infra level (#13230)\n\nUnify infra of `HSETF`, `HEXPIRE`, `HSET` and provide API for RDB load\r\nas well. 
Whereas setting plain fields is rather straightforward, setting\r\nexpiration time to fields might be time-consuming and complex since each \r\nupdate of expiration time not only updates `ebuckets` of the corresponding hash, \r\nbut also might update `ebuckets` of the global HFE DS. It is required to optimize the \r\nsequence of field updates with expiration for a given hash, such that only once\r\ndone, the global HFE DS will get updated.\r\n\r\nTo do so, follow the scheme:\r\n1. Call `hashTypeSetExInit()` to initialize the HashTypeSetEx struct.\r\n2. Call `hashTypeSetEx()` one time or more, for each field/expiration update.\r\n3. Call `hashTypeSetExDone()` for notification and update of global HFE.\r\n\r\nIf expiration is not required, then avoid this API and use hashTypeSet() instead.","shortMessageHtmlLink":"Support HSET+expire in one command, at infra level (#13230)"}},{"before":"772564fc9e7a4415c79ef47486db1aacf9cb8915","after":"f95031c4733078788063de775c968b6dc85522c0","ref":"refs/heads/unstable","pushedAt":"2024-04-25T06:11:45.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Fix CI failure caused by PR #13231 (#13233)\n\nBy my mistake, in the last revert commit in #13231, I originally wanted\r\nto revert the last one, but reverted the penultimate fix.\r\nNow that we have fixed another potential memory read issue in [`743f1dd`\r\n(#13231)](https://github.com/redis/redis/pull/13231/commits/743f1dde79b433fdb8ea13de4fd73457d4fe25eb),\r\nnow it just seems to avoid confusion, I will verify in the future\r\nwhether it will have any impact, if so we will add this PR to backport.\r\n\r\nFailed CI: https://github.com/sundb/redis/actions/runs/8826731960","shortMessageHtmlLink":"Fix CI failure caused by PR #13231 
(#13233)"}},{"before":"804110a487f048669aa9d9412e5789ec43f4fe39","after":"772564fc9e7a4415c79ef47486db1aacf9cb8915","ref":"refs/heads/unstable","pushedAt":"2024-04-24T08:15:42.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Fix forget to update the dict's node in the kvstore's rehashing list after defragment (#13231)\n\nIntroduced by #13013\r\n\r\nAfter defragmenting the dictionary in the kvstore, if the dict is\r\nreallocated, the value of its node in the kvstore rehashing list must be\r\nupdated.","shortMessageHtmlLink":"Fix forget to update the dict's node in the kvstore's rehashing list …"}},{"before":"4581d43230fab23600b90f731edb0472dbea1c4d","after":"c18ff05665cb195190fbbd37235e15b2b86ebb63","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-04-18T13:06:30.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"moticless","name":"Moti Cohen","path":"/moticless","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24944278?s=80&v=4"},"commit":{"message":"Hash Field Expiration - Basic support\n\n- Add ebuckets & mstr data structures\r\n- Integrate active & lazy expiration\r\n- Add most of the commands \r\n- Add support for dict (listpack is missing)\r\nTODOs: RDB, notification, listpack, HSET, HGETF, defrag, aof","shortMessageHtmlLink":"Hash Field Expiration - Basic support"}},{"before":"e3550f01dde29d5d1eaa37dbb4533692c5680f06","after":"804110a487f048669aa9d9412e5789ec43f4fe39","ref":"refs/heads/unstable","pushedAt":"2024-04-16T09:43:34.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"oranagra","name":"Oran Agra","path":"/oranagra","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7045099?s=80&v=4"},"commit":{"message":"Allocate Lua VM code with jemalloc instead of libc, and count it used memory (#13133)\n\n## Background\r\n1. 
Currently Lua memory control does not pass through Redis's zmalloc.c.\r\nRedis maxmemory cannot limit memory problems caused by users abusing Lua,\r\nsince this Lua VM memory is not part of used_memory.\r\n\r\n2. Since jemalloc is much better (fragmentation and speed), and also we\r\nknow it and trust it, we are\r\ngoing to use jemalloc instead of libc to allocate the Lua VM code and\r\ncount its used memory.\r\n\r\n## Process:\r\nIn this PR, we will use jemalloc in Lua. \r\n1. Create an arena for all Lua VMs (script and function), which is\r\nshared, in order to avoid blocking the defragger.\r\n2. Create a bound tcache for the Lua VM, since the Lua VM and the main\r\nthread are by default in the same tcache, and if there is no isolated\r\ntcache, Lua may request memory from the tcache which has just been freed\r\nby the main thread, and vice versa.\r\nOn the other hand, since the Lua VM might be released in a bio thread, but\r\ntcache is not thread-safe, we need to recreate\r\n the tcache every time we recreate the Lua VM.\r\n3. Remove Lua memory statistics from memory fragmentation statistics to\r\navoid the effects of Lua memory fragmentation.\r\n\r\n## Other\r\nAdd the following new fields to `INFO DEBUG` (we may promote them to\r\nINFO MEMORY some day):\r\n1. allocator_allocated_lua: total number of bytes allocated in the Lua arena\r\n2. allocator_active_lua: total number of bytes in active pages allocated\r\nin the Lua arena\r\n3. allocator_resident_lua: maximum number of bytes in physically\r\nresident data pages mapped in the Lua arena\r\n4. 
allocator_frag_bytes_lua: fragment bytes in lua arena\r\n\r\nThis is oranagra's idea, and i got some help from sundb.\r\n\r\nThis solves the third point in #13102.\r\n\r\n---------\r\n\r\nCo-authored-by: debing.sun \r\nCo-authored-by: Oran Agra ","shortMessageHtmlLink":"Allocate Lua VM code with jemalloc instead of libc, and count it used…"}},{"before":"aada46a0c8e94c1716c308562ae53484a2e06e02","after":null,"ref":"refs/heads/dependabot/github_actions/cross-platform-actions/action-0.23.0","pushedAt":"2024-04-15T15:19:17.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"dependabot[bot]","name":null,"path":"/apps/dependabot","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/29110?s=80&v=4"}},{"before":null,"after":"6b0245b50c9d18c92cf5676ff396eef0dcf2946b","ref":"refs/heads/dependabot/github_actions/cross-platform-actions/action-0.24.0","pushedAt":"2024-04-15T15:19:10.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"dependabot[bot]","name":null,"path":"/apps/dependabot","primaryAvatarUrl":"https://avatars.githubusercontent.com/in/29110?s=80&v=4"},"commit":{"message":"Bump cross-platform-actions/action from 0.22.0 to 0.24.0\n\nBumps [cross-platform-actions/action](https://github.com/cross-platform-actions/action) from 0.22.0 to 0.24.0.\n- [Release notes](https://github.com/cross-platform-actions/action/releases)\n- [Changelog](https://github.com/cross-platform-actions/action/blob/master/changelog.md)\n- [Commits](https://github.com/cross-platform-actions/action/compare/v0.22.0...v0.24.0)\n\n---\nupdated-dependencies:\n- dependency-name: cross-platform-actions/action\n dependency-type: direct:production\n update-type: version-update:semver-minor\n...\n\nSigned-off-by: dependabot[bot] ","shortMessageHtmlLink":"Bump cross-platform-actions/action from 0.22.0 to 
0.24.0"}},{"before":"f4481e657f905074fa515701af3f695757817d88","after":"e3550f01dde29d5d1eaa37dbb4533692c5680f06","ref":"refs/heads/unstable","pushedAt":"2024-04-08T08:12:57.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"oranagra","name":"Oran Agra","path":"/oranagra","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7045099?s=80&v=4"},"commit":{"message":"redis-cli - sendReadOnly() to work with Redis Cloud (#13195)\n\nWhen using Redis Cloud, sendReadOnly() exit with `Error: ERR unknown\r\ncommand 'READONLY'`.\r\nIt is impacting `--memkeys`, `--bigkeys`, `--hotkeys`, and will impact\r\n`--keystats`.\r\nAdded one line to ignore this error.\r\n\r\nissue introduced in #12735 (not yet released).","shortMessageHtmlLink":"redis-cli - sendReadOnly() to work with Redis Cloud (#13195)"}},{"before":"4581d43230fab23600b90f731edb0472dbea1c4d","after":"f4481e657f905074fa515701af3f695757817d88","ref":"refs/heads/unstable","pushedAt":"2024-04-07T12:59:36.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"sundb","name":"debing.sun","path":"/sundb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/965798?s=80&v=4"},"commit":{"message":"Use usleep() instead of sched_yield() to yield cpu (#13183)\n\nwhen the main thread and the module thread are in the same thread,\r\nsched_yield() can work well.\r\nwhen they are both bind to different cpus, sched_yield() will look for\r\nthe thread with the highest priority, and if the module thread is always\r\nthe highest priority on a cpu, it will take a long time to let the main\r\nthread to reacquire the GIL.\r\n\r\nref https://man7.org/linux/man-pages/man2/sched_yield.2.html\r\n```\r\nIf the calling thread is the only thread in the highest priority\r\nlist at that time, it will continue to run after a call to\r\nsched_yield().\r\n```","shortMessageHtmlLink":"Use usleep() instead of sched_yield() to yield cpu 
(#13183)"}},{"before":null,"after":"4581d43230fab23600b90f731edb0472dbea1c4d","ref":"refs/heads/hash-field-expiry-integ","pushedAt":"2024-04-07T11:40:36.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"moticless","name":"Moti Cohen","path":"/moticless","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24944278?s=80&v=4"},"commit":{"message":"Fix daylight race condition and some thread leaks (#13191)\n\nfix some issues that come from sanitizer thread report.\r\n\r\n1. when the main thread is updating daylight_active, other threads (bio,\r\nmodule thread) may be writing logs at the same time.\r\n```\r\nWARNING: ThreadSanitizer: data race (pid=661064)\r\n Read of size 4 at 0x55c9a4d11c70 by thread T2:\r\n #0 serverLogRaw /home/sundb/data/redis_fork/src/server.c:116 (redis-server+0x8d797) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #1 _serverLog.constprop.2 /home/sundb/data/redis_fork/src/server.c:146 (redis-server+0x2a3b14) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #2 bioProcessBackgroundJobs /home/sundb/data/redis_fork/src/bio.c:329 (redis-server+0x1c24ca) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n\r\n Previous write of size 4 at 0x55c9a4d11c70 by main thread (mutexes: write M0, write M1, write M2, write M3):\r\n #0 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1102 (redis-server+0x925e7) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #1 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1087 (redis-server+0x925e7)\r\n #2 updateCachedTime /home/sundb/data/redis_fork/src/server.c:1118 (redis-server+0x925e7)\r\n #3 afterSleep /home/sundb/data/redis_fork/src/server.c:1811 (redis-server+0x925e7)\r\n #4 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:389 (redis-server+0x85ae0) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #5 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85ae0)\r\n #6 aeMain 
/home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85ae0)\r\n #7 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n```\r\n\r\n2. thread leaks in module tests\r\n```\r\nWARNING: ThreadSanitizer: thread leak (pid=668683)\r\n Thread T13 (tid=670041, finished) created by main thread at:\r\n #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:1036 (libtsan.so.2+0x3d179) (BuildId: 28a9f70061dbb2dfa2cef661d3b23aff4ea13536)\r\n #1 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:200 (blockonbackground.so+0x97fd) (BuildId: 9cd187906c57e88cdf896d121d1d96448b37a136)\r\n #2 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:169 (blockonbackground.so+0x97fd)\r\n #3 call /home/sundb/data/redis_fork/src/server.c:3546 (redis-server+0x9b7fb) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #4 processCommand /home/sundb/data/redis_fork/src/server.c:4176 (redis-server+0xa091c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #5 processCommandAndResetClient /home/sundb/data/redis_fork/src/networking.c:2468 (redis-server+0xd2b8e) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #6 processInputBuffer /home/sundb/data/redis_fork/src/networking.c:2576 (redis-server+0xd2b8e)\r\n #7 readQueryFromClient /home/sundb/data/redis_fork/src/networking.c:2722 (redis-server+0xd358f) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #8 callHandler /home/sundb/data/redis_fork/src/connhelpers.h:58 (redis-server+0x288a7b) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #9 connSocketEventHandler /home/sundb/data/redis_fork/src/socket.c:277 (redis-server+0x288a7b)\r\n #10 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:417 (redis-server+0x85b45) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #11 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 
(redis-server+0x85b45)\r\n #12 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85b45)\r\n #13 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n```","shortMessageHtmlLink":"Fix daylight race condition and some thread leaks (#13191)"}},{"before":"4df037962dd446a4a686e2b6d56d5367b6c9f0db","after":"4581d43230fab23600b90f731edb0472dbea1c4d","ref":"refs/heads/unstable","pushedAt":"2024-04-04T10:49:51.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"oranagra","name":"Oran Agra","path":"/oranagra","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7045099?s=80&v=4"},"commit":{"message":"Fix daylight race condition and some thread leaks (#13191)\n\nfix some issues that come from sanitizer thread report.\r\n\r\n1. when the main thread is updating daylight_active, other threads (bio,\r\nmodule thread) may be writing logs at the same time.\r\n```\r\nWARNING: ThreadSanitizer: data race (pid=661064)\r\n Read of size 4 at 0x55c9a4d11c70 by thread T2:\r\n #0 serverLogRaw /home/sundb/data/redis_fork/src/server.c:116 (redis-server+0x8d797) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #1 _serverLog.constprop.2 /home/sundb/data/redis_fork/src/server.c:146 (redis-server+0x2a3b14) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #2 bioProcessBackgroundJobs /home/sundb/data/redis_fork/src/bio.c:329 (redis-server+0x1c24ca) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n\r\n Previous write of size 4 at 0x55c9a4d11c70 by main thread (mutexes: write M0, write M1, write M2, write M3):\r\n #0 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1102 (redis-server+0x925e7) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #1 updateCachedTimeWithUs /home/sundb/data/redis_fork/src/server.c:1087 (redis-server+0x925e7)\r\n #2 updateCachedTime /home/sundb/data/redis_fork/src/server.c:1118 (redis-server+0x925e7)\r\n #3 afterSleep 
/home/sundb/data/redis_fork/src/server.c:1811 (redis-server+0x925e7)\r\n #4 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:389 (redis-server+0x85ae0) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #5 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85ae0)\r\n #6 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85ae0)\r\n #7 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n```\r\n\r\n2. thread leaks in module tests\r\n```\r\nWARNING: ThreadSanitizer: thread leak (pid=668683)\r\n Thread T13 (tid=670041, finished) created by main thread at:\r\n #0 pthread_create ../../../../src/libsanitizer/tsan/tsan_interceptors_posix.cpp:1036 (libtsan.so.2+0x3d179) (BuildId: 28a9f70061dbb2dfa2cef661d3b23aff4ea13536)\r\n #1 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:200 (blockonbackground.so+0x97fd) (BuildId: 9cd187906c57e88cdf896d121d1d96448b37a136)\r\n #2 HelloBlockNoTracking_RedisCommand /home/sundb/data/redis_fork/tests/modules/blockonbackground.c:169 (blockonbackground.so+0x97fd)\r\n #3 call /home/sundb/data/redis_fork/src/server.c:3546 (redis-server+0x9b7fb) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #4 processCommand /home/sundb/data/redis_fork/src/server.c:4176 (redis-server+0xa091c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #5 processCommandAndResetClient /home/sundb/data/redis_fork/src/networking.c:2468 (redis-server+0xd2b8e) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #6 processInputBuffer /home/sundb/data/redis_fork/src/networking.c:2576 (redis-server+0xd2b8e)\r\n #7 readQueryFromClient /home/sundb/data/redis_fork/src/networking.c:2722 (redis-server+0xd358f) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #8 callHandler /home/sundb/data/redis_fork/src/connhelpers.h:58 (redis-server+0x288a7b) (BuildId: 
dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #9 connSocketEventHandler /home/sundb/data/redis_fork/src/socket.c:277 (redis-server+0x288a7b)\r\n #10 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:417 (redis-server+0x85b45) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n #11 aeProcessEvents /home/sundb/data/redis_fork/src/ae.c:342 (redis-server+0x85b45)\r\n #12 aeMain /home/sundb/data/redis_fork/src/ae.c:477 (redis-server+0x85b45)\r\n #13 main /home/sundb/data/redis_fork/src/server.c:7211 (redis-server+0x7168c) (BuildId: dca0b1945ba30010e36129bdb296e488dd2b32d0)\r\n```","shortMessageHtmlLink":"Fix daylight race condition and some thread leaks (#13191)"}},{"before":"ce47834309ec8fd74cbeaf676313005ee440faa5","after":"4df037962dd446a4a686e2b6d56d5367b6c9f0db","ref":"refs/heads/unstable","pushedAt":"2024-04-02T12:09:52.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"moticless","name":"Moti Cohen","path":"/moticless","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24944278?s=80&v=4"},"commit":{"message":"Change FLUSHALL/FLUSHDB SYNC to run as blocking ASYNC (#13167)\n\n# Overview\r\nUsers utilize the `FLUSHDB SYNC` and `FLUSHALL SYNC` commands for a variety of \r\nreasons. The main issue with this command is that if the database becomes \r\nsubstantial in size, the server will be unresponsive for an extended period. \r\nOther than freezing application traffic, this may also lead some clients making \r\nincorrect judgments about the server's availability. For instance, a watchdog may \r\nerroneously decide to terminate the process, resulting in potential adverse \r\noutcomes. 
While a `FLUSH* ASYNC` can address these issues, it might not be used \r\nfor two reasons: firstly, it's not the default, and secondly, in some cases, the \r\nclient issuing the flush wants to wait for its completion before repopulating the \r\ndatabase.\r\n\r\nBetween the option of triggering FLUSH* asynchronously in the background without \r\nindication for completion versus running it synchronously in the foreground by \r\nthe main thread, there is another more appealing option. We can block the\r\nclient that requested the flush, execute the flush command in the background, and \r\nonce done, unblock the client and return a notification of completion. This approach \r\nensures the server remains responsive to other clients, and the blocked client \r\nreceives the expected response only after the flush operation has been successfully \r\ncarried out.\r\n\r\n# Implementation details\r\nInstead of defining yet another flavor of the flush command, we can modify\r\n`FLUSHALL SYNC` and `FLUSHDB SYNC` to always run in this new mode.\r\n\r\n## Extending BIO Threads capabilities\r\nToday, jobs that are carried out by BIO threads don't have the capability to \r\nindicate completion to the main thread. We can add this infrastructure by having\r\nan additional dummy job, coined as completion-job, that eventually will be written \r\nby BIO threads to a response-queue. The main thread will take care to consume items\r\nfrom the response-queue and call the provided callback function of each \r\ncompletion-job.\r\n\r\n## FLUSH* SYNC to run as blocking ASYNC\r\nCommand `FLUSH* SYNC` will be modified to create one or more async jobs to flush\r\nDB(s) and afterward will push an additional completion-job request. By sending the\r\ncompletion-job request only at the end, the main thread will be called back only\r\nafter all the preceding jobs completed their task in the background. 
During that\r\ntime, the client of the command is suspended and marked as `BLOCKED_LAZYFREE`\r\nwhereas any other client will be able to communicate with the server without any\r\nissue.","shortMessageHtmlLink":"Change FLUSHALL/FLUSHDB SYNC to run as blocking ASYNC (#13167)"}},{"before":"0b34396924eca4edc524469886dc5be6c77ec4ed","after":"ce47834309ec8fd74cbeaf676313005ee440faa5","ref":"refs/heads/unstable","pushedAt":"2024-04-01T15:08:55.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"moticless","name":"Moti Cohen","path":"/moticless","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/24944278?s=80&v=4"},"commit":{"message":"kvstoreIteratorNext() wrongly reset iterator twice (#13178)\n\nIt calls kvstoreIteratorNextDict() which eventually calls dictResumeRehashing()\r\nAnd then, on return, it calls dictResetIterator(iter) which calls dictResumeRehashing().\r\nWe end up with pauserehash value decremented twice instead of once.","shortMessageHtmlLink":"kvstoreIteratorNext() wrongly reset iterator twice (#13178)"}},{"before":"d6d38dc9285a01261b93b85277086d53c7b7714a","after":null,"ref":"refs/heads/revert-13157-license_change","pushedAt":"2024-03-26T18:01:49.000Z","pushType":"branch_deletion","commitsCount":0,"pusher":{"login":"LiorKogan","name":"Lior Kogan","path":"/LiorKogan","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7252991?s=80&v=4"}},{"before":null,"after":"d6d38dc9285a01261b93b85277086d53c7b7714a","ref":"refs/heads/revert-13157-license_change","pushedAt":"2024-03-21T03:50:13.000Z","pushType":"branch_creation","commitsCount":0,"pusher":{"login":"enjoy-binbin","name":"Binbin","path":"/enjoy-binbin","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/22811481?s=80&v=4"},"commit":{"message":"Revert \"Change license from BSD-3 to dual RSALv2+SSPLv1 (#13157)\"\n\nThis reverts commit 0b34396924eca4edc524469886dc5be6c77ec4ed.","shortMessageHtmlLink":"Revert \"Change license from BSD-3 to dual RSALv2+SSPLv1 
(#13157)\""}},{"before":"e64d91c37105bc2e23816b6f81b9ffc5e5d99801","after":"0b34396924eca4edc524469886dc5be6c77ec4ed","ref":"refs/heads/unstable","pushedAt":"2024-03-20T22:38:24.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"K-Jo","name":"Pieter Cailliau","path":"/K-Jo","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/4069725?s=80&v=4"},"commit":{"message":"Change license from BSD-3 to dual RSALv2+SSPLv1 (#13157)\n\n[Read more about the license change\r\nhere](https://redis.com/blog/redis-adopts-dual-source-available-licensing/)\r\nLive long and prosper 🖖","shortMessageHtmlLink":"Change license from BSD-3 to dual RSALv2+SSPLv1 (#13157)"}},{"before":"bad33f8738b4be5f58c4439a0c78312e4afbe432","after":"e64d91c37105bc2e23816b6f81b9ffc5e5d99801","ref":"refs/heads/unstable","pushedAt":"2024-03-20T20:44:29.000Z","pushType":"pr_merge","commitsCount":1,"pusher":{"login":"oranagra","name":"Oran Agra","path":"/oranagra","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/7045099?s=80&v=4"},"commit":{"message":"Fix dict use-after-free problem in kvs->rehashing (#13154)\n\nIn the ASAN CI, we found the server may crash because of a NULL ptr in `kvstoreIncrementallyRehash`.\r\nThe reason is that we use two-phase unlink in `dbGenericDelete`. After `kvstoreDictTwoPhaseUnlinkFind`,\r\nthe dict may be in rehashing and only have one element in ht[0] of `db->keys`.\r\n\r\nWhen we delete the last element in `db->keys` while `db->keys` is in rehashing, we may free the\r\ndict in `kvstoreDictTwoPhaseUnlinkFree` without deleting the node in `kvs->rehashing`. 
Then we may\r\nuse this freed ptr in `kvstoreIncrementallyRehash` in the `serverCron` and cause the crash.\r\nThis is indeed a use-after-free problem.\r\n\r\nThe fix is to call rehashingCompleted in dictRelease and dictEmpty, so that every call to\r\nrehashingStarted is always matched with a rehashingCompleted.\r\n\r\nAdded a test to the unit tests to catch it consistently.\r\n\r\n---------\r\n\r\nCo-authored-by: Oran Agra \r\nCo-authored-by: debing.sun ","shortMessageHtmlLink":"Fix dict use-after-free problem in kvs->rehashing (#13154)"}}],"hasNextPage":true,"hasPreviousPage":false,"activityType":"all","actor":null,"timePeriod":"all","sort":"DESC","perPage":30,"cursor":"djE6ks8AAAAETMpyPwA","startCursor":null,"endCursor":null}},"title":"Activity · redis/redis"}