{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":184981,"defaultBranch":"master","name":"memcached","ownerLogin":"memcached","currentUserCanPush":false,"isFork":false,"isEmpty":false,"createdAt":"2009-04-24T23:34:25.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/41836?v=4","public":true,"private":false,"isOrgOwned":true},"refInfo":{"name":"","listCacheKey":"v0:1717172364.0","currentOid":""},"activityList":{"items":[{"before":"9dc8b148345b27dd86580641d17f4268885b5894","after":"90f1d91bd0b3048fc2e3dffad8511559568b8ac2","ref":"refs/heads/master","pushedAt":"2024-05-31T16:19:23.000Z","pushType":"push","commitsCount":11,"pusher":{"login":"dormando","name":"dormando","path":"/dormando","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/66832?s=80&v=4"},"commit":{"message":"memcached-tool: add -u flag to unescape special chars in keys names\n\nThe \"lru_crawler metadump all\" command called by \"keys\" option\nescapes unsafe special chars with corresponding %xx codes.\nFor example, a key named \"keywith+\" will be shown in \"memcached-tool keys\" as:\n\nkey=keywith%2B exp=... la=... cas=...\n\nBut in some cases in can be useful to have the original key name printed as-is.\nThis commit adds a \"-u\" flag to do the unescaping of lru_crawler output before\ndisplaying it.","shortMessageHtmlLink":"memcached-tool: add -u flag to unescape special chars in keys names"}},{"before":"6fdc1e6470760fdc681e8212c38038e94c710d74","after":"90f1d91bd0b3048fc2e3dffad8511559568b8ac2","ref":"refs/heads/next","pushedAt":"2024-05-31T15:55:43.000Z","pushType":"push","commitsCount":2,"pusher":{"login":"dormando","name":"dormando","path":"/dormando","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/66832?s=80&v=4"},"commit":{"message":"memcached-tool: add -u flag to unescape special chars in keys names\n\nThe \"lru_crawler metadump all\" command called by \"keys\" option\nescapes unsafe special chars with corresponding %xx codes.\nFor example, a key named \"keywith+\" will be shown in \"memcached-tool keys\" as:\n\nkey=keywith%2B exp=... la=... cas=...\n\nBut in some cases in can be useful to have the original key name printed as-is.\nThis commit adds a \"-u\" flag to do the unescaping of lru_crawler output before\ndisplaying it.","shortMessageHtmlLink":"memcached-tool: add -u flag to unescape special chars in keys names"}},{"before":"f3700374f40385b3288c961401be4b0895642235","after":"6fdc1e6470760fdc681e8212c38038e94c710d74","ref":"refs/heads/next","pushedAt":"2024-05-31T05:16:13.000Z","pushType":"push","commitsCount":1,"pusher":{"login":"dormando","name":"dormando","path":"/dormando","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/66832?s=80&v=4"},"commit":{"message":"proxy: add counters for VM memory and GC runs\n\nvm_gc_runs: number of times a lua VM GC has been run. Each worker thread\nVM causing a GC run increases this number.\nvm_memory_kb: total memory in kilobytes of all worker thread VM's. Does\nnot include the config thread.\n\nWanted to add a counter for gc steps as well, but am not convinced it's\nuseful information.\n\nAlso adds a VM fudge when freeing a request slot. 
2024-05-30 01:16 UTC · dormando pushed 1 commit to next
proxy: make iov limit bugs easier to see

The Shredders bench suite can trigger the former backend IOV limit issue
easily when the IOV limit is 128 or less, but only very rarely at 256 or
more. Also adds an assert that will fire if the condition is hit rather than
corrupting data.

2024-05-29 17:57 UTC · dormando pushed 1 commit to next
proxy: `res:close()`

Alias __close to close() so users can manually clear a result object.

2024-05-28 23:47 UTC · dormando pushed 1 commit to next
proxy: fix stupid write flush bug

If we can't write the entire backend request queue within the IOV limit, we
mark where in the request queue we were and try again later if the socket is
still writeable.

Out of paranoia I had marked the position _before_ the next not-flushed IO,
but that is obviously wrong: any _flushed_ IO can be _removed_ from the stack
after processing a READ event from the socket. If we start again at the
position of a flushed IO it might be reclaimed memory.

Somehow this was extremely hard for the bench suite to reproduce, unless I
cut the IOV limit from 1024 to 32.

2024-05-24 20:38 UTC · dormando pushed 1 commit to next
proxy: backend TLS support [EXPERIMENTAL]

Has not been extensively tested or validated under benchmarks. Please let us
know if you intend to use the feature, but feel free to try it out yourself
since it will likely work.

To use, within `mcp_config_pools`:

    mcp.init_tls() -- before making any backends
    mcp.backend_use_tls(true)

or pass 'tls = true' as an argument to `mcp.backend`.

Does not currently support client certificates or peer verification. Let us
know if you need this support and we will prioritize it.
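As a rough illustration only (the pool and backend layout below is an assumed,
typical config shape and is not part of the commit; only the TLS calls come
from it), enabling backend TLS might look like:

```lua
-- Illustrative sketch: the single-backend pool is assumed, not from the commit.
-- It only shows where the new TLS calls are meant to be placed.
function mcp_config_pools()
    mcp.init_tls()             -- per the commit: call before making any backends
    mcp.backend_use_tls(true)  -- opt all subsequently created backends into TLS
    -- (or pass 'tls = true' in the arguments of an individual mcp.backend)

    local b1 = mcp.backend('b1', '127.0.0.1', 11211)
    return mcp.pool({ b1 })
end
```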
2024-05-24 20:22 UTC · dormando pushed 1 commit to next
crawler: don't block during hash expansion

If using `lru_crawler metadump hash`, the client could block while trying to
grab the maintenance lock potentially held by the hash table expander. Hash
expansion can potentially take a long time.

Further, while waiting for this lock, the crawler is holding the LRU crawler
lock, which could cause other clients trying to talk to the crawler code to
themselves block.

So... throw a locked error and don't do that.

2024-05-24 19:29 UTC · dormando pushed 1 commit to next
proxy: support to-be-closed for result objects

If using mcp.internal(r) to fetch keys, but not returning them to the user,
the underlying item references will normally stick around until they are
garbage collected. Memcached is _not_ designed for this: references must be
held for a short and temporary period of time.

If using a res object that you don't intend to send back to the user, it must
be marked as to-be-closed.
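A minimal sketch of the pattern described above (the handler shape is an
assumption; only mcp.internal(), the to-be-closed marking, and res:close()
come from these commits):

```lua
-- Sketch only: the handler shape is assumed, not from the commit. Lua 5.4's
-- to-be-closed attribute releases the result when the variable goes out of
-- scope, so the item reference is not held until garbage collection.
local function peek_internal(r)
    local res <close> = mcp.internal(r)
    -- inspect res here without returning it to the client;
    -- res:close() may also be called explicitly, per the res:close() commit
end
```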
2024-05-20 16:35 UTC · dormando pushed 1 commit to next
proxy: add mcp.backend_depth_limit(int)

If a backend has a queue depth over this limit, fast-fail any further
requests.

A global parallel request limit can mean a single slow or overloaded backend
causes the entire proxy to stop working. As a first layer of defence the
depth for a particular backend should be capped.

2024-05-10 21:12 UTC · dormando pushed 1 commit to next
proxy: fix refcount leak in mcp.internal()

If running a fetch request without returning the result upstream to the user,
a successfully fetched item would leak its reference.

2024-05-10 20:16 UTC · dormando deleted branch proxy_subfix

2024-05-10 20:16 UTC · dormando deleted branch proxy_nreq

2024-05-10 20:13 UTC · dormando created branch proxy_intref at "proxy: fix
refcount leak in mcp.internal()" (the commit above)

2024-05-10 01:16 UTC · dormando created branch proxy_nreq at:
proxy: improve gc handling of mcp.request

Since userdata have to pass through the GC twice before being collected, we
usually make their allocations look oversized to the GC. This wasn't being
done for mcp.request() because I hadn't personally seen it causing problems.

2024-05-06 04:16 UTC · dormando pushed 19 commits to master
proxy: fix backend depth counter

The "request depth" value for a backend could go negative in some cases. We
used a magic value to attempt to speed up the sub-connection selector, but I
don't think that's necessary. Without the magic value we don't need to reset
the depth to 0, so it stays balanced.

Added some new asserts so the test and bench suites can find this issue more
easily going forward. Survived the full bench suite with asserts enabled.
2024-05-06 04:02 UTC · dormando pushed 1 commit to next (same commit: "proxy: fix backend depth counter")

2024-05-04 01:14 UTC · dormando pushed 4 commits to next
proxy: fix short writes caused by mcp.internal

Wasn't filling in the 'tosend' portion of a response structure when using
mcp.internal to create a response object.

This field is used as a fastpath check to see whether a response has been
completely transmitted to the socket. Since it was zero, we would always
consider an mcp.internal() based response as completely sent, cutting off the
data for larger items.

Expands tests for a "simply large" item and also an item large enough to be
chunked.
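For context, a sketch with an assumed handler shape (not code from the
commit): mcp.internal(r) answers a request from this instance's local cache
and returns a result object, which can be handed straight back to the client.
The fix above addresses truncation of larger values on that path.

```lua
-- Sketch only: the handler shape is assumed, not from the commit.
local function serve_locally(r)
    return mcp.internal(r)  -- respond from the local cache
end
```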
2024-05-03 21:50 UTC · dormando pushed 1 commit to proxy_subfix (same commit: "proxy: fix short writes caused by mcp.internal")

2024-05-03 20:38 UTC · dormando pushed 1 commit to proxy_subfix
proxy: fix IO backlog softlock

If an IO thread got a large depth backlog it would only be able to write
1024-ish items at a time, requiring more requests coming in to continue the
flush backlog. This is invisible under normal traffic conditions, since a
sudden burst would still dequeue quickly once traffic returns to normal
levels.

So if you cause a huge backlog and then _stop_ traffic, it would never flush
everything and appear to hang.

This does not apply to cases where the backend cannot finish flushing due to
EWOULDBLOCK. That worked fine already.

2024-05-02 22:25 UTC · dormando pushed 1 commit to proxy_subfix
proxy: even more GC tuning

Fixes lua GC getting behind and bloating memory in certain conditions.

I think this can only happen if you're running the debug binary and doing a
pipelined benchmark from tons of client connections. This would create tons
of garbage very quickly before the in-between-requests GC collection system
could catch up. With the debug binary the newer API recycles request slots
10x more often, making the problem visible.

2024-05-02 01:30 UTC · dormando created branch proxy_subfix at:
proxy: fix race condition leading to hang

If a lua function schedules extra requests after an initial set of requests,
we need to use a workaround to ensure the second set of requests is actually
submitted to the proxy backends.

This workaround was in the wrong place. If connections are being cut and
reconnected between batches of requests, it's possible for the workaround to
trigger on a connection object that has since moved to another thread.

This patch tightens up the workaround to run before the client connection has
a chance to resume.

2024-04-29 05:50 UTC · dormando pushed 1 commit to next
proxy: fix possible corruption with global objects

If a global object (ie: pool, global tbf) is created during
`mcp_config_pools`, _and_ copied to each worker VM during a config reload,
_but not used_ during `mcp_config_routes`, the worker VMs could cause the
global object to be reaped early.

Convoluted:
- Create pool obj
- Return pool obj
- Get pool proxy obj in `mcp_config_routes`, but don't use it!
- Don't pass it to any funcgens!
- Config thread moves on to next worker
- Worker VM runs GC, clears pool proxy obj _before_ config reload moves onto
  the next worker VM
- Worker VM sees proxy refcount as 0, signals to reap object
- The above can happen multiple times as the refcount bounces between 1 and 0.

I have no idea how realistic it is to hit this problem or what the symptoms
even are. I caused this via a long benchmark with constant reloads and
hitting an assert in the debug binary. Changing the asserts around made them
trigger from startup, making it easier to see the problem.

The fix:
- When we first reference a proxy object, take an extra reference.
- We get a signal bit by negating the lua self reference.
- Enqueue the object for the manager thread to examine after the config
  reload completes.
- The manager thread acks the object and reduces the refcount by 1, negating
  the self ref back to positive.
- If needed, immediately reap.
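As a hypothetical illustration of the "copied but not used" case (the
backend/pool layout is assumed and is not the commit's actual reproduction):

```lua
-- Hypothetical sketch of the pattern described above.
function mcp_config_pools()
    -- global object, created on the config thread and copied to each worker VM
    return mcp.pool({ mcp.backend('b1', '127.0.0.1', 11211) })
end

function mcp_config_routes(pool)
    -- the pool proxy object arrives here but is never attached to a funcgen
    -- or route; before this fix, a worker GC cycle could reap it early
end
```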
2024-04-26 21:37 UTC · dormando pushed 1 commit to next
core: add queue info to `stats conns`

Adds to `stats conns` the state of deferred IO queues: extstore or proxy. It
shows the number of queues waiting (it can be both proxy and extstore) and,
within each queue, how many sub-IOs are waiting.

2024-04-23 23:30 UTC · dormando pushed 1 commit to next
proxy: allow passing nil to mcp.server_stats()

Ignores nil as though the argument were blank instead.

2024-04-23 23:05 UTC · dormando pushed 1 commit to next
extstore: start arg tokenizer fix

We were checking for the wrong separator for the final token in a file
descriptor. This didn't lead to a bug because there were no tokens after the
bucket, but if we ever added one it would break, so it's worth fixing.
2024-04-23 23:02 UTC · dormando pushed 1 commit to next
proxy: fix proxy_config with > 2 lua files

Typo'd a ',' for a ':' in the second strtok_r call, causing failure if more
than two lua files are specified.

2024-04-23 20:10 UTC · dormando deleted branch proxy_lua_stats

2024-04-23 20:10 UTC · dormando deleted branch meta_del_x

2024-04-23 20:09 UTC · dormando pushed 1 commit to next
proxy: `mcp.server_stats(subcmd)`

Allow accessing server stats from the configuration thread (useful for crons,
checking start args, etc). Does not support cachedump or detail.

Returns a table of the results; the results are thus unordered.
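A small sketch of how this might be used from the configuration thread (the
subcommand and the loop are assumptions for illustration, not from the
commit):

```lua
-- Illustrative sketch: "settings" and the print loop are assumed.
-- mcp.server_stats(subcmd) returns an unordered table of stat name -> value
-- pairs, gathered on the configuration thread.
function mcp_config_pools()
    local stats = mcp.server_stats("settings")  -- e.g. check start arguments
    for name, value in pairs(stats) do
        print(name, value)
    end
    -- ... build and return pools as usual ...
end
```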