{"payload":{"feedbackUrl":"https://github.com/orgs/community/discussions/53140","repo":{"id":97581720,"defaultBranch":"imr-hackaton","name":"scylla","ownerLogin":"denesb","currentUserCanPush":false,"isFork":true,"isEmpty":false,"createdAt":"2017-07-18T09:46:31.000Z","ownerAvatar":"https://avatars.githubusercontent.com/u/1389273?v=4","public":true,"private":false,"isOrgOwned":false},"refInfo":{"name":"","listCacheKey":"v0:1720417857.0","currentOid":""},"activityList":{"items":[{"before":"627ed4ab3c3a51b85e17491dbb4398c652a0f488","after":"f628e7439cf45dbee28020a138a3a99b0303b167","ref":"refs/heads/rcs-cpu-concurrency-n-5.4","pushedAt":"2024-07-09T10:15:22.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"reader_concurrency_semaphore: execution_loop(): move maybe_admit_waiters() to the inner loop\n\nNow that the CPU concurency limit is configurable, new reads might be\nready to execute right after the current one was executed. So move the\npoll for admitting new reads into the inner loop, to prevent the\nsituation where the inner loop yields and a concurrent\ndo_wait_admission() finds that there are waiters (queued because at the\ntime they arrived to the semaphore, the _ready_list was not empty) but it\nis is possible to admit a new read. When this happens the semaphore will\ndump diagnostics to help debug the apparent contradiction, which can\ngenerate a lot of log spam. Moving the poll into the inner loop prevents\nthe false-positive contradiction detection from firing.\n\nRefs: scylladb/scylladb#19017\n\nCloses scylladb/scylladb#19600\n\n(cherry picked from commit 155acbb306c4fb5b7812a4ff48eacf14e8f9e043)","shortMessageHtmlLink":"reader_concurrency_semaphore: execution_loop(): move maybe_admit_wait…"}},{"before":"67d6bdb9ae64706bee1160f91e16d8ca1f27d0d2","after":"627ed4ab3c3a51b85e17491dbb4398c652a0f488","ref":"refs/heads/rcs-cpu-concurrency-n-5.4","pushedAt":"2024-07-09T08:41:26.000Z","pushType":"force_push","commitsCount":0,"pusher":{"login":"denesb","name":"Botond Dénes","path":"/denesb","primaryAvatarUrl":"https://avatars.githubusercontent.com/u/1389273?s=80&v=4"},"commit":{"message":"reader_concurrency_semaphore: execution_loop(): move maybe_admit_waiters() to the inner loop\n\nNow that the CPU concurency limit is configurable, new reads might be\nready to execute right after the current one was executed. So move the\npoll for admitting new reads into the inner loop, to prevent the\nsituation where the inner loop yields and a concurrent\ndo_wait_admission() finds that there are waiters (queued because at the\ntime they arrived to the semaphore, the _ready_list was not empty) but it\nis is possible to admit a new read. When this happens the semaphore will\ndump diagnostics to help debug the apparent contradiction, which can\ngenerate a lot of log spam. 
2024-07-08 14:16 UTC · force-push · scylla-sstable-compact (cfeba7a → 9e5d18a)

  tools/scylla-sstable: introduce scylla sstable compact

2024-07-08 05:50 UTC · branch created · rcs-cpu-concurrency-n-5.4 (at 67d6bdb)

  Same commit message as the 2024-07-09 maybe_admit_waiters() entries above.

2024-07-08 05:15 UTC · branch created · rcs-cpu-concurrency-n-6.0 (at dadc0c3)

  Same commit message as the 2024-07-09 maybe_admit_waiters() entries above.
2024-07-05 11:59 UTC · branch created · scylla-sstable-compact (at cfeba7a)

  tools/scylla-sstable: introduce scylla-sstable compact

  WIP TODO FIXME

2024-07-05 09:47 UTC · force-push · paxos-no-such-cf-regression (c03d9de → 3b72b8d)

  service/paxos/paxos_state: restore resilience against dropped tables

  Recently, the code in paxos_state::prepare(), paxos_state::accept() and
  paxos_state::learn() was coroutinized by 58912c2cc1c, 887a5a8f625 and
  2b7acdb32c6 respectively. This introduced a regression: the latency histogram
  updater code was moved from a finally() to a defer(). Unlike the former, the
  latter runs in a noexcept context, so the replica::no_such_column_family that
  the latency update code can raise now crashes the node, instead of failing just
  the paxos operation as before. Fix by only updating the latency histogram if
  the table still exists.

  Fixes: scylladb/scylladb#19620
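The regression hinges on the difference between a future's finally() and a scope-exit defer(): the latter runs from a destructor, which is noexcept by default. A self-contained, hypothetical C++ sketch (not the Seastar/Scylla API) of both the crash and the fix described above:

```cpp
// Generic illustration of the finally() -> defer() regression; all names here
// are stand-ins, not the actual Seastar/Scylla code.
#include <exception>

// A defer()-style guard: runs its action from the destructor (implicitly noexcept),
// so an exception escaping the action terminates the process. With finally() on a
// future, the same exception would merely fail that future, and with it the paxos
// operation.
template <typename Func>
struct scope_deferred {
    Func action;
    ~scope_deferred() { action(); }
};
template <typename Func> scope_deferred(Func) -> scope_deferred<Func>;

struct no_such_column_family : std::exception {};

// Stand-in for the latency-histogram update; throws if the table was dropped.
void update_latency_histogram(bool table_exists) {
    if (!table_exists) {
        throw no_such_column_family{};
    }
    // ... record the operation latency ...
}

void learn(bool table_exists) {
    // Pre-fix: the deferred action called update_latency_histogram()
    // unconditionally; for a dropped table the exception escaped the noexcept
    // destructor and std::terminate() took the node down.
    scope_deferred update{[=] {
        if (table_exists) {   // the fix: only update if the table still exists
            update_latency_histogram(table_exists);
        }
    }};
    // ... the actual paxos prepare/accept/learn work would run here ...
}

int main() {
    learn(false);   // with the fix, the dropped-table case no longer crashes
}
```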
2024-07-05 08:21 UTC · force-push · scylla-gdb-large-objects (c6dc6c6 → a447d61)

  scylla-gdb.py: introduce scylla large-objects

  The equivalent of small-objects, but for large objects (spans). Allows listing
  the objects of a large class, and therefore investigating a run-away class by
  attempting to identify the owners of the objects in it.

  Written to investigate #16493

2024-07-05 07:37 UTC · force-push · paxos-no-such-cf-regression (4ffd8b9 → c03d9de)

  Same commit message as the paxos entry above, except that at this point the fix
  was: "Fix by catching and ignoring the exception: updating the latency histogram
  is pointless once a table is dropped anyway."

2024-07-05 07:36 UTC · branch created · paxos-no-such-cf-regression (at 4ffd8b9)

  Same commit message as the entry above.

2024-07-05 07:22 UTC · force-push · scylla-gdb-large-objects (43cfc76 → c6dc6c6)

  Same commit message as the scylla-gdb.py entry above.

2024-07-03 14:30 UTC · force-push · rcs-ready-list-n (ab47b94 → 9f5d56e)

  Same commit message as the 2024-07-09 maybe_admit_waiters() entries, but with
  only "Refs: scylladb/scylladb#19017" as a trailer (no Closes / cherry-pick lines).
2024-07-03 13:49 UTC · branch created · rcs-ready-list-n (at ab47b94)

  reader_concurrency_semaphore: execution_loop(): move maybe_admit_waiters() to the inner loop

  Now that the CPU concurrency limit is configurable, new reads might be ready to
  execute right after the current one finishes. So move the check for admission
  into the inner loop, to prevent the situation where the loop yields and
  do_wait_admission() finds that there are waiters while it is possible to admit a
  new read, spamming the logs with the warning about this situation.

  Refs: scylladb/scylladb#19017

2024-06-27 14:19 UTC · force-push · rcs-n-concurrency (94a4663 → b4f3809)

  test/boost/reader_concurrency_semaphore_test: add test for live-configurable cpu
  concurrency

  (The original commit message also contains leftover text from the git
  commit-message template.)

2024-06-27 06:55 UTC · force-push · rcs-n-concurrency (7831e89 → 94a4663)

  Same commit message as the entry above.
2024-06-26 08:05 UTC · force-push · repair-compaction-tombstone-gc-conf (09d1fcb → 1fca341)

  test/topology_custom/test_repair: add test for enable_tombstone_gc_for_streaming_and_repair

2024-06-25 10:18 UTC · force-push · batchlog-replay-bypass-cache (13044dd → 31c0fa0)

  db/batchlog_manager: bypass cache when scanning batchlog table

  Scans should not pollute the cache with cold data, in general. In the case of
  the batchlog table, there is another reason to bypass the cache: this table can
  have a lot of partition tombstones, which currently are not purged from the
  cache. So in certain cases, using the cache can make batch replay very slow,
  because it has to scan past the tombstones of already-replayed batches.
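A rough, hypothetical illustration of the slowdown this commit describes: partition tombstones of already-replayed batches stay in the cache, so a cached replay scan must step over all of them before reaching the next live batch. (ScyllaDB exposes the same idea to CQL clients as the BYPASS CACHE clause on SELECT; the commit applies it to the internal batchlog scan.) The data model below is invented for the illustration and is not the actual cache or batchlog representation:

```cpp
// Conceptual model only: why scanning through cached tombstones is slow.
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct cached_partition {
    uint64_t batch_id;
    bool tombstone;   // batch already replayed and deleted, but still in the cache
};

// A cached scan pays O(number of retained tombstones) before it finds the next
// live batch; bypassing the cache reads from sstables, where compaction
// eventually drops those tombstones.
std::optional<uint64_t> next_live_batch(const std::vector<cached_partition>& cache,
                                        std::size_t& pos) {
    while (pos < cache.size()) {
        const auto& p = cache[pos++];
        if (!p.tombstone) {
            return p.batch_id;   // a batch that still needs replaying
        }
        // Tombstoned entry: contributes nothing, but still has to be walked past.
    }
    return std::nullopt;
}
```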
2024-06-25 10:09 UTC · force-push · batchlog-replay-bypass-cache (e000894 → 13044dd)

  Same commit message as the entry above.

2024-06-25 04:52 UTC · force-push · batchlog-replay-bypass-cache (fef3321 → e000894)

  Same commit message as the entry above.

2024-06-24 07:30 UTC · force-push · rcs-n-concurrency-dev (bb756e5 → bf8af4b)

  WIP

2024-06-19 14:14 UTC · force-push · batchlog-replay-bypass-cache (74ed8f9 → fef3321)

  Same commit message as the 2024-06-25 batchlog entries above.

2024-06-19 14:11 UTC · force-push · batchlog-replay-bypass-cache (1043ed1 → 74ed8f9)

  Same commit message as the 2024-06-25 batchlog entries above.
2024-06-19 13:47 UTC · branch created · batchlog-replay-bypass-cache (at 1043ed1)

  Same commit message as the 2024-06-25 batchlog entries above.

2024-06-14 14:17 UTC · branch created · rcs-n-concurrency-dev (at bb756e5)

  WIP - share data

2024-06-13 05:59 UTC · force-push · maint-rcs-conf-count (3e3a7a6 → 6868add)

  replica/database: wire in maintenance_reader_concurrency_semaphore_count_limit

  Make the count resources on the maintenance (streaming) semaphore live-updatable
  via config. This will allow us to improve repair speed on mixed-shard clusters,
  where we suspect that reader thrashing -- due to the combination of a high number
  of readers on each shard and a very conservative reader count limit (10) -- is
  the main cause of the slowness. Making this count limit configurable allows us to
  start experimenting with this fix, without committing to a count limit increase
  (or removal), addressing the pain in the field.

2024-06-13 05:32 UTC · force-push · scylla-sstable-self-schema (139ad3c → 145a67f)

  tools/scylla-sstable: log loaded schema with trace level

  The schema of the sstable can be interesting, so log it at trace level.
  Unfortunately, this is not the nice CQL statement we are used to (that requires
  a database object), but the not-nearly-so-nice CFMetadata printout. Still, it is
  better than nothing.

2024-06-12 15:10 UTC · force-push · rcs-n-concurrency (eeda655 → 7831e89)

  reader_concurrency_semaphore: wire in the configurable cpu concurrency

  Before this patch, the semaphore was hard-wired to stop admission if there was
  even a single permit in the need_cpu state, keeping the CPU concurrency at 1.
  This patch makes use of the new cpu_concurrency parameter, wired in by the
  previous patches, allowing a configurable number of concurrent need_cpu permits.
  This is to address workloads where some small subset of reads is expected to be
  slow and can hold up faster reads queued behind them in the semaphore.
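A hypothetical before/after sketch of the admission check this commit describes; the field and parameter names follow the commit message, not the actual Scylla sources:

```cpp
// Sketch of the admission-predicate change; other admission criteria
// (memory, count, queue limits) are omitted.
#include <cstdint>

struct admission_state {
    uint64_t need_cpu_permits = 0;   // permits currently in the need_cpu state
    uint64_t cpu_concurrency = 1;    // new, configurable knob (previously fixed at 1)
};

// Before: admission stopped as soon as a single permit needed CPU.
bool can_admit_before(const admission_state& s) {
    return s.need_cpu_permits == 0;
}

// After: up to cpu_concurrency permits may be in the need_cpu state, so a few
// slow reads no longer hold up every faster read queued behind them.
bool can_admit_after(const admission_state& s) {
    return s.need_cpu_permits < s.cpu_concurrency;
}
```

With cpu_concurrency left at 1 the two predicates behave identically, which is why the knob can default to the old behaviour while allowing experimentation.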
2024-06-12 14:47 UTC · force-push · scylla-sstable-self-schema (7d1d5b9 → 139ad3c)

  Same commit message as the 2024-06-13 scylla-sstable-self-schema entry above.

2024-06-12 13:54 UTC · force-push · maint-rcs-conf-count (35fe457 → 3e3a7a6)

  Same commit message as the 2024-06-13 maint-rcs-conf-count entry above.

2024-06-12 07:03 UTC · branch created · maint-rcs-conf-count (at 35fe457)

  Same commit message as the 2024-06-13 maint-rcs-conf-count entry above.

(Older activity is available on the next page.)