Conversation
Important: Review skipped
Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI or the ⚙️ Run configuration.
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID:
You can disable this status message by setting the
Use the checkbox below for a quick retry:
e8ad614 to 61d87f8 (compare)
61d87f8 to 5db8c7f (compare)
This reverts commit 5198e06.
Actionable comments posted: 1
🧹 Nitpick comments (4)
src/batteries/contexts.cxx (1)

27-28: Consider adding `[[nodiscard]]` for API consistency.

The new `get_underlying()` accessors return references that are typically used immediately. For consistency with other accessors in the codebase (e.g., `adapt_deque::thief()` and `adapt_deque::empty()` added in this PR), consider marking these `[[nodiscard]]`.

♻️ Proposed fix

```diff
+ [[nodiscard]] constexpr auto get_underlying() noexcept -> Adaptor<context_type> & { return m_container; }
```

Apply similarly to both `derived_poly_context` and `mono_context`.

Also applies to: 61-62
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/batteries/contexts.cxx` around lines 27-28: add the `[[nodiscard]]` attribute to the `get_underlying()` accessors to enforce return-value use and match existing API style; specifically update the `Adaptor<context_type> & get_underlying() noexcept` methods in both `derived_poly_context` and `mono_context` so they are declared as `[[nodiscard]] constexpr auto get_underlying() noexcept -> Adaptor<context_type> &`, and apply the same change to the corresponding const/non-const overloads if present.

test/src/schedule.cpp (1)
71-76: Consider moving `STATIC_REQUIRE` outside the loop.

The `STATIC_REQUIRE(lf::scheduler<TestType>)` is a compile-time check that doesn't vary with `thr`. Placing it before the loop would make the intent clearer and avoid redundant runtime assertions.

```diff
 TEMPLATE_TEST_CASE("Busy schedule", "[schedule]", mono_busy_thread_pool, poly_busy_thread_pool) {
+  STATIC_REQUIRE(lf::scheduler<TestType>);
   for (std::size_t thr = 1; thr < 4; ++thr) {
     TestType scheduler{thr};
-    STATIC_REQUIRE(lf::scheduler<TestType>);
     simple_tests(scheduler);
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/src/schedule.cpp` around lines 71-76: move the compile-time assertion `STATIC_REQUIRE(lf::scheduler<TestType>)` out of the runtime loop in the `TEMPLATE_TEST_CASE`: the check depends only on `TestType`, not `thr`, so place `STATIC_REQUIRE(lf::scheduler<TestType>)` immediately after the `TEMPLATE_TEST_CASE` declaration (before the for loop) and keep the for loop creating `TestType scheduler{thr}` and calling `simple_tests(scheduler)` as-is.

src/schedulers/busy.cxx (2)
106-108: After a successful steal, `continue` iterates the inner for-loop.

The `continue` statement continues the steal-attempt `for` loop rather than breaking out to the `while` loop. This means that after a successful steal-and-execute, the worker will try more steals (up to `k_steal_attempts` total) before re-checking posted tasks.

If the intent is to prioritize posted tasks after each execution, consider restructuring:

```diff
 if (auto [err, result] = m_contexts[victim].get_underlying().thief().steal()) {
   execute(static_cast<context_type &>(ctx), result);
-  continue;
+  break; // Re-check posted tasks after successful execution
 }
```

If the current behavior is intentional (batch steals before checking posted), a brief comment would clarify this design choice.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/schedulers/busy.cxx` around lines 106 - 108, After a successful steal-and-execute the current code uses "continue" which keeps iterating the inner for-loop (the steal attempts) instead of returning to the outer while loop to re-check posted tasks; change the control flow so a successful steal breaks out of the steal-attempt for-loop (replace the "continue" after execute(static_cast<context_type &>(ctx), result); with a break) so posted tasks are prioritized after each execution, or alternatively add a clear comment at the steal site (m_contexts[victim].get_underlying().thief().steal()) documenting that repeated steals per iteration are intentional if you want to keep current batching behavior.
77-78: Unsigned underflow when `n == 1` creates a wasteful distribution.

When `n == 1`, the expression `n - 2` underflows to `SIZE_MAX`, creating `dist(0, SIZE_MAX)`. While the `if (n > 1)` guard on line 94 prevents usage, the distribution is still constructed with a huge range unnecessarily.

Consider guarding the construction or using a conditional initialization:

```diff
- std::uniform_int_distribution<std::size_t> dist(0, n - 2);
+ std::uniform_int_distribution<std::size_t> dist(0, n > 1 ? n - 2 : 0);
```

Or move the distribution inside the `if (n > 1)` block if you want to avoid the object entirely for single-threaded pools.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/schedulers/busy.cxx` around lines 77 - 78, The uniform distribution dist is constructed with upper bound n - 2 which underflows when n == 1; avoid creating dist with a huge range by only constructing std::uniform_int_distribution<std::size_t> dist(0, n - 2) when n > 1 (move its construction inside the existing if (n > 1) block) or use a conditional initialization that selects a no-op/degenerate distribution for n <= 1; adjust code near std::default_random_engine rng(safe_cast<unsigned>(id + 1)) to ensure rng remains available while dist is only created when needed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/core/execute.cxx`:
- Around line 55-77: The non-atomic access to frame->steals in execute(Context&,
steal_handle<Context>) introduces a data race; change the steals field in
frame_type<checkpoint_t<Context>> to use the same pattern as joins/exception_bit
(apply ATOMIC_ALIGN and store a std::atomic or use atomic_ref wrapper) and
update all accesses: in execute() replace the raw increment/check with atomic
operations (load, compare to k_u16_max, fetch_add) and in
join_awaitable::await_ready()/await_resume() use atomic loads; ensure the
overflow check uses an atomic-safe compare (e.g., load then conditional
fetch_add or fetch_add with an overflow guard) so all reads/writes of steals are
synchronized.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 87fa02ae-cc3d-4662-a943-aadeea267804
📒 Files selected for processing (12)
CMakeLists.txt
benchmark/src/libfork_benchmark/fib/libfork.cpp
src/batteries/adaptor_stack.cxx
src/batteries/adaptors.cxx
src/batteries/contexts.cxx
src/batteries/geometric_stack.cxx
src/core/execute.cxx
src/core/promise.cxx
src/exception.cpp
src/schedulers/busy.cxx
src/schedulers/schedulers.cxx
test/src/schedule.cpp
A stripped back busy pool, just for dev (missing lock free submission queue)
Summary by CodeRabbit
Release Notes
New Features
- `busy_thread_pool` scheduler for multi-threaded task execution with configurable worker threads.
- `steal_overflow_error` exception for detecting steal operation thresholds.
- `execute()` overload supporting work stealing operations.

Improvements

Breaking Changes

- Removed the `allocator_type` typedef from `geometric_stack`.