
feat(client-pool): expose PoolHandle API with fair scheduling and graceful shutdown #498

Merged
leynos merged 5 commits into main from expose-poolhandle-api-cgnmjg on Mar 18, 2026

Conversation


leynos (Owner) commented Mar 10, 2026

Summary

  • Refactors tests to use a new build_preamble_pool helper to bootstrap preamble state for PoolHandle fairness tests
  • Exposes PoolHandle API with fairness scheduling (11.2.2)
  • Adds a shared PoolScheduler and PoolFairnessPolicy, integrating with WireframeClientPool
  • Adds build_preamble_pool test helper, fixtures, and unit/BDD tests for PoolHandle fairness

Changes

Public API

  • Add PoolHandle<S, P, C> and PoolFairnessPolicy to the pool module
  • Extend WireframeClientPool with a handle() method that returns PoolHandle
  • Add PoolHandle::acquire and PoolHandle::call APIs for fair, whole-operation workflows (see the sketch after this list)
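
A minimal usage sketch of the surface above. It is illustrative only: generic parameters and pool construction are elided, and the Ping/Pong message types are placeholders borrowed from the test suite rather than part of the public API.

```rust
// Hedged sketch: one logical session using a PoolHandle for fair pooled access.
async fn fair_session(pool: &WireframeClientPool) -> Result<(), ClientError> {
    // Register a logical-session identity with the pool's scheduler.
    let mut handle = pool.handle();

    // Whole-operation helper: acquire a lease, run one round trip, release it.
    let pong: Pong = handle.call(&Ping(1)).await?;
    assert_eq!(pong, Pong(1));

    // Hold a lease explicitly when several calls should share one acquisition.
    let mut lease = handle.acquire().await?;
    let _second: Pong = lease.call(&Ping(2)).await?;
    // Dropping the lease returns capacity so the scheduler can wake the next waiter.
    drop(lease);
    Ok(())
}
```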

Internal

  • Introduce PoolScheduler with RoundRobin and FIFO policies
  • Refactor internal pool state to support a shared scheduler (ClientPoolInner)
  • New modules: src/client/pool/handle.rs, src/client/pool/policy.rs, src/client/pool/scheduler.rs
  • Extend ClientPoolConfig with fairness_policy and a getter for the policy value (see the configuration sketch after this list)
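
A short configuration sketch, assuming only the builder and accessor names listed above:

```rust
// Hedged sketch: selecting a fairness policy on the pool configuration.
let config = ClientPoolConfig::default()
    .pool_size(4)
    .fairness_policy(PoolFairnessPolicy::Fifo);

// The accessor reads the configured policy back; RoundRobin is the default.
assert!(matches!(config.fairness_policy_value(), PoolFairnessPolicy::Fifo));
```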

Tests

  • Refactor tests to use build_preamble_pool for preamble/bootstrap wiring
  • Add unit tests in src/client/tests/pool_handle.rs to cover PoolHandle fairness
  • Extend tests scaffolding/fixtures to exercise PoolHandle behaviour and maintain existing semantics
  • Add behavioural tests and fixtures under tests/bdd_pool and tests/features for PoolHandle fairness
  • Introduce test helpers: build_preamble_pool and related wiring to initialize preamble state

Documentation

  • Update docs/wireframe-client-design.md with 11.2.2 decision record and API boundaries
  • Update docs/users-guide.md to describe when to use PoolHandle versus direct pool.acquire(), and explain fairness policies
  • Update docs/roadmap.md to reflect 11.2.2 progression and completion status
  • Add 11-2-2 ExecPlan docs under docs/execplans

Validation and evidence

  • Unit tests for PoolHandle fairness added and wired into the pool feature gate
  • BDD-style tests scaffolded to cover fairness scenarios
  • Public API changes are documented in code/docs; existing pooled client usage remains backward compatible

Notes

  • The test refactor introduces a dedicated build_preamble_pool helper to standardize preamble state across PoolHandle tests, reducing boilerplate and improving test reliability.

📎 Task: https://www.devboxer.com/task/c4b1e5d1-4af6-49be-9248-337cdb6e12c4

Introduce a comprehensive 457-line execution plan document laying out the design, constraints, risks, progress, decision log, and API plan for roadmap item 11.2.2. This plan covers exposing a new PoolHandle API to provide fair pooled acquisition for logical sessions, clarifying fairness policies, scheduler design, testing requirements, and documentation guidance to improve the Wireframe client pool subsystem.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>

sourcery-ai Bot commented Mar 10, 2026

Reviewer's Guide

Implements a fairness-aware PoolHandle API for WireframeClientPool, introduces configurable PoolFairnessPolicy and a PoolScheduler, refactors the pool internals around a shared ClientPoolInner used by both pool and handles, and adds focused unit/BDD tests and docs; pool_handle tests are refactored to use a shared build_preamble_pool helper to set up preamble-aware pools.

Sequence diagram for PoolHandle-based fair lease acquisition

sequenceDiagram
    actor Client
    participant WireframeClientPool
    participant PoolHandle
    participant ClientPoolInner
    participant PoolScheduler
    participant PoolSlot
    participant PooledClientLease

    Client->>WireframeClientPool: connect_pool(...)
    WireframeClientPool-->>Client: WireframeClientPool

    Client->>WireframeClientPool: handle()
    WireframeClientPool->>ClientPoolInner: scheduler.register_handle()
    ClientPoolInner->>PoolScheduler: register_handle()
    PoolScheduler-->>ClientPoolInner: handle_id
    ClientPoolInner-->>WireframeClientPool: handle_id
    WireframeClientPool-->>Client: PoolHandle

    Client->>PoolHandle: acquire()
    PoolHandle->>ClientPoolInner: scheduler.acquire_for_handle(inner, handle_id)
    ClientPoolInner->>PoolScheduler: acquire_for_handle(inner, handle_id)

    alt immediate_capacity
        PoolScheduler->>ClientPoolInner: inner.try_acquire_immediately()
        ClientPoolInner->>ClientPoolInner: ordered_slots()
        ClientPoolInner->>PoolSlot: try_acquire_permit()
        PoolSlot-->>ClientPoolInner: OwnedSemaphorePermit
        ClientPoolInner->>PooledClientLease: new(slot, permit, None)
        PooledClientLease-->>ClientPoolInner: lease
        ClientPoolInner-->>PoolScheduler: lease
        PoolScheduler-->>PoolHandle: lease
        PoolHandle-->>Client: PooledClientLease
    else contended_capacity
        PoolScheduler->>PoolScheduler: enqueue_waiter(handle_id)
        PoolScheduler-->>PoolHandle: pending future
        note over PoolScheduler,ClientPoolInner: later, when capacity returns
        loop service_waiters
            PoolScheduler->>ClientPoolInner: acquire_slot_permit()
            ClientPoolInner->>ClientPoolInner: ordered_slots()
            ClientPoolInner->>PoolSlot: acquire_permit()
            PoolSlot-->>ClientPoolInner: OwnedSemaphorePermit
            ClientPoolInner-->>PoolScheduler: (slot, permit)
            PoolScheduler->>PooledClientLease: new(slot, permit, Some(inner))
            PooledClientLease-->>PoolScheduler: lease
            PoolScheduler-->>PoolHandle: lease
        end
        PoolHandle-->>Client: PooledClientLease
    end

    Client->>PooledClientLease: call(request)
    PooledClientLease-->>Client: response

Sequence diagram for PooledClientLease drop notifying PoolScheduler

sequenceDiagram
    actor Client
    participant PoolHandle
    participant PooledClientLease
    participant ClientPoolInner
    participant PoolScheduler

    Client->>PoolHandle: acquire()
    PoolHandle-->>Client: PooledClientLease

    Client->>PooledClientLease: use lease
    Client-->>PooledClientLease: drop(lease)

    PooledClientLease->>PooledClientLease: Drop::drop()
    alt release_inner_present
        PooledClientLease->>ClientPoolInner: inner = release_inner
        PooledClientLease->>PoolScheduler: notify_capacity_available(inner)
        PoolScheduler->>PoolScheduler: kick(inner)
        PoolScheduler->>ClientPoolInner: acquire_slot_permit()
        ClientPoolInner-->>PoolScheduler: (slot, permit) or error
        PoolScheduler-->>waiting_handle: Result<PooledClientLease>
    else no_release_inner
        PooledClientLease-->>PooledClientLease: no scheduler notification
    end
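
In code, the drop path above reduces to roughly the following shape. The field and method names mirror the diagram rather than the exact source, so treat this as an illustration only.

```rust
// Illustrative sketch: scheduler-created leases carry Some(inner) and notify on drop.
impl<S, P, C> Drop for PooledClientLease<S, P, C> {
    fn drop(&mut self) {
        if let Some(inner) = self.release_inner.take() {
            let scheduler = Arc::clone(&inner.scheduler);
            // Wake the service loop so the next queued handle can be granted a lease.
            scheduler.notify_capacity_available(inner);
        }
        // Leases from the immediate-acquire path carry None and skip the notification.
    }
}
```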

Class diagram for fairness-aware WireframeClientPool and PoolHandle API

classDiagram
    class WireframeClientPool {
        +handle() PoolHandle
        +acquire() PooledClientLease
        +close() void
        +new(addr, pool_config, parts) Result
    }

    class ClientPoolInner {
        +slots Arc~[Arc~PoolSlot~~]~
        +next_slot AtomicUsize
        +scheduler Arc~PoolScheduler~
        +try_acquire_immediately() Option~PooledClientLease~
        +acquire_slot_permit() Result~(Arc~PoolSlot~, OwnedSemaphorePermit)~
        +ordered_slots() Vec~Arc~PoolSlot~~
    }

    class PoolHandle {
        -inner Arc~ClientPoolInner~
        -handle_id u64
        +new(inner, handle_id) PoolHandle
        +acquire() PooledClientLease
        +call(request) Resp
    }

    class PooledClientLease {
        -slot Arc~PoolSlot~
        -_permit OwnedSemaphorePermit
        -release_inner Option~Arc~ClientPoolInner~~
        +new(slot, permit, release_inner) PooledClientLease
        +send(request) Result
        +receive() Result
        +call(request) Result
        +call_correlated(request) Result
        <<Drop>>
    }

    class PoolScheduler {
        -fairness_policy PoolFairnessPolicy
        -next_handle_id AtomicU64
        -is_servicing AtomicBool
        -state Mutex~SchedulerState~
        +new(fairness_policy) PoolScheduler
        +register_handle() u64
        +deregister_handle(handle_id) void
        +acquire_for_handle(inner, handle_id) Result~PooledClientLease~
        +notify_capacity_available(inner) void
        -kick(inner) void
        -restart_if_waiters() bool
        -take_next_waiter_or_stop() Option~WaiterSender~
        -service_waiters(inner) void
    }

    class SchedulerState {
        -waiters HashMap~u64, WaiterSender~
        -fifo_waiters VecDeque~u64~
        -round_robin_handles VecDeque~u64~
        +new() SchedulerState
        +register_handle(handle_id) void
        +deregister_handle(handle_id) void
        +enqueue_waiter(handle_id, sender) void
        +has_waiters() bool
        +take_next_waiter(policy) Option~WaiterSender~
        +take_next_fifo_waiter() Option~WaiterSender~
        +take_next_round_robin_waiter() Option~WaiterSender~
    }

    class ClientPoolConfig {
        -pool_size usize
        -max_in_flight_per_socket usize
        -idle_timeout Duration
        -fairness_policy PoolFairnessPolicy
        +default() ClientPoolConfig
        +pool_size(value) ClientPoolConfig
        +max_in_flight_per_socket(value) ClientPoolConfig
        +idle_timeout(value) ClientPoolConfig
        +fairness_policy(fairness_policy) ClientPoolConfig
        +pool_size_value() usize
        +max_in_flight_per_socket_value() usize
        +idle_timeout_value() Duration
        +fairness_policy_value() PoolFairnessPolicy
    }

    class PoolFairnessPolicy {
        <<enum>>
        RoundRobin
        Fifo
    }

    class PoolSlot {
    }

    class ClientError {
    }

    WireframeClientPool --> ClientPoolInner : inner
    ClientPoolInner --> PoolScheduler : scheduler
    ClientPoolInner --> PoolSlot : slots
    PoolHandle --> ClientPoolInner : inner
    PooledClientLease --> ClientPoolInner : release_inner
    PoolScheduler --> SchedulerState : state
    PoolScheduler --> PoolFairnessPolicy : fairness_policy
    ClientPoolConfig --> PoolFairnessPolicy : fairness_policy
    PoolScheduler --> PooledClientLease : constructs
    PoolScheduler --> ClientError : returns
    WireframeClientPool ..> PoolHandle : handle()
    WireframeClientPool ..> PooledClientLease : acquire()
    PoolHandle ..> PooledClientLease : acquire()
    PooledClientLease ..> ClientError : results
    ClientPoolInner ..> ClientError : results
    PoolScheduler ..> ClientPoolInner : uses

File-Level Changes

Change Details Files
Refactor WireframeClientPool around a shared ClientPoolInner and add a fairness-aware handle entrypoint.
  • Introduce ClientPoolInner holding slots array, next_slot counter, and shared PoolScheduler state.
  • Change WireframeClientPool to store Arc instead of its own slots/next_slot fields.
  • Initialize PoolScheduler from ClientPoolConfig::fairness_policy_value() during pool construction.
  • Expose WireframeClientPool::handle() that registers a logical-session handle with the scheduler and returns a PoolHandle.
  • Route WireframeClientPool::acquire() through a temporary PoolHandle so ad-hoc acquires still participate in fairness.
  • Move immediate-acquire and select_all logic into ClientPoolInner::try_acquire_immediately and ClientPoolInner::acquire_slot_permit.
  • Move pool::close into WireframeClientPool and keep ClientPoolInner focused on acquisition helpers.
src/client/pool/client_pool.rs
Add a PoolScheduler that enforces configurable fairness policies across PoolHandle waiters.
  • Introduce PoolScheduler with a shared SchedulerState protected by a mutex, tracking waiters, FIFO queue, and round-robin handle rotation.
  • Implement PoolScheduler::register_handle/deregister_handle for stable logical-session IDs.
  • Implement PoolScheduler::acquire_for_handle to attempt an immediate lease via ClientPoolInner::try_acquire_immediately, otherwise enqueue a oneshot waiter and kick the scheduler loop.
  • Implement PoolScheduler::notify_capacity_available and an internal kick/service_waiters loop that acquires real slot permits and delivers PooledClientLease instances to chosen waiters.
  • Ensure servicing is serialized via an AtomicBool and that fairness policy decides which waiting handle receives the next lease.
src/client/pool/scheduler.rs
src/client/pool/client_pool.rs
Introduce a public PoolHandle type that represents a logical session and uses the scheduler for fair acquisition, plus a PoolFairnessPolicy configuration surface.
  • Add PoolHandle struct holding Arc and a handle_id, with a Drop impl that deregisters from the scheduler.
  • Implement PoolHandle::acquire to ask the scheduler for a lease under the configured fairness policy.
  • Implement PoolHandle::call as a safe whole-operation helper that acquires a lease and performs a request/response round trip.
  • Add PoolFairnessPolicy enum with RoundRobin (default) and Fifo variants.
  • Expose PoolHandle and PoolFairnessPolicy from the pool module and re-export them from wireframe::client.
src/client/pool/handle.rs
src/client/pool/policy.rs
src/client/pool/mod.rs
src/client/mod.rs
Extend PooledClientLease to notify the scheduler when capacity returns so waiting handles can be serviced fairly.
  • Add an optional Arc field to PooledClientLease to use on drop.
  • Update PooledClientLease::new signature to accept an optional inner reference and propagate this from scheduler-created leases.
  • Implement Drop for PooledClientLease that calls inner.scheduler.notify_capacity_available when release_inner is present.
src/client/pool/lease.rs
src/client/pool/scheduler.rs
Extend ClientPoolConfig with a fairness policy knob and wire it through docs and examples.
  • Add fairness_policy field to ClientPoolConfig, defaulting to PoolFairnessPolicy::RoundRobin.
  • Provide a builder-style fairness_policy(...) setter plus a fairness_policy_value() accessor.
  • Update Rustdoc examples and users-guide snippets to show configuring PoolFairnessPolicy and reading it back.
  • Update WireframeClientPool construction to read the configured fairness policy and pass it to PoolScheduler::new.
src/client/pool/config.rs
docs/users-guide.md
src/client/pool/client_pool.rs
Document the PoolHandle design and mark roadmap item 11.2.2 as complete.
  • Add a detailed execution plan and outcome document for 11.2.2 under docs/execplans, covering constraints, risks, design decisions, and validation steps.
  • Add a decision record section to docs/wireframe-client-design.md explaining that PoolHandle is a fairness identity, not a socket-affinity handle, and clarifying the scheduling model and policies.
  • Mark roadmap item 11.2.2 as completed in docs/roadmap.md.
docs/execplans/11-2-2-expose-pool-handle-api.md
docs/wireframe-client-design.md
docs/roadmap.md
Add focused unit tests for PoolHandle fairness, back-pressure, and preamble/idle-recycle semantics, including a shared build_preamble_pool helper.
  • Add src/client/tests/pool_handle.rs with rstest-based tests for round-robin fairness, FIFO ordering, back-pressure preservation, warm reuse, and idle recycle via PoolHandle.
  • Introduce build_handle_pool and build_preamble_pool helpers that centralize PoolTestServer startup, pooled client construction, and preamble counter wiring for PoolHandle tests.
  • Refactor PoolHandle-related tests to construct pools via build_preamble_pool instead of duplicating server/pool/preamble setup inline.
  • Wire the new test module into src/client/tests/mod.rs behind the pool feature flag.
src/client/tests/pool_handle.rs
src/client/tests/mod.rs
Add BDD coverage for PoolHandle behaviour using a shared ClientPoolHandleWorld fixture.
  • Introduce ClientPoolHandleWorld fixture that can construct pools with specific fairness policies, track preamble counts, record grant order, and run canned scenarios for round-robin, FIFO, back-pressure, and warm-reuse+recycle.
  • Implement BDD step definitions that call into ClientPoolHandleWorld methods using a current-thread Tokio runtime per scenario.
  • Add feature file scenarios describing PoolHandle fairness and back-pressure behaviour, and scenario bindings that map Gherkin scenarios to the fixture.
  • Wire the new fixture, steps, and scenarios into the existing bdd_pool test harness.
tests/fixtures/client_pool_handle.rs
tests/features/client_pool_handle.feature
tests/steps/client_pool_handle_steps.rs
tests/scenarios/client_pool_handle_scenarios.rs
tests/bdd_pool/fixtures.rs
tests/bdd_pool/scenarios.rs



coderabbitai Bot commented Mar 10, 2026

Walkthrough

Adds a fairness-oriented PoolHandle API with PoolFairnessPolicy (RoundRobin, Fifo); refactors pool internals into ClientPoolInner with a PoolScheduler; implements handle-based acquire/call and close paths; extends lease drop semantics; and adds tests and docs to validate fairness, back‑pressure and reuse behaviours.

Changes

Cohort / File(s) Summary
Documentation
docs/execplans/11-2-2-expose-pool-handle-api.md, docs/roadmap.md, docs/users-guide.md, docs/wireframe-client-design.md
Add ExecPlan and design decision record; update roadmap and user guide to describe PoolHandle, fairness policies, API surface and validation plan.
Public API wiring
src/client/mod.rs, src/client/pool/mod.rs
Re-export PoolFairnessPolicy and PoolHandle; add new submodules handle, policy, scheduler.
Pool configuration
src/client/pool/config.rs, src/client/pool/policy.rs
Introduce PoolFairnessPolicy (RoundRobin, Fifo); add ClientPoolConfig::fairness_policy() setter and fairness_policy_value() getter; default to RoundRobin.
Core pool implementation
src/client/pool/client_pool.rs, src/client/pool/lease.rs
Refactor WireframeClientPool to own Arc<ClientPoolInner>; move slots/next_slot/scheduler/shutdown into ClientPoolInner; add handle(), acquire(), close(); extend PooledClientLease with optional release_inner and notify scheduler on Drop.
Handle & scheduler
src/client/pool/handle.rs, src/client/pool/scheduler.rs
Add PoolHandle type with acquire() and call() and Drop deregistration; add PoolScheduler implementing enqueueing, RoundRobin/FIFO selection, background service loop and capacity notifications.
Unit tests
src/client/tests/mod.rs, src/client/tests/pool_handle.rs
Add pool_handle unit tests covering round‑robin, FIFO, back‑pressure, warm reuse and idle recycle behaviours.
BDD tests
tests/features/client_pool_handle.feature, tests/bdd_pool/..., tests/fixtures/client_pool_handle.rs, tests/scenarios/client_pool_handle_scenarios.rs, tests/steps/client_pool_handle_steps.rs
Add Gherkin feature, fixtures, steps and scenario wiring to validate handle fairness, ordering, blocking and reuse/recycle scenarios.
Test helpers
src/test_helpers.rs, src/test_helpers/pool_client.rs
Export and add helpers acquire_and_record and build_preamble_pool to support tests and recording of grants.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant PoolHandle as Pool Handle
    participant PoolScheduler as Pool Scheduler
    participant PoolInner as Pool Inner
    participant PoolSlot as Pool Slot

    Client->>PoolHandle: acquire()
    PoolHandle->>PoolScheduler: acquire_for_handle(inner, handle_id)

    alt Immediate availability
        PoolScheduler->>PoolInner: try_acquire_immediately()
        PoolInner->>PoolSlot: check availability
        PoolSlot-->>PoolInner: available
        PoolInner-->>PoolScheduler: AcquirePermit
        PoolScheduler-->>PoolHandle: PooledClientLease
        PoolHandle-->>Client: Ok(lease)
    else Enqueue and wait
        PoolScheduler->>PoolScheduler: enqueue_waiter(handle_id) / kick()
        PoolScheduler->>PoolScheduler: service loop spawns

        loop Service loop
            PoolScheduler->>PoolInner: acquire_slot_permit()
            PoolInner->>PoolSlot: acquire permit (rotated)
            PoolSlot-->>PoolInner: permit acquired
            PoolInner-->>PoolScheduler: AcquirePermit
            PoolScheduler->>Waiter: send lease via oneshot
        end

        PoolHandle->>PoolScheduler: await oneshot
        PoolScheduler-->>PoolHandle: PooledClientLease
        PoolHandle-->>Client: Ok(lease)
    end

    Client->>PoolHandle: drop(lease)
    PoolHandle->>PoolScheduler: notify_capacity_available()
    PoolScheduler->>PoolScheduler: service next waiter

Estimated Code Review Effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Poem

🎪 Handles line up, fair and spry,
Round‑robin waltz and FIFO sigh,
Scheduler hums, permits return,
Leases flow and servers learn,
Code keeps order while sessions fly.

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
  • Description check: ✅ Passed. The description comprehensively outlines public API additions, internal refactoring, test changes, and documentation updates, all of which align directly with the changeset content.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient; the required threshold is 80.00%.
  • Title check: ✅ Passed. The title accurately summarises the primary changes: exposing a PoolHandle API with fair scheduling. It is concise, specific, and reflects the main objective without noise or vagueness.


codescene-delta-analysis[bot]

This comment was marked as outdated.

- Introduce PoolHandle as a logical-session identity for fair and orderly pooled lease acquisition.
- Implement PoolScheduler to queue blocked handles and grant leases according to fairness policies.
- Add PoolFairnessPolicy enum with RoundRobin (default) and Fifo variants.
- Update WireframeClientPool to expose handle() for creating PoolHandle instances.
- Modify lease and pool inner types to cooperate with scheduler for fairness and back-pressure.
- Include comprehensive unit, BDD scenario, and integration tests for PoolHandle fairness behaviors.
- Update docs and examples to showcase PoolHandle usage and fairness configurations.

This enhancement enables clients to multiplex logical sessions fairly over pooled connections without compromising back-pressure or warm socket reuse.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
@leynos changed the title from "Expose PoolHandle API with fair acquisition (11.2.2)" to "Implement PoolHandle API with fairness scheduling (11.2.2)" on Mar 12, 2026
codescene-delta-analysis[bot]

This comment was marked as outdated.


leynos (Owner, Author) commented Mar 13, 2026

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

src/client/tests/pool_handle.rs


async fn handle_path_recycles_after_idle_timeout(
    client_pool_config: ClientPoolConfig,
) -> TestResult {
    let preamble_callback_count = Arc::new(AtomicUsize::new(0));
    let server = PoolTestServer::start().await?;
    let idle_timeout = Duration::from_millis(50);
    let pool = build_pooled_client(
        server.addr,
        client_pool_config.pool_size(1).idle_timeout(idle_timeout),
        preamble_callback_count.clone(),
    )
    .await?;
    let mut handle = pool.handle();

    let first: Pong = handle.call(&Ping(1)).await?;
    assert_eq!(first, Pong(1));

    advance(idle_timeout + idle_timeout).await;
    tokio::task::yield_now().await;

    let second: Pong = handle.call(&Ping(2)).await?;
    assert_eq!(second, Pong(2));
    assert_eq!(preamble_callback_count.load(Ordering::SeqCst), 2);
    assert_eq!(server.preamble_count(), 2);
    assert_eq!(server.connection_count(), 2);
    Ok(())
}

❌ New issue: Code Duplication
The module contains 2 functions with similar structure: handle_path_preserves_warm_reuse_and_preamble, handle_path_recycles_after_idle_timeout


leynos (Owner, Author) commented Mar 13, 2026

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

tests/fixtures/client_pool_handle.rs


    pub async fn run_warm_reuse_then_idle_recycle_scenario(&mut self) -> TestResult {
        tokio::time::pause();
        self.start_server().await?;
        let idle_timeout = Duration::from_millis(50);
        self.connect_pool(
            ClientPoolConfig::default()
                .pool_size(1)
                .idle_timeout(idle_timeout),
        )
        .await?;
        let pool = self.pool.as_ref().ok_or("pool missing")?;
        let mut handle = pool.handle();

        let first: Pong = handle.call(&Ping(1)).await?;
        let second: Pong = handle.call(&Ping(2)).await?;
        if first != Pong(1) || second != Pong(2) {
            return Err("unexpected warm reuse response sequence".into());
        }
        let warm_reuse_preserved = self.preamble_callback_count.load(Ordering::SeqCst) == 1
            && self.server.as_ref().is_some_and(|server| {
                server.preamble_count() == 1 && server.connection_count() == 1
            });

        tokio::time::advance(idle_timeout + idle_timeout).await;
        tokio::task::yield_now().await;

        let third: Pong = handle.call(&Ping(3)).await?;
        self.warm_reuse_then_recycle = third == Pong(3)
            && warm_reuse_preserved
            && self.preamble_callback_count.load(Ordering::SeqCst) == 2
            && self.server.as_ref().is_some_and(|server| {
                server.preamble_count() == 2 && server.connection_count() == 2
            });
        Ok(())
    }

❌ New issue: Complex Method
ClientPoolHandleWorld.run_warm_reuse_then_idle_recycle_scenario has a cyclomatic complexity of 9, threshold = 9

@coderabbitai

This comment was marked as resolved.

@coderabbitai

This comment was marked as resolved.

…ount tracking

Refactored pool_handle tests to introduce a helper function build_preamble_pool that
constructs the test pool along with the preamble callback counter. Updated existing tests
handle_path_preserves_warm_reuse_and_preamble and handle_path_recycles_after_idle_timeout
to use the new helper. Added helper methods in client_pool_handle fixture to simplify
assertions on preamble callback counts and server state. Also fixed minor markdown formatting
in related docs.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
@leynos changed the title from "Implement PoolHandle API with fairness scheduling (11.2.2)" to "Refactor tests to use build_preamble_pool helper (11.2.2)" on Mar 15, 2026
codescene-delta-analysis[bot]

This comment was marked as outdated.

@leynos marked this pull request as ready for review on March 16, 2026 at 19:55

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 4 issues and left some high-level feedback:

  • The PR title/summary mention only refactoring tests, but the diff introduces a new public PoolHandle API, scheduler, and config surface; consider updating the PR metadata so reviewers immediately see the API/behavioural scope being changed.
  • The helper logic for building handle-aware pools and repeatedly acquiring/recording grants is duplicated between tests/fixtures/client_pool_handle.rs and src/client/tests/pool_handle.rs; consider consolidating these into a shared test helper to keep the scenarios and unit tests in sync.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The PR title/summary mention only refactoring tests, but the diff introduces a new public `PoolHandle` API, scheduler, and config surface; consider updating the PR metadata so reviewers immediately see the API/behavioural scope being changed.
- The helper logic for building handle-aware pools and repeatedly acquiring/recording grants is duplicated between `tests/fixtures/client_pool_handle.rs` and `src/client/tests/pool_handle.rs`; consider consolidating these into a shared test helper to keep the scenarios and unit tests in sync.

## Individual Comments

### Comment 1
<location path="src/client/pool/scheduler.rs" line_range="143-144" />
<code_context>
+        inner: Arc<ClientPoolInner<S, P, C>>,
+        handle_id: u64,
+    ) -> Result<PooledClientLease<S, P, C>, ClientError> {
+        if let Some(lease) = inner.try_acquire_immediately() {
+            return Ok(lease);
+        }
+
</code_context>
<issue_to_address>
**question (bug_risk):** Immediate acquisition here can bypass fairness for already-queued waiters.

In `acquire_for_handle`, the fast path calls `inner.try_acquire_immediately()` before considering `state.waiters`. When waiters already exist, this lets a new handle take a free permit without going through `SchedulerState::take_next_waiter` or honoring `PoolFairnessPolicy`, so newer callers can effectively jump the queue under contention. If strict fairness is required whenever `has_waiters()` is true, consider only using this immediate path when there are no recorded waiters, or clearly document that fairness is only guaranteed when using a long-lived `PoolHandle` and not per-call `acquire()`.
</issue_to_address>
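
One way to close this gap, sketched against the names used in the reviews rather than the actual implementation, is to take the immediate path only when no waiters are recorded:

```rust
// Hedged sketch: skip the fast path under contention so queued handles keep priority.
let queue_is_empty = !recover_mutex(&self.state).has_waiters();
if queue_is_empty {
    if let Some(lease) = inner.try_acquire_immediately() {
        return Ok(lease);
    }
}
// Otherwise fall through: enqueue a oneshot waiter and let the configured
// PoolFairnessPolicy decide which handle receives the next available permit.
```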

### Comment 2
<location path="src/client/tests/pool_handle.rs" line_range="138-153" />
<code_context>
+
+#[rstest]
+#[tokio::test(flavor = "current_thread")]
+async fn handle_acquire_respects_back_pressure(client_pool_config: ClientPoolConfig) -> TestResult {
+    let (_server, pool) = build_handle_pool(client_pool_config.pool_size(1)).await?;
+    let mut first = pool.handle();
</code_context>
<issue_to_address>
**suggestion (testing):** Add a test for the case where a waiting handle is dropped before it acquires a lease, to cover the scheduler’s cancellation path.

The scheduler has explicit logic for when `sender.send(result)` fails and a lease is dropped, to avoid leaking capacity, but this path isn’t currently exercised because all `acquire_and_record` tasks complete successfully. Please add a test where a `PoolHandle` (or its task) is dropped while blocked in `acquire()`, then verify that capacity isn’t leaked (a subsequent `acquire()` still succeeds) and that remaining waiters are still serviced. For example, spawn two waiters, drop one before it acquires, and assert the other can still acquire and the pool remains usable, thereby covering the `sender.send(..)` error branch in `PoolScheduler::service_waiters`.

```suggestion
#[rstest]
#[tokio::test(flavor = "current_thread")]
async fn handle_acquire_respects_back_pressure(client_pool_config: ClientPoolConfig) -> TestResult {
    let (_server, pool) = build_handle_pool(client_pool_config.pool_size(1)).await?;
    let mut first = pool.handle();
    let mut second = pool.handle();

    let held_lease = first.acquire().await?;
    let blocked = timeout(Duration::from_millis(25), second.acquire()).await;
    assert!(blocked.is_err(), "second handle should stay blocked");

    drop(held_lease);
    let recovered = timeout(Duration::from_millis(100), second.acquire()).await?;
    let _recovered = recovered?;
    Ok(())
}

#[rstest]
#[tokio::test(flavor = "current_thread")]
async fn handle_acquire_dropped_waiter_does_not_leak_capacity(
    client_pool_config: ClientPoolConfig,
) -> TestResult {
    let (_server, pool) = build_handle_pool(client_pool_config.pool_size(1)).await?;

    // First handle acquires the only available lease, causing subsequent acquires to block.
    let mut holder = pool.handle();
    let mut waiter1 = pool.handle();
    let mut waiter2 = pool.handle();

    let held_lease = holder.acquire().await?;

    // Spawn two waiters that will block on acquire().
    let waiter1_task = tokio::spawn(async move { waiter1.acquire().await });
    let waiter2_task = tokio::spawn(async move { waiter2.acquire().await });

    // Give the scheduler a moment to register both waiters.
    tokio::time::sleep(Duration::from_millis(25)).await;

    // Drop the first waiter before any lease is released to exercise the sender.send(..) error path.
    drop(waiter1_task);

    // Releasing the held lease should not leak capacity and should service remaining waiters.
    drop(held_lease);

    // The remaining waiter should still be able to acquire a lease.
    let lease2 = timeout(Duration::from_millis(100), waiter2_task).await???;
    drop(lease2);

    // The pool should remain usable: a new acquire should succeed.
    let mut later = pool.handle();
    let _lease3 = timeout(Duration::from_millis(100), later.acquire()).await??;

    Ok(())
}
```
</issue_to_address>

### Comment 3
<location path="src/client/pool/scheduler.rs" line_range="27" />
<code_context>
+
+type WaiterSender<S, P, C> = oneshot::Sender<Result<PooledClientLease<S, P, C>, ClientError>>;
+
+struct SchedulerState<S, P, C>
+where
+    S: Serializer + Clone + Send + Sync + 'static,
</code_context>
<issue_to_address>
**issue (complexity):** Consider restructuring the scheduler state to use per-handle waiter queues plus a single FIFO queue instead of three shared collections keyed by handle IDs.

The main complexity comes from tracking handle IDs in three places and the round‑robin logic having to keep those structures consistent. You can keep the same behaviour while simplifying the state shape and `take_next_*` logic.

### 1. Unify state: per‑handle queues

Instead of `waiters: HashMap<u64, WaiterSender<…>> + fifo_waiters: VecDeque<u64> + round_robin_handles: VecDeque<u64>`, you can:

- Keep FIFO as a simple queue of senders.
- Keep round‑robin as a queue of handles, each with its own waiter queue.
- Stop storing `handle_id` in multiple collections and doing `retain` scans.

Example sketch:

```rust
struct HandleQueue<S, P, C> {
    waiters: VecDeque<WaiterSender<S, P, C>>,
}

struct SchedulerState<S, P, C> {
    // All handles that participate in round-robin, in rotation order.
    round_robin_handles: VecDeque<u64>,
    // Per-handle queues (only used for round-robin).
    handle_queues: HashMap<u64, HandleQueue<S, P, C>>,
    // FIFO queue (policy = Fifo).
    fifo: VecDeque<WaiterSender<S, P, C>>,
}

impl<S, P, C> SchedulerState<S, P, C> {
    fn register_handle(&mut self, handle_id: u64) {
        self.round_robin_handles.push_back(handle_id);
        self.handle_queues.entry(handle_id).or_insert_with(|| HandleQueue {
            waiters: VecDeque::new(),
        });
    }

    fn deregister_handle(&mut self, handle_id: u64) {
        self.handle_queues.remove(&handle_id);
        self.round_robin_handles
            .retain(|queued_id| *queued_id != handle_id);
        // nothing to do for FIFO; waiters are anonymous
    }

    fn enqueue_waiter(
        &mut self,
        handle_id: u64,
        sender: WaiterSender<S, P, C>,
        policy: PoolFairnessPolicy,
    ) {
        match policy {
            PoolFairnessPolicy::Fifo => self.fifo.push_back(sender),
            PoolFairnessPolicy::RoundRobin => {
                let queue = self.handle_queues
                    .entry(handle_id)
                    .or_insert_with(|| HandleQueue { waiters: VecDeque::new() });
                queue.waiters.push_back(sender);
            }
        }
    }

    fn has_waiters(&self) -> bool {
        !self.fifo.is_empty()
            || self.handle_queues.values().any(|h| !h.waiters.is_empty())
    }
}
```

### 2. Simplify round‑robin selection

With the above state, `take_next_round_robin_waiter` no longer needs to:

- Look up a sender in a global map, **and**
- Prune `fifo_waiters` with `retain`.

It can simply rotate over handles and pop from per‑handle queues:

```rust
impl<S, P, C> SchedulerState<S, P, C> {
    fn take_next_waiter(&mut self, policy: PoolFairnessPolicy)
        -> Option<WaiterSender<S, P, C>>
    {
        match policy {
            PoolFairnessPolicy::Fifo => self.take_next_fifo_waiter(),
            PoolFairnessPolicy::RoundRobin => self.take_next_round_robin_waiter(),
        }
    }

    fn take_next_fifo_waiter(&mut self) -> Option<WaiterSender<S, P, C>> {
        self.fifo.pop_front()
    }

    fn take_next_round_robin_waiter(&mut self) -> Option<WaiterSender<S, P, C>> {
        let len = self.round_robin_handles.len();
        for _ in 0..len {
            let handle_id = self.round_robin_handles.pop_front()?;
            self.round_robin_handles.push_back(handle_id);

            if let Some(queue) = self.handle_queues.get_mut(&handle_id) {
                if let Some(sender) = queue.waiters.pop_front() {
                    return Some(sender);
                }
            }
        }
        None
    }
}
```

This removes:

- The `waiters: HashMap<u64, WaiterSender<…>>`.
- The `fifo_waiters: VecDeque<u64>` and its repeated `retain` calls.
- Cross‑collection consistency concerns (no need to synchronize a global map with two queues of IDs).

All functionality is preserved: FIFO still gets strict queueing, and round‑robin still rotates over handles, but the scheduling logic is easier to follow and cheaper to maintain.
</issue_to_address>

### Comment 4
<location path="docs/users-guide.md" line_range="1431" />
<code_context>
 admission budget, not as a guarantee of parallel writes on one TCP stream.

+Create a `PoolHandle` when one logical session needs repeated pooled access and
+you want that session to participate in the configured fairness policy over
+time:
+
</code_context>
<issue_to_address>
**suggestion (review_instructions):** This sentence uses the 2nd-person pronoun "you", which the documentation style guide forbids.

To comply with the style guide, rephrase this line in the third person, for example: "when one logical session needs repeated pooled access and the session should participate in the configured fairness policy over time" or similar.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*.md`

**Instructions:**
Avoid 2nd person or 1st person pronouns ("I", "you", "we").

</details>
</issue_to_address>



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 13a635c326



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 5

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/roadmap.md`:
- Line 505: Update the checklist item text "11.2.2. Expose a `PoolHandle` API
with fairness policies so callers can" to insert a comma before the conjunction
"so", changing it to "11.2.2. Expose a `PoolHandle` API with fairness policies, so
callers can", to improve readability and satisfy prose linting.

In `@src/client/pool/client_pool.rs`:
- Around line 130-133: The close() implementation currently just drops the
wrapper allowing existing PoolHandle instances to keep ClientPoolInner alive and
continue to acquire sockets; modify ClientPoolInner to hold a shutdown flag
(e.g., AtomicBool or Mutex<bool>) and update close() to set that flag,
notify/await any background tasks, and actively tear down slot state (close warm
sockets and wake pending waiters). Ensure handle() and PoolHandle::acquire()
check the shutdown flag and return an error immediately if set, and make close()
signal any waiting/acquiring futures so they fail fast; update any methods
manipulating slots to short-circuit when ClientPoolInner is closed.
- Around line 143-147: try_acquire_immediately currently constructs
PooledClientLease with release_inner = None, so immediately-acquired leases
won't call notify_capacity_available on drop; change the method signature to
take self: &Arc<Self> and, when creating the lease in try_acquire_immediately,
pass Some(Arc::clone(self)) to PooledClientLease::new (instead of None) so the
drop path (lease.rs drop calling notify_capacity_available()) will wake the next
queued handle—this mirrors the service_waiters/acquire_for_handle behavior in
scheduler.rs.

In `@src/client/pool/scheduler.rs`:
- Around line 143-151: The fast-path early return using try_acquire_immediately
bypasses queued waiters and breaks FIFO fairness; remove the early return and
always route acquisition through the waiter queue by first creating the oneshot,
calling recover_mutex(&self.state).enqueue_waiter(handle_id, sender), then
attempt try_acquire_immediately and, if it succeeds, fulfill the just-enqueued
waiter (send the lease on the oneshot) so the queued order is preserved;
otherwise call self.kick(inner) and await the receiver as before (use the
existing symbols try_acquire_immediately, enqueue_waiter, kick, sender/receiver,
handle_id, inner, state).

In `@src/client/tests/pool_handle.rs`:
- Around line 88-90: The test currently asserts a fixed ordering for the grants
vector (observed) which is order-dependent; instead, change the assertion in the
pool handle test to verify alternation and correct counts: confirm
observed.len() is 6, that counts for "a" and "b" are each 3, and that for each
adjacent pair observed[i] != observed[i - 1] to ensure strict alternation;
update the assertions around the grants.lock().await.clone() check in the test
(the observed variable) to use these checks rather than assert_eq! with a fixed
vector.

📥 Commits

Reviewing files that changed from the base of the PR and between 663a754 and 13a635c.

📒 Files selected for processing (20)
  • docs/execplans/11-2-2-expose-pool-handle-api.md
  • docs/roadmap.md
  • docs/users-guide.md
  • docs/wireframe-client-design.md
  • src/client/mod.rs
  • src/client/pool/client_pool.rs
  • src/client/pool/config.rs
  • src/client/pool/handle.rs
  • src/client/pool/lease.rs
  • src/client/pool/mod.rs
  • src/client/pool/policy.rs
  • src/client/pool/scheduler.rs
  • src/client/tests/mod.rs
  • src/client/tests/pool_handle.rs
  • tests/bdd_pool/fixtures.rs
  • tests/bdd_pool/scenarios.rs
  • tests/features/client_pool_handle.feature
  • tests/fixtures/client_pool_handle.rs
  • tests/scenarios/client_pool_handle_scenarios.rs
  • tests/steps/client_pool_handle_steps.rs

This commit introduces a shutdown mechanism to the client connection pool to prevent new leases from being acquired after shutdown. It adds an AtomicBool flag to track pool shutdown state, modifies acquisition logic to respect shutdown status, and enhances the scheduler to notify all waiting tasks with disconnection errors during shutdown.

Additionally, new tests were added to verify that dropped waiters do not leak capacity and that shutdown behavior is correctly handled.

Minor documentation improvements and test helper refactors accompany these changes.
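
A minimal sketch of the acquisition guard this commit describes, using a placeholder flag name and error value rather than the crate's actual types:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// Hedged sketch: refuse new lease acquisitions once the pool has shut down.
fn ensure_open(is_shut_down: &AtomicBool) -> Result<(), &'static str> {
    if is_shut_down.load(Ordering::Acquire) {
        return Err("client pool is shut down; no new leases may be acquired");
    }
    Ok(())
}
```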

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
@leynos changed the title from "Refactor tests to use build_preamble_pool helper (11.2.2)" to "Expose PoolHandle API with fairness scheduling (11.2.2)" on Mar 16, 2026
codescene-delta-analysis[bot]

This comment was marked as outdated.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (2)
src/test_helpers/pool_client.rs (1)

139-150: 🛠️ Refactor suggestion | 🟠 Major

Finish centralizing the preamble-pool setup.

This helper still requires a pre-started PoolTestServer, so the warm-reuse
tests continue to repeat server startup boilerplate. The Rustdoc also says
server startup can fail, but this function never starts one. Move
PoolTestServer::start() into this helper and return the server alongside the
pool and counter.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/test_helpers/pool_client.rs` around lines 139 - 150, The helper
build_preamble_pool currently expects a pre-started PoolTestServer but the doc
and tests expect it to start the server itself; change build_preamble_pool to
call PoolTestServer::start() inside the function (handle and propagate any start
errors), construct the preamble_callback_count, call build_pooled_client with
the started server.addr, and return the started PoolTestServer along with the
TestClientPool and Arc<AtomicUsize> (i.e. change the Result return type to
Result<(PoolTestServer, TestClientPool, Arc<AtomicUsize>), ClientError> and
update all call sites accordingly).
tests/fixtures/client_pool_handle.rs (1)

189-201: ⚠️ Potential issue | 🟠 Major

Assert alternation, not a fixed first grant.

Either session can win the first slot while round-robin is still correct.
Hard-coding session-a as the first grant makes the behavioural scenario flap
on task start order.

🛠️ Proposed fix
-    pub fn sessions_alternate_fairly(&self) -> bool {
-        self.grant_order
-            == [
-                "session-a",
-                "session-b",
-                "session-a",
-                "session-b",
-                "session-a",
-                "session-b",
-            ]
-    }
+    pub fn sessions_alternate_fairly(&self) -> bool {
+        self.grant_order.len() == 6
+            && self
+                .grant_order
+                .iter()
+                .filter(|label| label.as_str() == "session-a")
+                .count()
+                == 3
+            && self
+                .grant_order
+                .iter()
+                .filter(|label| label.as_str() == "session-b")
+                .count()
+                == 3
+            && self.grant_order.windows(2).all(|pair| pair[0] != pair[1])
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fixtures/client_pool_handle.rs` around lines 189 - 201, The test
sessions_alternate_fairly currently asserts a fixed start ("session-a") which
makes the test flaky; change it to verify alternation generically by checking
that grant_order has the expected length (6), that grant_order[0] !=
grant_order[1], and that for every even index i grant_order[i] == grant_order[0]
and for every odd index i grant_order[i] == grant_order[1]; update the
sessions_alternate_fairly function to implement this alternating pattern check
against self.grant_order instead of the hard-coded array.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/client/pool/scheduler.rs`:
- Around line 216-225: The loop in service_waiters removes the waiter early with
take_next_waiter_or_stop() then awaits inner.acquire_slot_permit(), which can be
missed by shutdown notifications; fix by racing the permit acquisition with the
pool shutdown notifier: use a select (or futures::select / tokio::select)
between inner.acquire_slot_permit() and inner.shutdown_notified() (or
notify_shutdown()) so if shutdown fires you send an Err (or cancellation) to the
removed sender and stop/continue appropriately, and if the permit wins you
construct the PooledClientLease::new(slot, permit, Some(Arc::clone(&inner))) and
send it as before; reference service_waiters, take_next_waiter_or_stop,
inner.acquire_slot_permit, ClientPoolInner and PooledClientLease in the change.
- Around line 151-156: The code currently calls std::mem::forget(receiver) in
the success path of try_acquire_immediately which leaks the oneshot receiver and
can strand a lease when service_waiters() races and later sender.send(Ok(lease))
succeeds; replace the mem::forget with dropping the receiver (e.g.,
drop(receiver)) so the oneshot is closed and any later sender.send will return
Err(lease), returning the lease to the sender instead of permanently consuming
pool capacity; update the block inside the if let Some(lease) =
inner.try_acquire_immediately() (the branch that calls
take_next_waiter_or_stop()) to explicitly drop the receiver rather than
forgetting it.

In `@src/client/tests/pool_handle.rs`:
- Around line 159-168: The test currently drops the JoinHandle waiter1_task
which detaches the Tokio task and leaves waiter1.acquire() queued; instead abort
the task so the waiter is cancelled: call waiter1_task.abort() and then await
the handle (e.g., let _ = waiter1_task.await) before dropping held_lease so the
pool capacity will be assigned to waiter2_task; update the test to replace
drop(waiter1_task) with abort + await to ensure waiter1 is actually cancelled
and the timeout on waiter2_task exercises the intended path.

---

Duplicate comments:
In `@src/test_helpers/pool_client.rs`:
- Around line 139-150: The helper build_preamble_pool currently expects a
pre-started PoolTestServer but the doc and tests expect it to start the server
itself; change build_preamble_pool to call PoolTestServer::start() inside the
function (handle and propagate any start errors), construct the
preamble_callback_count, call build_pooled_client with the started server.addr,
and return the started PoolTestServer along with the TestClientPool and
Arc<AtomicUsize> (i.e. change the Result return type to Result<(PoolTestServer,
TestClientPool, Arc<AtomicUsize>), ClientError> and update all call sites
accordingly).

In `@tests/fixtures/client_pool_handle.rs`:
- Around line 189-201: The test sessions_alternate_fairly currently asserts a
fixed start ("session-a") which makes the test flaky; change it to verify
alternation generically by checking that grant_order has the expected length
(6), that grant_order[0] != grant_order[1], and that for every even index i
grant_order[i] == grant_order[0] and for every odd index i grant_order[i] ==
grant_order[1]; update the sessions_alternate_fairly function to implement this
alternating pattern check against self.grant_order instead of the hard-coded
array.

📥 Commits

Reviewing files that changed from the base of the PR and between 13a635c and 6bcbcd0.

📒 Files selected for processing (8)
  • docs/roadmap.md
  • docs/users-guide.md
  • src/client/pool/client_pool.rs
  • src/client/pool/scheduler.rs
  • src/client/tests/pool_handle.rs
  • src/test_helpers.rs
  • src/test_helpers/pool_client.rs
  • tests/fixtures/client_pool_handle.rs

@leynos

This comment was marked as resolved.


coderabbitai Bot commented Mar 17, 2026


🧩 Analysis chain

I'll inspect the current state of the relevant files to determine what has and hasn't been resolved.


🏁 Script executed:

#!/bin/bash
echo "=== src/client/pool/scheduler.rs ==="
cat src/client/pool/scheduler.rs

echo ""
echo "=== src/client/tests/pool_handle.rs ==="
cat src/client/tests/pool_handle.rs

echo ""
echo "=== docs/users-guide.md (lines around 1420-1440) ==="
sed -n '1420,1450p' docs/users-guide.md

echo ""
echo "=== tests/fixtures/client_pool_handle.rs ==="
cat tests/fixtures/client_pool_handle.rs

Length of output: 22561


@leynos: Here is a status assessment for each item. Nothing needs to be annulled — no requirement violates en-GB-oxendict conventions, and none unnecessarily increases cyclomatic complexity.


Overall Comments

# Comment Status
1 PR title/summary does not reflect the new public PoolHandle API, scheduler, and config surface Open — the PR title still reads "Refactor tests to use build_preamble_pool helper (11.2.2)", which undersells the API scope
2 Helper logic duplicated between tests/fixtures/client_pool_handle.rs and src/client/tests/pool_handle.rs Resolved — acquire_and_record and build_preamble_pool are now shared via src/test_helpers; spawn_fifo_waiters is local to the fixture but is not duplicated elsewhere

Individual Comments

| # | Location | Comment | Status |
| --- | --- | --- | --- |
| 1 | src/client/pool/scheduler.rs lines 143–144 | Immediate acquisition in acquire_for_handle could bypass fairness for already-queued waiters | Resolved — the implementation now enqueues the caller before calling try_acquire_immediately, then dispatches the resulting lease through take_next_waiter_or_stop, which respects the configured PoolFairnessPolicy |
| 2 | src/client/tests/pool_handle.rs lines 138–153 | Add a test for a dropped waiter not leaking capacity | Resolved — both handle_acquire_respects_back_pressure and handle_acquire_dropped_waiter_does_not_leak_capacity are present |
| 3 | src/client/pool/scheduler.rs line 27 | Restructure SchedulerState to use per-handle waiter queues and a single FIFO queue, removing the three-collection design | Open — the state still uses waiters: HashMap<u64, WaiterSender<…>>, fifo_waiters: VecDeque<u64>, and round_robin_handles: VecDeque<u64> with retain scans |
| 4 | docs/users-guide.md line 1431 | Second-person pronoun "you" violates the documentation style guide | Resolved — the sentence now reads "that session should participate in the configured fairness policy over time" |

Two items remain open: the PR title (Overall Comment 1) and the SchedulerState restructure (Comment 3). Would you like me to raise a follow-up issue for the scheduler refactor, or address either item now?
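A self-contained sketch of the enqueue-before-fast-path pattern referenced in Individual Comment 1; the Lease placeholder, the plain FIFO queue, and the method bodies are illustrative assumptions, not the crate's actual implementation:

```rust
use std::collections::VecDeque;

use tokio::sync::oneshot;

// Placeholder for the pooled connection guard; purely illustrative.
struct Lease;

struct Scheduler {
    waiters: VecDeque<oneshot::Sender<Lease>>,
    free_capacity: usize,
}

impl Scheduler {
    async fn acquire(&mut self) -> Result<Lease, &'static str> {
        let (tx, rx) = oneshot::channel();
        // Join the queue before probing capacity so already-queued waiters
        // cannot be overtaken by a late arrival.
        self.waiters.push_back(tx);
        if self.free_capacity > 0 {
            self.free_capacity -= 1;
            // Dispatch the freed lease through the queue, so the fairness
            // policy (plain FIFO here) picks the recipient; it may be an
            // earlier caller rather than the one that just arrived.
            if let Some(next) = self.waiters.pop_front() {
                let _ = next.send(Lease);
            }
        }
        rx.await.map_err(|_| "pool shut down before a lease was granted")
    }
}
```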

…uler

- Introduce a Tokio Notify to signal shutdown state in ClientPool.
- Modify PoolScheduler to asynchronously respond to shutdown via notification.
- Prevent waiter starvation by servicing waiters one at a time asynchronously.
- Improve resource cleanup on waiter cancellations.
- Update tests to handle aborted waiter tasks correctly.
- Refactor client pool test helpers and fixtures for correctness.

These changes enable more graceful and responsive shutdown handling in the connection pool implementation.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
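A minimal sketch of the Notify-based shutdown signalling this commit describes, assuming the pool shares the struct below with its scheduler task; the names and layout are assumptions for illustration, not the crate's actual code:

```rust
use std::sync::{
    Arc,
    atomic::{AtomicBool, Ordering},
};

use tokio::sync::Notify;

/// Shutdown state shared between the pool and its scheduler task (illustrative).
struct Shutdown {
    closed: AtomicBool,
    notify: Notify,
}

impl Shutdown {
    fn new() -> Arc<Self> {
        Arc::new(Self {
            closed: AtomicBool::new(false),
            notify: Notify::new(),
        })
    }

    /// Called by the pool when a graceful shutdown is requested.
    fn trigger(&self) {
        self.closed.store(true, Ordering::SeqCst);
        // Wakes every task currently parked in `wait`.
        self.notify.notify_waiters();
    }

    /// Awaited by the scheduler loop alongside its waiter-servicing future.
    async fn wait(&self) {
        loop {
            // Register interest before re-checking the flag so a trigger that
            // races with the check is not missed.
            let notified = self.notify.notified();
            if self.closed.load(Ordering::SeqCst) {
                return;
            }
            notified.await;
        }
    }
}
```

In a scheduler loop this would typically sit inside a tokio::select! alongside the next-waiter future, with any remaining waiters drained or rejected once wait() resolves.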
@leynos leynos changed the title from "Expose PoolHandle API with fairness scheduling (11.2.2)" to "Expose PoolHandle API; refactor tests to use build_preamble_pool" Mar 17, 2026
@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Mar 18, 2026


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
echo "=== src/client/pool/scheduler.rs ==="
cat src/client/pool/scheduler.rs

echo ""
echo "=== src/client/tests/pool_handle.rs ==="
cat src/client/tests/pool_handle.rs

echo ""
echo "=== docs/users-guide.md (lines 1420-1450) ==="
sed -n '1420,1450p' docs/users-guide.md

echo ""
echo "=== tests/fixtures/client_pool_handle.rs ==="
cat tests/fixtures/client_pool_handle.rs

echo ""
echo "=== PR title via git log ==="
git log --oneline -5

Length of output: 23782


@leynos: No requirements need to be annulled — none violate en-GB-oxendict conventions, and none unnecessarily increase cyclomatic complexity. Comment 3 (scheduler restructure) reduces complexity, not increases it.

Here is the full status assessment against the current file contents:


Overall Comments

| # | Comment | Status |
| --- | --- | --- |
| 1 | PR title/summary should reflect the new public API, scheduler, and config surface | Resolved — the PR title now reads "Expose PoolHandle API with fairness scheduling (11.2.2)" |
| 2 | Helper logic duplicated between tests/fixtures/client_pool_handle.rs and src/client/tests/pool_handle.rs | Resolved — `acquire_and_record` and `build_preamble_pool` are shared via src/test_helpers; both the fixture and the unit tests import them from there |

Individual Comments

| # | Location | Comment | Status |
| --- | --- | --- | --- |
| 1 | scheduler.rs lines 143–144 | Fast path in acquire_for_handle could bypass fairness for already-queued waiters | Resolved — the caller is now enqueued via enqueue_waiter before try_acquire_immediately() is called, and the resulting lease is dispatched through take_next_waiter_or_stop(), which honours PoolFairnessPolicy |
| 2 | pool_handle.rs lines 138–153 | Add a test for a dropped waiter not leaking capacity | Resolved — both handle_acquire_respects_back_pressure and handle_acquire_dropped_waiter_does_not_leak_capacity are present; the latter uses waiter1_task.abort() to exercise the sender.send(..) error branch |
| 3 | scheduler.rs line 27 | Restructure SchedulerState to per-handle waiter queues + single FIFO queue | Open — the state still holds waiters: HashMap<u64, WaiterSender<…>>, fifo_waiters: VecDeque<u64>, and round_robin_handles: VecDeque<u64>, with retain scans in take_next_round_robin_waiter and deregister_handle |
| 4 | docs/users-guide.md line 1431 | Second-person "you" violates the documentation style guide | Resolved — the sentence now reads "that session should participate in the configured fairness policy over time" |

The sole remaining open item is the SchedulerState three-collection restructure (Comment 3). Would you like to address it in this PR, or shall I open a follow-up issue?

@leynos
Copy link
Copy Markdown
Owner Author

leynos commented Mar 18, 2026

@coderabbitai Have the following now been resolved?

Annul any requirements that violate the en-GB-oxendict spelling (-ize / -yse / -our) conventions (for example a request to replace "normalize" with "normalise" or "artefact" with "artifact"), or where the requirement unnecessarily increases cyclomatic complexity.

Please address the comments from this code review:

## Overall Comments
- In `FragmentWorld::assert_reassembly_over_limit` you re-pattern-match `ReassemblyError` just to recover the `message_id` before delegating to `assert_fragment_reassembly_error`; consider extending `FragmentReassemblyErrorExpectation` (e.g., a variant that doesn’t require a `MessageId`) or adding a helper that extracts the id, so fixtures stay thin and don’t duplicate low-level error matching.
- The various `describe_last_result` / `assert_usize_field`-style helpers in `reassembly::message` and the length/buffered-count checks in `reassembly::fragment` are structurally similar; you could factor out a small internal utility module for common formatting and equality checks to keep the assertion modules smaller and easier to maintain.

## Individual Comments

### Comment 1
<location path="tests/reassembly_assertion_helpers.rs" line_range="68-77" />
<code_context>
+        .expect("completed-for-key assertion should pass");
+}
+
+#[rstest]
+#[case(
+    MessageAssemblyError::Series(MessageSeriesError::SequenceMismatch {
+        expected: FrameSequence(2),
+        found: FrameSequence(3),
+    }),
+    MessageAssemblyErrorExpectation::SequenceMismatch {
+        expected: FrameSequence(2),
+        found: FrameSequence(3),
+    }
+)]
+#[case(
+    MessageAssemblyError::DuplicateFirstFrame {
+        key: MessageKey(4),
+    },
+    MessageAssemblyErrorExpectation::DuplicateFirstFrame {
+        key: MessageKey(4),
+    }
+)]
+#[case(
+    MessageAssemblyError::MessageTooLarge {
+        key: MessageKey(9),
+        attempted: 12,
+        limit: std::num::NonZeroUsize::MIN,
+    },
+    MessageAssemblyErrorExpectation::MessageTooLarge {
+        key: MessageKey(9),
+    }
+)]
+fn message_assertion_helpers_match_errors(
+    #[case] error: MessageAssemblyError,
+    #[case] expected: MessageAssemblyErrorExpectation,
</code_context>
<issue_to_address>
**suggestion (testing):** Add tests for all MessageAssemblyErrorExpectation variants to ensure matcher coverage

The `message_assertion_helpers_match_errors` rstest currently only covers `SequenceMismatch`, `DuplicateFirstFrame`, and `MessageTooLarge`, but `MessageAssemblyErrorExpectation` also has `MissingFirstFrame`, `DuplicateFrame`, `ConnectionBudgetExceeded`, and `InFlightBudgetExceeded`. Please add cases for these remaining variants (either here or in a new rstest) to fully exercise `matches_message_error` and protect against regressions if its logic changes.
</issue_to_address>

### Comment 2
<location path="tests/reassembly_assertion_helpers.rs" line_range="45-53" />
<code_context>
+}
+
+#[test]
+fn message_assertion_helpers_accept_incomplete_and_counts() {
+    let last_result = Ok(None);
+    let snapshot = MessageAssemblySnapshot::new(Some(&last_result), &[], &[], 1, 5);
+
+    assert_message_assembly_incomplete(snapshot).expect("incomplete assertion should pass");
+    assert_message_assembly_buffered_count(snapshot, 1)
+        .expect("buffered-count assertion should pass");
+}
</code_context>
<issue_to_address>
**suggestion (testing):** Exercise assert_message_assembly_total_buffered_bytes in the integration tests

This module exercises most public message-assembly helpers, but `assert_message_assembly_total_buffered_bytes` is never used. Since `total_buffered_bytes` is a dedicated field on the snapshot and part of the public API, please either extend this test to pass a non-zero value to `MessageAssemblySnapshot::new` and assert via that helper, or add a small dedicated test. This will cover the field wiring and the helper’s error message.

```suggestion
#[test]
fn message_assertion_helpers_accept_incomplete_and_counts() {
    let last_result = Ok(None);
    let snapshot = MessageAssemblySnapshot::new(Some(&last_result), &[], &[], 1, 5);

    assert_message_assembly_incomplete(snapshot).expect("incomplete assertion should pass");
    assert_message_assembly_buffered_count(snapshot, 1)
        .expect("buffered-count assertion should pass");
    assert_message_assembly_total_buffered_bytes(snapshot, 5)
        .expect("total-buffered-bytes assertion should pass");
}
```
</issue_to_address>

### Comment 3
<location path="tests/reassembly_assertion_helpers.rs" line_range="169-175" />
<code_context>
+    ),
+    FragmentReassemblyErrorExpectation::IndexMismatch
+)]
+fn fragment_assertion_helpers_match_errors(
+    #[case] error: wireframe::fragment::ReassemblyError,
+    #[case] expected: FragmentReassemblyErrorExpectation,
+) {
+    let snapshot = FragmentReassemblySnapshot::new(None, Some(&error), &[], 0);
+
+    assert_fragment_reassembly_error(snapshot, expected)
+        .expect("fragment error assertion should pass");
+}
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding negative-path tests for fragment assertion helpers to validate diagnostics

Right now fragment reassembly tests only cover success paths. Please add at least one failing-path test (e.g., with a mismatched `FragmentReassemblyErrorExpectation` or calling `assert_fragment_reassembly_completed_len` when `last_reassembled` is `None`) to exercise and lock in the error diagnostics from these helpers.

Suggested implementation:

```rust
)]

}

#[test]
fn fragment_assertion_helpers_fail_when_error_is_missing() {
    // Snapshot without any recorded fragment reassembly error.
    let snapshot = FragmentReassemblySnapshot::new(None, None, &[], 0);

    // Asking for a specific error expectation should fail and yield a diagnostic.
    let err = assert_fragment_reassembly_error(
        snapshot,
        FragmentReassemblyErrorExpectation::IndexMismatch,
    )
    .expect_err("fragment error assertion should fail when no error is present");

    let message = err.to_string();
    assert!(
        !message.is_empty(),
        "diagnostic message for missing fragment error should not be empty"
    );
    assert!(
        message.contains("error").unwrap_or(true),
        "diagnostic should mention an error condition, got: {message}"
    );
}

#[test]
fn fragment_assertion_helpers_fail_when_completed_len_missing() {
    // Snapshot without any last_reassembled result.
    let snapshot = FragmentReassemblySnapshot::new(None, None, &[], 0);

    let err = assert_fragment_reassembly_completed_len(snapshot, 1)
        .expect_err("completed_len assertion should fail when last_reassembled is None");

    let message = err.to_string();
    assert!(
        !message.is_empty(),
        "diagnostic message for missing completed length should not be empty"
    );
    // We don't rely on an exact wording, just that the message references the missing state.
    assert!(
        message.contains("last_reassembled")
            || message.contains("completed")
            || message.contains("len"),
        "diagnostic should reference missing completed/last_reassembled state, got: {message}"
    );
}

fn assembled_message(key: u64, body: &[u8]) -> AssembledMessage {

```

If your assertion helpers return a custom error type that does not implement `Display` or whose `to_string()` is not meaningful, you may want to:
1. Adjust the diagnostic checks to use `Debug` formatting (`format!("{err:?}")`) instead of `to_string()`.
2. If the helpers already produce well-defined diagnostic strings (e.g., via `thiserror`), you can tighten the `contains(...)` checks to assert on specific substrings that match your existing messages.
</issue_to_address>

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Mar 18, 2026


🧩 Analysis chain

I'll inspect the current state of tests/reassembly_assertion_helpers.rs to check what has been resolved.


🏁 Script executed:

#!/bin/bash
fd 'reassembly_assertion_helpers.rs' tests/ --exec cat {}

Length of output: 0


🏁 Script executed:

#!/bin/bash
fd 'reassembly_assertion_helpers.rs' --exec cat {}
echo "---"
# Also check if the file exists anywhere
find . -name 'reassembly_assertion_helpers.rs' 2>/dev/null

Length of output: 3


🏁 Script executed:

#!/bin/bash
# Check for any reassembly-related files
fd 'reassembly' --type f
echo "---"
# Confirm current branch
git branch --show-current

Length of output: 78


@leynos: The file tests/reassembly_assertion_helpers.rs does not exist on the current branch (PR #498). The review comments you have pasted — covering FragmentWorld, `MessageAssemblyErrorExpectation` matcher coverage, `assert_message_assembly_total_buffered_bytes`, and negative-path fragment tests — belong to a different pull request, not to the PoolHandle work here.

Nothing can be annulled under the stated criteria: none of the three individual comments request en-GB-oxendict spelling changes, and none of the suggestions increase cyclomatic complexity (all are purely additive test cases).

To get a proper resolution assessment, please share the link or number of the PR where tests/reassembly_assertion_helpers.rs is introduced, and I will check the status there.

@devboxerhub devboxerhub Bot changed the title from "Expose PoolHandle API; refactor tests to use build_preamble_pool" to "feat(client-pool): expose PoolHandle API with fair scheduling and graceful shutdown" Mar 18, 2026
@leynos
Copy link
Copy Markdown
Owner Author

leynos commented Mar 18, 2026

@coderabbitai please create a new GitHub issue for the following:

Consider restructuring the scheduler state to use per-handle waiter queues plus a single FIFO queue instead of three shared collections keyed by handle IDs.

The main complexity comes from tracking handle IDs in three places and the round‑robin logic having to keep those structures consistent. You can keep the same behaviour while simplifying the state shape and take_next_* logic.

1. Unify state: per‑handle queues

Instead of waiters: HashMap<u64, WaiterSender<…>> + fifo_waiters: VecDeque<u64> + round_robin_handles: VecDeque<u64>, you can:

  • Keep FIFO as a simple queue of senders.
  • Keep round‑robin as a queue of handles, each with its own waiter queue.
  • Stop storing handle_id in multiple collections and doing retain scans.

Example sketch:

struct HandleQueue<S, P, C> {
    waiters: VecDeque<WaiterSender<S, P, C>>,
}

struct SchedulerState<S, P, C> {
    // All handles that participate in round-robin, in rotation order.
    round_robin_handles: VecDeque<u64>,
    // Per-handle queues (only used for round-robin).
    handle_queues: HashMap<u64, HandleQueue<S, P, C>>,
    // FIFO queue (policy = Fifo).
    fifo: VecDeque<WaiterSender<S, P, C>>,
}

impl<S, P, C> SchedulerState<S, P, C> {
    fn register_handle(&mut self, handle_id: u64) {
        self.round_robin_handles.push_back(handle_id);
        self.handle_queues.entry(handle_id).or_insert_with(|| HandleQueue {
            waiters: VecDeque::new(),
        });
    }

    fn deregister_handle(&mut self, handle_id: u64) {
        self.handle_queues.remove(&handle_id);
        self.round_robin_handles
            .retain(|queued_id| *queued_id != handle_id);
        // nothing to do for FIFO; waiters are anonymous
    }

    fn enqueue_waiter(
        &mut self,
        handle_id: u64,
        sender: WaiterSender<S, P, C>,
        policy: PoolFairnessPolicy,
    ) {
        match policy {
            PoolFairnessPolicy::Fifo => self.fifo.push_back(sender),
            PoolFairnessPolicy::RoundRobin => {
                let queue = self.handle_queues
                    .entry(handle_id)
                    .or_insert_with(|| HandleQueue { waiters: VecDeque::new() });
                queue.waiters.push_back(sender);
            }
        }
    }

    fn has_waiters(&self) -> bool {
        !self.fifo.is_empty()
            || self.handle_queues.values().any(|h| !h.waiters.is_empty())
    }
}

2. Simplify round‑robin selection

With the above state, take_next_round_robin_waiter no longer needs to:

  • Look up a sender in a global map, and
  • Prune fifo_waiters with retain.

It can simply rotate over handles and pop from per‑handle queues:

impl<S, P, C> SchedulerState<S, P, C> {
    fn take_next_waiter(&mut self, policy: PoolFairnessPolicy)
        -> Option<WaiterSender<S, P, C>>
    {
        match policy {
            PoolFairnessPolicy::Fifo => self.take_next_fifo_waiter(),
            PoolFairnessPolicy::RoundRobin => self.take_next_round_robin_waiter(),
        }
    }

    fn take_next_fifo_waiter(&mut self) -> Option<WaiterSender<S, P, C>> {
        self.fifo.pop_front()
    }

    fn take_next_round_robin_waiter(&mut self) -> Option<WaiterSender<S, P, C>> {
        let len = self.round_robin_handles.len();
        for _ in 0..len {
            let handle_id = self.round_robin_handles.pop_front()?;
            self.round_robin_handles.push_back(handle_id);

            if let Some(queue) = self.handle_queues.get_mut(&handle_id) {
                if let Some(sender) = queue.waiters.pop_front() {
                    return Some(sender);
                }
            }
        }
        None
    }
}

This removes:

  • The waiters: HashMap<u64, WaiterSender<…>>.
  • The fifo_waiters: VecDeque<u64> and its repeated retain calls.
  • Cross‑collection consistency concerns (no need to synchronize a global map with two queues of IDs).

All functionality is preserved: FIFO still gets strict queueing, and round‑robin still rotates over handles, but the scheduling logic is easier to follow and cheaper to maintain.

This is marked as a suggestion for architectural improvement, not a required fix. The reviewer's comment indicates this would simplify the code but acknowledges the current implementation "preserves all functionality." Let me evaluate whether this should be implemented now:

Current state: The existing three-collection design works correctly with the fairness fixes applied:

  • waiters: HashMap<u64, WaiterSender<...>> - stores the actual oneshot senders
  • fifo_waiters: VecDeque<u64> - tracks FIFO order (now with duplicate prevention)
  • round_robin_handles: VecDeque<u64> - tracks registered handles for round-robin rotation

Proposed state: Per-handle queues would be cleaner architecturally but represents a significant refactor that:

  • Changes the internal data structure design
  • Requires rewriting the scheduling logic
  • Would need extensive testing to ensure behaviour preservation
  • Is not blocking any functionality or fixing a bug

Given that:

  1. All critical bugs (fairness bypass, duplicate queue entries, shutdown issues) are now fixed
  2. The current implementation is working correctly with all tests passing
  3. This is an architectural suggestion rather than a functional requirement
  4. The refactor would be substantial and could introduce new issues

Recommendation: This architectural improvement should be considered for a future refactoring task, not as part of this PR which has already grown in scope significantly. The current implementation is correct, tested, and maintainable.

@coderabbitai
Copy link
Copy Markdown
Contributor

coderabbitai Bot commented Mar 18, 2026


🧩 Analysis chain

@leynos: Done — I've opened a GitHub issue on the leynos/wireframe repository titled "Refactor SchedulerState to use per-handle waiter queues and a single FIFO queue", assigned to you, with a backlink to PR #498. The issue describes the three-collection problem, the proposed restructure, and its scope as a follow-up refactoring task rather than a blocking change.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --repo leynos/wireframe \
  --assignee leynos \
  --title "Refactor SchedulerState to use per-handle waiter queues and a single FIFO queue" \
  --body "## Summary

The \`SchedulerState\` in \`src/client/pool/scheduler.rs\` currently uses three separate collections keyed by handle ID:

- \`waiters: HashMap<u64, WaiterSender<…>>\`
- \`fifo_waiters: VecDeque<u64>\`
- \`round_robin_handles: VecDeque<u64>\`

This design requires keeping all three structures consistent and involves repeated \`retain\` scans in the round-robin and deregister paths, which adds maintenance burden and cross-collection consistency concerns.

## Proposed improvement

Restructure \`SchedulerState\` to use per-handle waiter queues for round-robin and a single anonymous FIFO queue for FIFO policy. This would:

- Eliminate the global \`waiters\` map and the \`fifo_waiters\` ID queue.
- Remove \`retain\` scans from \`take_next_round_robin_waiter\` and \`deregister_handle\`.
- Simplify the \`take_next_*\` selection logic by popping directly from per-handle queues.
- Eliminate cross-collection consistency concerns entirely.

All existing behaviour would be preserved: FIFO policy retains strict arrival-order servicing, and round-robin policy continues to rotate fairly among registered handles.

## Context

This was flagged as an architectural suggestion during the review of PR \`#498\` (https://github.com/leynos/wireframe/pull/498). The current implementation is correct and all tests pass; this is a follow-up refactoring opportunity to reduce complexity and improve maintainability.

Requested by \`@leynos\`."

Length of output: 46


@leynos: The issue has been created: #505 — Refactor SchedulerState to use per-handle waiter queues and a single FIFO queue, assigned to you with a backlink to PR #498.

@leynos leynos merged commit 358f358 into main Mar 18, 2026
6 checks passed
@leynos leynos deleted the expose-poolhandle-api-cgnmjg branch March 18, 2026 12:56