
[V4] Allocator aware deque#96

Merged
ConorWilliams merged 13 commits into modules from v4-aa-deque
Apr 20, 2026

Conversation

ConorWilliams (Owner) commented Apr 20, 2026

Summary by CodeRabbit

  • New Features

    • Enhanced queue data structure with custom allocator support for flexible memory management.
  • Improvements

    • Updated return types across queue methods for improved type consistency.
    • Optimized function alignment in Release builds for better runtime performance.

coderabbitai Bot commented Apr 20, 2026

Important

Review skipped

Auto reviews are disabled on base/target branches other than the default branch.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2a46ebef-6bd7-44b7-b056-c85c7cfbebfa

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@ConorWilliams ConorWilliams marked this pull request as ready for review April 20, 2026 18:26

coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/batteries/geometric_stack.cxx (1)

331-337: ⚠️ Potential issue | 🟠 Major

Cast before adding the header node.

Line 335 adds 1 + ptr->size in diff_type before safe_cast, creating a signed overflow risk when a stacklet's stored size approaches std::numeric_limits<diff_type>::max(). The allocation-side code at line 313 correctly orders the operation as 1 + safe_cast<size_type>(num_nodes) (unsigned arithmetic). Mirror this by casting ptr->size to size_type first, then adding in the unsigned domain.

Proposed fix
-      size_type allocated_nodes = safe_cast<size_type>(1 + ptr->size);
+      size_type allocated_nodes = size_type{1} + safe_cast<size_type>(ptr->size);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/batteries/geometric_stack.cxx` around lines 331 - 337, The delete_node
function computes allocated_nodes using signed arithmetic then casts, which can
overflow; change the order so you first cast ptr->size to size_type using
safe_cast and then add 1 in the unsigned domain (i.e., allocated_nodes =
safe_cast<size_type>(ptr->size) + 1) before calling node_traits::destroy and
node_traits::deallocate to mirror the allocation-side ordering and avoid signed
overflow.
src/batteries/adaptor_stack.cxx (1)

82-94: ⚠️ Potential issue | 🟠 Major

Avoid overflow when rounding allocation sizes.

Lines 84 and 93 can overflow size + (k_new_align - 1) in std::size_t arithmetic before safe_cast receives the value, producing incorrect (too-small) allocation/deallocation counts for very large inputs. The safe_cast bounds check cannot prevent this prior overflow. Use the equivalent non-overflowing form ((size - 1) / k_new_align) + 1 where the precondition size > 0 ensures size - 1 stays within range.

Proposed fix
   [[nodiscard]]
   constexpr auto push(std::size_t size) -> void_ptr {
     LF_ASSUME(size > 0);
-    size_type num_aligned = safe_cast<size_type>((size + (k_new_align - 1)) / k_new_align);
+    size_type num_aligned = safe_cast<size_type>(((size - 1) / k_new_align) + 1);
     return static_cast<void_ptr>(align_trait::allocate(m_alloc, num_aligned));
   }
@@
   constexpr void pop(void_ptr ptr, [[maybe_unused]] std::size_t size) noexcept {
     LF_ASSUME(size > 0);
-    size_type num_aligned = safe_cast<size_type>((size + (k_new_align - 1)) / k_new_align);
+    size_type num_aligned = safe_cast<size_type>(((size - 1) / k_new_align) + 1);
     align_trait::deallocate(m_alloc, static_cast<alloc_ptr>(ptr), num_aligned);
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/batteries/adaptor_stack.cxx` around lines 82 - 94, The rounding
computation for num_aligned in push and pop can overflow when evaluating size +
(k_new_align - 1); change the calculation to the non-overflowing form ((size -
1) / k_new_align) + 1 (respecting the LF_ASSUME(size > 0) precondition) before
passing to safe_cast. Update both occurrences where num_aligned is computed (in
push and pop), keep using size_type and safe_cast, and continue to call
align_trait::allocate(m_alloc, num_aligned) and align_trait::deallocate(m_alloc,
static_cast<alloc_ptr>(ptr), num_aligned) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/batteries/deque.cxx`:
- Around line 323-324: Add a precondition-checked capacity rounding helper and
use it in the deque constructor to avoid UB from std::bit_ceil and casts:
implement a static constexpr round_capacity(size_type cap) -> diff_type that
computes max_rounded_cap =
std::bit_floor(safe_cast<size_type>(std::numeric_limits<diff_type>::max())),
asserts LF_ASSUME(cap > 0 && cap <= max_rounded_cap), and returns
safe_cast<diff_type>(std::bit_ceil(cap)); then replace the direct
std::bit_ceil(cap) call in the deque(size_type cap, Allocator const &alloc)
constructor with round_capacity(cap).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1a6d25b7-4ab7-4624-8cb8-7e41e34c3fd5

📥 Commits

Reviewing files that changed from the base of the PR and between 56eec7d and ea7a3a3.

📒 Files selected for processing (4)
  • CMakePresets.json
  • src/batteries/adaptor_stack.cxx
  • src/batteries/deque.cxx
  • src/batteries/geometric_stack.cxx

(Outdated comment thread on src/batteries/deque.cxx)
@ConorWilliams ConorWilliams merged commit 387d763 into modules Apr 20, 2026
7 of 10 checks passed
