Fix multiple bugs and performance issues across the workspace#60

Merged
Meru143 merged 11 commits into main from fix-codebase-issues on Feb 26, 2026
Conversation


@Meru143 Meru143 commented Feb 26, 2026

Summary

This PR addresses several critical bugs, architectural flaws, and performance bottlenecks identified during a comprehensive codebase review.

Fixes include:

  • Patch Application Corrupts Files: Fixed apply_patches to accurately replace the targeted line without accidentally deleting adjacent original lines if the patch content is multi-line.
  • UTF-8 Panic: Swapped &ex[..200] byte-slicing for ex.chars().take(200) to safely truncate negative examples containing multi-byte UTF-8 characters.
  • Non-deterministic Prompts: Switched from HashMap to BTreeMap in group_related_diffs so LLM prompts are consistently generated in alphabetical order, making tests and outputs reproducible.
  • Async Runtime Blocking: Wrapped heavy synchronous I/O operations (like argus_repomap::generate_map and SQLite queries) in tokio::task::block_in_place to prevent them from starving the Tokio worker threads.
  • Gemini Double-Retries: Removed the internal exponential backoff retry loop in chat_gemini, deferring to the standard 3-retry pipeline used by other providers to avoid hanging the review process for minutes.
  • Error Handling Standardization: Migrated feedback.rs and state.rs away from miette::Result to Result<T, ArgusError>, aligning the argus-review library crate with the established project conventions (thiserror for libraries, miette for binaries).
  • Redundant Config Requirement: Allowed argus review to run gracefully with default settings when .argus.toml is missing, improving the out-of-the-box user experience.
  • Wasted Allocations: Replaced heavy .to_vec() clones of FileDiff structures and redundant diffs_to_text formatting for token estimation with lightweight iteration and ownership passing.

Test plan

  • Run cargo check and cargo test to verify no regressions in the pipeline, error handling, or patch application logic.
  • Verify argus review runs without .argus.toml present using default settings.
  • Verify multi-line patches are applied correctly to a single target line.

Summary by CodeRabbit

  • Bug Fixes

    • Fixed multi-line patches overwriting subsequent lines
    • Corrected truncation of non-ASCII review examples
  • Improvements

    • Missing configuration now warns and proceeds with defaults instead of blocking startup
    • Faster, more consistent token estimation for code reviews
    • Simplified handling of rate-limit errors with clearer retry guidance

…mples

The previous `&ex[..200]` byte-slicing could panic when truncating
multi-byte UTF-8 characters. Replaced with `ex.chars().take(200).collect()`
which correctly handles character boundaries.
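A minimal sketch of the character-based truncation (the helper name is illustrative; the real code performs this inline on negative examples):

```rust
/// Truncate to at most `max_chars` characters. Unlike byte slicing
/// (`&s[..200]`), this never splits a multi-byte UTF-8 character,
/// so it cannot panic on a non-character boundary.
fn truncate_chars(s: &str, max_chars: usize) -> String {
    s.chars().take(max_chars).collect()
}
```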
HashMap iteration order is non-deterministic, causing the same set of
diffs to produce different LLM prompts across runs. Switching to BTreeMap
ensures files are grouped in a consistent lexicographic order by directory.
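The ordering property can be sketched with a hypothetical `group_by_dir` helper (the real `group_related_diffs` operates on `FileDiff` values, but the BTreeMap guarantee is the same):

```rust
use std::collections::BTreeMap;

/// Group paths by parent directory. BTreeMap iterates its keys in sorted
/// order, so the grouping (and any prompt built from it) is deterministic
/// across runs, unlike HashMap's randomized iteration order.
fn group_by_dir(paths: &[&str]) -> BTreeMap<String, Vec<String>> {
    let mut groups: BTreeMap<String, Vec<String>> = BTreeMap::new();
    for p in paths {
        let dir = p.rsplit_once('/').map(|(d, _)| d).unwrap_or(".").to_string();
        groups.entry(dir).or_default().push((*p).to_string());
    }
    groups
}
```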
The review subcommand previously required .argus.toml to exist, bailing
with an error if missing. This was redundant since the config loader
already falls back to ArgusConfig::default(). Changed the hard error
to a stderr hint so users can run reviews without creating a config file.
feedback.rs and state.rs violated the project convention of using
thiserror/ArgusError for library crate errors and reserving miette
for the binary crate only. Replaced miette::Result with Result<T, ArgusError>
and removed the now-unused miette dependency from argus-review.
…tries

chat_gemini had its own 5-retry loop with exponential backoff for 429
responses, but the pipeline already handles retries via
chat_with_rate_limit_retries. The duplicated retry logic caused up to
5*3=15 retries for Gemini while other providers got only 3. Now all
providers retry consistently through the pipeline's retry mechanism.
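The attempt multiplication can be demonstrated with a generic retry sketch (this `retry` helper is illustrative, not the actual `chat_with_rate_limit_retries` implementation):

```rust
/// Illustrative retry wrapper: call `f` up to `times` times, stopping
/// early on success. Nesting two such layers multiplies the attempt count
/// against a persistently failing endpoint.
fn retry<F: FnMut() -> Result<(), ()>>(times: u32, mut f: F) -> Result<(), ()> {
    for _ in 0..times {
        if f().is_ok() {
            return Ok(());
        }
    }
    Err(())
}
```

An outer layer of 3 wrapping an inner layer of 5 drives 15 attempts, which is why the inner Gemini loop was removed in favor of the single pipeline-level layer.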
… tokio

generate_map, build_related_code_context, and build_history_insights
perform synchronous file I/O and CPU-heavy parsing (tree-sitter, git2)
that can block the tokio worker thread. Wrapped each in
tokio::task::block_in_place so the runtime can schedule other tasks
while these operations run. Uses block_in_place rather than
spawn_blocking to avoid cloning referenced data.
1. Changed review() to take Vec<FileDiff> by value instead of &[FileDiff]
   to avoid cloning the entire diff vector for the filter.

2. Added estimate_diffs_tokens() that computes token estimates directly
   from diff components without building an intermediate String. The full
   diff_text is now only built when needed for self-reflection and summary
   rather than eagerly for the split decision.
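Assuming the common `len / 4` byte heuristic, the idea is to sum component byte lengths instead of concatenating them first (a sketch; the real `estimate_diffs_tokens` walks `FileDiff` fields):

```rust
/// Token estimate from parts without building an intermediate String.
/// Summing byte lengths gives the same `len / 4` result as measuring the
/// concatenation, minus the large allocation.
fn estimate_tokens_from_parts(parts: &[&str]) -> usize {
    parts.iter().map(|p| p.len()).sum::<usize>() / 4
}
```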
…replacements

The previous logic used the patch line count to determine how many
original lines to remove, causing a 3-line patch targeting line 5 to
delete lines 5-7 and replace them. This silently destroyed unrelated
code when the patch expanded a single line into multiple lines.

Fixed by always replacing only the single target line with the full
patch content. Added a regression test verifying that a multi-line
patch expanding one line does not overwrite subsequent lines.
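The corrected behavior can be sketched with a hypothetical standalone helper (the actual logic in patch.rs differs in detail):

```rust
/// Replace only the 1-based `target_line` with `patch`, which may itself
/// span multiple lines. All other original lines are preserved, so a
/// multi-line patch expands the file instead of overwriting neighbors.
fn apply_patch(original: &str, target_line: usize, patch: &str) -> String {
    original
        .lines()
        .enumerate()
        .map(|(i, line)| if i + 1 == target_line { patch } else { line })
        .collect::<Vec<_>>()
        .join("\n")
}
```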
Copilot AI review requested due to automatic review settings on February 26, 2026 08:25

@coderabbitai

coderabbitai bot commented Feb 26, 2026

Warning

Rate limit exceeded

@Meru143 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 6 minutes and 6 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between cff9c37 and 71205b9.

📒 Files selected for processing (2)
  • crates/argus-review/src/feedback.rs
  • crates/argus-review/src/pipeline.rs
📝 Walkthrough

Walkthrough

The PR removes the miette dependency and migrates error handling to ArgusError across argus-review, changes ReviewPipeline to take owned diffs and improve token estimation and grouping, simplifies Gemini request logic, fixes patch application to avoid overwriting subsequent lines, and makes missing config non-fatal.

Changes

Cohort / File(s) Summary
Error Handling Migration
crates/argus-review/Cargo.toml, crates/argus-review/src/feedback.rs, crates/argus-review/src/state.rs
Removed miette from the crate manifest and replaced miette-based Result/diagnostic propagation with ArgusError, updating public method signatures to Result<..., ArgusError> and converting error mappings throughout these modules.
Pipeline Refactoring
crates/argus-review/src/pipeline.rs
Changed ReviewPipeline::review to accept Vec<FileDiff> (owned), added tokio::task::block_in_place wrappers for blocking repo ops, replaced full-text token estimation with estimate_diffs_tokens, and switched grouping to BTreeMap for deterministic ordering.
LLM / External Calls
crates/argus-review/src/llm.rs
Removed Gemini retry loop; now issues a single POST and returns simplified 429 messaging and adjusted provider-error sanitization.
Patch Logic
crates/argus-review/src/patch.rs
Changed patch application to replace only the single target line (allowing multi-line expansion) and added a unit test ensuring subsequent lines are preserved.
Prompt Truncation
crates/argus-review/src/prompt.rs
Switched negative example truncation from byte-based slicing to character-based counting and slicing to correctly handle multi-byte Unicode.
State Persistence
crates/argus-review/src/state.rs
Load/save now return Result<..., ArgusError>; replaced .into_diagnostic() flows with explicit ArgusError::Config mappings and adjusted file/dir IO error handling.
CLI / Main
src/main.rs
Made missing config non-fatal (warning) and updated pipeline invocation to pass owned diffs (pipeline.review(diffs, ...)).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Poem

🐰 I nibbled miette from the crate tonight,
Rewrote errors to Argus' light,
Patches now grow without eating rows,
One Gemini knock — no retry echoes,
Tokens counted tidy, pipeline leaps bright.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

Check name Status Explanation Resolution
Title check ❓ Inconclusive The title 'Fix multiple bugs and performance issues across the workspace' is vague and generic, failing to highlight the primary or most important change from the changeset. Consider a more specific title that identifies the main fix (e.g., 'Fix patch application, UTF-8 handling, and error handling standardization') or the most critical issue addressed.
✅ Passed checks (2 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Docstring Coverage ✅ Passed Docstring coverage is 100.00% which is sufficient. The required threshold is 80.00%.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
crates/argus-review/src/pipeline.rs (1)

411-411: Avoid rebuilding full diff text in the single-call path.

Line 411 already materializes diff_text for review, then Lines 476-479 rebuild it again unconditionally. This reintroduces avoidable large allocation work in non-split flows.

Also applies to: 476-479

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/argus-review/src/pipeline.rs` at line 411, The code is rebuilding the
full diff text twice: once at let diff_text = diffs_to_text(&kept_diffs) and
again unconditionally later; remove the second diffs_to_text call and reuse the
already-materialized diff_text (or pass a reference) in the single-call path.
Update the block that currently calls diffs_to_text(&kept_diffs) again (the
later unconditional allocation) to use the existing diff_text variable (or
borrow &diff_text) so large allocations are avoided in the non-split flow.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/argus-review/src/pipeline.rs`:
- Around line 236-238: Format the closure passed to tokio::task::block_in_place
to satisfy rustfmt: replace the braced closure form with a single-expression
closure so it reads tokio::task::block_in_place(||
build_related_code_context(&kept_diffs, &index_path)); — locate the call to
tokio::task::block_in_place(...) that invokes build_related_code_context and
change the closure body to the concise single-expression form (and add a
trailing semicolon if appropriate for the surrounding expression).

In `@crates/argus-review/src/state.rs`:
- Around line 35-43: cargo fmt failures are caused by unformatted Rust code in
the block that reads/parses the review state (the read_to_string call and
serde_json::from_str call producing ArgusError::Config), so run rustfmt (cargo
fmt) or reflow the closure formatting to match rustfmt style: normalize spacing
and line breaks in the map_err closures around
read_to_string(&state_path).map_err(|e| { ArgusError::Config(format!(...)) })?
and serde_json::from_str(&content).map_err(|e|
ArgusError::Config(format!(...)))? so that the closures, format! calls, and ?
operators follow rustfmt conventions and CI will pass.

In `@src/main.rs`:
- Around line 1263-1266: The new hint block is failing rustfmt; reformat it (or
run cargo fmt) so it matches rustfmt style—ensure the if condition using
cli.config.is_none() and std::path::Path::new(".argus.toml").exists() and the
eprintln!(...) line are rustfmt-normalized (preserve the logic but adjust
spacing/indentation to rustfmt rules) and commit the resulting change.

---

Nitpick comments:
In `@crates/argus-review/src/pipeline.rs`:
- Line 411: The code is rebuilding the full diff text twice: once at let
diff_text = diffs_to_text(&kept_diffs) and again unconditionally later; remove
the second diffs_to_text call and reuse the already-materialized diff_text (or
pass a reference) in the single-call path. Update the block that currently calls
diffs_to_text(&kept_diffs) again (the later unconditional allocation) to use the
existing diff_text variable (or borrow &diff_text) so large allocations are
avoided in the non-split flow.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5deb3ac and a699c14.

📒 Files selected for processing (8)
  • crates/argus-review/Cargo.toml
  • crates/argus-review/src/feedback.rs
  • crates/argus-review/src/llm.rs
  • crates/argus-review/src/patch.rs
  • crates/argus-review/src/pipeline.rs
  • crates/argus-review/src/prompt.rs
  • crates/argus-review/src/state.rs
  • src/main.rs
💤 Files with no reviewable changes (1)
  • crates/argus-review/Cargo.toml

Contributor

Copilot AI left a comment


Pull request overview

This PR addresses multiple critical bugs and performance issues across the Argus codebase, focusing on patch application, UTF-8 handling, determinism, async runtime efficiency, error handling standardization, and allocation optimization.

Changes:

  • Fixed patch application to replace only the target line instead of multiple lines, preventing file corruption when patches expand single lines into multiple lines
  • Replaced byte-slicing with character-based truncation to prevent UTF-8 panics in negative example handling
  • Switched from HashMap to BTreeMap for deterministic LLM prompt generation
  • Wrapped synchronous blocking operations in tokio::task::block_in_place to prevent async runtime starvation
  • Removed internal retry logic from Gemini client to avoid double-retries with the existing pipeline retry mechanism
  • Migrated argus-review library crate from miette::Result to Result<T, ArgusError> for consistency with library error handling conventions
  • Changed review command to print a warning instead of failing when .argus.toml is missing, allowing operation with defaults
  • Optimized token estimation and diff handling to reduce allocations by passing ownership instead of cloning

Reviewed changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 3 comments.

Show a summary per file
File Description
src/main.rs Changed config requirement from error to warning; passed diffs by value instead of reference
crates/argus-review/src/state.rs Migrated from miette::Result to ArgusError with detailed error messages
crates/argus-review/src/prompt.rs Fixed UTF-8 panic by using character-based truncation instead of byte-slicing
crates/argus-review/src/pipeline.rs Added BTreeMap for determinism, block_in_place for async safety, optimized token estimation; deferred diff_text allocation
crates/argus-review/src/patch.rs Fixed multi-line patch application to replace only the target line; added regression test
crates/argus-review/src/llm.rs Removed internal Gemini retry loop to defer to pipeline-level retry mechanism
crates/argus-review/src/feedback.rs Migrated from miette::Result to ArgusError
crates/argus-review/Cargo.toml Removed miette dependency from library crate

💡 Add Copilot custom instructions for smarter, more guided reviews. Learn how to get started.

Comment on lines +719 to +720
total_bytes += 6 + diff.old_path.as_os_str().len() + 1;
total_bytes += 6 + diff.new_path.as_os_str().len() + 1;

Copilot AI Feb 26, 2026


The token estimation for file paths may be inaccurate. Using as_os_str().len() counts the platform-specific byte representation, but diffs_to_text uses display() which produces a UTF-8 string. On some platforms, these can differ. Consider using diff.old_path.to_string_lossy().len() and diff.new_path.to_string_lossy().len() to match the actual format used in diffs_to_text.

Suggested change:
-total_bytes += 6 + diff.old_path.as_os_str().len() + 1;
-total_bytes += 6 + diff.new_path.as_os_str().len() + 1;
+total_bytes += 6 + diff.old_path.to_string_lossy().len() + 1;
+total_bytes += 6 + diff.new_path.to_string_lossy().len() + 1;

Comment on lines +100 to +101
let display_ex = if ex.chars().count() > 200 {
let truncated: String = ex.chars().take(200).collect();

Copilot AI Feb 26, 2026


The character count check iterates through all characters twice for strings longer than 200 characters. Consider optimizing by collecting first 200 characters unconditionally and checking the length, or by using a single pass approach. However, this is a minor performance issue since negative_examples are typically small in number.

Suggested change:
-let display_ex = if ex.chars().count() > 200 {
-    let truncated: String = ex.chars().take(200).collect();
+let mut truncated = String::with_capacity(200);
+let mut char_count = 0usize;
+let mut exceeded = false;
+for ch in ex.chars() {
+    if char_count < 200 {
+        truncated.push(ch);
+    }
+    char_count += 1;
+    if char_count > 200 {
+        exceeded = true;
+        break;
+    }
+}
+let display_ex = if exceeded {


// Build the full diff text for self-reflection and summary
// (deferred from earlier to avoid allocation when splitting into groups)
let diff_text = diffs_to_text(&kept_diffs);

Copilot AI Feb 26, 2026


When the single LLM call path is taken, diff_text is allocated twice: once at line 411 for the review prompt, and again at line 478 for self-reflection. Consider reusing the diff_text variable or restructuring to avoid this double allocation. One approach: declare diff_text before the if/else and initialize it conditionally, or move the line 478 allocation inside the split path only.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
crates/argus-review/src/pipeline.rs (1)

703-705: ⚠️ Potential issue | 🟠 Major

Fix CI blocker: estimate_tokens is now dead code.

The pipeline failure at Line 703 is valid. Since this helper is only used by tests, gate it to test builds and remove the intra-doc link dependency.

🛠️ Proposed fix
-fn estimate_tokens(text: &str) -> usize {
+#[cfg(test)]
+fn estimate_tokens(text: &str) -> usize {
     text.len() / 4
 }
 
 /// Estimate tokens for a slice of diffs without building the full text string.
 ///
-/// Uses the same `len / 4` heuristic as [`estimate_tokens`] but computes
+/// Uses the same `len / 4` heuristic but computes
 /// the byte length directly from the diff components to avoid a large
 /// intermediate allocation.

Also applies to: 709-710

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/argus-review/src/pipeline.rs` around lines 703 - 705, The helper
function estimate_tokens is only used by tests and is causing a CI failure as
dead code; mark it as test-only and remove any intra-doc link that references
it: add #[cfg(test)] above fn estimate_tokens(text: &str) -> usize (or move the
function into the tests module) so it is compiled only for test builds, and
delete or update any intra-doc link pointing to estimate_tokens in docs/comments
to avoid the dependency.
🧹 Nitpick comments (1)
crates/argus-review/src/pipeline.rs (1)

409-409: Avoid rebuilding full diff text in the single-call path.

Line 409 builds diff_text, then Lines 474-476 build it again. Reusing the first string avoids a large duplicate allocation on big diffs.

♻️ Suggested refactor sketch
+        let mut cached_diff_text: Option<String> = None;
...
-            let diff_text = diffs_to_text(&kept_diffs);
+            let diff_text = diffs_to_text(&kept_diffs);
+            cached_diff_text = Some(diff_text.clone());
...
-        let diff_text = diffs_to_text(&kept_diffs);
+        let diff_text = cached_diff_text.unwrap_or_else(|| diffs_to_text(&kept_diffs));

Also applies to: 474-476

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/argus-review/src/pipeline.rs` at line 409, The code currently calls
diffs_to_text(&kept_diffs) twice (once into diff_text at the top and again in
the single-call path), causing a large duplicate allocation; change the
single-call path to reuse the previously-created diff_text variable (or borrow
it) instead of calling diffs_to_text a second time so the string is allocated
once — update any branches that call diffs_to_text(&kept_diffs) (the second call
in the single-call path) to use diff_text from the outer scope (or pass
&diff_text) and remove the redundant invocation.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/argus-review/src/pipeline.rs`:
- Around line 222-227: The calls to tokio::task::block_in_place inside
ReviewPipeline::review (used around argus_repomap::generate_map and two other
places) are unsafe on a current-thread Tokio runtime and can panic; update
ReviewPipeline::review to either (a) perform a runtime check before each
block_in_place invocation (use tokio::runtime::Handle::try_current() and
error/return if the handle indicates a current-thread runtime), (b) replace
block_in_place with tokio::task::spawn_blocking to run the blocking work off the
async thread, or (c) add a clear doc comment on ReviewPipeline::review requiring
a multi-threaded runtime and validate at runtime; choose one approach and apply
consistently for all three block_in_place usages (identify them by
tokio::task::block_in_place and the surrounding argus_repomap::generate_map call
sites) so the method no longer panics under a current-thread runtime.

---

Outside diff comments:
In `@crates/argus-review/src/pipeline.rs`:
- Around line 703-705: The helper function estimate_tokens is only used by tests
and is causing a CI failure as dead code; mark it as test-only and remove any
intra-doc link that references it: add #[cfg(test)] above fn
estimate_tokens(text: &str) -> usize (or move the function into the tests
module) so it is compiled only for test builds, and delete or update any
intra-doc link pointing to estimate_tokens in docs/comments to avoid the
dependency.

---

Nitpick comments:
In `@crates/argus-review/src/pipeline.rs`:
- Line 409: The code currently calls diffs_to_text(&kept_diffs) twice (once into
diff_text at the top and again in the single-call path), causing a large
duplicate allocation; change the single-call path to reuse the
previously-created diff_text variable (or borrow it) instead of calling
diffs_to_text a second time so the string is allocated once — update any
branches that call diffs_to_text(&kept_diffs) (the second call in the
single-call path) to use diff_text from the outer scope (or pass &diff_text) and
remove the redundant invocation.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a699c14 and cff9c37.

📒 Files selected for processing (4)
  • crates/argus-review/src/pipeline.rs
  • crates/argus-review/src/state.rs
  • npm/bin/argus
  • src/main.rs
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/main.rs

Comment on lines +222 to +227
tokio::task::block_in_place(|| {
match argus_repomap::generate_map(root, 1024, &focus_files, OutputFormat::Text) {
Ok(map) if !map.is_empty() => Some(map),
_ => None,
}
})

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Verify Tokio runtime flavor usage and where review() is invoked.

# 1) Find Tokio runtime entrypoints/flavors.
echo "=== Tokio runtime entrypoints/decorators ==="
rg -nP --type rust '#\[tokio::main(?:\([^)]*\))?\]|#\[tokio::test(?:\([^)]*\))?\]'

echo ""
echo "=== Tokio Builder usage ==="
rg -nP --type rust 'Builder::new_current_thread\(|Builder::new_multi_thread\(|new_current_thread\(|new_multi_thread\('

# 2) Find block_in_place usage sites.
echo ""
echo "=== block_in_place usage sites ==="
rg -nP --type rust '\bblock_in_place\s*\('

# 3) Find ReviewPipeline::review call sites to validate they run under multi-thread runtime paths.
echo ""
echo "=== ReviewPipeline::review call sites ==="
rg -nP --type rust '\.review\s*\('

Repository: Meru143/argus

Length of output: 682


🏁 Script executed:

#!/bin/bash

echo "=== Check tokio::main at src/main.rs:813 ==="
sed -n '810,820p' src/main.rs

echo ""
echo "=== Check review() call at src/main.rs:1473 ==="
sed -n '1470,1480p' src/main.rs

echo ""
echo "=== Check Tokio version ==="
rg -A 5 '^\s*tokio\s*=' Cargo.toml | head -20

echo ""
echo "=== Check ReviewPipeline::review signature ==="
rg -B 5 -A 10 'fn review\s*\(' crates/argus-review/src/pipeline.rs | head -30

Repository: Meru143/argus

Length of output: 2002


🏁 Script executed:

#!/bin/bash

echo "=== Check ReviewPipeline visibility ==="
rg -B 10 'pub.*struct ReviewPipeline' crates/argus-review/src/

echo ""
echo "=== Check if review() is public and could be called externally ==="
rg -B 2 'pub async fn review' crates/argus-review/src/

echo ""
echo "=== Check for tokio::test with current_thread flavor ==="
rg -n '#\[tokio::test.*current_thread' --type rust

echo ""
echo "=== Check all tokio::test decorators ==="
rg -n '#\[tokio::test' --type rust | head -20

echo ""
echo "=== Verify library/public API exports ==="
rg -n '^pub use.*ReviewPipeline|^pub.*ReviewPipeline' crates/argus-review/src/lib.rs

Repository: Meru143/argus

Length of output: 1265


🏁 Script executed:

#!/bin/bash

echo "=== Check crates/argus-review directory structure ==="
ls -la crates/argus-review/src/ | head -20

echo ""
echo "=== Check if lib.rs exists ==="
ls -la crates/argus-review/src/lib.rs 2>&1

echo ""
echo "=== Check crates/argus-review/Cargo.toml ==="
cat crates/argus-review/Cargo.toml

echo ""
echo "=== Inspect crates/argus-review/src/lib.rs if it exists ==="
if [ -f crates/argus-review/src/lib.rs ]; then
  cat crates/argus-review/src/lib.rs
fi

Repository: Meru143/argus

Length of output: 2081


🏁 Script executed:

#!/bin/bash

echo "=== Inspect review() method and surrounding context ==="
sed -n '170,240p' crates/argus-review/src/pipeline.rs

echo ""
echo "=== Check for any runtime assertions or checks ==="
rg -n 'try_current\|current_thread\|is_finished\|Handle::current' crates/argus-review/src/pipeline.rs

Repository: Meru143/argus

Length of output: 2824


Verify runtime safety: block_in_place will panic on current-thread Tokio runtime.

Lines 222, 236, and 246 unconditionally call tokio::task::block_in_place, which panics when invoked from a current-thread Tokio runtime. Since ReviewPipeline::review() is a public async method in the library crate, external callers could invoke it from a current-thread runtime context without protective documentation or runtime checks.

Either add runtime validation before each block_in_place call (e.g., using tokio::runtime::Handle::try_current()), document the multi-thread requirement explicitly in the method's doc comment, or refactor to avoid block_in_place.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/argus-review/src/pipeline.rs` around lines 222 - 227, The calls to
tokio::task::block_in_place inside ReviewPipeline::review (used around
argus_repomap::generate_map and two other places) are unsafe on a current-thread
Tokio runtime and can panic; update ReviewPipeline::review to either (a) perform
a runtime check before each block_in_place invocation (use
tokio::runtime::Handle::try_current() and error/return if the handle indicates a
current-thread runtime), (b) replace block_in_place with
tokio::task::spawn_blocking to run the blocking work off the async thread, or
(c) add a clear doc comment on ReviewPipeline::review requiring a multi-threaded
runtime and validate at runtime; choose one approach and apply consistently for
all three block_in_place usages (identify them by tokio::task::block_in_place
and the surrounding argus_repomap::generate_map call sites) so the method no
longer panics under a current-thread runtime.

@Meru143 Meru143 merged commit afd2c84 into main Feb 26, 2026
2 checks passed
@Meru143 Meru143 deleted the fix-codebase-issues branch February 26, 2026 08:50
@Meru143 Meru143 mentioned this pull request Feb 26, 2026
