feat(tokenjuice): Rust port of terminal-output compaction engine #644

senamakel merged 7 commits into tinyhumansai:main
Conversation
…mpaction engine

- Introduced the `tokenjuice` module, which includes the classification and reduction of tool outputs based on JSON-configured rules.
- Added new dependencies for Unicode handling: `unicode-segmentation` and `unicode-width`.
- Implemented the `classify` module to match tool execution inputs against predefined rules, enhancing the ability to process and summarize terminal outputs.
- Created a comprehensive set of types and utilities for managing tool execution inputs and classification results.
- Established a built-in rule set for common tools, improving the initial setup and usability of the `tokenjuice` engine.
- Enhanced the testing framework with integration tests to ensure the accuracy of output compaction and classification.

These changes lay the groundwork for a robust terminal-output management system, facilitating better interaction with various tools and improving overall user experience.
…paction

- Introduced the `tokenjuice` module, which includes functionality for classifying and reducing terminal output based on JSON-configured rules.
- Added new dependencies: `unicode-segmentation` and `unicode-width` to support text processing.
- Created a new `classify.rs` file for rule classification logic, including matching helpers and scoring functions.
- Implemented a `reduce.rs` file to handle the main reduction pipeline and text normalization.
- Established a structured approach for loading and compiling rules from multiple sources, including built-in and user-defined rules.
- Added integration tests to ensure the correctness of the output reduction process.

These changes enhance the application's ability to manage and compact verbose tool outputs, improving overall efficiency and user experience.
…ion logic

- Added a series of unit tests to `classify.rs` to validate the behavior of tool name filters and argument matching, ensuring correct classification of tool executions.
- Introduced tests for edge cases in `reduce.rs`, including command tokenization and normalization of execution inputs, to improve robustness against various input formats.
- Expanded tests in `builtin.rs` to cover duplicate ID reporting and compile issues, enhancing error handling and reporting mechanisms.
- Implemented additional tests in `compiler.rs` to verify regex handling in rule definitions, ensuring invalid patterns are correctly ignored.

These enhancements improve the overall test coverage and reliability of the tokenjuice module, facilitating better maintenance and future development.
Caution: Review failed. Pull request was closed or merged during review.

📝 Walkthrough

Introduces TokenJuice, a rule-based output compaction engine for tool execution results. Adds 96 built-in JSON rules covering various CLI tools, text processing utilities, rule compilation/loading infrastructure, and integration with the agent tool loop to compact verbose tool output before LLM context processing.
Sequence Diagram(s)

sequenceDiagram
participant Agent as Agent Tool Loop
participant Compact as compact_tool_output
participant Classify as classify_execution
participant Reduce as reduce_execution_with_rules
participant Text as Text Processors
Agent->>Compact: tool_output (verbose)
Note over Compact: Check output size
alt Size < threshold
Compact->>Agent: (original, applied=false)
else
Compact->>Compact: Extract command/argv from JSON args
Compact->>Classify: ToolExecutionInput
Classify->>Classify: Match rules by tool_name, argv0, command
Classify-->>Reduce: ClassificationResult (rule_id, confidence)
Reduce->>Reduce: Normalize input (tokenize, derive argv)
Reduce->>Text: Strip ANSI, dedupe, trim empty
Reduce->>Reduce: Apply rule filters (skip_patterns, keep_patterns)
alt Special git/status rule
Reduce->>Text: Rewrite to compact M/A/D/R format
else Special gh rule
Reduce->>Text: Parse JSON or reformat table rows
end
Reduce->>Reduce: Count matches via counters
Reduce->>Text: Clamp output (head/tail or middle)
Reduce->>Reduce: Build CompactResult with inline_text
Reduce-->>Compact: CompactResult
Compact->>Compact: Check compaction_ratio & size reduction
alt Ratio acceptable
Compact->>Agent: (compacted_text, applied=true)
else
Compact->>Agent: (original, applied=false)
end
end
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

This diff introduces substantial new functionality across multiple interconnected modules: dense logic in the reduction pipeline with regex-based filtering/matching, complex text manipulation with grapheme-aware width calculations, a three-layer rule-loading system, 96 JSON rule definitions requiring validation, and integration points in the agent tool loop. The variety of submodules, specialized regex patterns, text processing edge cases (ANSI stripping, Unicode combining marks, emoji width), and compaction heuristics demand careful cross-module reasoning.
Add `tokenjuice::compact_tool_output` helper and call it in the agent tool loop after credential scrubbing (and on error paths with exit=1), before any optional payload_summarizer. Derives argv/command heuristically from JSON tool arguments (command / args / argv / cmd shapes) so shell-wrapping tools still match upstream family rules (git/*, package/*, tests/*, etc.). Pass-through safe: outputs under 512 bytes or where compaction saves <5% are returned untouched.
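The pass-through guard described above can be sketched in a few lines. This is a minimal illustration, assuming the 512-byte floor and 5% minimum-savings thresholds stated in the description; `MIN_COMPACT_BYTES`, `MIN_SAVINGS`, and `should_apply` are hypothetical names, not the real `compact_tool_output` API.

```rust
// Sketch of the pass-through guard, assuming the 512-byte floor and 5%
// minimum savings described above. All names here are illustrative only.
const MIN_COMPACT_BYTES: usize = 512;
const MIN_SAVINGS: f64 = 0.05;

fn should_apply(original: &str, compacted: &str) -> bool {
    // Small outputs are never worth compacting.
    if original.len() < MIN_COMPACT_BYTES {
        return false;
    }
    // Require the compacted form to save at least 5% of the bytes.
    let saved = original.len().saturating_sub(compacted.len()) as f64;
    saved / original.len() as f64 >= MIN_SAVINGS
}

fn main() {
    let big = "x".repeat(1024);
    assert!(!should_apply("short output", "short")); // under the byte floor
    assert!(should_apply(&big, &big[..512]));        // saves 50%
    assert!(!should_apply(&big, &big[..1000]));      // saves only ~2%
}
```

The point of the two-threshold design is that both failure modes return the original output untouched, so a misfiring rule can only cost tokens, never content.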
Actionable comments posted: 17
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (10)
src/openhuman/tokenjuice/vendor/rules/archive__unzip.json-24-27 (1)
24-27: ⚠️ Potential issue | 🟡 Minor

The `warning` counter currently counts normal unzip progress lines. Line 26 includes `inflating|extracting`, which are typically non-warning events, so this metric can be misleading even on clean runs.

Suggested adjustment
```diff
 "counters": [
   {
     "name": "warning",
-    "pattern": "inflating|extracting|replace|error",
+    "pattern": "warning|replace|error|cannot|skipping",
     "flags": "i"
   }
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/archive__unzip.json` around lines 24 - 27, The "warning" rule's regex currently matches normal progress words ("inflating", "extracting", "replace") so the warning counter is inflated; update the rule named "warning" (the JSON object with "name": "warning" and "pattern") to remove progress verbs and instead match true warning/error tokens (e.g., keep terms like "error", "warn", "warning", "failed", "permission denied", or other archive failure phrases) or use a stricter pattern (negative lookbehind/explicit word boundaries) so ordinary progress lines are excluded while real errors/warnings are still captured.

src/openhuman/tokenjuice/vendor/rules/transfer__rsync.json-23-29 (1)
23-29: ⚠️ Potential issue | 🟡 Minor

Counter pattern has inconsistent spacing and may match non-errors. The pattern `"error|failed|connection|sent "` has two issues:

- Inconsistent trailing space after `"sent "` but not other alternatives
- `"connection"` matches both error states ("connection refused") and success states ("connection established")

🔧 Proposed fix for more precise error matching
```diff
 {
   "name": "error",
-  "pattern": "error|failed|connection|sent ",
+  "pattern": "\\berror\\b|\\bfailed\\b|connection (refused|reset|timed out)|rsync error",
   "flags": "i"
 }
```

Or, if "sent" was intended to match data transfer stats, clarify with a more specific pattern:
```diff
 {
   "name": "error",
-  "pattern": "error|failed|connection|sent ",
+  "pattern": "\\berror\\b|\\bfailed\\b|refused|reset|timed out",
   "flags": "i"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/transfer__rsync.json` around lines 23 - 29, The "error" counter's pattern is too loose and has an accidental trailing space; update the "pattern" value for the counter with "name":"error" under "counters" to use precise, anchored alternatives and word boundaries (remove the trailing space) so it doesn't match benign texts like "connection established" or stray "sent " substrings; for example, replace the current pattern with a single regex that uses \b boundaries and explicitly lists failure forms (e.g., error|failed|connection (?:refused|timed out)|sent(?: bytes)?), or otherwise restrict "sent" to a clear metric form, ensuring consistent spacing and case-insensitive behavior via the existing "flags":"i".

src/openhuman/tokenjuice/vendor/rules/git__show.json-21-21 (1)
21-21: ⚠️ Potential issue | 🟡 Minor

Fix the `index` keepPattern: it currently misses normal git headers. `git show` emits `index <old>..<new>`; the existing regex matches only one literal dot, so those lines get dropped.

Suggested patch
```diff
- "^index\\s+[a-f0-9]+\\.[a-f0-9]+",
+ "^index\\s+[0-9a-fA-F]+\\.\\.[0-9a-fA-F]+(?:\\s+\\d+)?$",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/git__show.json` at line 21, The keepPattern for the "index" header is wrong: it matches a single literal dot instead of the two-dot range used by git ("old..new"). Update the regex string in src/openhuman/tokenjuice/vendor/rules/git__show.json from "^index\\s+[a-f0-9]+\\.[a-f0-9]+" to match two dots, e.g. "^index\\s+[a-f0-9]+\\.\\.[a-f0-9]+" so lines like "index <old>..<new>" are preserved by the rule.

src/openhuman/tokenjuice/rules/builtin.rs-389-393 (1)
389-393: ⚠️ Potential issue | 🟡 Minor

Assert registry key matches `JsonRule.id` during parse checks. The test currently accepts a mismatch between tuple key `id` and parsed `rule.id`, which can mask bad builtin mappings and mislead diagnostics.

Suggested patch
```diff
 for (id, json) in BUILTIN_RULE_JSONS {
     match serde_json::from_str::<JsonRule>(json) {
         Ok(rule) => {
+            assert_eq!(
+                rule.id.as_str(),
+                *id,
+                "builtin entry key/id mismatch: key='{}', json.id='{}'",
+                id,
+                rule.id
+            );
             id_count.entry(rule.id.clone()).or_default().push(id);
         }
         Err(e) => {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/rules/builtin.rs` around lines 389 - 393, The loop over BUILTIN_RULE_JSONS currently parses each JSON into JsonRule and groups by rule.id but doesn't verify that the tuple key id matches the parsed rule.id; update the parse-check in the for loop (where BUILTIN_RULE_JSONS is iterated and serde_json::from_str::<JsonRule>(json) is called) to assert or return an error when id != rule.id (e.g., panic! or use ensure!/expect with a clear message) before inserting into id_count, so registry keys and JsonRule.id cannot diverge and will fail fast with a useful diagnostic.

src/openhuman/tokenjuice/vendor/rules/tests__npm-test.json-47-53 (1)
47-53: ⚠️ Potential issue | 🟡 Minor

Counter patterns are overly broad and can overcount. Lines 47 and 52 match any substring, so unrelated tokens can increment `failed`/`passed`.

Suggested fix
```diff
-    "pattern": "fail",
+    "pattern": "\\bfail(?:ed|ure)?\\b",
     "flags": "i"
 @@
-    "pattern": "pass",
+    "pattern": "\\bpass(?:ed)?\\b",
     "flags": "i"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/tests__npm-test.json` around lines 47 - 53, The counter patterns "fail" and "pass" are too broad and match substrings; update the rules in tests__npm-test.json (the entries with "name": "failed"/"passed" or the shown "name": "passed") to use stricter regexes (e.g., use word boundaries like \bpass\b and \bfail\b or anchors like ^pass$ and ^fail$) while keeping the "i" flag, so only whole words are matched and unrelated tokens do not increment the counters.

src/openhuman/tokenjuice/vendor/rules/tests__pytest.json-20-20 (1)
20-20: ⚠️ Potential issue | 🟡 Minor

Plural-only skip regex misses `collected 1 item` lines. Line 20 only matches `items`, so single-test runs leak non-actionable collection noise.

Suggested fix
```diff
- "^collected \\d+ items$"
+ "^collected \\d+ items?.*$"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/tests__pytest.json` at line 20, Update the plural-only regex entry "^collected \\d+ items$" in tests__pytest.json so it matches both singular and plural collection lines (e.g., "collected 1 item" and "collected 2 items"); replace it with a pattern that allows an optional "s" such as "^collected \\d+ items?$" (or use "^collected \\d+ item(s)?$") to ensure single-test runs are also filtered.

src/openhuman/tokenjuice/vendor/rules/tests__go-test.json-17-18 (1)
17-18: ⚠️ Potential issue | 🟡 Minor

Add `"counterSource": "preKeep"` to count passed packages before skip filtering. The rule skips lines matching `^ok\s.+` (line 17) but attempts to count them as passed packages (lines 44-46). Since counters default to `postKeep`, they run after skip patterns are applied, resulting in zero passed package counts. Set `"counterSource": "preKeep"` to sample counter matches before filtering.

Context
Lines 15–26 define filters with `skipPatterns: ["^ok\\s.+"]`, removing all passed package lines from output.
Lines 37–48 define a counter for `"passed package"` matching `"^ok\\s"`.
The file lacks `counterSource`, defaulting to `"postKeep"` per `types.rs:141`, which means counters run after skip patterns have already removed matching lines.
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/tests__go-test.json` around lines 17 - 18, The counters are running after skip filtering so the `"passed package"` counter never sees lines matched by `skipPatterns: ["^ok\\s.+"]`; update the counters configuration to include `"counterSource": "preKeep"` so counters run before skipPatterns are applied (ensure this key is added at the same level as the `counters` array), keeping the existing counter that matches `"^ok\\s"` and the skip pattern `^ok\\s.+` unchanged.

src/openhuman/tokenjuice/reduce.rs-675-679 (1)
675-679: ⚠️ Potential issue | 🟡 Minor

Stabilize fact ordering before joining. `facts` is a `HashMap`, so the rendered fact order is nondeterministic. The same input can produce different inline text ordering across runs, which is awkward for fixtures, snapshots, and prompt diffs. Sort by counter name before formatting.
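A minimal sketch of the suggested fix — collect, sort by counter name, then format. `pluralize` and `render_facts` here are stand-ins for the real helpers in `reduce.rs`, not its actual API:

```rust
use std::collections::HashMap;

// Stand-in for the pluralize helper in reduce.rs; shape assumed for the sketch.
fn pluralize(count: usize, name: &str) -> String {
    if count == 1 {
        format!("{count} {name}")
    } else {
        format!("{count} {name}s")
    }
}

// Render counter facts in a deterministic order by sorting on the name.
fn render_facts(facts: &HashMap<String, usize>) -> String {
    let mut pairs: Vec<_> = facts.iter().collect();
    pairs.sort_by(|a, b| a.0.cmp(b.0)); // stable order across runs
    pairs
        .into_iter()
        .map(|(name, count)| pluralize(*count, name))
        .collect::<Vec<_>>()
        .join(", ")
}

fn main() {
    let mut facts = HashMap::new();
    facts.insert("warning".to_string(), 3);
    facts.insert("error".to_string(), 1);
    // "error" sorts before "warning", every run.
    assert_eq!(render_facts(&facts), "1 error, 3 warnings");
}
```

Sorting costs O(n log n) over a handful of counters, which is negligible next to the regex passes, and it makes snapshot tests on `inline_text` reproducible.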
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/reduce.rs` around lines 675 - 679, The code builds fact_parts from a HashMap named facts which yields nondeterministic ordering; change the pipeline so you first collect the filtered (name, count) pairs into a Vec, sort that Vec by name (the counter key) and then map to pluralize(count, name) to produce fact_parts; update the block around fact_parts (the variables facts, fact_parts and the pluralize call) to perform a stable sort-by-name before formatting.

src/openhuman/tokenjuice/reduce.rs-1851-1862 (1)
1851-1862: ⚠️ Potential issue | 🟡 Minor

These assertions are tautologies. `!result.inline_text.is_empty() || result.inline_text.is_empty()` can never fail, so these tests do not verify the empty-output or missing-title fallback behavior they claim to cover. Please assert the actual expected branch/result instead.

Also applies to: 2183-2197
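To make the failure mode concrete, here is a self-contained sketch — `CompactResult` below is a stand-in struct with only the field the review mentions, not the real type — contrasting the tautology with a branch-specific assertion:

```rust
// Stand-in for the real CompactResult; field name taken from the review text.
struct CompactResult {
    inline_text: String,
}

fn main() {
    let result = CompactResult {
        inline_text: String::new(),
    };
    // Tautology: true for every possible value, so it tests nothing.
    assert!(!result.inline_text.is_empty() || result.inline_text.is_empty());
    // Branch-specific: pins the expected behavior for empty input,
    // and would actually fail if the fallback path regressed.
    assert!(result.inline_text.is_empty());
}
```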
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/reduce.rs` around lines 1851 - 1862, The test gh_empty_lines_returns_empty currently asserts a tautology; change it to assert the real expected behavior by calling run(ToolExecutionInput...) and then asserting the concrete branch: for an all-whitespace stdout expect result.inline_text.is_empty() (or if the intended fallback is a title, assert result.title == "<expected fallback>" and result.inline_text.is_empty()); update the same pattern for the other tests referenced around lines 2183-2197 so each test asserts a specific condition on result.inline_text or result.title rather than using the always-true `!... || ...` expression, locating fixes in the gh_empty_lines_returns_empty test and corresponding tests that call run and inspect result.inline_text/title.

src/openhuman/tokenjuice/reduce.rs-486-505 (1)
486-505: ⚠️ Potential issue | 🟡 Minor

Strip ANSI before attempting JSON pretty-print. If a rule enables both `strip_ansi` and `pretty_print_json`, ANSI-decorated JSON will fail parsing here and never get pretty-printed. Apply ANSI stripping to `text` first, then run the JSON transform.

Suggested change
```diff
-    if rule
-        .transforms
-        .as_ref()
-        .and_then(|t| t.pretty_print_json)
-        .unwrap_or(false)
-    {
-        text = pretty_print_json_if_possible(&text);
-    }
-
-    let mut lines = normalize_lines(&text);
-    let mut facts: HashMap<String, usize> = HashMap::new();
-
     if rule
         .transforms
         .as_ref()
         .and_then(|t| t.strip_ansi)
         .unwrap_or(false)
     {
-        lines = normalize_lines(&strip_ansi(&lines.join("\n")));
+        text = strip_ansi(&text);
+    }
+
+    if rule
+        .transforms
+        .as_ref()
+        .and_then(|t| t.pretty_print_json)
+        .unwrap_or(false)
+    {
+        text = pretty_print_json_if_possible(&text);
     }
+
+    let mut lines = normalize_lines(&text);
+    let mut facts: HashMap<String, usize> = HashMap::new();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/reduce.rs` around lines 486 - 505, The JSON pretty-print currently runs before ANSI stripping, so ANSI-decorated JSON fails to parse; modify the flow in reduce.rs to apply strip_ansi to text first when rule.transforms.as_ref().and_then(|t| t.strip_ansi).unwrap_or(false) is true, by calling strip_ansi(&text) (and then normalize_lines) before invoking pretty_print_json_if_possible(&text); keep references to normalize_lines, pretty_print_json_if_possible, strip_ansi, text, lines and facts intact and ensure the subsequent pretty_print_json check uses the cleaned text.
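The ordering matters because ANSI escapes are opaque bytes to a JSON parser. A std-only sketch of why stripping must come first — this handles CSI sequences only and is illustrative, not the module's actual `strip_ansi` helper:

```rust
// Minimal CSI-only ANSI stripper; illustrative, not the real helper.
fn strip_csi(s: &str) -> String {
    let mut out = String::new();
    let mut chars = s.chars().peekable();
    while let Some(c) = chars.next() {
        if c == '\u{1b}' && chars.peek() == Some(&'[') {
            chars.next(); // consume '['
            // Skip parameter bytes up to the final byte (0x40..=0x7e).
            while let Some(n) = chars.next() {
                if ('\u{40}'..='\u{7e}').contains(&n) {
                    break;
                }
            }
            continue;
        }
        out.push(c);
    }
    out
}

fn main() {
    // Green-colored JSON: unparseable as-is, valid once stripped.
    let colored = "\u{1b}[32m{\"ok\": true}\u{1b}[0m";
    assert_eq!(strip_csi(colored), "{\"ok\": true}");
}
```

Feeding `colored` directly to a JSON parser fails on the leading escape byte; after stripping, the same bytes parse and can be pretty-printed.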
🧹 Nitpick comments (30)
src/openhuman/tokenjuice/vendor/rules/git__diff-name-only.json (1)
5-8: Tighten matcher scope with `toolNames` for consistency.

Consider adding `toolNames: ["exec"]` to avoid unintended matches outside exec-based tool calls.

Suggested change
```diff
 "match": {
+  "toolNames": ["exec"],
   "argv0": ["git"],
   "argvIncludes": [["diff"], ["--name-only"]]
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/git__diff-name-only.json` around lines 5 - 8, The rule's matcher currently only checks "match.argv0" and "match.argvIncludes" which can yield false positives; add a top-level "toolNames": ["exec"] entry alongside the existing "match" object to restrict this rule to exec-based tool calls (i.e., update the rule that contains "match": {"argv0": ["git"], "argvIncludes": [["diff"], ["--name-only"]] } to also include "toolNames": ["exec"]).

src/openhuman/tokenjuice/vendor/rules/tests__pnpm-test.json (1)
46-53: Use word boundaries for pass/fail counters to reduce false positives.
`pass`/`fail` substring matches can overcount. Prefer token-level patterns.

Suggested change
```diff
 {
   "name": "failed",
-  "pattern": "fail",
+  "pattern": "\\bfail(?:ed|ure|ures)?\\b",
   "flags": "i"
 },
 {
   "name": "passed",
-  "pattern": "pass",
+  "pattern": "\\bpass(?:ed)?\\b",
   "flags": "i"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/tests__pnpm-test.json` around lines 46 - 53, The "failed" and "passed" regex patterns currently match substrings and overcount (patterns "fail" and "pass"); update those rule patterns used in the tests to require token boundaries (e.g., use word-boundary anchors around the tokens, such as \bpass\b and \bfail\b, keeping the "flags": "i") so only whole words are matched and false positives are reduced; adjust the entries named "failed" and "passed" accordingly.

src/openhuman/tokenjuice/vendor/rules/git__status.json (1)
15-24: Harden `skipPatterns` to handle leading indentation in hint lines.

A few patterns anchor at column 0, so indented advisory lines can leak through. Making whitespace optional improves compaction consistency.
Suggested regex tightening
```diff
 "filters": {
   "skipPatterns": [
     "^On branch ",
     "^Your branch is ",
-    "^and have \\d+ and \\d+ different commits each.*$",
-    "^\\(use \"git .+\" to .+\\)$",
+    "^\\s*and have \\d+ and \\d+ different commits each.*$",
+    "^\\s*\\(use \"git .+\" to .+\\)$",
     "^no changes added to commit.*$",
     "^nothing added to commit but untracked files present.*$",
     "^nothing to commit, working tree clean$",
-    "^use \"git .+\" to .+"
+    "^\\s*use \"git .+\" to .+$"
   ]
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/git__status.json` around lines 15 - 24, The skipPatterns array contains regexes anchored to column 0 (e.g., "^On branch ", "^Your branch is ", "^and have \\d+ and \\d+ different commits each.*$", "^\\(use \"git .+\" to .+\\)$", "^no changes added to commit.*$", "^nothing added to commit but untracked files present.*$", "^nothing to commit, working tree clean$", "^use \"git .+\" to .+"), which misses lines with leading indentation; update each pattern in the skipPatterns list to accept optional leading whitespace by prepending "\\s*" after the start anchor (i.e., change "^" to "^\\s*") so indented advisory lines are also matched and skipped consistently.

src/openhuman/tokenjuice/vendor/rules/system__df.json (1)
25-27: `filesystem` counter pattern is too broad for its label.

`".+"` counts every non-empty line, including headers. Consider narrowing to row-like lines to keep metrics meaningful.

Proposed refinement
```diff
- "pattern": ".+"
+ "pattern": "^(?!Filesystem\\b).+\\s+\\d+%\\s+.+$"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/system__df.json` around lines 25 - 27, The "filesystem" metric uses an overly broad pattern ".+" which counts every non-empty line (including headers); narrow the regex in the "pattern" field for the "filesystem" entry in system__df.json to match only actual data rows (e.g., lines that look like device/path entries or rows starting with a mount/device identifier, numeric size/timestamp, or a path) so headers are excluded; update the "pattern" value for the "filesystem" object to a more specific row-like regex that matches the expected df output format.

src/openhuman/tokenjuice/vendor/rules/media__ffmpeg.json (1)
25-27: `error` counter includes a common non-error token (`frame=`).

`frame=` is typically progress noise in ffmpeg and can distort error counts. Consider removing it from the error pattern.

Proposed refinement
```diff
- "pattern": "error|invalid|failed|frame=",
+ "pattern": "error|invalid|failed",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/media__ffmpeg.json` around lines 25 - 27, The "error" rule in media__ffmpeg.json currently treats "frame=" as an error token; remove "frame=" from the "pattern" value so the rule only matches true error tokens (e.g., keep "error|invalid|failed" or add word boundaries like "\b(error|invalid|failed)\b"). Update the pattern string in the rule with "name": "error" (and adjust flags if needed) to exclude progress tokens such as "frame=".

src/openhuman/tokenjuice/vendor/rules/service__ss.json (1)
26-26: Regex pattern is slightly redundant.

The pattern `^(?!Netid\\s|State\\s).+\\S.*$` works correctly, but the trailing `.*$` after `\\S` is redundant when combined with `.+` at the start. The pattern could be simplified to `^(?!Netid\\s|State\\s).+\\S` or `^(?!Netid\\s|State\\s).*\\S.*$` for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/service__ss.json` at line 26, The regex in the "pattern" field of service__ss.json is slightly redundant; replace the current value "^(?!Netid\\s|State\\s).+\\S.*$" with a simplified equivalent such as "^(?!Netid\\s|State\\s).+\\S" (or alternatively "^(?!Netid\\s|State\\s).*\\S.*$") to remove the unnecessary trailing ".*$" while preserving the negative lookahead for "Netid " and "State " and the requirement that the string ends with a non-space.

src/openhuman/tokenjuice/vendor/rules/install__yarn-install.json (1)
5-10: Consider adding case-insensitive flag for robustness.

The `matchOutput` pattern `"Already up-to-date\\."` is case-sensitive (no `"flags": "i"`), while the similar bun-install rule uses case-insensitive matching. For consistency and robustness against potential upstream output changes, consider adding `"flags": "i"`.

♻️ Proposed enhancement for case-insensitive matching
```diff
 "matchOutput": [
   {
     "pattern": "Already up-to-date\\.",
-    "message": "yarn install: up to date"
+    "message": "yarn install: up to date",
+    "flags": "i"
   }
 ],
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/install__yarn-install.json` around lines 5 - 10, The JSON rule in matchOutput currently uses a case-sensitive pattern "Already up-to-date\\."; update that match object by adding a "flags": "i" property next to "pattern" (within the same matchOutput entry) so the regex matches case-insensitively and stays consistent with the bun-install rule; ensure the pattern string remains unchanged and only the flags field is added.

src/openhuman/tokenjuice/vendor/rules/git__branch.json (1)
23-27: Counter pattern is very broad.

The counter pattern `.+` matches any non-empty line, which will count all non-empty lines as branches. While `git branch` output is typically clean, this pattern might count header lines or other non-branch content if git's output format changes or includes additional information.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/git__branch.json` around lines 23 - 27, The "branch" counter currently uses a too-broad pattern (pattern: ".+") and will count any non-empty line; tighten the regex in the "counters" entry for name "branch" to only match git branch lines (lines that start with an optional '*' or space followed by a valid branch name) so header/footer or other non-branch text is ignored; update the "pattern" value under the "counters" object for "name": "branch" to a stricter branch-line pattern that matches typical branch-name characters (alphanumerics, slash, dot, hyphen, underscore) and anchors the match to the line start/end.

src/openhuman/tokenjuice/vendor/rules/network__ping.json (1)
29-33: Counter name may be misleading.

The counter named `"packet loss"` matches the literal phrase `"packet loss"` in the output, which typically appears once in ping's summary statistics. The name suggests it might count lost packets or capture the loss percentage, but it actually just counts occurrences of the phrase (usually 0 or 1). Consider renaming to `"has_packet_loss_stats"` or similar for clarity, or adjust the pattern to extract the actual loss percentage.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/network__ping.json` around lines 29 - 33, The rule named "packet loss" currently only matches the literal phrase "packet loss" and is misleading; update the rule in network__ping.json by renaming the "name" to something like "has_packet_loss_stats" to reflect it's a presence check, or change the "pattern" to capture the loss percentage (e.g., a regex capturing "\d+(?:\.\d+)?% packet loss") so the rule either accurately indicates presence or extracts the actual loss value.

src/openhuman/tokenjuice/vendor/rules/install__bun-install.json (1)
7-7: Minor inconsistency in regex styling for plural matching.

Line 7 uses `packages?` while line 39 uses `package(s)?` to match the same plural pattern. Both work correctly, but the capture group `(s)?` in line 39 is unnecessary for counter pattern matching and creates a slight stylistic inconsistency.

♻️ Proposed fix for consistent styling
```diff
 {
   "name": "package",
-  "pattern": "\\bpackage(s)?\\b",
+  "pattern": "\\bpackages?\\b",
   "flags": "i"
 }
```

Also applies to: 39-39
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/install__bun-install.json` at line 7, Two similar regex entries use inconsistent plural styles: one uses "packages?" and the other uses "package(s)?"; update the latter to match the former by replacing "package(s)?" with "packages?" so both patterns use the non-capturing optional "s" style, e.g. standardize the pattern string "Checked \\d+ installs? across \\d+ package(s)? \\(no changes\\)" to "Checked \\d+ installs? across \\d+ packages? \\(no changes\\)" wherever it appears.

src/openhuman/tokenjuice/vendor/rules/filesystem__ls.json (1)
26-26: Regex pattern is slightly redundant.

Similar to `service__ss.json`, the pattern `^(?!total\\s+\\d+).+\\S.*$` works correctly but has redundant parts. The `.+\\S.*$` construction could be simplified to `.+\\S` or `.*\\S.*$` for clarity.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/filesystem__ls.json` at line 26, The regex in the "pattern" value inside filesystem__ls.json is redundant: change the trailing ".+\\S.*$" part to the simpler equivalent (e.g., use ".*\\S" to ensure a non-space character exists) so the whole pattern still excludes lines starting with "total <number>" while being clearer; update the "pattern" entry accordingly.

src/openhuman/tokenjuice/vendor/rules/devops__kubectl-logs.json (2)
31-41: Consider consistent use of word boundaries in counters.

The counter patterns `"error"` (line 34) and `"warn"` (line 39) lack word boundaries, which differs from the oxlint rule that uses `\\berror\\b` and `\\bwarning\\b`. This will count substring matches. If you want consistency with the more precise oxlint approach, consider adding word boundaries.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/devops__kubectl-logs.json` around lines 31 - 41, The "counters" entries for name "error" and name "warning" use patterns "error" and "warn" which match substrings; update their "pattern" values to use word-boundary regexes (e.g., "\berror\b" and "\bwarning\b") to match whole words consistently with the oxlint rules, ensuring you keep the "flags" field (e.g., "i") unchanged and only modify the pattern strings in the counters array.
15-20: Consider word boundaries in keepPatterns for precision.

The first keepPattern `"error|warn|fatal|panic|exception|traceback|timeout|refused|fail"` will match substrings within words (e.g., "failure", "warning", "errors"). If the intent is to match these as distinct tokens, consider using word boundaries: `"\\b(error|warn|fatal|panic|exception|traceback|timeout|refused|fail)\\b"`.

📝 More precise pattern with word boundaries
```diff
 "keepPatterns": [
-  "error|warn|fatal|panic|exception|traceback|timeout|refused|fail",
+  "\\b(error|warn|fatal|panic|exception|traceback|timeout|refused|fail)\\b",
   "^Caused by:",
   "^Traceback"
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/devops__kubectl-logs.json` around lines 15 - 20, The keepPatterns entry in the "filters" object currently uses a plain alternation that will match substrings inside words; update the pattern in the "keepPatterns" array (the entry under filters → keepPatterns that contains "error|warn|fatal|panic|exception|traceback|timeout|refused|fail") to use word boundaries so tokens are matched precisely (e.g., replace it with a pattern using \b around the alternation, properly escaped for JSON like "\\b(error|warn|fatal|panic|exception|traceback|timeout|refused|fail)\\b").

src/openhuman/tokenjuice/vendor/rules/task__just.json (1)
25-27: `error` counter pattern is too broad for reliable signaling.

Line 26 can match non-failure text (e.g., "0 errors"), which may overcount failures.

Suggested regex tightening

```diff
 "counters": [
   {
     "name": "error",
-    "pattern": "error",
+    "pattern": "\\berror\\b|failed|recipe `[^`]+` failed",
     "flags": "i"
   }
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/task__just.json` around lines 25 - 27, The current "error" counter (name "error", pattern "error") is too broad and matches benign phrases like "0 errors"; update the regex to require whole-word matches and exclude cases where "error(s)" is preceded by a digit. Replace the pattern "error" with a tightened, case-insensitive pattern such as one using word boundaries and a negative lookbehind (e.g., (?<!\d)\berrors?\b with the "i" flag) so only standalone error words indicating failures are counted.

src/openhuman/tokenjuice/vendor/rules/lint__prettier-check.json (1)
23-28: Counter pattern for "warning" lacks word boundaries.

The pattern `"warn"` (line 26) uses substring matching without word boundaries, which may produce false positives (e.g., matching `"prewarning"`, `"warned"`, `"swarn"`). Consider using `\\bwarn\\b` for more precise matching:

```diff
 {
   "name": "warning",
-  "pattern": "warn",
+  "pattern": "\\bwarn\\b",
   "flags": "i"
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/lint__prettier-check.json` around lines 23 - 28, The "warning" counter uses a broad substring pattern "warn" which causes false positives; update the pattern value in the counters entry with "name": "warning" to use word boundaries (e.g., \\bwarn\\b) and keep the existing "flags": "i" so the matcher remains case-insensitive, ensuring only standalone "warn" tokens are matched.

src/openhuman/tokenjuice/vendor/rules/network__wget.json (1)
23-29: Counter pattern lacks word boundaries.

The pattern `"error|failed"` (line 26) uses substring matching without word boundaries, which may produce false positives (e.g., matching `"errors"`, `"failedx"`). Consider using word boundaries:

```diff
 {
   "name": "error",
-  "pattern": "error|failed",
+  "pattern": "\\b(error|failed)\\b",
   "flags": "i"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/network__wget.json` around lines 23 - 29, The "error" counter's pattern currently uses substring matching ("pattern": "error|failed") which can yield false positives; update the counter with name "error" (under the "counters" array) to use word-boundary anchors around each token (e.g., \berror\b and \bfailed\b) and preserve the "flags": "i" so the regex remains case-insensitive; ensure the JSON string properly escapes backslashes for the \b sequences.

src/openhuman/tokenjuice/vendor/rules/devops__docker-logs.json (2)
15-21: Filter pattern lacks word boundaries.

The keepPattern `"error|warn|fatal|panic|exception|traceback|timeout|refused|fail"` (line 17) uses substring matching without word boundaries, which may produce false positives (e.g., matching `"errors"`, `"warns"`, `"failures"`). Consider using word boundaries for more precise filtering:

```diff
 "keepPatterns": [
-  "error|warn|fatal|panic|exception|traceback|timeout|refused|fail",
+  "\\b(error|warn|fatal|panic|exception|traceback|timeout|refused|fail)\\b",
   "^Caused by:",
   "^Traceback"
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/devops__docker-logs.json` around lines 15 - 21, The keepPatterns entry under "filters" currently uses substring matching ("error|warn|fatal|panic|exception|traceback|timeout|refused|fail") which can produce false positives; update that pattern to enforce word boundaries (e.g., replace with a single regex using \b around the alternation like \b(?:error|warn|fatal|panic|exception|traceback|timeout|refused|fail)\b) so the "keepPatterns" in the filters block only matches whole words; ensure the pattern string is properly escaped if needed in the JSON value for the keepPatterns array.
31-42: Counter patterns lack word boundaries.

The counter patterns for `"error"` (line 34) and `"warn"` (line 39) use substring matching without word boundaries, which may produce false positives (e.g., `"error"` matches `"errors"`, `"warn"` matches `"warning"`). Consider using word boundaries:

```diff
 {
   "name": "error",
-  "pattern": "error",
+  "pattern": "\\berror\\b",
   "flags": "i"
 },
 {
   "name": "warning",
-  "pattern": "warn",
+  "pattern": "\\bwarn\\b",
   "flags": "i"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/devops__docker-logs.json` around lines 31 - 42, The JSON counters use substring patterns that can produce false positives: update the "pattern" values for the counters named "error" and "warning" to use regex word boundaries (e.g., \berror\b and \bwarn\b) and escape backslashes for JSON (e.g., "\\berror\\b" and "\\bwarn\\b") so only whole-word matches are counted; modify the entries under the "counters" array where name == "error" and name == "warning".

src/openhuman/tokenjuice/vendor/rules/service__launchctl.json (2)
14-20: Filter pattern lacks word boundaries.

The keepPattern `"error|failed|stopped|disabled"` (line 18) uses substring matching without word boundaries, which may produce false positives. Consider using word boundaries for more precise filtering:

```diff
 "keepPatterns": [
   "^-?\\d+\\s+\\S+\\s+.+",
   "^PID\\s+Status\\s+Label$",
-  "error|failed|stopped|disabled"
+  "\\b(error|failed|stopped|disabled)\\b"
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/service__launchctl.json` around lines 14 - 20, The "keepPatterns" filter uses the loose substring pattern "error|failed|stopped|disabled" which can yield false positives; update the pattern inside the keepPatterns array (the entry currently equal to that string) to require word boundaries (e.g., wrap the alternation in word-boundary anchors) so it only matches whole words; ensure the JSON string escapes backslashes correctly when you add the \b anchors.
30-40: Counter pattern lacks word boundaries.

The pattern `"error|failed|stopped|disabled"` (line 37) uses substring matching without word boundaries, matching the filter pattern on line 18 (which has the same issue). Consider using word boundaries:

```diff
 {
   "name": "error",
-  "pattern": "error|failed|stopped|disabled",
+  "pattern": "\\b(error|failed|stopped|disabled)\\b",
   "flags": "i"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/service__launchctl.json` around lines 30 - 40, Update the "counters" entry where "name": "error" to use word boundaries so tokens like "error", "failed", "stopped", and "disabled" are matched as full words; replace the pattern "error|failed|stopped|disabled" with a bounded group such as "\\b(?:error|failed|stopped|disabled)\\b" (keep the existing "flags": "i") to avoid unintended substring matches.

src/openhuman/tokenjuice/vendor/rules/observability__htop.json (1)
23-29: Counter pattern could benefit from word boundaries.

The pattern `"load average|tasks|zombie"` (line 26) uses substring matching. While `"load average"` is a phrase, the terms `"tasks"` and `"zombie"` could produce false positives without word boundaries (e.g., matching `"subtasks"`, `"zombies"`). Consider:

```diff
 {
   "name": "warning",
-  "pattern": "load average|tasks|zombie",
+  "pattern": "load average|\\btasks\\b|\\bzombie\\b",
   "flags": "i"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/observability__htop.json` around lines 23 - 29, The "warning" counter's "pattern" uses substring matching ("load average|tasks|zombie") which can produce false positives; update the "pattern" value for the counter with name "warning" to use word boundaries (e.g., wrap each token or the alternation in word-boundary anchors) so "tasks" and "zombie" only match whole words while preserving the "load average" phrase; modify the "pattern" field in the counters array accordingly.

src/openhuman/tokenjuice/vendor/rules/tests__jest.json (1)
29-40: Inconsistent counter naming: "failed test" vs "passed suite".

The counter names use inconsistent terminology: `"failed test"` (line 31) vs `"passed suite"` (line 36). Both patterns match the same Jest output format (`FAIL` and `PASS` prefixes), so the naming should be consistent. Consider using either `"failed suite"` and `"passed suite"`, or `"failed test"` and `"passed test"`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/tests__jest.json` around lines 29 - 40, The counters array contains inconsistent names ("failed test" vs "passed suite") for patterns that both target Jest suite-level output (^FAIL\s and ^PASS\s); rename one to match the other (e.g., change "failed test" to "failed suite" so both use "suite", or change "passed suite" to "passed test" so both use "test") by updating the "name" entries in the counters object that correspond to the "^FAIL\\s" and "^PASS\\s" patterns.

src/openhuman/tokenjuice/vendor/rules/generic__fallback.json (1)
20-31: Consider using word boundaries in counter patterns.

The counter patterns for `error` and `warning` use substring matching without word boundaries, which will produce false positives (e.g., `"error"` matches `"errors"`, `"warning"` matches `"prewarning"`). For more accurate counts, consider anchoring with `\b`:

```diff
 {
   "name": "error",
-  "pattern": "error",
+  "pattern": "\\berror\\b",
   "flags": "i"
 },
 {
   "name": "warning",
-  "pattern": "warning",
+  "pattern": "\\bwarning\\b",
   "flags": "i"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/generic__fallback.json` around lines 20 - 31, The counter patterns for the "error" and "warning" counters currently use plain substring matching and can produce false positives; update the "pattern" values for the counters named "error" and "warning" to use word-boundary regexes (e.g. \berror\b and \bwarning\b) so only whole-word matches are counted, preserving the "flags": "i" option; remember to escape backslashes properly in the JSON string.

src/openhuman/tokenjuice/vendor/rules/package__brew-install.json (1)
24-30: Counter pattern lacks word boundaries.

The pattern `"warning|error|failed"` (line 27) uses substring matching without word boundaries, which may produce false positives (e.g., matching `"prewarning"`, `"errors"`, `"failedx"`). Consider using word boundaries:

```diff
 {
   "name": "warning",
-  "pattern": "warning|error|failed",
+  "pattern": "\\b(warning|error|failed)\\b",
   "flags": "i"
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/package__brew-install.json` around lines 24 - 30, Update the "warning" counter pattern in the counters array so it matches whole words rather than substrings; replace the current pattern "warning|error|failed" with a word-boundary-aware alternation (for example \b(?:warning|error|failed)\b), making sure to escape backslashes in the JSON string (e.g. "\\b(?:warning|error|failed)\\b") and keep the "flags":"i" unchanged.

src/openhuman/tokenjuice/vendor/rules/archive__zip.json (1)
23-29: Counter name doesn't match its pattern.

The counter is named `"warning"` but the pattern `"adding|updating|warning|error"` matches both normal operational output and actual warnings/errors. Consider either:

- Renaming to `"event"` or `"operation"` to reflect that it counts all significant zip events, or
- Narrowing the pattern to `"warning|error"` to match only severity indicators.

✨ Option 1: Rename counter to match broader pattern

```diff
 "counters": [
   {
-    "name": "warning",
+    "name": "event",
     "pattern": "adding|updating|warning|error",
     "flags": "i"
   }
 ]
```

✨ Option 2: Narrow pattern to match counter name

```diff
 "counters": [
   {
     "name": "warning",
-    "pattern": "adding|updating|warning|error",
+    "pattern": "warning|error",
     "flags": "i"
   }
 ]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/archive__zip.json` around lines 23 - 29, The "counters" entry defines a counter with name "warning" but its pattern "adding|updating|warning|error" matches general operations as well as severities; update this to be consistent by either renaming the counter (change the "name" value from "warning" to something like "event" or "operation") to reflect the broader pattern, or narrow the "pattern" (remove "adding|updating") to just "warning|error" so the "warning" counter only matches severity indicators; locate the JSON object with "name": "warning" inside the "counters" array in archive__zip.json and apply one of these two changes.

src/openhuman/tokenjuice/vendor/rules/tests__yarn-test.json (1)
46-53: Tighten pass/fail counters to reduce false positives.

Current patterns are broad substrings and can inflate counts.

♻️ Proposed counter refinement

```diff
-  "pattern": "fail",
+  "pattern": "\\bfail(?:ed|ing|s)?\\b",
   "flags": "i"
 },
 {
   "name": "passed",
-  "pattern": "pass",
+  "pattern": "\\bpass(?:ed|ing|es)?\\b",
   "flags": "i"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/tests__yarn-test.json` around lines 46 - 53, The "failed" and "passed" patterns are too broad ("fail" and "pass") and cause false positives; update the entries for name "failed" and name "passed" in tests__yarn-test.json to use stricter regex with word boundaries and optional suffixes (e.g. change pattern "fail" to "\\bfail(?:ed)?\\b" and pattern "pass" to "\\bpass(?:ed)?\\b" and keep the "i" flag), so they match whole words like "fail/failed" and "pass/passed" instead of substrings.

src/openhuman/tokenjuice/vendor/rules/tests__vitest.json (1)
29-30: Make timing keep-patterns whitespace-tolerant.

Line 29 and line 30 currently require exactly three leading spaces, which is brittle across Vitest output variants.

♻️ Proposed regex hardening

```diff
-  "^   Start at\\s+.+",
-  "^   Duration\\s+.+",
+  "^\\s*Start at\\s+.+",
+  "^\\s*Duration\\s+.+",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/tests__vitest.json` around lines 29 - 30, The two timing regex patterns ("^   Start at\\s+.+", "^   Duration\\s+.+") are brittle because they demand exactly three spaces; update them in tests__vitest.json to accept any leading whitespace (e.g., change the prefixes to "^\\s+Start at\\s+.+", "^\\s+Duration\\s+.+") so Vitest output variants with different indentation still match.

src/openhuman/tokenjuice/vendor/rules/devops__docker-compose.json (1)
35-37: `service` counter pattern is too broad for its label.

Current regex can count non-service lines, which makes the metric misleading.

♻️ Proposed counter narrowing

```diff
-  "pattern": "^(?!NAME\\s|SERVICE\\s|CONTAINER ID\\s).+\\S.*$"
+  "pattern": "^(?!NAME\\s|SERVICE\\s|CONTAINER ID\\s)[-a-z0-9_.]+\\s+\\S+.*$"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/devops__docker-compose.json` around lines 35 - 37, The "service" counter's regex is too permissive and will match arbitrary non-header lines; update the "pattern" for the "service" rule so it only matches valid Docker Compose service names (e.g. alphanumerics, dots, dashes, underscores) at line start followed by whitespace and the rest of the row. Replace the current pattern value on the "service" entry with a narrower one such as ^[A-Za-z0-9._-]+\s+\S.*$ so the rule (named "service") only counts true service-name lines.

src/openhuman/tokenjuice/vendor/rules/cloud__az.json (1)
14-29: Consider adding keep-patterns to match the rule's stated preservation goal.

Right now this rule relies on head/tail windows only; long Azure CLI outputs can still drop key middle diagnostics.

Possible refinement

```diff
 "transforms": {
   "stripAnsi": true,
   "dedupeAdjacent": true,
   "trimEmptyEdges": true
 },
+"filters": {
+  "keepPatterns": [
+    "(?i)\\b(error|failed|forbidden|not found|conflict|unauthorized|exception)\\b",
+    "^\\s*(Name|Resource Group|Type|Location|ProvisioningState|Status)\\s*[:=].+"
+  ]
+},
 "summarize": {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/vendor/rules/cloud__az.json` around lines 14 - 29, The failure preservation currently only uses head/tail windows ("failure": { "preserveOnFailure": true, "head": 14, "tail": 14 }) which can drop important middle diagnostics; add a "keep-patterns" array under the "failure" object to anchor key lines (e.g., patterns matching error messages and common Azure CLI diagnostics) — include patterns such as "error|failed|forbidden|not found", "\bException\b", "Traceback", "Request ID", and "ActivityId" (with "flags": "i" if supported) so the rule preserves any lines matching those patterns in addition to head/tail.

src/openhuman/tokenjuice/reduce.rs (1)
878-898: Precompile these regexes instead of rebuilding them per call.

The helper comment says "avoid repeated compilation," but every call still does `Regex::new`. Since `rewrite_git_status_line` and `format_gh_table_line` invoke these on every line, large outputs pay regex compilation costs over and over. Fixed patterns here should be cached once with `LazyLock`/`OnceLock` or moved into prebuilt statics.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/reduce.rs` around lines 878 - 898, The three helpers regex_match, regex_replace, and regex_captures currently compile patterns on every call; change them to use precompiled Regex instances by moving fixed patterns into statics (e.g., once_cell::sync::Lazy or std::sync::OnceLock) or by accepting &Regex references so callers like rewrite_git_status_line and format_gh_table_line pass prebuilt Regexes; update the helpers to take &Regex (or look up a cached Regex by pattern) and ensure all call sites construct or reference the static Lazy/OnceLock Regexes once instead of calling Regex::new on every invocation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a22a1c40-2131-4592-912d-2a8fe4a7400f
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (116)
| "exitCode": 1, | ||
| "stdout": " Compiling mylib v0.1.0\n Finished test [unoptimized + debuginfo] target(s) in 2.50s\n Running unittests src/lib.rs\nrunning 3 tests\ntest tests::test_a ... ok\ntest tests::test_b ... FAILED\ntest tests::test_c ... ok\n\nfailures:\n\n---- tests::test_b stdout ----\nthread 'tests::test_b' panicked at 'assertion failed', src/lib.rs:42:5\n\nfailures:\n tests::test_b\n\ntest result: FAILED. 2 passed; 1 failed; 0 ignored\n" | ||
| }, | ||
| "expectedOutput": "exit 1\n2 failed tests, 2 passed tests\nrunning 3 tests\ntest tests::test_a ... ok\ntest tests::test_b ... FAILED\ntest tests::test_c ... ok\n\nfailures:\n\n---- tests::test_b stdout ----\nthread 'tests::test_b' panicked at 'assertion failed', src/lib.rs:42:5\n\nfailures:\n tests::test_b\n\ntest result: FAILED. 2 passed; 1 failed; 0 ignored" |
Incorrect test count in expected output.
The expected output shows "2 failed tests, 2 passed tests" but the actual test result in the stdout (line 7) shows "test result: FAILED. 2 passed; 1 failed; 0 ignored". The expected output should show 1 failed test, not 2.
🐛 Proposed fix for the test count
- "expectedOutput": "exit 1\n2 failed tests, 2 passed tests\nrunning 3 tests\ntest tests::test_a ... ok\ntest tests::test_b ... FAILED\ntest tests::test_c ... ok\n\nfailures:\n\n---- tests::test_b stdout ----\nthread 'tests::test_b' panicked at 'assertion failed', src/lib.rs:42:5\n\nfailures:\n tests::test_b\n\ntest result: FAILED. 2 passed; 1 failed; 0 ignored"
+ "expectedOutput": "exit 1\n1 failed test, 2 passed tests\nrunning 3 tests\ntest tests::test_a ... ok\ntest tests::test_b ... FAILED\ntest tests::test_c ... ok\n\nfailures:\n\n---- tests::test_b stdout ----\nthread 'tests::test_b' panicked at 'assertion failed', src/lib.rs:42:5\n\nfailures:\n tests::test_b\n\ntest result: FAILED. 2 passed; 1 failed; 0 ignored"📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/tests/fixtures/cargo_test_failure.fixture.json` at line 9, The expectedOutput string contains an incorrect test count: update the `"expectedOutput"` value in the fixture so the summary line that currently reads "2 failed tests, 2 passed tests" reflects one failed test; change it to "1 failed test, 2 passed tests" (edit the expectedOutput field in src/openhuman/tokenjuice/tests/fixtures/cargo_test_failure.fixture.json so the failure count matches the "test result: FAILED. 2 passed; 1 failed; 0 ignored" output).
| //! Integration tests: iterate vendored fixtures and assert parity with | ||
| //! expected output. | ||
| //! | ||
| //! Each `*.fixture.json` in the `fixtures/` directory has the shape: | ||
| //! ```json | ||
| //! { | ||
| //! "description": "...", | ||
| //! "input": { ... ToolExecutionInput ... }, | ||
| //! "expectedOutput": "..." | ||
| //! } | ||
| //! ``` | ||
| //! | ||
| //! The test loads all rules (builtin-only) and runs `reduce_execution_with_rules`, | ||
| //! then asserts that `result.inline_text == expectedOutput`. | ||
| //! | ||
| //! If the Rust port produces different output from a fixture (e.g. due to | ||
| //! `Intl.Segmenter` vs `unicode-segmentation` boundary differences), the | ||
| //! discrepancy is documented in `KNOWN_DRIFT.md` and the fixture is listed | ||
| //! in `KNOWN_DRIFT_FIXTURES` below to be skipped in CI. | ||
|
|
||
| use openhuman_core::openhuman::tokenjuice::{ | ||
| reduce::reduce_execution_with_rules, | ||
| rules::load_builtin_rules, | ||
| types::{ReduceOptions, RuleFixture}, | ||
| }; | ||
|
|
||
| /// Fixture file names (relative to `fixtures/`) that are known to drift from | ||
| /// upstream TS output. Add entries here with a comment explaining why. | ||
| const KNOWN_DRIFT_FIXTURES: &[&str] = &[ | ||
| // None currently — add entries if parity tests fail due to | ||
| // Intl.Segmenter vs unicode-segmentation differences. | ||
| ]; | ||
|
|
||
fn fixtures_dir() -> std::path::PathBuf {
    // CARGO_MANIFEST_DIR points to the crate's manifest directory; navigate to the fixtures
    let manifest = std::env::var("CARGO_MANIFEST_DIR").expect("CARGO_MANIFEST_DIR not set");
    std::path::PathBuf::from(manifest)
        .join("src/openhuman/tokenjuice/tests/fixtures")
}

#[test]
fn all_fixtures_pass() {
    let dir = fixtures_dir();
    if !dir.is_dir() {
        eprintln!("fixtures directory not found: {}", dir.display());
        return;
    }

    let rules = load_builtin_rules();
    let mut passed = 0usize;
    let mut skipped = 0usize;
    let mut failed = 0usize;

    let mut entries: Vec<_> = std::fs::read_dir(&dir)
        .expect("read fixtures dir")
        .filter_map(|e| e.ok())
        .filter(|e| {
            e.file_name()
                .to_string_lossy()
                .ends_with(".fixture.json")
        })
        .collect();
    entries.sort_by_key(|e| e.file_name());

    for entry in entries {
        let path = entry.path();
        let name = path.file_name().unwrap().to_string_lossy().to_string();

        if KNOWN_DRIFT_FIXTURES.iter().any(|&s| name == s) {
            eprintln!("[SKIP] {} (known drift)", name);
            skipped += 1;
            continue;
        }

        let json = std::fs::read_to_string(&path).expect("read fixture");
        let fixture: RuleFixture = match serde_json::from_str(&json) {
            Ok(f) => f,
            Err(e) => {
                eprintln!("[FAIL] {} — JSON parse error: {}", name, e);
                failed += 1;
                continue;
            }
        };

        let opts = fixture.options.unwrap_or_default();
        let result = reduce_execution_with_rules(fixture.input, &rules, &opts);

        if result.inline_text.trim() == fixture.expected_output.trim() {
            passed += 1;
        } else {
            eprintln!("[FAIL] {}", name);
            if let Some(desc) = &fixture.description {
                eprintln!("  description: {}", desc);
            }
            eprintln!("  expected: {:?}", fixture.expected_output.trim());
            eprintln!("  actual: {:?}", result.inline_text.trim());
            failed += 1;
        }
    }

    eprintln!(
        "\nfixture summary: {} passed, {} skipped, {} failed",
        passed, skipped, failed
    );

    assert_eq!(
        failed, 0,
        "{} fixture(s) failed — see output above",
        failed
    );
}
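For reference, here is a hypothetical `*.fixture.json` that would satisfy the loop above. The top-level field names (`description`, `input`, `options`, `expected_output`) are taken from the struct accesses in the test itself; the inner shape of `input` and `options` is an assumption, since `RuleFixture`'s component types are defined elsewhere in the tokenjuice module:

```json
{
  "description": "hypothetical: collapse verbose cargo build output",
  "input": {
    "tool_name": "shell",
    "arguments": { "command": "cargo build" },
    "output": "   Compiling foo v0.1.0\n   Compiling bar v0.2.0\n    Finished dev target(s)"
  },
  "options": {},
  "expected_output": "Finished dev target(s)"
}
```

Note that `options` may be omitted entirely: the test falls back to the default via `fixture.options.unwrap_or_default()`, and both sides of the comparison are trimmed, so leading/trailing whitespace in `expected_output` is not significant.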
❓ Verification inconclusive
Script executed:
#!/bin/bash
set -euo pipefail
echo "Looking for any Rust module declaration that would include src/openhuman/tokenjuice/tests/integration.rs"
rg -n --type=rust -C2 'mod\s+tests\b|mod\s+integration\b|tokenjuice/tests/integration\.rs|#\[path\s*=\s*".*integration\.rs"\]' src tests
echo
echo "TokenJuice module root for context:"
sed -n '1,120p' src/openhuman/tokenjuice/mod.rs

Repository: tinyhumansai/openhuman
Exit code: 0
stdout:
Looking for any Rust module declaration that would include src/openhuman/tokenjuice/tests/integration.rs
src/rpc/dispatch.rs-43-
src/rpc/dispatch.rs-44-#[cfg(test)]
src/rpc/dispatch.rs:45:mod tests {
src/rpc/dispatch.rs-46- use serde_json::json;
src/rpc/dispatch.rs-47-
--
src/openhuman/webhooks/types.rs-160-
src/openhuman/webhooks/types.rs-161-#[cfg(test)]
src/openhuman/webhooks/types.rs:162:mod tests {
src/openhuman/webhooks/types.rs-163- use super::*;
src/openhuman/webhooks/types.rs-164- use serde_json::json;
--
src/openhuman/tokenjuice/classify.rs-205-
src/openhuman/tokenjuice/classify.rs-206-#[cfg(test)]
src/openhuman/tokenjuice/classify.rs:207:mod tests {
src/openhuman/tokenjuice/classify.rs-208- use super::*;
src/openhuman/tokenjuice/classify.rs-209- use crate::openhuman::tokenjuice::rules::load_builtin_rules;
--
src/openhuman/tokenjuice/tool_integration.rs-207-
src/openhuman/tokenjuice/tool_integration.rs-208-#[cfg(test)]
src/openhuman/tokenjuice/tool_integration.rs:209:mod tests {
src/openhuman/tokenjuice/tool_integration.rs-210- use super::*;
src/openhuman/tokenjuice/tool_integration.rs-211- use serde_json::json;
--
src/openhuman/tokenjuice/text/width.rs-96-
src/openhuman/tokenjuice/text/width.rs-97-#[cfg(test)]
src/openhuman/tokenjuice/text/width.rs:98:mod tests {
src/openhuman/tokenjuice/text/width.rs-99- use super::*;
src/openhuman/tokenjuice/text/width.rs-100-
--
src/openhuman/tokenjuice/text/process.rs-184-
src/openhuman/tokenjuice/text/process.rs-185-#[cfg(test)]
src/openhuman/tokenjuice/text/process.rs:186:mod tests {
src/openhuman/tokenjuice/text/process.rs-187- use super::*;
src/openhuman/tokenjuice/text/process.rs-188-
--
src/openhuman/tokenjuice/rules/loader.rs-249-
src/openhuman/tokenjuice/rules/loader.rs-250-#[cfg(test)]
src/openhuman/tokenjuice/rules/loader.rs:251:mod tests {
src/openhuman/tokenjuice/rules/loader.rs-252- use super::*;
src/openhuman/tokenjuice/rules/loader.rs-253-
--
src/openhuman/tokenjuice/rules/builtin.rs-368-
src/openhuman/tokenjuice/rules/builtin.rs-369-#[cfg(test)]
src/openhuman/tokenjuice/rules/builtin.rs:370:mod tests {
src/openhuman/tokenjuice/rules/builtin.rs-371- use super::*;
src/openhuman/tokenjuice/rules/builtin.rs-372- use crate::openhuman::tokenjuice::rules::compiler::compile_rule;
--
src/openhuman/tokenjuice/rules/compiler.rs-123-
src/openhuman/tokenjuice/rules/compiler.rs-124-#[cfg(test)]
src/openhuman/tokenjuice/rules/compiler.rs:125:mod tests {
src/openhuman/tokenjuice/rules/compiler.rs-126- use super::*;
src/openhuman/tokenjuice/rules/compiler.rs-127- use crate::openhuman::tokenjuice::types::{JsonRule, RuleMatch};
--
src/openhuman/tokenjuice/text/ansi.rs-38-
src/openhuman/tokenjuice/text/ansi.rs-39-#[cfg(test)]
src/openhuman/tokenjuice/text/ansi.rs:40:mod tests {
src/openhuman/tokenjuice/text/ansi.rs-41- use super::*;
src/openhuman/tokenjuice/text/ansi.rs-42-
--
src/openhuman/tokenjuice/reduce.rs-903-
src/openhuman/tokenjuice/reduce.rs-904-#[cfg(test)]
src/openhuman/tokenjuice/reduce.rs:905:mod tests {
src/openhuman/tokenjuice/reduce.rs-906- use super::*;
src/openhuman/tokenjuice/reduce.rs-907- use crate::openhuman::tokenjuice::rules::load_builtin_rules;
--
src/openhuman/webhooks/schemas.rs-402-
src/openhuman/webhooks/schemas.rs-403-#[cfg(test)]
src/openhuman/webhooks/schemas.rs:404:mod tests {
src/openhuman/webhooks/schemas.rs-405- use super::*;
src/openhuman/webhooks/schemas.rs-406- use serde_json::json;
--
src/openhuman/workspace/ops.rs-99-
src/openhuman/workspace/ops.rs-100-#[cfg(test)]
src/openhuman/workspace/ops.rs:101:mod tests {
src/openhuman/workspace/ops.rs-102- use super::*;
src/openhuman/workspace/ops.rs-103- use crate::openhuman::config::TEST_ENV_LOCK as ENV_LOCK;
--
src/openhuman/webhooks/ops.rs-214-
src/openhuman/webhooks/ops.rs-215-#[cfg(test)]
src/openhuman/webhooks/ops.rs:216:mod tests {
src/openhuman/webhooks/ops.rs-217- use super::*;
src/openhuman/webhooks/ops.rs-218- use crate::openhuman::credentials::{
--
src/openhuman/webhooks/router.rs-510-
src/openhuman/webhooks/router.rs-511-#[cfg(test)]
src/openhuman/webhooks/router.rs:512:mod tests {
src/openhuman/webhooks/router.rs-513- use super::*;
src/openhuman/webhooks/router.rs-514- use serde_json::json;
--
src/openhuman/webhooks/mod.rs-23-
src/openhuman/webhooks/mod.rs-24-#[cfg(test)]
src/openhuman/webhooks/mod.rs:25:mod tests;
--
src/openhuman/webhooks/bus.rs-137-
src/openhuman/webhooks/bus.rs-138-#[cfg(test)]
src/openhuman/webhooks/bus.rs:139:mod tests {
src/openhuman/webhooks/bus.rs-140- use super::*;
src/openhuman/webhooks/bus.rs-141- use crate::openhuman::webhooks::WebhookRequest;
--
src/openhuman/util.rs-52-
src/openhuman/util.rs-53-#[cfg(test)]
src/openhuman/util.rs:54:mod tests {
src/openhuman/util.rs-55- use super::*;
src/openhuman/util.rs-56-
--
src/openhuman/tree_summarizer/types.rs-201-
src/openhuman/tree_summarizer/types.rs-202-#[cfg(test)]
src/openhuman/tree_summarizer/types.rs:203:mod tests {
src/openhuman/tree_summarizer/types.rs-204- use super::*;
src/openhuman/tree_summarizer/types.rs-205- use chrono::TimeZone;
--
src/openhuman/tree_summarizer/schemas.rs-292-
src/openhuman/tree_summarizer/schemas.rs-293-#[cfg(test)]
src/openhuman/tree_summarizer/schemas.rs:294:mod tests {
src/openhuman/tree_summarizer/schemas.rs-295- use super::*;
src/openhuman/tree_summarizer/schemas.rs-296- use serde_json::json;
--
src/openhuman/tree_summarizer/store.rs-860-
src/openhuman/tree_summarizer/store.rs-861-#[cfg(test)]
src/openhuman/tree_summarizer/store.rs:862:mod tests {
src/openhuman/tree_summarizer/store.rs-863- use super::*;
src/openhuman/tree_summarizer/store.rs-864- use tempfile::TempDir;
--
src/openhuman/voice/dictation_listener.rs-212-
src/openhuman/voice/dictation_listener.rs-213-#[cfg(test)]
src/openhuman/voice/dictation_listener.rs:214:mod tests {
src/openhuman/voice/dictation_listener.rs-215- use super::*;
src/openhuman/voice/dictation_listener.rs-216-
--
src/openhuman/voice/hallucination.rs-196-
src/openhuman/voice/hallucination.rs-197-#[cfg(test)]
src/openhuman/voice/hallucination.rs:198:mod tests {
src/openhuman/voice/hallucination.rs-199- use super::*;
src/openhuman/voice/hallucination.rs-200-
--
src/openhuman/voice/streaming.rs-254-
src/openhuman/voice/streaming.rs-255-#[cfg(test)]
src/openhuman/voice/streaming.rs:256:mod tests {
src/openhuman/voice/streaming.rs-257- use super::*;
src/openhuman/voice/streaming.rs-258-
--
src/openhuman/voice/text_input.rs-168-
src/openhuman/voice/text_input.rs-169-#[cfg(test)]
src/openhuman/voice/text_input.rs:170:mod tests {
src/openhuman/voice/text_input.rs-171- use super::*;
src/openhuman/voice/text_input.rs-172-
--
src/openhuman/voice/schemas.rs-515-
src/openhuman/voice/schemas.rs-516-#[cfg(test)]
src/openhuman/voice/schemas.rs:517:mod tests {
src/openhuman/voice/schemas.rs-518- use super::*;
src/openhuman/voice/schemas.rs-519- use serde_json::json;
--
src/openhuman/voice/types.rs-60-
src/openhuman/voice/types.rs-61-#[cfg(test)]
src/openhuman/voice/types.rs:62:mod tests {
src/openhuman/voice/types.rs-63- use super::*;
src/openhuman/voice/types.rs-64-
--
src/openhuman/voice/server.rs-1066-
src/openhuman/voice/server.rs-1067-#[cfg(test)]
src/openhuman/voice/server.rs:1068:mod tests {
src/openhuman/voice/server.rs-1069- use super::*;
src/openhuman/voice/server.rs-1070- use crate::openhuman::voice::audio_capture::RecordingResult;
--
src/openhuman/voice/postprocess.rs-150-
src/openhuman/voice/postprocess.rs-151-#[cfg(test)]
src/openhuman/voice/postprocess.rs:152:mod tests {
src/openhuman/voice/postprocess.rs-153- use super::*;
src/openhuman/voice/postprocess.rs-154- use axum::{routing::post, Json, Router};
--
src/openhuman/voice/hotkey.rs-329-
src/openhuman/voice/hotkey.rs-330-#[cfg(test)]
src/openhuman/voice/hotkey.rs:331:mod tests {
src/openhuman/voice/hotkey.rs-332- use super::*;
src/openhuman/voice/hotkey.rs-333- use std::sync::atomic::AtomicBool;
--
src/openhuman/voice/ops.rs-286-
src/openhuman/voice/ops.rs-287-#[cfg(test)]
src/openhuman/voice/ops.rs:288:mod tests {
src/openhuman/voice/ops.rs-289- use super::*;
src/openhuman/voice/ops.rs-290-
--
src/openhuman/tree_summarizer/bus.rs-76-
src/openhuman/tree_summarizer/bus.rs-77-#[cfg(test)]
src/openhuman/tree_summarizer/bus.rs:78:mod tests {
src/openhuman/tree_summarizer/bus.rs-79- use super::*;
src/openhuman/tree_summarizer/bus.rs-80-
--
src/openhuman/voice/audio_capture.rs-614-
src/openhuman/voice/audio_capture.rs-615-#[cfg(test)]
src/openhuman/voice/audio_capture.rs:616:mod tests {
src/openhuman/voice/audio_capture.rs-617- use super::*;
src/openhuman/voice/audio_capture.rs-618- use cpal::{SampleFormat, SampleRate, SupportedBufferSize, SupportedStreamConfigRange};
--
src/openhuman/tools/traits.rs-137-
src/openhuman/tools/traits.rs-138-#[cfg(test)]
src/openhuman/tools/traits.rs:139:mod tests {
src/openhuman/tools/traits.rs-140- use super::*;
src/openhuman/tools/traits.rs-141-
--
src/openhuman/tools/schema.rs-537-
src/openhuman/tools/schema.rs-538-#[cfg(test)]
src/openhuman/tools/schema.rs:539:mod tests {
src/openhuman/tools/schema.rs-540- use super::*;
src/openhuman/tools/schema.rs-541-
--
src/openhuman/tools/orchestrator_tools.rs-180-
src/openhuman/tools/orchestrator_tools.rs-181-#[cfg(test)]
src/openhuman/tools/orchestrator_tools.rs:182:mod tests {
src/openhuman/tools/orchestrator_tools.rs-183- use super::*;
src/openhuman/tools/orchestrator_tools.rs-184- use crate::openhuman::agent::harness::definition::{
--
src/openhuman/tools/ops.rs-293-
src/openhuman/tools/ops.rs-294-#[cfg(test)]
src/openhuman/tools/ops.rs:295:mod tests {
src/openhuman/tools/ops.rs-296- use super::*;
src/openhuman/tools/ops.rs-297- use crate::openhuman::config::{BrowserConfig, Config, MemoryConfig};
--
src/openhuman/threads/schemas.rs-305-
src/openhuman/threads/schemas.rs-306-#[cfg(test)]
src/openhuman/threads/schemas.rs:307:mod tests {
src/openhuman/threads/schemas.rs-308- use super::*;
src/openhuman/threads/schemas.rs-309-
--
src/openhuman/tools/local_cli.rs-146-
src/openhuman/tools/local_cli.rs-147-#[cfg(test)]
src/openhuman/tools/local_cli.rs:148:mod tests {
src/openhuman/tools/local_cli.rs-149- use super::*;
src/openhuman/tools/local_cli.rs-150-
--
src/openhuman/team/schemas.rs-529-
src/openhuman/team/schemas.rs-530-#[cfg(test)]
src/openhuman/team/schemas.rs:531:mod tests {
src/openhuman/team/schemas.rs-532- use super::*;
src/openhuman/team/schemas.rs-533-
--
src/openhuman/text_input/types.rs-119-
src/openhuman/text_input/types.rs-120-#[cfg(test)]
src/openhuman/text_input/types.rs:121:mod tests {
src/openhuman/text_input/types.rs-122- use super::*;
src/openhuman/text_input/types.rs-123- use crate::openhuman::accessibility::ElementBounds;
--
src/openhuman/tools/impl/cron/update.rs-83-
src/openhuman/tools/impl/cron/update.rs-84-#[cfg(test)]
src/openhuman/tools/impl/cron/update.rs:85:mod tests {
src/openhuman/tools/impl/cron/update.rs-86- use super::*;
src/openhuman/tools/impl/cron/update.rs-87- use crate::openhuman::config::Config;
--
src/openhuman/tools/impl/cron/remove.rs-58-
src/openhuman/tools/impl/cron/remove.rs-59-#[cfg(test)]
src/openhuman/tools/impl/cron/remove.rs:60:mod tests {
src/openhuman/tools/impl/cron/remove.rs-61- use super::*;
src/openhuman/tools/impl/cron/remove.rs-62- use crate::openhuman::config::Config;
--
src/openhuman/tools/impl/system/tool_stats.rs-130-
src/openhuman/tools/impl/system/tool_stats.rs-131-#[cfg(test)]
src/openhuman/tools/impl/system/tool_stats.rs:132:mod tests {
src/openhuman/tools/impl/system/tool_stats.rs-133- use super::*;
src/openhuman/tools/impl/system/tool_stats.rs-134- use crate::openhuman::memory::{Memory, MemoryCategory, MemoryEntry};
--
src/openhuman/tools/impl/system/workspace_state.rs-128-
src/openhuman/tools/impl/system/workspace_state.rs-129-#[cfg(test)]
src/openhuman/tools/impl/system/workspace_state.rs:130:mod tests {
src/openhuman/tools/impl/system/workspace_state.rs-131- use super::*;
src/openhuman/tools/impl/system/workspace_state.rs-132- use serde_json::json;
--
src/openhuman/tools/impl/system/shell.rs-147-
src/openhuman/tools/impl/system/shell.rs-148-#[cfg(test)]
src/openhuman/tools/impl/system/shell.rs:149:mod tests {
src/openhuman/tools/impl/system/shell.rs-150- use super::*;
src/openhuman/tools/impl/system/shell.rs-151- use crate::openhuman::agent::host_runtime::{NativeRuntime, RuntimeAdapter};
--
src/openhuman/tools/impl/system/pushover.rs-197-
src/openhuman/tools/impl/system/pushover.rs-198-#[cfg(test)]
src/openhuman/tools/impl/system/pushover.rs:199:mod tests {
src/openhuman/tools/impl/system/pushover.rs-200- use super::*;
src/openhuman/tools/impl/system/pushover.rs-201- use crate::openhuman::security::AutonomyLevel;
--
src/openhuman/tools/impl/system/schedule.rs-303-
src/openhuman/tools/impl/system/schedule.rs-304-#[cfg(test)]
src/openhuman/tools/impl/system/schedule.rs:305:mod tests {
src/openhuman/tools/impl/system/schedule.rs-306- use super::*;
src/openhuman/tools/impl/system/schedule.rs-307- use crate::openhuman::security::AutonomyLevel;
--
src/openhuman/update/scheduler.rs-82-
src/openhuman/update/scheduler.rs-83-#[cfg(test)]
src/openhuman/update/scheduler.rs:84:mod tests {
src/openhuman/update/scheduler.rs-85- use super::*;
src/openhuman/update/scheduler.rs-86-
--
src/openhuman/update/ops.rs-119-
src/openhuman/update/ops.rs-120-#[cfg(test)]
src/openhuman/update/ops.rs:121:mod tests {
src/openhuman/update/ops.rs-122- use super::*;
src/openhuman/update/ops.rs-123-
--
src/openhuman/update/core.rs-253-
src/openhuman/update/core.rs-254-#[cfg(test)]
src/openhuman/update/core.rs:255:mod tests {
src/openhuman/update/core.rs-256- use super::*;
src/openhuman/update/core.rs-257-
--
src/openhuman/tools/impl/system/insert_sql_record.rs-141-
src/openhuman/tools/impl/system/insert_sql_record.rs-142-#[cfg(test)]
src/openhuman/tools/impl/system/insert_sql_record.rs:143:mod tests {
src/openhuman/tools/impl/system/insert_sql_record.rs-144- use super::*;
src/openhuman/tools/impl/system/insert_sql_record.rs-145-
--
src/openhuman/tools/impl/cron/runs.rs-102-
src/openhuman/tools/impl/cron/runs.rs-103-#[cfg(test)]
src/openhuman/tools/impl/cron/runs.rs:104:mod tests {
src/openhuman/tools/impl/cron/runs.rs-105- use super::*;
src/openhuman/tools/impl/cron/runs.rs-106- use crate::openhuman::config::Config;
--
src/openhuman/tools/impl/cron/run.rs-90-
src/openhuman/tools/impl/cron/run.rs-91-#[cfg(test)]
src/openhuman/tools/impl/cron/run.rs:92:mod tests {
src/openhuman/tools/impl/cron/run.rs-93- use super::*;
src/openhuman/tools/impl/cron/run.rs-94- use crate::openhuman::config::Config;
--
src/openhuman/tools/impl/system/proxy_config.rs-401-
src/openhuman/tools/impl/system/proxy_config.rs-402-#[cfg(test)]
src/openhuman/tools/impl/system/proxy_config.rs:403:mod tests {
src/openhuman/tools/impl/system/proxy_config.rs-404- use super::*;
src/openhuman/tools/impl/system/proxy_config.rs-405- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/tools/impl/cron/add.rs-178-
src/openhuman/tools/impl/cron/add.rs-179-#[cfg(test)]
src/openhuman/tools/impl/cron/add.rs:180:mod tests {
src/openhuman/tools/impl/cron/add.rs-181- use super::*;
src/openhuman/tools/impl/cron/add.rs-182- use crate::openhuman::config::Config;
--
src/openhuman/update/schemas.rs-117-
src/openhuman/update/schemas.rs-118-#[cfg(test)]
src/openhuman/update/schemas.rs:119:mod tests {
src/openhuman/update/schemas.rs-120- use super::*;
src/openhuman/update/schemas.rs-121-
--
src/openhuman/tools/impl/network/http_request.rs-409-
src/openhuman/tools/impl/network/http_request.rs-410-#[cfg(test)]
src/openhuman/tools/impl/network/http_request.rs:411:mod tests {
src/openhuman/tools/impl/network/http_request.rs-412- use super::*;
src/openhuman/tools/impl/network/http_request.rs-413- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/tools/impl/cron/list.rs-49-
src/openhuman/tools/impl/cron/list.rs-50-#[cfg(test)]
src/openhuman/tools/impl/cron/list.rs:51:mod tests {
src/openhuman/tools/impl/cron/list.rs-52- use super::*;
src/openhuman/tools/impl/cron/list.rs-53- use crate::openhuman::config::Config;
--
src/openhuman/tools/impl/network/composio.rs-747-
src/openhuman/tools/impl/network/composio.rs-748-#[cfg(test)]
src/openhuman/tools/impl/network/composio.rs:749:mod tests {
src/openhuman/tools/impl/network/composio.rs-750- use super::*;
src/openhuman/tools/impl/network/composio.rs-751- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/tools/impl/memory/forget.rs-77-
src/openhuman/tools/impl/memory/forget.rs-78-#[cfg(test)]
src/openhuman/tools/impl/memory/forget.rs:79:mod tests {
src/openhuman/tools/impl/memory/forget.rs-80- use super::*;
src/openhuman/tools/impl/memory/forget.rs-81- use crate::openhuman::memory::{embeddings::NoopEmbedding, MemoryCategory, UnifiedMemory};
--
src/openhuman/tools/impl/filesystem/read_diff.rs-105-
src/openhuman/tools/impl/filesystem/read_diff.rs-106-#[cfg(test)]
src/openhuman/tools/impl/filesystem/read_diff.rs:107:mod tests {
src/openhuman/tools/impl/filesystem/read_diff.rs-108- use super::*;
src/openhuman/tools/impl/filesystem/read_diff.rs-109- use serde_json::json;
--
src/openhuman/tools/impl/filesystem/update_memory_md.rs-246-
src/openhuman/tools/impl/filesystem/update_memory_md.rs-247-#[cfg(test)]
src/openhuman/tools/impl/filesystem/update_memory_md.rs:248:mod tests {
src/openhuman/tools/impl/filesystem/update_memory_md.rs-249- use super::*;
src/openhuman/tools/impl/filesystem/update_memory_md.rs-250-
--
src/openhuman/tools/impl/filesystem/run_linter.rs-125-
src/openhuman/tools/impl/filesystem/run_linter.rs-126-#[cfg(test)]
src/openhuman/tools/impl/filesystem/run_linter.rs:127:mod tests {
src/openhuman/tools/impl/filesystem/run_linter.rs-128- use super::*;
src/openhuman/tools/impl/filesystem/run_linter.rs-129- use serde_json::json;
--
src/openhuman/tools/impl/memory/store.rs-98-
src/openhuman/tools/impl/memory/store.rs-99-#[cfg(test)]
src/openhuman/tools/impl/memory/store.rs:100:mod tests {
src/openhuman/tools/impl/memory/store.rs-101- use super::*;
src/openhuman/tools/impl/memory/store.rs-102- use crate::openhuman::memory::{embeddings::NoopEmbedding, UnifiedMemory};
--
src/openhuman/tools/impl/filesystem/git_operations.rs-505-
src/openhuman/tools/impl/filesystem/git_operations.rs-506-#[cfg(test)]
src/openhuman/tools/impl/filesystem/git_operations.rs:507:mod tests {
src/openhuman/tools/impl/filesystem/git_operations.rs-508- use super::*;
src/openhuman/tools/impl/filesystem/git_operations.rs-509- use crate::openhuman::security::SecurityPolicy;
--
src/openhuman/tools/impl/memory/recall.rs-100-
src/openhuman/tools/impl/memory/recall.rs-101-#[cfg(test)]
src/openhuman/tools/impl/memory/recall.rs:102:mod tests {
src/openhuman/tools/impl/memory/recall.rs-103- use super::*;
src/openhuman/tools/impl/memory/recall.rs-104- use crate::openhuman::memory::{embeddings::NoopEmbedding, MemoryCategory, UnifiedMemory};
--
src/openhuman/tools/impl/filesystem/file_write.rs-130-
src/openhuman/tools/impl/filesystem/file_write.rs-131-#[cfg(test)]
src/openhuman/tools/impl/filesystem/file_write.rs:132:mod tests {
src/openhuman/tools/impl/filesystem/file_write.rs-133- use super::*;
src/openhuman/tools/impl/filesystem/file_write.rs-134- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/tools/impl/network/web_search.rs-321-
src/openhuman/tools/impl/network/web_search.rs-322-#[cfg(test)]
src/openhuman/tools/impl/network/web_search.rs:323:mod tests {
src/openhuman/tools/impl/network/web_search.rs-324- use super::*;
src/openhuman/tools/impl/network/web_search.rs-325-
--
src/openhuman/text_input/schemas.rs-244-
src/openhuman/text_input/schemas.rs-245-#[cfg(test)]
src/openhuman/text_input/schemas.rs:246:mod tests {
src/openhuman/text_input/schemas.rs-247- use super::*;
src/openhuman/text_input/schemas.rs-248- use serde_json::json;
--
src/openhuman/tools/impl/filesystem/file_read.rs-113-
src/openhuman/tools/impl/filesystem/file_read.rs-114-#[cfg(test)]
src/openhuman/tools/impl/filesystem/file_read.rs:115:mod tests {
src/openhuman/tools/impl/filesystem/file_read.rs-116- use super::*;
src/openhuman/tools/impl/filesystem/file_read.rs-117- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/tools/impl/filesystem/csv_export.rs-260-
src/openhuman/tools/impl/filesystem/csv_export.rs-261-#[cfg(test)]
src/openhuman/tools/impl/filesystem/csv_export.rs:262:mod tests {
src/openhuman/tools/impl/filesystem/csv_export.rs-263- use super::*;
src/openhuman/tools/impl/filesystem/csv_export.rs-264- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/team/ops.rs-266-
src/openhuman/team/ops.rs-267-#[cfg(test)]
src/openhuman/team/ops.rs:268:mod tests {
src/openhuman/team/ops.rs-269- use super::*;
src/openhuman/team/ops.rs-270-
--
src/openhuman/text_input/ops.rs-191-
src/openhuman/text_input/ops.rs-192-#[cfg(test)]
src/openhuman/text_input/ops.rs:193:mod tests {
src/openhuman/text_input/ops.rs-194- use super::*;
src/openhuman/text_input/ops.rs-195-
--
src/openhuman/tools/impl/browser/screenshot.rs-220-
src/openhuman/tools/impl/browser/screenshot.rs-221-#[cfg(test)]
src/openhuman/tools/impl/browser/screenshot.rs:222:mod tests {
src/openhuman/tools/impl/browser/screenshot.rs-223- use super::*;
src/openhuman/tools/impl/browser/screenshot.rs-224- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/tools/impl/agent/ask_clarification.rs-87-
src/openhuman/tools/impl/agent/ask_clarification.rs-88-#[cfg(test)]
src/openhuman/tools/impl/agent/ask_clarification.rs:89:mod tests {
src/openhuman/tools/impl/agent/ask_clarification.rs-90- use super::*;
src/openhuman/tools/impl/agent/ask_clarification.rs-91- use serde_json::json;
--
src/openhuman/tools/impl/computer/mouse.rs-336-
src/openhuman/tools/impl/computer/mouse.rs-337-#[cfg(test)]
src/openhuman/tools/impl/computer/mouse.rs:338:mod tests {
src/openhuman/tools/impl/computer/mouse.rs-339- use super::*;
src/openhuman/tools/impl/computer/mouse.rs-340-
--
src/openhuman/tools/impl/browser/image_output.rs-43-
src/openhuman/tools/impl/browser/image_output.rs-44-#[cfg(test)]
src/openhuman/tools/impl/browser/image_output.rs:45:mod tests {
src/openhuman/tools/impl/browser/image_output.rs-46- use super::*;
src/openhuman/tools/impl/browser/image_output.rs-47- use tempfile::TempDir;
--
src/openhuman/tools/impl/browser/image_info.rs-214-
src/openhuman/tools/impl/browser/image_info.rs-215-#[cfg(test)]
src/openhuman/tools/impl/browser/image_info.rs:216:mod tests {
src/openhuman/tools/impl/browser/image_info.rs-217- use super::*;
src/openhuman/tools/impl/browser/image_info.rs-218- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/tools/impl/computer/keyboard.rs-342-
src/openhuman/tools/impl/computer/keyboard.rs-343-#[cfg(test)]
src/openhuman/tools/impl/computer/keyboard.rs:344:mod tests {
src/openhuman/tools/impl/computer/keyboard.rs-345- use super::*;
src/openhuman/tools/impl/computer/keyboard.rs-346-
--
src/openhuman/tools/impl/browser/browser_open.rs-279-
src/openhuman/tools/impl/browser/browser_open.rs-280-#[cfg(test)]
src/openhuman/tools/impl/browser/browser_open.rs:281:mod tests {
src/openhuman/tools/impl/browser/browser_open.rs-282- use super::*;
src/openhuman/tools/impl/browser/browser_open.rs-283- use crate::openhuman::security::{AutonomyLevel, SecurityPolicy};
--
src/openhuman/tools/impl/browser/browser.rs-2051-
src/openhuman/tools/impl/browser/browser.rs-2052-#[cfg(test)]
src/openhuman/tools/impl/browser/browser.rs:2053:mod tests {
src/openhuman/tools/impl/browser/browser.rs-2054- use super::*;
src/openhuman/tools/impl/browser/browser.rs-2055-
--
src/openhuman/subconscious/store.rs-554-
src/openhuman/subconscious/store.rs-555-#[cfg(test)]
src/openhuman/subconscious/store.rs:556:mod tests {
src/openhuman/subconscious/store.rs-557- use super::*;
src/openhuman/subconscious/store.rs-558-
--
src/openhuman/subconscious/situation_report.rs-283-
src/openhuman/subconscious/situation_report.rs-284-#[cfg(test)]
src/openhuman/subconscious/situation_report.rs:285:mod tests {
src/openhuman/subconscious/situation_report.rs-286- use super::*;
src/openhuman/subconscious/situation_report.rs-287-
--
src/openhuman/socket/ws_loop.rs-406-
src/openhuman/socket/ws_loop.rs-407-#[cfg(test)]
src/openhuman/socket/ws_loop.rs:408:mod tests {
src/openhuman/socket/ws_loop.rs-409- use super::*;
src/openhuman/socket/ws_loop.rs-410- use parking_lot::RwLock;
--
src/openhuman/tools/impl/agent/complete_onboarding.rs-527-
src/openhuman/tools/impl/agent/complete_onboarding.rs-528-#[cfg(test)]
src/openhuman/tools/impl/agent/complete_onboarding.rs:529:mod tests {
src/openhuman/tools/impl/agent/complete_onboarding.rs-530- use super::*;
src/openhuman/tools/impl/agent/complete_onboarding.rs-531-
--
src/openhuman/subconscious/schemas.rs-482-
src/openhuman/subconscious/schemas.rs-483-#[cfg(test)]
src/openhuman/subconscious/schemas.rs:484:mod tests {
src/openhuman/subconscious/schemas.rs-485- use super::*;
src/openhuman/subconscious/schemas.rs-486-
--
src/openhuman/socket/schemas.rs-242-
src/openhuman/socket/schemas.rs-243-#[cfg(test)]
src/openhuman/socket/schemas.rs:244:mod tests {
src/openhuman/socket/schemas.rs-245- use super::*;
src/openhuman/socket/schemas.rs-246-
--
src/openhuman/subconscious/prompt.rs-211-
src/openhuman/subconscious/prompt.rs-212-#[cfg(test)]
src/openhuman/subconscious/prompt.rs:213:mod tests {
src/openhuman/subconscious/prompt.rs-214- use super::*;
src/openhuman/subconscious/prompt.rs-215- use crate::openhuman::subconscious::types::{TaskRecurrence, TaskSource};
--
src/openhuman/socket/manager.rs-202-
src/openhuman/socket/manager.rs-203-#[cfg(test)]
src/openhuman/socket/manager.rs:204:mod tests {
src/openhuman/socket/manager.rs-205- use super::*;
src/openhuman/socket/manager.rs-206- use serde_json::json;
--
src/openhuman/socket/event_handlers.rs-235-
src/openhuman/socket/event_handlers.rs-236-#[cfg(test)]
src/openhuman/socket/event_handlers.rs:237:mod tests {
src/openhuman/socket/event_handlers.rs-238- use super::*;
src/openhuman/socket/event_handlers.rs-239- use parking_lot::RwLock;
--
src/openhuman/subconscious/integration_test.rs-1-#[cfg(test)]
src/openhuman/subconscious/integration_test.rs:2:mod tests {
src/openhuman/subconscious/integration_test.rs-3- use crate::openhuman::subconscious::decision_log::DecisionLog;
src/openhuman/subconscious/integration_test.rs-4- use crate::openhuman::subconscious::store;
--
src/openhuman/subconscious/executor.rs-288-
src/openhuman/subconscious/executor.rs-289-#[cfg(test)]
src/openhuman/subconscious/executor.rs:290:mod tests {
src/openhuman/subconscious/executor.rs-291- use super::*;
src/openhuman/subconscious/executor.rs-292-
--
src/openhuman/subconscious/engine.rs-717-
src/openhuman/subconscious/engine.rs-718-#[cfg(test)]
src/openhuman/subconscious/engine.rs:719:mod tests {
src/openhuman/subconscious/engine.rs-720- use super::*;
src/openhuman/subconscious/engine.rs-721-
--
src/openhuman/subconscious/decision_log.rs-114-
src/openhuman/subconscious/decision_log.rs-115-#[cfg(test)]
src/openhuman/subconscious/decision_log.rs:116:mod tests {
src/openhuman/subconscious/decision_log.rs-117- use super::*;
src/openhuman/subconscious/decision_log.rs-118-
--
src/openhuman/socket/types.rs-27-
src/openhuman/socket/types.rs-28-#[cfg(test)]
src/openhuman/socket/types.rs:29:mod tests {
src/openhuman/socket/types.rs-30- use super::*;
src/openhuman/socket/types.rs-31-
--
src/openhuman/security/traits.rs-42-
src/openhuman/security/traits.rs-43-#[cfg(test)]
src/openhuman/security/traits.rs:44:mod tests {
src/openhuman/security/traits.rs-45- use super::*;
src/openhuman/security/traits.rs-46-
--
src/openhuman/screen_intelligence/engine.rs-595-
src/openhuman/screen_intelligence/engine.rs-596-#[cfg(test)]
src/openhuman/screen_intelligence/engine.rs:597:mod tests {
src/openhuman/screen_intelligence/engine.rs-598- #[cfg(target_os = "macos")]
src/openhuman/screen_intelligence/engine.rs-599- use super::*;
--
src/openhuman/screen_intelligence/schemas.rs-405-
src/openhuman/screen_intelligence/schemas.rs-406-#[cfg(test)]
src/openhuman/screen_intelligence/schemas.rs:407:mod tests {
src/openhuman/screen_intelligence/schemas.rs-408- use super::*;
src/openhuman/screen_intelligence/schemas.rs-409-
--
src/openhuman/screen_intelligence/ops.rs-244-
src/openhuman/screen_intelligence/ops.rs-245-#[cfg(test)]
src/openhuman/screen_intelligence/ops.rs:246:mod tests {
src/openhuman/screen_intelligence/ops.rs-247- use super::*;
src/openhuman/screen_intelligence/ops.rs-248-
--
src/openhuman/screen_intelligence/mod.rs-28-
src/openhuman/screen_intelligence/mod.rs-29-#[cfg(test)]
src/openhuman/screen_intelligence/mod.rs:30:mod tests;
--
src/openhuman/screen_intelligence/image_processing.rs-135-
src/openhuman/screen_intelligence/image_processing.rs-136-#[cfg(test)]
src/openhuman/screen_intelligence/image_processing.rs:137:mod tests {
src/openhuman/screen_intelligence/image_processing.rs-138- use super::*;
src/openhuman/screen_intelligence/image_processing.rs-139- use image::{ImageBuffer, Rgb, RgbImage};
--
src/openhuman/service/schemas.rs-257-
src/openhuman/service/schemas.rs-258-#[cfg(test)]
src/openhuman/service/schemas.rs:259:mod tests {
src/openhuman/service/schemas.rs-260- use super::*;
src/openhuman/service/schemas.rs-261-
--
src/openhuman/service/restart.rs-156-
src/openhuman/service/restart.rs-157-#[cfg(test)]
src/openhuman/service/restart.rs:158:mod tests {
src/openhuman/service/restart.rs-159- use super::*;
src/openhuman/service/restart.rs-160- use async_trait::async_trait;
--
src/openhuman/service/ops.rs-72-
src/openhuman/service/ops.rs-73-#[cfg(test)]
src/openhuman/service/ops.rs:74:mod tests {
src/openhuman/service/ops.rs-75- use super::*;
src/openhuman/service/ops.rs-76- use tempfile::TempDir;
--
src/openhuman/security/secrets.rs-282-
src/openhuman/security/secrets.rs-283-#[cfg(test)]
src/openhuman/security/secrets.rs:284:mod tests {
src/openhuman/security/secrets.rs-285- use super::*;
src/openhuman/security/secrets.rs-286- use tempfile::TempDir;
--
src/openhuman/security/policy.rs-813-
src/openhuman/security/policy.rs-814-#[cfg(test)]
src/openhuman/security/policy.rs:815:mod tests {
src/openhuman/security/policy.rs-816- use super::*;
src/openhuman/security/policy.rs-817-
--
src/openhuman/security/pairing.rs-258-
src/openhuman/security/pairing.rs-259-#[cfg(test)]
src/openhuman/security/pairing.rs:260:mod tests {
src/openhuman/security/pairing.rs-261- use super::*;
src/openhuman/security/pairing.rs-262- use tokio::test;
--
src/openhuman/service/linux.rs-129-
src/openhuman/service/linux.rs-130-#[cfg(test)]
src/openhuman/service/linux.rs:131:mod tests {
src/openhuman/service/linux.rs-132- use super::*;
src/openhuman/service/linux.rs-133- use crate::openhuman::config::Config;
--
src/openhuman/security/ops.rs-20-
src/openhuman/security/ops.rs-21-#[cfg(test)]
src/openhuman/security/ops.rs:22:mod tests {
src/openhuman/security/ops.rs-23- use super::*;
src/openhuman/security/ops.rs-24-
--
src/openhuman/security/landlock.rs-202-
src/openhuman/security/landlock.rs-203-#[cfg(test)]
src/openhuman/security/landlock.rs:204:mod tests {
src/openhuman/security/landlock.rs-205- use super::*;
src/openhuman/security/landlock.rs-206-
--
src/openhuman/security/firejail.rs-84-
src/openhuman/security/firejail.rs-85-#[cfg(test)]
src/openhuman/security/firejail.rs:86:mod tests {
src/openhuman/security/firejail.rs-87- use super::*;
src/openhuman/security/firejail.rs-88-
--
src/openhuman/security/docker.rs-95-
src/openhuman/security/docker.rs-96-#[cfg(test)]
src/openhuman/security/docker.rs:97:mod tests {
src/openhuman/security/docker.rs-98- use super::*;
src/openhuman/security/docker.rs-99-
--
src/openhuman/security/detect.rs-105-
src/openhuman/security/detect.rs-106-#[cfg(test)]
src/openhuman/security/detect.rs:107:mod tests {
src/openhuman/security/detect.rs-108- use super::*;
src/openhuman/security/detect.rs-109- use crate::openhuman::config::{SandboxConfig, SecurityConfig};
--
src/openhuman/security/core.rs-10-
src/openhuman/security/core.rs-11-#[cfg(test)]
src/openhuman/security/core.rs:12:mod tests {
src/openhuman/security/core.rs-13- use super::*;
src/openhuman/security/core.rs-14- use crate::openhuman::security::{AutonomyLevel, PairingGuard, SecretStore, SecurityPolicy};
--
src/openhuman/security/bubblewrap.rs-77-
src/openhuman/security/bubblewrap.rs-78-#[cfg(test)]
src/openhuman/security/bubblewrap.rs:79:mod tests {
src/openhuman/security/bubblewrap.rs-80- use super::*;
src/openhuman/security/bubblewrap.rs-81-
--
src/openhuman/security/audit.rs-275-
src/openhuman/security/audit.rs-276-#[cfg(test)]
src/openhuman/security/audit.rs:277:mod tests {
src/openhuman/security/audit.rs-278- use super::*;
src/openhuman/security/audit.rs-279- use tempfile::TempDir;
--
src/openhuman/service/common.rs-142-
src/openhuman/service/common.rs-143-#[cfg(test)]
src/openhuman/service/common.rs:144:mod tests {
src/openhuman/service/common.rs-145- use super::*;
src/openhuman/service/common.rs-146-
--
src/openhuman/service/bus.rs-103-
src/openhuman/service/bus.rs-104-#[cfg(test)]
src/openhuman/service/bus.rs:105:mod tests {
src/openhuman/service/bus.rs-106- use super::*;
src/openhuman/service/bus.rs-107-
--
src/openhuman/screen_intelligence/server.rs-408-
src/openhuman/screen_intelligence/server.rs-409-#[cfg(test)]
src/openhuman/screen_intelligence/server.rs:410:mod tests {
src/openhuman/screen_intelligence/server.rs-411- use super::*;
src/openhuman/screen_intelligence/server.rs-412-
--
src/openhuman/referral/schemas.rs-126-
src/openhuman/referral/schemas.rs-127-#[cfg(test)]
src/openhuman/referral/schemas.rs:128:mod tests {
src/openhuman/referral/schemas.rs-129- use super::*;
src/openhuman/referral/schemas.rs-130- use serde_json::json;
--
src/openhuman/routing/telemetry.rs-62-
src/openhuman/routing/telemetry.rs-63-#[cfg(test)]
src/openhuman/routing/telemetry.rs:64:mod tests {
src/openhuman/routing/telemetry.rs-65- use super::*;
src/openhuman/routing/telemetry.rs-66-
--
src/openhuman/routing/quality.rs-44-
src/openhuman/routing/quality.rs-45-#[cfg(test)]
src/openhuman/routing/quality.rs:46:mod tests {
src/openhuman/routing/quality.rs-47- use super::*;
src/openhuman/routing/quality.rs-48-
--
src/openhuman/routing/provider.rs-516-
src/openhuman/routing/provider.rs-517-#[cfg(test)]
src/openhuman/routing/provider.rs:518:mod tests {
src/openhuman/routing/provider.rs-519- use super::*;
src/openhuman/routing/provider.rs-520- use crate::openhuman::providers::traits::ProviderCapabilities;
--
src/openhuman/routing/policy.rs-181-
src/openhuman/routing/policy.rs-182-#[cfg(test)]
src/openhuman/routing/policy.rs:183:mod tests {
src/openhuman/routing/policy.rs-184- use super::*;
src/openhuman/routing/policy.rs-185-
--
src/openhuman/skills/ops.rs-173-
src/openhuman/skills/ops.rs-174-#[cfg(test)]
src/openhuman/skills/ops.rs:175:mod tests {
src/openhuman/skills/ops.rs-176- use super::*;
src/openhuman/skills/ops.rs-177-
--
src/openhuman/skills/types.rs-71-
src/openhuman/skills/types.rs-72-#[cfg(test)]
src/openhuman/skills/types.rs:73:mod tests {
src/openhuman/skills/types.rs-74- use super::*;
src/openhuman/skills/types.rs-75- use serde_json::json;
--
src/openhuman/skills/bus.rs-4-
src/openhuman/skills/bus.rs-5-#[cfg(test)]
src/openhuman/skills/bus.rs:6:mod tests {
src/openhuman/skills/bus.rs-7- use super::*;
src/openhuman/skills/bus.rs-8-
--
src/openhuman/providers/traits.rs-557-
src/openhuman/providers/traits.rs-558-#[cfg(test)]
src/openhuman/providers/traits.rs:559:mod tests {
src/openhuman/providers/traits.rs-560- use super::*;
src/openhuman/providers/traits.rs-561-
--
src/openhuman/providers/router.rs-177-
src/openhuman/providers/router.rs-178-#[cfg(test)]
src/openhuman/providers/router.rs:179:mod tests {
src/openhuman/providers/router.rs-180- use super::*;
src/openhuman/providers/router.rs-181- use std::sync::atomic::{AtomicUsize, Ordering};
--
src/openhuman/referral/ops.rs-72-
src/openhuman/referral/ops.rs-73-#[cfg(test)]
src/openhuman/referral/ops.rs:74:mod tests {
src/openhuman/referral/ops.rs-75- use super::*;
src/openhuman/referral/ops.rs-76- use crate::openhuman::credentials::{
--
src/openhuman/routing/health.rs-161-
src/openhuman/routing/health.rs-162-#[cfg(test)]
src/openhuman/routing/health.rs:163:mod tests {
src/openhuman/routing/health.rs-164- use super::*;
src/openhuman/routing/health.rs-165-
--
src/openhuman/providers/ops.rs-363-
src/openhuman/providers/ops.rs-364-#[cfg(test)]
src/openhuman/providers/ops.rs:365:mod tests {
src/openhuman/providers/ops.rs-366- use super::*;
src/openhuman/providers/ops.rs-367-
--
src/openhuman/providers/reliable.rs-858-
src/openhuman/providers/reliable.rs-859-#[cfg(test)]
src/openhuman/providers/reliable.rs:860:mod tests {
src/openhuman/providers/reliable.rs-861- use super::*;
src/openhuman/providers/reliable.rs-862- use std::sync::Arc;
--
src/openhuman/providers/compatible.rs-2299-
src/openhuman/providers/compatible.rs-2300-#[cfg(test)]
src/openhuman/providers/compatible.rs:2301:mod tests {
src/openhuman/providers/compatible.rs-2302- use super::*;
src/openhuman/providers/compatible.rs-2303-
--
src/openhuman/memory/ingestion.rs-1674-
src/openhuman/memory/ingestion.rs-1675-#[cfg(test)]
src/openhuman/memory/ingestion.rs:1676:mod tests {
src/openhuman/memory/ingestion.rs-1677- use std::sync::Arc;
src/openhuman/memory/ingestion.rs-1678-
--
src/openhuman/memory/schemas.rs-1126-
src/openhuman/memory/schemas.rs-1127-#[cfg(test)]
src/openhuman/memory/schemas.rs:1128:mod tests {
src/openhuman/memory/schemas.rs-1129- use super::*;
src/openhuman/memory/schemas.rs-1130- use serde_json::json;
--
src/openhuman/memory/traits.rs-115-
src/openhuman/memory/traits.rs-116-#[cfg(test)]
src/openhuman/memory/traits.rs:117:mod tests {
src/openhuman/memory/traits.rs-118- use super::*;
src/openhuman/memory/traits.rs-119-
--
src/openhuman/memory/rpc_models.rs-543-
src/openhuman/memory/rpc_models.rs-544-#[cfg(test)]
src/openhuman/memory/rpc_models.rs:545:mod tests {
src/openhuman/memory/rpc_models.rs-546- use super::*;
src/openhuman/memory/rpc_models.rs-547- use serde_json::json;
--
src/openhuman/memory/response_cache.rs-233-
src/openhuman/memory/response_cache.rs-234-#[cfg(test)]
src/openhuman/memory/response_cache.rs:235:mod tests {
src/openhuman/memory/response_cache.rs-236- // ... (tests remain unchanged)
--
src/openhuman/memory/ops.rs-1155-
src/openhuman/memory/ops.rs-1156-#[cfg(test)]
src/openhuman/memory/ops.rs:1157:mod tests {
src/openhuman/memory/ops.rs-1158- use serde_json::json;
src/openhuman/memory/ops.rs-1159-
--
src/openhuman/memory/store/factories.rs-96-
src/openhuman/memory/store/factories.rs-97-#[cfg(test)]
src/openhuman/memory/store/factories.rs:98:mod tests {
src/openhuman/memory/store/factories.rs-99- use super::*;
src/openhuman/memory/store/factories.rs-100-
--
src/openhuman/memory/store/unified/segments.rs-500-
src/openhuman/memory/store/unified/segments.rs-501-#[cfg(test)]
src/openhuman/memory/store/unified/segments.rs:502:mod tests {
src/openhuman/memory/store/unified/segments.rs-503- use super::*;
src/openhuman/memory/store/unified/segments.rs-504-
--
src/openhuman/memory/store/unified/query.rs-1267-
src/openhuman/memory/store/unified/query.rs-1268-#[cfg(test)]
src/openhuman/memory/store/unified/query.rs:1269:mod tests {
src/openhuman/memory/store/unified/query.rs-1270- use std::sync::Arc;
src/openhuman/memory/store/unified/query.rs-1271-
--
src/openhuman/memory/store/unified/profile.rs-269-
src/openhuman/memory/store/unified/profile.rs-270-#[cfg(test)]
src/openhuman/memory/store/unified/profile.rs:271:mod tests {
src/openhuman/memory/store/unified/profile.rs-272- use super::*;
src/openhuman/memory/store/unified/profile.rs-273-
--
src/openhuman/memory/store/unified/helpers.rs-191-
src/openhuman/memory/store/unified/helpers.rs-192-#[cfg(test)]
src/openhuman/memory/store/unified/helpers.rs:193:mod tests {
src/openhuman/memory/store/unified/helpers.rs-194- use super::UnifiedMemory;
src/openhuman/memory/store/unified/helpers.rs-195- use serde_json::json;
--
src/openhuman/memory/store/unified/fts5.rs-161-
src/openhuman/memory/store/unified/fts5.rs-162-#[cfg(test)]
src/openhuman/memory/store/unified/fts5.rs:163:mod tests {
src/openhuman/memory/store/unified/fts5.rs-164- use super::*;
src/openhuman/memory/store/unified/fts5.rs-165-
--
src/openhuman/memory/store/unified/events.rs-397-
src/openhuman/memory/store/unified/events.rs-398-#[cfg(test)]
src/openhuman/memory/store/unified/events.rs:399:mod tests {
src/openhuman/memory/store/unified/events.rs-400- use super::*;
src/openhuman/memory/store/unified/events.rs-401-
--
src/openhuman/memory/store/client.rs-384-
src/openhuman/memory/store/client.rs-385-#[cfg(test)]
src/openhuman/memory/store/client.rs:386:mod tests {
src/openhuman/memory/store/client.rs-387- use super::*;
src/openhuman/memory/store/client.rs-388- use tempfile::TempDir;
--
src/openhuman/memory/store/unified/documents.rs-495-
src/openhuman/memory/store/unified/documents.rs-496-#[cfg(test)]
src/openhuman/memory/store/unified/documents.rs:497:mod tests {
src/openhuman/memory/store/unified/documents.rs-498- use std::sync::Arc;
src/openhuman/memory/store/unified/documents.rs-499-
--
src/openhuman/memory/global.rs-74-
src/openhuman/memory/global.rs-75-#[cfg(test)]
src/openhuman/memory/global.rs:76:mod tests {
src/openhuman/memory/global.rs-77- use super::*;
src/openhuman/memory/global.rs-78- use tempfile::TempDir;
--
src/openhuman/memory/chunker.rs-204-
src/openhuman/memory/chunker.rs-205-#[cfg(test)]
src/openhuman/memory/chunker.rs:206:mod tests {
src/openhuman/memory/chunker.rs-207- use super::*;
src/openhuman/memory/chunker.rs-208-
--
src/openhuman/memory/conversations/store.rs-438-
src/openhuman/memory/conversations/store.rs-439-#[cfg(test)]
src/openhuman/memory/conversations/store.rs:440:mod tests {
src/openhuman/memory/conversations/store.rs-441- use tempfile::TempDir;
src/openhuman/memory/conversations/store.rs-442-
--
src/openhuman/memory/conversations/bus.rs-260-
src/openhuman/memory/conversations/bus.rs-261-#[cfg(test)]
src/openhuman/memory/conversations/bus.rs:262:mod tests {
src/openhuman/memory/conversations/bus.rs-263- use tempfile::TempDir;
src/openhuman/memory/conversations/bus.rs-264-
--
src/openhuman/migration/ops.rs-19-
src/openhuman/migration/ops.rs-20-#[cfg(test)]
src/openhuman/migration/ops.rs:21:mod tests {
src/openhuman/migration/ops.rs-22- use super::*;
src/openhuman/migration/ops.rs-23- use tempfile::TempDir;
--
src/openhuman/overlay/bus.rs-49-
src/openhuman/overlay/bus.rs-50-#[cfg(test)]
src/openhuman/overlay/bus.rs:51:mod tests {
src/openhuman/overlay/bus.rs-52- use super::*;
src/openhuman/overlay/bus.rs-53- use crate::openhuman::overlay::types::OverlayAttentionTone;
--
src/openhuman/migration/core.rs-404-
src/openhuman/migration/core.rs-405-#[cfg(test)]
src/openhuman/migration/core.rs:406:mod tests {
src/openhuman/migration/core.rs-407- use super::*;
src/openhuman/migration/core.rs-408-
--
src/openhuman/integrations/twilio.rs-151-
src/openhuman/integrations/twilio.rs-152-#[cfg(test)]
src/openhuman/integrations/twilio.rs:153:mod tests {
src/openhuman/integrations/twilio.rs-154- use super::*;
src/openhuman/integrations/twilio.rs-155-
--
src/openhuman/heartbeat/engine.rs-128-
src/openhuman/heartbeat/engine.rs-129-#[cfg(test)]
src/openhuman/heartbeat/engine.rs:130:mod tests {
src/openhuman/heartbeat/engine.rs-131- use super::*;
src/openhuman/heartbeat/engine.rs-132-
--
src/openhuman/local_ai/types.rs-152-
src/openhuman/local_ai/types.rs-153-#[cfg(test)]
src/openhuman/local_ai/types.rs:154:mod tests {
src/openhuman/local_ai/types.rs-155- use super::*;
src/openhuman/local_ai/types.rs-156-
--
src/openhuman/learning/user_profile.rs-147-
src/openhuman/learning/user_profile.rs-148-#[cfg(test)]
src/openhuman/learning/user_profile.rs:149:mod tests {
src/openhuman/learning/user_profile.rs-150- use super::*;
src/openhuman/learning/user_profile.rs-151- use crate::openhuman::agent::hooks::TurnContext;
--
src/openhuman/learning/tool_tracker.rs-177-
src/openhuman/learning/tool_tracker.rs-178-#[cfg(test)]
src/openhuman/learning/tool_tracker.rs:179:mod tests {
src/openhuman/learning/tool_tracker.rs-180- use super::*;
src/openhuman/learning/tool_tracker.rs-181- use crate::openhuman::agent::hooks::{ToolCallRecord, TurnContext};
--
src/openhuman/local_ai/schemas.rs-1028-
src/openhuman/local_ai/schemas.rs-1029-#[cfg(test)]
src/openhuman/local_ai/schemas.rs:1030:mod tests {
src/openhuman/local_ai/schemas.rs-1031- use super::*;
src/openhuman/local_ai/schemas.rs-1032-
--
src/openhuman/integrations/mod.rs-22-
src/openhuman/integrations/mod.rs-23-#[cfg(test)]
src/openhuman/integrations/mod.rs:24:mod tests {
src/openhuman/integrations/mod.rs-25- use super::*;
src/openhuman/integrations/mod.rs-26-
--
src/openhuman/local_ai/presets.rs-307-
src/openhuman/local_ai/presets.rs-308-#[cfg(test)]
src/openhuman/local_ai/presets.rs:309:mod tests {
src/openhuman/local_ai/presets.rs-310- use super::*;
src/openhuman/local_ai/presets.rs-311-
--
src/openhuman/integrations/google_places.rs-303-
src/openhuman/integrations/google_places.rs-304-#[cfg(test)]
src/openhuman/integrations/google_places.rs:305:mod tests {
src/openhuman/integrations/google_places.rs-306- use super::*;
src/openhuman/integrations/google_places.rs-307- use crate::openhuman::integrations::ToolScope;
--
src/openhuman/learning/schemas.rs-64-
src/openhuman/learning/schemas.rs-65-#[cfg(test)]
src/openhuman/learning/schemas.rs:66:mod tests {
src/openhuman/learning/schemas.rs-67- use super::*;
src/openhuman/learning/schemas.rs-68-
--
src/openhuman/local_ai/paths.rs-185-
src/openhuman/local_ai/paths.rs-186-#[cfg(test)]
src/openhuman/local_ai/paths.rs:187:mod tests {
src/openhuman/local_ai/paths.rs-188- use super::*;
src/openhuman/local_ai/paths.rs-189-
--
src/openhuman/local_ai/parse.rs-162-
src/openhuman/local_ai/parse.rs-163-#[cfg(test)]
src/openhuman/local_ai/parse.rs:164:mod tests {
src/openhuman/local_ai/parse.rs-165- use super::*;
src/openhuman/local_ai/parse.rs-166-
--
src/openhuman/local_ai/sentiment.rs-145-
src/openhuman/local_ai/sentiment.rs-146-#[cfg(test)]
src/openhuman/local_ai/sentiment.rs:147:mod tests {
src/openhuman/local_ai/sentiment.rs-148- use super::*;
src/openhuman/local_ai/sentiment.rs-149-
--
src/openhuman/local_ai/service/whisper_engine.rs-437-
src/openhuman/local_ai/service/whisper_engine.rs-438-#[cfg(test)]
src/openhuman/local_ai/service/whisper_engine.rs:439:mod tests {
src/openhuman/local_ai/service/whisper_engine.rs-440- use super::*;
src/openhuman/local_ai/service/whisper_engine.rs-441-
--
src/openhuman/local_ai/service/vision_embed.rs-193-
src/openhuman/local_ai/service/vision_embed.rs-194-#[cfg(test)]
src/openhuman/local_ai/service/vision_embed.rs:195:mod tests {
src/openhuman/local_ai/service/vision_embed.rs-196- use super::*;
src/openhuman/local_ai/service/vision_embed.rs-197- use axum::{routing::post, Json, Router};
--
src/openhuman/local_ai/service/public_infer.rs-404-
src/openhuman/local_ai/service/public_infer.rs-405-#[cfg(test)]
src/openhuman/local_ai/service/public_infer.rs:406:mod tests {
src/openhuman/local_ai/service/public_infer.rs-407- use super::*;
src/openhuman/local_ai/service/public_infer.rs-408- use axum::{routing::post, Json, Router};
--
src/openhuman/local_ai/service/ollama_admin.rs-873-
src/openhuman/local_ai/service/ollama_admin.rs-874-#[cfg(test)]
src/openhuman/local_ai/service/ollama_admin.rs:875:mod tests {
src/openhuman/local_ai/service/ollama_admin.rs-876- use super::interrupted_pull_settle_window_secs;
src/openhuman/local_ai/service/ollama_admin.rs-877-
--
src/openhuman/local_ai/service/bootstrap.rs-326-
src/openhuman/local_ai/service/bootstrap.rs-327-#[cfg(test)]
src/openhuman/local_ai/service/bootstrap.rs:328:mod tests {
src/openhuman/local_ai/service/bootstrap.rs-329- use super::*;
src/openhuman/local_ai/service/bootstrap.rs-330-
--
src/openhuman/learning/reflection.rs-305-
src/openhuman/learning/reflection.rs-306-#[cfg(test)]
src/openhuman/learning/reflection.rs:307:mod tests {
src/openhuman/learning/reflection.rs-308- use super::*;
src/openhuman/learning/reflection.rs-309- use crate::openhuman::agent::hooks::{ToolCallRecord, TurnContext};
--
src/openhuman/local_ai/ollama_api.rs-208-
src/openhuman/local_ai/ollama_api.rs-209-#[cfg(test)]
src/openhuman/local_ai/ollama_api.rs:210:mod tests {
src/openhuman/local_ai/ollama_api.rs-211- use super::*;
src/openhuman/local_ai/ollama_api.rs-212-
--
src/openhuman/learning/prompt_sections.rs-84-
src/openhuman/learning/prompt_sections.rs-85-#[cfg(test)]
src/openhuman/learning/prompt_sections.rs:86:mod tests {
src/openhuman/learning/prompt_sections.rs-87- use super::*;
src/openhuman/learning/prompt_sections.rs-88- use crate::openhuman::context::prompt::LearnedContextData;
--
src/openhuman/tools/impl/agent/dispatch.rs-103-
src/openhuman/tools/impl/agent/dispatch.rs-104-#[cfg(test)]
src/openhuman/tools/impl/agent/dispatch.rs:105:mod tests {
src/openhuman/tools/impl/agent/dispatch.rs-106- use super::*;
src/openhuman/tools/impl/agent/dispatch.rs-107- use crate::openhuman::tools::traits::Tool;
--
src/openhuman/tools/impl/agent/spawn_subagent.rs-363-
src/openhuman/tools/impl/agent/spawn_subagent.rs-364-#[cfg(test)]
... [stdout truncated: output exceeded 50000 characters (103888 total). Please modify your command to reduce output size, for example by:
- Using grep/rg to filter results
- Using head/tail to limit lines
- Using --max-count flag with grep/rg
- Processing output with awk/sed to extract only needed information]
Script executed:
cat src/openhuman/tokenjuice/mod.rs
Repository: tinyhumansai/openhuman
Exit code: 0
stdout:
//! # TokenJuice — terminal-output compaction engine
//!
//! Rust port of [vincentkoc/tokenjuice](https://github.com/vincentkoc/tokenjuice).
//!
//! Compacts verbose tool output (git, npm, cargo, docker, …) using
//! JSON-configured rules before it enters an LLM context window.
//!
//! ## Quick start
//!
//! ```rust
//! use openhuman_core::openhuman::tokenjuice::{
//! reduce::reduce_execution_with_rules,
//! rules::load_builtin_rules,
//! types::{ReduceOptions, ToolExecutionInput},
//! };
//!
//! let rules = load_builtin_rules();
//! let input = ToolExecutionInput {
//! tool_name: "bash".to_owned(),
//! argv: Some(vec!["git".to_owned(), "status".to_owned()]),
//! stdout: Some("On branch main\n\tmodified: src/lib.rs\n".to_owned()),
//! ..Default::default()
//! };
//! let result = reduce_execution_with_rules(input, &rules, &ReduceOptions::default());
//! println!("{}", result.inline_text);
//! // → "M: src/lib.rs"
//! ```
//!
//! ## Scope (v1 — library only)
//!
//! This module is purely a library. It has no JSON-RPC surface, no CLI, and
//! no artifact store. Those surfaces can be layered on later when a caller
//! inside `openhuman` needs them.
//!
//! ## Three-layer rule overlay
//!
//! Rules are loaded from three sources in ascending priority order:
//! 1. **Builtin** — vendored JSON files embedded via `include_str!`.
//! 2. **User** — `~/.config/tokenjuice/rules/` (loaded from disk).
//! 3. **Project** — `.tokenjuice/rules/` relative to `cwd` (loaded from disk).
//!
//! When two layers define the same rule `id`, the higher-priority layer wins.
pub mod classify;
pub mod reduce;
pub mod rules;
pub mod text;
pub mod tool_integration;
pub mod types;
pub use reduce::reduce_execution_with_rules;
pub use rules::{load_builtin_rules, load_rules, LoadRuleOptions};
pub use tool_integration::{compact_tool_output, CompactionStats};
pub use types::{CompactResult, ReduceOptions, ToolExecutionInput};
Wire integration.rs into tokenjuice/mod.rs or move the test to crate-root tests/.
Rust does not auto-discover src/openhuman/tokenjuice/tests/integration.rs; the file will never compile or run unless it is explicitly declared. Add #[cfg(test)] mod tests; to src/openhuman/tokenjuice/mod.rs or move the test to the crate-root tests/ directory.
Additionally, the test silently passes when the fixtures directory is missing (lines 43–47 return without error). Either guarantee that the fixtures directory exists, or panic!/assert! when it is absent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tokenjuice/tests/integration.rs` around lines 1 - 111, The
integration test file integration.rs is not discovered under src/... so wire it
into the tokenjuice crate by adding a test module declaration in
src/openhuman/tokenjuice/mod.rs (e.g. #[cfg(test)] mod tests;) so the test file
(integration.rs) is compiled, or alternatively move integration.rs into the
crate-root tests/ directory; also make the test fail loudly when fixtures are
missing by changing fixtures_dir()/all_fixtures_pass to assert or panic if the
fixtures directory does not exist instead of returning silently (update the
fixtures_dir helper or the early-return block in all_fixtures_pass to use
assert!(dir.is_dir(), "...") or panic! with a clear message).
let dir = fixtures_dir();
if !dir.is_dir() {
    eprintln!("fixtures directory not found: {}", dir.display());
    return;
}
Fail hard when the fixtures directory is missing.
This currently returns success, which turns a broken test setup into a green run.
Suggested fix
fn all_fixtures_pass() {
let dir = fixtures_dir();
- if !dir.is_dir() {
- eprintln!("fixtures directory not found: {}", dir.display());
- return;
- }
+ assert!(
+ dir.is_dir(),
+ "fixtures directory not found: {}",
+ dir.display()
+ );

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:
let dir = fixtures_dir();
if !dir.is_dir() {
    eprintln!("fixtures directory not found: {}", dir.display());
    return;
}

Suggested:
fn all_fixtures_pass() {
    let dir = fixtures_dir();
    assert!(
        dir.is_dir(),
        "fixtures directory not found: {}",
        dir.display()
    );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tokenjuice/tests/integration.rs` around lines 43 - 47, The test
silently returns when the fixtures directory is missing; change the current
soft-check around fixtures_dir() (the if !dir.is_dir() { eprintln!(...); return;
}) to fail the test hard by replacing the early return with a panic! or an
assert! that includes the directory path (e.g., assert!(dir.is_dir(), "fixtures
directory not found: {}", dir.display())), ensuring the test fails instead of
passing when fixtures are absent.
"name": "warning",
"pattern": "error|failed|duration|format",
"flags": "i"
The warning counter is over-broad and will false-positive on normal output.
Because the pattern includes duration|format, most successful mediainfo runs will be counted as warnings.
🐛 Proposed fix
- "pattern": "error|failed|duration|format",
+ "pattern": "error|failed|unable|cannot|not found",

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tokenjuice/vendor/rules/media__mediainfo.json` around lines 25
- 27, The "warning" rule's pattern in media__mediainfo.json is too broad (it
currently contains "error|failed|duration|format") and causes false positives;
update the "pattern" field for the rule named "warning" to remove general tokens
like "duration" and "format" and instead match explicit error indicators only
(for example use a word-boundary, case-insensitive regex such as
"\b(error|failed|warn|warning)\b" or similar), ensuring the rule still uses the
existing "flags": "i". This change targets the "warning" rule's pattern value to
narrow matches and avoid counting normal mediainfo output as warnings.
"keepPatterns": [
  "^.+:\\d+[: -].+",
  "^.+:.+",
  "error|warn|binary file|permission denied|no such file",
The diagnostic keep-pattern misses common grep stderr due to casing.
The pattern on line 18 is case-sensitive, so standard errors like Permission denied / No such file... may be excluded.
Suggested fix
- "error|warn|binary file|permission denied|no such file",
+ "(?i)error|warn|binary file|permission denied|no such file",

📝 Committable suggestion
- "error|warn|binary file|permission denied|no such file",
+ "(?i)error|warn|binary file|permission denied|no such file",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tokenjuice/vendor/rules/search__grep.json` at line 18, The
keep-pattern in search__grep.json is case-sensitive and misses common stderr
like "Permission denied" and "No such file"; update the pattern string used (the
entry currently "error|warn|binary file|permission denied|no such file") to be
case-insensitive (e.g. prefix with a case-insensitive flag like (?i) or add
capitalized variants) so matches cover "Permission denied", "No such file",
etc.; modify the pattern value in the JSON (the string literal for the grep
diagnostic keep-pattern) to "(?i)(error|warn|binary file|permission denied|no
such file)" or equivalent.
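The effect of the `(?i)` prefix can be sketched in Python's `re`, which shares this inline-flag syntax with Rust's regex crate; the sample stderr lines are hypothetical:

```python
import re

# Hypothetical grep stderr lines; note the capitalized messages.
lines = [
    "grep: /root/.ssh: Permission denied",
    "grep: missing.txt: No such file or directory",
]

case_sensitive = r"error|warn|binary file|permission denied|no such file"
case_insensitive = r"(?i)error|warn|binary file|permission denied|no such file"

# Without (?i) neither diagnostic survives; with it, both are kept.
kept_before = [l for l in lines if re.search(case_sensitive, l)]
kept_after = [l for l in lines if re.search(case_insensitive, l)]

print(len(kept_before), len(kept_after))  # → 0 2
```

A leading `(?i)` applies to the entire alternation, so every branch of the keep-pattern becomes case-insensitive at once.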
"counters": [
  {
    "name": "process",
    "pattern": "^(?!USER\\s|PID\\s).+\\S.*$"
🧩 Analysis chain
🌐 Web query:
In Rust's regex crate, what is the supported way to exclude header lines (e.g., USER/PID) without lookahead?
💡 Result:
The supported way to exclude header lines like "USER PID" in Rust's regex crate, without using lookahead (which is unsupported), is to use multiline mode with (?m)^ to match line starts, combined with a pattern that specifically matches data lines but not the header format. For example, if data lines start with a number for PID (e.g., \d+ instead of letters like "USER"), use a regex like (?m)^[^U][^ ]+ \d+ or more precisely (?m)^(?!USER\b).* fails since lookahead is unsupported, so match the expected data structure explicitly. The canonical example from official docs parses grep-like output excluding any header by using (?m)^([^:]+):([0-9]+):(.+)$ which matches lines with "path:line:message" format - the header wouldn't match this specific structure. In general: 1. Use Regex::new(r"(?m)^your_data_pattern$").unwrap where your_data_pattern captures the structure of non-header lines (e.g., for ps output: (?m)^\d+\s+\d+ assuming header starts with letters). 2. Iterate with captures_iter over the multi-line haystack; only data lines will match. Code example: use regex::Regex; let re = Regex::new(r"(?m)^(\d+)\s+(\d+)").unwrap; // Matches PID USER lines, excludes "USER PID" let hay = "USER PID\n1234 bob\n5678 alice"; for cap in re.captures_iter(hay) { println!("PID: {}", &cap[1]); } This prints only data PIDs since header doesn't match \d+. If header has unique traits, negate via character class: (?m)^([^\s][^U]|\s{5}) etc., but structural matching is preferred. Lookahead/lookbehind are explicitly unsupported for performance: "lacks ... look-around" (docs.rs/regex). For complex exclusion, post-filter matches or preprocess text (split lines, skip first).
Citations:
- 1: https://docs.rs/regex/%5E1
- 2: https://docs.rs/regex/1/regex/index.html
- 3: https://docs.rs/regex/latest/regex/
- 4: https://docs.rs/regex/1.10.2/regex/
- 5: https://docs.rs/regex/^1
- 6: https://docs.rs/regex/%5E1/
- 7: https://docs.rs/regex/1.12.3
- 8: Does not support (?!...) negative lookahead assertion? rust-lang/regex#127
- 9: https://docs.rs/regex/1.10/regex/index.html
🏁 Script executed:
# Find references to the system__ps.json file
fd -type f -name "*.rs" -o -name "*.py" -o -name "*.js" -o -name "*.json" | xargs rg "system__ps" 2>/dev/null | head -20
Repository: tinyhumansai/openhuman
Length of output: 363
🏁 Script executed:
# Look for how JSON rules are loaded and processed
fd -type f -name "*.rs" | xargs rg "vendor.*rules\|\.json.*regex\|pattern.*regex" -A 2 -B 2 | head -50
Repository: tinyhumansai/openhuman
Length of output: 236
🏁 Script executed:
# Check the directory structure to understand the vendor rules setup
find src/openhuman/tokenjuice/vendor -type f | head -20
Repository: tinyhumansai/openhuman
Length of output: 1192
🏁 Script executed:
# Look for how rules are loaded and parsed
rg "builtin\.rs\|include_str\|system.*ps" src/openhuman/tokenjuice/rules/ -A 5 -B 5 | head -60
Repository: tinyhumansai/openhuman
Length of output: 48
🏁 Script executed:
# Find the code that processes these rules and compiles patterns
rg "pattern.*compile\|Regex::new\|regex.*json" src/openhuman/tokenjuice/ -A 2 -B 2 | head -80
Repository: tinyhumansai/openhuman
Length of output: 48
🏁 Script executed:
# Check dependencies to see which regex library is used
cat src/Cargo.toml 2>/dev/null || cat Cargo.toml 2>/dev/null | grep -A 20 "\[dependencies\]"
Repository: tinyhumansai/openhuman
Length of output: 659
🏁 Script executed:
# Look at the builtin.rs file to see how rules are loaded
cat src/openhuman/tokenjuice/rules/builtin.rs | head -100
Repository: tinyhumansai/openhuman
Length of output: 3022
🏁 Script executed:
# Search for regex dependencies in full Cargo.toml
cat Cargo.toml | grep -i regex -A 2 -B 2
Repository: tinyhumansai/openhuman
Length of output: 163
🏁 Script executed:
# Look for pattern compilation logic
rg "pattern\|regex\|compile" src/openhuman/tokenjuice/ -t rs | head -50
Repository: tinyhumansai/openhuman
Length of output: 92
🏁 Script executed:
# Look for where patterns from JSON are processed/compiled into regex
rg "pattern\|Regex" src/openhuman/tokenjuice/ --type rust -A 3 -B 3 | head -100
Repository: tinyhumansai/openhuman
Length of output: 48
🏁 Script executed:
# Check the actual system__ps.json file content
cat src/openhuman/tokenjuice/vendor/rules/system__ps.json
Repository: tinyhumansai/openhuman
Length of output: 594
🏁 Script executed:
# Search for rule loading and pattern compilation logic
rg "load.*rule\|compile.*pattern" src/openhuman/tokenjuice/ --type rust -A 5 -B 2 | head -150
Repository: tinyhumansai/openhuman
Length of output: 48
🏁 Script executed:
# Search for where patterns from the JSON rules are actually compiled into Regex objects
rg "counters\|pattern" src/openhuman/tokenjuice/ --type rust -B 5 -A 10 | head -200
Repository: tinyhumansai/openhuman
Length of output: 48
🏁 Script executed:
# Look for the rule loading and compilation logic more broadly
find src/openhuman/tokenjuice -name "*.rs" -type f | xargs grep -l "counter\|Counter" | head -10
Repository: tinyhumansai/openhuman
Length of output: 256
🏁 Script executed:
# Check for error handling or validation of patterns
rg "Regex::new\|regex\|compile" src/openhuman/tokenjuice/ --type rust -B 3 -A 3 | head -150
Repository: tinyhumansai/openhuman
Length of output: 48
🏁 Script executed:
# Check the compiler.rs file since it likely handles pattern compilation
cat src/openhuman/tokenjuice/rules/compiler.rs | head -150
Length of output: 4846
🏁 Script executed:
# Look for counter-related logic in reduce.rs
rg "counter\|Counter" src/openhuman/tokenjuice/reduce.rs -B 3 -A 5
Length of output: 48
🏁 Script executed:
# Check types.rs to understand the data structures
rg "counter\|Counter" src/openhuman/tokenjuice/types.rs -B 2 -A 5
Length of output: 48
🏁 Script executed:
# Check if there's any validation or error reporting to the user
rg "invalid regex\|rule compiler" src/openhuman/tokenjuice/ --type rust -B 2 -A 2
Length of output: 48
🏁 Script executed:
# Look for logging output or user-facing error messages
rg "log::\|warn\|error" src/openhuman/tokenjuice/rules/compiler.rs -B 1 -A 1
Length of output: 48
🏁 Script executed:
# Verify the exact error handling behavior by looking at more context
cat src/openhuman/tokenjuice/rules/compiler.rs | sed -n '70,130p'
Length of output: 1745
Counter pattern uses unsupported lookahead; it will fail silently without warning.

The line-26 pattern `^(?!USER\\s|PID\\s).+\\S.*$` uses negative lookahead (`(?!...)`), which Rust's regex crate does not support. The pattern will fail to compile at runtime and be silently dropped with only a DEBUG-level log entry. The counter will not function, but no visible error is shown to the user or developer.

Replace it with a lookahead-free pattern compatible with Rust regex, for example:

- `^\s*\d+\s` to match the expected data format explicitly (for ps formats where the PID column leads), or
- a coarser character-class guard such as `^[^UP\s].*$` (first character is neither whitespace nor the `U`/`P` that start the header; note this would also skip rows for users like `postgres`, so the explicit form above is safer).
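The same check can also be expressed in code rather than in a regex at all. A minimal plain-Rust sketch (the function name and header prefixes are illustrative, not the module's actual API): treat a line as a `ps` data row when it is non-empty and is not the header line.

```rust
// Lookahead-free alternative: classify ps lines with plain string tests
// instead of a negative-lookahead regex. Illustrative sketch only; the
// real rule lives in the vendored JSON, not in Rust code.
fn is_ps_data_row(line: &str) -> bool {
    let t = line.trim_end();
    !t.is_empty() && !t.starts_with("USER ") && !t.starts_with("PID ")
}

fn main() {
    assert!(is_ps_data_row("root         1  0.0  0.1 /sbin/init"));
    assert!(!is_ps_data_row("USER       PID %CPU %MEM COMMAND"));
    assert!(!is_ps_data_row("   "));
}
```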
```json
"counters": [
  {
    "name": "failed",
    "pattern": "fail",
    "flags": "i"
  },
  {
    "name": "passed",
    "pattern": "pass",
    "flags": "i"
  }
]
```
Counter patterns are too broad and will produce false positives.

Both patterns lack word boundaries and will match unintended strings:

- `"fail"` (line 47) will match "unfailing", "failsafe", "failover"
- `"pass"` (line 52) will match "password", "passenger", "compass", "passage", "bypass", "trespass"

The `"pass"` pattern is particularly problematic, as test output commonly references credentials or navigation terms.
🛡️ Proposed fix using word boundaries

```diff
 "counters": [
   {
     "name": "failed",
-    "pattern": "fail",
+    "pattern": "\\bfail(ed|ing|ure)?\\b",
     "flags": "i"
   },
   {
     "name": "passed",
-    "pattern": "pass",
+    "pattern": "\\bpass(ed|ing|es)?\\b",
     "flags": "i"
   }
 ]
```

Note: a bare leading `\b` (e.g. `\bpass`) is not sufficient here, because "password" and "passenger" begin at a word boundary just like "passed" does. Enumerating the expected suffixes and closing with a trailing `\b` matches "fail"/"failed"/"failing"/"failure" and "pass"/"passed"/"passing" while excluding "unfailing", "failsafe", "password", "compass", and similar.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
"counters": [
  {
    "name": "failed",
    "pattern": "\\bfail(ed|ing|ure)?\\b",
    "flags": "i"
  },
  {
    "name": "passed",
    "pattern": "\\bpass(ed|ing|es)?\\b",
    "flags": "i"
  }
]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tokenjuice/vendor/rules/tests__bun-test.json` around lines 44 -
55, The "failed" and "passed" counter regexes are too broad and match
substrings; update the patterns for the counters named "failed" and "passed" to
use word boundaries so they only match whole test-result words (e.g., use
patterns that anchor on word boundaries and allow common suffixes like
"failed"/"failure" or "passed"/"passing" rather than raw "fail" and "pass");
modify the "pattern" values for the counters with name "failed" and "passed" to
include appropriate word-boundary anchors (and optional suffix handling) to
prevent matching words like "unfailing", "password", or "compass".
```json
{
  "name": "passed test",
  "pattern": "ok"
}
```
Pattern "ok" is too broad and will produce false positives.
The pattern "ok" without word boundaries will match any occurrence of "ok" within larger words like "looking", "token", "booking", "broken", etc., severely inflating the passed test count.
🛡️ Proposed fix using word boundaries

```diff
 {
   "name": "passed test",
-  "pattern": "ok"
+  "pattern": "\\bok\\b"
 }
```

Alternatively, for more specificity with Cargo's output format:

```diff
 {
   "name": "passed test",
-  "pattern": "ok"
+  "pattern": "test .* \\.\\.\\.\\s+ok|^\\s*ok\\b"
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```json
{
  "name": "passed test",
  "pattern": "\\bok\\b"
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/openhuman/tokenjuice/vendor/rules/tests__cargo-test.json` around lines 36
- 39, The "passed test" rule uses the overly broad pattern "ok" which triggers
on substrings (e.g., "looking", "token"); update the rule that has "name":
"passed test" and its "pattern" to use a regex that matches "ok" as a whole word
or matches the full Cargo test success line (e.g., use word-boundary markers
around ok or anchor the pattern to the typical "test ... ok" output) so only
genuine test-passed lines are matched.
- classify: derive command from argv join when input.command is unset, so commandIncludes* rules still match argv-only callers
- rules/loader: log read_dir / file_type / read_to_string failures at debug level so permission or filesystem issues are observable rather than silently skipped
- text/ansi: add trace log at strip_ansi entry/exit with lengths (no text content) per the project debug-logging rules
- tests: remove orphan src/openhuman/tokenjuice/tests/integration.rs, which was never wired into any module declaration; the real fixture-parity runner lives at tests/tokenjuice_integration.rs and asserts hard when the fixtures directory is missing

Vendored-rule issues (docker-ps / kubectl-describe / git/branch / grep casing / counter-pattern overbreadth / etc.) come from upstream and are left as-is; this module is a straight port of the upstream rule set and should not fork from it in v1.
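The argv-join fallback in the first bullet can be sketched as follows. The struct and field names here are simplified stand-ins for the module's real types, not its actual API: when a caller supplies only argv, a command string is derived by joining it so that commandIncludes rules still have something to match.

```rust
// Illustrative stand-in for the classifier's execution input.
struct ExecInput {
    command: Option<String>,
    argv: Vec<String>,
}

// Prefer the explicit command; otherwise fall back to joining argv.
fn effective_command(input: &ExecInput) -> Option<String> {
    input
        .command
        .clone()
        .or_else(|| (!input.argv.is_empty()).then(|| input.argv.join(" ")))
}

fn main() {
    let argv_only = ExecInput {
        command: None,
        argv: vec!["git".into(), "status".into()],
    };
    assert_eq!(effective_command(&argv_only).as_deref(), Some("git status"));
}
```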
🧹 Nitpick comments (2)
src/openhuman/tokenjuice/tool_integration.rs (1)
201-202: Whitespace-splitting doesn't preserve quoted arguments. Commands like `git commit -m "hello world"` will split to `["git", "commit", "-m", "\"hello", "world\""]`. This is acceptable for the current use case (classifier substring matching), but worth noting if future rules need exact argument parsing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/openhuman/tokenjuice/tool_integration.rs` around lines 201 - 202: current splitting using cmd.split_whitespace() loses quoted-argument boundaries (e.g., git commit -m "hello world"); replace it with a shell-aware splitter (for example the shell_words or shlex crate) to parse cmd into argv while handling parse errors, i.e., call the crate's split function on cmd and convert the returned Vec<String> into the same return structure, propagating or logging parse errors appropriately instead of split_whitespace.

src/openhuman/agent/harness/tool_loop.rs (1)
618-625: Consider logging tokenjuice stats on the error path for observability parity. The success path logs compaction stats (lines 567-573), but the error path silently discards them with `_`. For consistent tracing of compaction behavior across both paths, consider logging here as well.

💡 Optional: Add debug logging for error path compaction

```diff
 let scrubbed = scrub_credentials(&output);
-let (compacted, _) = crate::openhuman::tokenjuice::compact_tool_output(
+let (compacted, tj_stats) = crate::openhuman::tokenjuice::compact_tool_output(
     &call.name,
     Some(&call.arguments),
     &scrubbed,
     Some(1),
 );
+if tj_stats.applied {
+    log::debug!(
+        "[agent_loop] tokenjuice applied (error path) tool={} rule={} {}->{} bytes",
+        call.name,
+        tj_stats.rule_id,
+        tj_stats.original_bytes,
+        tj_stats.compacted_bytes
+    );
+}
 (format!("Error: {compacted}"), false)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/openhuman/agent/harness/tool_loop.rs` around lines 618 - 625: the error path currently discards compaction stats from compact_tool_output; change the tuple unpack to capture the stats (e.g., let (compacted, stats) = crate::openhuman::tokenjuice::compact_tool_output(...)) and emit a debug/log line with those stats before returning (use tracing::debug! or the repository's logger) so compaction metrics are recorded on errors just like the success path; keep scrub_credentials(&output) and the returned (format!("Error: {compacted}"), false) unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src/openhuman/agent/harness/tool_loop.rs`:
- Around line 618-625: The error path currently discards compaction stats from
compact_tool_output; change the tuple unpack to capture the stats (e.g., let
(compacted, stats) = crate::openhuman::tokenjuice::compact_tool_output(...)) and
emit a debug/log line with those stats before returning (use tracing::debug! or
the repository's logger) so compaction metrics are recorded on errors just like
the success path; keep scrub_credentials(&output) and the returned
(format!("Error: {compacted}"), false) unchanged.
In `@src/openhuman/tokenjuice/tool_integration.rs`:
- Around line 201-202: Current splitting using cmd.split_whitespace() loses
quoted-argument boundaries (e.g., git commit -m "hello world"); replace it with
a shell-aware splitter (for example use the shell_words or shlex crate) to parse
cmd into argv while handling parse errors, i.e., call the crate's split function
on cmd (referencing the cmd variable and resulting argv) and convert the
returned Vec<String> into the same return structure, propagating or logging
parse errors appropriately instead of split_whitespace.
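As a sketch of what a shell-aware splitter buys over split_whitespace, here is a minimal quote-aware tokenizer in plain Rust. The real fix would likely use the shell_words or shlex crate, which also handle backslash escapes; this illustrative stand-in does not.

```rust
// Minimal quote-aware splitter: keeps "..." and '...' spans as single
// tokens. No escape handling; a sketch, not a shell-grammar parser.
fn split_quoted(cmd: &str) -> Vec<String> {
    let mut out = Vec::new();
    let mut cur = String::new();
    let mut quote: Option<char> = None;
    let mut in_token = false;
    for c in cmd.chars() {
        match quote {
            Some(q) if c == q => quote = None, // closing quote
            Some(_) => cur.push(c),           // inside quotes: keep everything
            None if c == '"' || c == '\'' => {
                quote = Some(c);
                in_token = true; // so "" still yields an empty token
            }
            None if c.is_whitespace() => {
                if in_token {
                    out.push(std::mem::take(&mut cur));
                    in_token = false;
                }
            }
            None => {
                cur.push(c);
                in_token = true;
            }
        }
    }
    if in_token {
        out.push(cur);
    }
    out
}

fn main() {
    assert_eq!(
        split_quoted(r#"git commit -m "hello world""#),
        vec!["git", "commit", "-m", "hello world"]
    );
}
```

Unlike split_whitespace, the quoted message survives as one argv element, which is what the classifier's argvIncludes matching would want.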
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3f0cdf7f-5c65-43a3-b7e5-49f8c94971c1
📒 Files selected for processing (3)
- src/openhuman/agent/harness/tool_loop.rs
- src/openhuman/tokenjuice/mod.rs
- src/openhuman/tokenjuice/tool_integration.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- src/openhuman/tokenjuice/mod.rs
Summary
- New `openhuman::tokenjuice` module for compacting verbose tool/command output before it hits the LLM context.
- Entry point: `crate::openhuman::tokenjuice::reduce_execution(...)`.
- Rule set vendored from upstream (`src/rules/*` except the `openclaw/` subdir), embedded via `include_str!`. Three-layer overlay: builtin → user (`~/.config/tokenjuice/rules/`) → project (`.tokenjuice/rules/`).

What's included
- Types (`types.rs`): `Rule`, `ToolExecutionInput`, `ReduceOptions`, `CompactResult` with `#[serde(rename_all = "camelCase")]` so upstream rule JSON loads unmodified.
- Reduction pipeline (`reduce.rs`): normalize → classify → apply transforms (stripAnsi, prettyPrintJson, dedupeAdjacent, trimEmptyEdges, skip/keep filters) → head/tail summarise → inline-vs-raw choice → grapheme-safe middle-truncate clamp. Failure-preserving behaviour on non-zero exit codes.
- Rules (`rules/`): regex pre-compilation with graceful invalid-pattern handling (diagnostic, not panic); three-layer precedence with id-based dedupe.
- Classification (`classify.rs`): matches on `toolNames`, `argv0`, `argvIncludes`, `commandIncludes`.
- Text utilities (`text/`): ANSI/OSC/CSI strip, dedupe adjacent, trim empty edges, grapheme-aware width via `unicode-segmentation` + `unicode-width`.
- `git status` output reduced to compact notation (`M:`, `A:`, `D:`, `R:`, `??`).

New dependencies
- `unicode-segmentation = "1"`
- `unicode-width = "0.2"`

Deferred (not v1)

- `wrap` subprocess runner

Known drift
One fixture (`git_status_modified`) documents a minor behavioural drift: indented `(use "git …")` hint lines survive in the Rust port because upstream's skip patterns (`^\(use "git .+"\)$`) don't match indented lines; upstream handles them via a post-classify null-return path that runs after our skip-pattern phase. Documented, not silently skipped.

Test plan

- `cargo test tokenjuice`: 169/169 unit tests pass
- `cargo test --test tokenjuice_integration`: 1/1 integration (fixture-parity) test passes
- `cargo check` clean
- `cargo fmt` applied
- `cargo clippy -D warnings` clean on the tokenjuice module
- Run a `git status` / `npm install` / `cargo test` blob through `reduce_execution` and eyeball the compact output

Summary by CodeRabbit
Release Notes