Redact sensitive keys from logs across the app #27
Merged
Conversation
- Add `commitbot_macros` dependency for redacting keys
- Improve log security by removing sensitive field keys
- Redact keys from logs in Ollama and OpenAI clients
- Track token usage and logging in LLM clients
Contributor
Pull request overview
This PR aims to reduce the risk of credential leakage in verbose logs by introducing a “sensitive fields” mechanism for config logging, and it adds token-usage tracking/logging to the OpenAI and Ollama LLM clients.
Changes:
- Add a new `commitbot_macros` proc-macro crate and derive `SensitiveFields` on `Config` with `#[sensitive]` field annotations.
- Redact sensitive config values in `ConfigResolver::log_decision` and adjust LLM prompt/token-usage logging behavior.
- Update Taskfile install flows (including brew install/uninstall helpers) and tweak the `run:simple` verbosity flag.
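As a rough illustration of the redaction step (a hypothetical sketch: in the PR, the field list is generated by the `SensitiveFields` derive and the `#[sensitive]` annotations on `Config`, and the field names below are invented for the example):

```rust
use std::collections::HashSet;

// Hypothetical stand-in for the list the `SensitiveFields` derive
// would generate from `#[sensitive]` annotations on `Config`.
// The field names here are assumptions, not taken from the PR.
fn sensitive_fields() -> HashSet<&'static str> {
    ["openai_api_key", "ollama_token"].into_iter().collect()
}

// Redact a config value before it reaches a log line, in the spirit
// of what `ConfigResolver::log_decision` is described as doing.
fn redact(field: &str, value: &str) -> String {
    if sensitive_fields().contains(field) {
        "[redacted]".to_string()
    } else {
        value.to_string()
    }
}

fn main() {
    assert_eq!(redact("openai_api_key", "sk-secret"), "[redacted]");
    assert_eq!(redact("model", "gpt-4o"), "gpt-4o");
    println!("ok");
}
```

The point of generating the list with a derive macro, rather than maintaining it by hand, is that adding a new secret-bearing field to `Config` only requires tagging it with `#[sensitive]`.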
Reviewed changes
Copilot reviewed 7 out of 8 changed files in this pull request and generated 8 comments.
| File | Description |
|---|---|
| Taskfile.yaml | Adjusts run/install tasks; adds install:brew helper. |
| src/config.rs | Adds sensitive-field annotations/derive and redacts sensitive values in config decision logs. |
| src/llm/openai.rs | Aggregates token usage across calls and adjusts prompt log levels. |
| src/llm/ollama.rs | Attempts to parse/aggregate token usage and logs aggregate usage after operations. |
| commitbot-macros/src/lib.rs | Introduces SensitiveFields derive macro to generate sensitive field-name list. |
| commitbot-macros/Cargo.toml | Defines the new proc-macro crate. |
| Cargo.toml | Adds commitbot_macros dependency. |
| Cargo.lock | Locks the new dependency. |
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
- Align `commitbot_macros` in `Cargo.toml` to 0.1.0 to match workspace API.
- Remove the no-op else branch and its limitation comment in `commitbot-macros/src/lib.rs`.
- Introduce optional `LlmClient` API to report and reset aggregated token usage with a default no-op implementation in `src/llm/mod.rs`.
- In `src/llm/ollama.rs`, recover from poisoned usage mutex and centralize usage reset via `take_and_reset_usage`.
- In `src/llm/openai.rs`, recover from poisoned mutex, stop per-call usage logging, and implement `take_and_reset_usage` returning activity, with a warning when poisoned.
- Surface warnings by default in `src/logging.rs` to boost visibility.
- In `src/main.rs`, improve output formatting by avoiding extra newline after LLM messages, respect trailing newlines in streaming mode, surface token usage metrics after runs, and ensure consistent PR message output.

Token usage: prompt=3199, completion=7202, total=10401
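The poisoned-mutex recovery described for the Ollama and OpenAI clients can be sketched as follows (a hypothetical sketch: the real `TokenUsage` shape and `take_and_reset_usage` signature in the PR may differ):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Hypothetical usage counter; the real clients track per-call token counts.
#[derive(Default, Clone, Copy, Debug, PartialEq)]
struct TokenUsage {
    prompt: u64,
    completion: u64,
}

// Recover from a poisoned mutex instead of panicking, emit a warning,
// then hand back the accumulated usage and reset it — mirroring the
// described `take_and_reset_usage` behavior.
fn take_and_reset_usage(usage: &Mutex<TokenUsage>) -> TokenUsage {
    let mut guard = usage.lock().unwrap_or_else(|poisoned| {
        eprintln!("warning: usage mutex poisoned; recovering");
        poisoned.into_inner()
    });
    std::mem::take(&mut *guard)
}

fn main() {
    let usage = Arc::new(Mutex::new(TokenUsage { prompt: 3199, completion: 7202 }));

    // Poison the mutex by panicking on another thread while holding the lock.
    let u = Arc::clone(&usage);
    let _ = thread::spawn(move || {
        let _guard = u.lock().unwrap();
        panic!("simulated panic while holding the lock");
    })
    .join();

    // Recovery still yields the accumulated counts, then resets them.
    let taken = take_and_reset_usage(&usage);
    assert_eq!(taken, TokenUsage { prompt: 3199, completion: 7202 });
    assert_eq!(take_and_reset_usage(&usage), TokenUsage::default());
    println!("recovered and reset: {:?}", taken);
}
```

Recovering via `PoisonError::into_inner` trades strict poisoning semantics for availability: a panic elsewhere can no longer take down usage reporting, at the cost of possibly reading a partially updated counter.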
Closed
Contributor
Pull request overview
Copilot reviewed 10 out of 11 changed files in this pull request and generated 6 comments.
Redact sensitive keys from logs across the app
Overview
This branch implements end-to-end redaction of sensitive keys in logs for all LLM interactions (Ollama and OpenAI), preventing secret leakage in runtime logs. It adds a general redaction capability via the commitbot_macros dependency, enhances observability with token usage metrics, and improves CLI/config handling to simplify branch awareness. Live LLM tests are disabled by default to avoid requiring real LLM access. Key groundwork includes refactoring CLI/config handling, better error propagation, and preparing the codebase for safer log output. Notable commits fueling this work include d0a0596 (redaction implementation) and df091d7 (CLI/config improvements and test coverage).
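The token-usage metrics mentioned above aggregate per-call counts into a run total, which the follow-up commit reports after runs. A minimal sketch of that aggregation (hypothetical struct and method names; the numbers reuse the totals reported in this PR's own log line):

```rust
// Hypothetical aggregation of per-call token usage into a run total,
// matching the kind of summary logged after runs, e.g.
// "Token usage: prompt=3199, completion=7202, total=10401".
#[derive(Default, Debug)]
struct TokenUsage {
    prompt: u64,
    completion: u64,
}

impl TokenUsage {
    // Accumulate one LLM call's counts into the running total.
    fn add(&mut self, prompt: u64, completion: u64) {
        self.prompt += prompt;
        self.completion += completion;
    }

    fn total(&self) -> u64 {
        self.prompt + self.completion
    }
}

fn main() {
    let mut usage = TokenUsage::default();
    // Two hypothetical LLM calls.
    usage.add(1200, 3000);
    usage.add(1999, 4202);
    assert_eq!(usage.total(), 10401);
    println!(
        "Token usage: prompt={}, completion={}, total={}",
        usage.prompt,
        usage.completion,
        usage.total()
    );
}
```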
Commits in this PR:
- df091d7 — Refactor CLI and config handling, improve error propagation, and enhance test coverage
- d0a0596 — Redact sensitive keys from logs across the app
- 6eb4d91 — Update src/cli_args.rs
- a926b57 — Merge pull request #26 from MikeGarde/fix/tests (Refactor CLI and config handling, improve error propagation, and enha…)
- 37d208b — Merge remote-tracking branch 'origin/main' into fix/25-redact-keys-from-logs
- bce6bfa — Update Taskfile.yaml
- f88123c — Update Cargo.toml
- c86f230 — Unify usage reporting and resilience in logging