
Redact sensitive keys from logs across the app#27

Merged
MikeGarde merged 5 commits into main from fix/25-redact-keys-from-logs on Apr 9, 2026

Conversation


@MikeGarde MikeGarde commented Apr 9, 2026

Redact sensitive keys from logs across the app

Overview

This branch implements end-to-end redaction of sensitive keys in logs for all LLM interactions (Ollama and OpenAI), preventing secret leakage in runtime logs. It adds a general redaction capability via the commitbot_macros dependency, improves observability with token-usage metrics, and refactors CLI/config handling to simplify branch awareness. Live LLM tests are disabled by default so the suite does not require real LLM access. Supporting groundwork includes better error propagation and safer log output throughout the codebase.

Key commits to note:

  • d0a0596: Redact sensitive keys from logs across the app
  • df091d7: Refactor CLI and config handling, improve error propagation, and enhance test coverage
  • c86f230: Unify usage reporting and resilience in logging

Changes

  • Redacted keys in logs across LLM clients (Ollama and OpenAI) to prevent sensitive data leakage.
  • Added commitbot_macros dependency to support log key redaction.
  • Token usage tracking and logging improvements in LLM clients.
  • CLI/config handling refactor with an optional branch field derived from the current git branch.
  • General improvements to error handling and propagation throughout the codebase.
  • Logging improvements to surface warnings by default and improve output formatting.
  • Tests updated to align with redaction and error handling changes; live LLM tests marked as ignored by default.
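The `commitbot_macros` dependency supplies a `SensitiveFields` derive that builds a list of field names marked `#[sensitive]` on `Config`. As a rough illustration of the pattern (the trait shape, method name, and `Config` fields below are assumptions, not the crate's actual API), a logger can consult that generated list before printing a value:

```rust
// Hypothetical sketch of what the `SensitiveFields` derive enables; the
// trait, method, and field names here are illustrative assumptions.

trait SensitiveFields {
    /// Names of fields whose values must never appear in logs.
    fn sensitive_fields() -> &'static [&'static str];
}

#[allow(dead_code)]
struct Config {
    model: String,
    api_key: String, // would carry #[sensitive] under the derive
}

// Roughly what the derive might generate for `#[sensitive] api_key`:
impl SensitiveFields for Config {
    fn sensitive_fields() -> &'static [&'static str] {
        &["api_key"]
    }
}

/// Redact a value before logging if its field name is marked sensitive.
fn display_value<T: SensitiveFields>(field: &str, value: &str) -> String {
    if T::sensitive_fields().contains(&field) {
        "<redacted>".to_string()
    } else {
        value.to_string()
    }
}

fn main() {
    println!("model={}", display_value::<Config>("model", "gpt-4o"));
    println!("api_key={}", display_value::<Config>("api_key", "sk-secret"));
}
```

Generating the name list at compile time keeps the redaction decision next to the field declaration, so new secrets added to `Config` only need the attribute rather than a change to the logging code.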

Testing / Validation

  • Run the full test suite (cargo test) and verify that:
    • Logs produced during LLM interactions no longer contain sensitive keys.
    • Token usage metrics (prompt and completion) are captured and surfaced after runs.
    • CLI branch derivation works when the branch field is omitted.
    • Config loading errors propagate clearly through the new error handling pathway.
  • Validate that previously failing or flaky tests related to live LLMs are skipped due to the ignored live tests setting.
  • Manual verification: trigger a sample LLM interaction and inspect logs for redacted keys.
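The log-inspection check above can also be captured as a unit assertion. This is a minimal sketch, not the project's actual code: the `redact` helper and the secret value are assumptions standing in for whatever the redaction layer produces.

```rust
// Minimal illustration of the "logs no longer contain sensitive keys"
// check; `redact` is an assumed helper, not the project's real function.

fn redact(line: &str, secret: &str) -> String {
    line.replace(secret, "<redacted>")
}

fn main() {
    let secret = "sk-live-12345";
    let log = format!("calling OpenAI with api_key={secret}");
    let redacted = redact(&log, secret);
    // The secret must not survive into the emitted log line.
    assert!(!redacted.contains(secret));
    println!("{redacted}");
}
```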

Notes / Risks

  • Redaction may need refinement to keep coverage consistent as new keys or services are introduced.
  • Log redaction adds some processing overhead; the performance impact should be minimal but is worth monitoring.
  • Some internal API surfaces were adjusted (e.g., error propagation and CLI/config references); downstream consumers should re-run tests to confirm integration.

Commits in this PR:

* Add `commitbot_macros` dependency for redacting keys
* Improve log security by removing sensitive field keys
* Redact keys from logs in Ollama and OpenAI clients
* Track token usage and logging in LLM clients

Copilot AI left a comment


Pull request overview

This PR aims to reduce the risk of credential leakage in verbose logs by introducing a “sensitive fields” mechanism for config logging, and it adds token-usage tracking/logging to the OpenAI and Ollama LLM clients.

Changes:

  • Add a new commitbot_macros proc-macro crate and derive SensitiveFields on Config with #[sensitive] field annotations.
  • Redact sensitive config values in ConfigResolver::log_decision and adjust LLM prompt/token-usage logging behavior.
  • Update Taskfile install flows (including brew install/uninstall helpers) and tweak the run:simple verbosity flag.

Reviewed changes

Copilot reviewed 7 out of 8 changed files in this pull request and generated 8 comments.

Summary per file:

  • Taskfile.yaml — Adjusts run/install tasks; adds install:brew helper.
  • src/config.rs — Adds sensitive-field annotations/derive and redacts sensitive values in config decision logs.
  • src/llm/openai.rs — Aggregates token usage across calls and adjusts prompt log levels.
  • src/llm/ollama.rs — Attempts to parse/aggregate token usage and logs aggregate usage after operations.
  • commitbot-macros/src/lib.rs — Introduces SensitiveFields derive macro to generate the sensitive field-name list.
  • commitbot-macros/Cargo.toml — Defines the new proc-macro crate.
  • Cargo.toml — Adds commitbot_macros dependency.
  • Cargo.lock — Locks the new dependency.


Comment threads: Taskfile.yaml (outdated), Cargo.toml (outdated), commitbot-macros/src/lib.rs, src/llm/openai.rs (three threads, one outdated), src/llm/ollama.rs (two threads).
MikeGarde and others added 3 commits April 8, 2026 22:31
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
- Align `commitbot_macros` in `Cargo.toml` to 0.1.0 to match workspace API.
- Remove the no-op else branch and its limitation comment in `commitbot-macros/src/lib.rs`.
- Introduce optional `LlmClient` API to report and reset aggregated token usage with a default no-op implementation in `src/llm/mod.rs`.
- In `src/llm/ollama.rs`, recover from poisoned usage mutex and centralize usage reset via `take_and_reset_usage`.
- In `src/llm/openai.rs`, recover from poisoned mutex, stop per-call usage logging, and implement `take_and_reset_usage` returning activity, with a warning when poisoned.
- Surface warnings by default in `src/logging.rs` to boost visibility.
- In `src/main.rs`, improve output formatting by avoiding extra newline after LLM messages, respect trailing newlines in streaming mode, surface token usage metrics after runs, and ensure consistent PR message output.
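The poisoned-mutex recovery and centralized `take_and_reset_usage` described in the commit message above can be sketched as follows. This is an illustration of the pattern under stated assumptions, not the project's actual implementation: `TokenUsage`, its fields, and the `Client` struct are invented for the example.

```rust
// Sketch of recovering a poisoned usage mutex and resetting aggregated
// token usage in one place. All names here are illustrative assumptions.
use std::sync::Mutex;

#[derive(Default, Clone, Copy, Debug, PartialEq)]
struct TokenUsage {
    prompt: u64,
    completion: u64,
}

struct Client {
    usage: Mutex<TokenUsage>,
}

impl Client {
    fn add_usage(&self, prompt: u64, completion: u64) {
        // If a panicking thread poisoned the lock, recover the inner data
        // instead of propagating the panic; usage counters stay usable.
        let mut guard = self.usage.lock().unwrap_or_else(|poisoned| {
            eprintln!("warning: usage mutex poisoned; recovering");
            poisoned.into_inner()
        });
        guard.prompt += prompt;
        guard.completion += completion;
    }

    /// Return the aggregated usage and reset the counter atomically,
    /// so every caller goes through one reset path.
    fn take_and_reset_usage(&self) -> TokenUsage {
        let mut guard = self.usage.lock().unwrap_or_else(|p| p.into_inner());
        std::mem::take(&mut *guard)
    }
}

fn main() {
    let client = Client { usage: Mutex::new(TokenUsage::default()) };
    client.add_usage(3199, 7202);
    let u = client.take_and_reset_usage();
    println!("prompt={} completion={} total={}", u.prompt, u.completion, u.prompt + u.completion);
    // After the take, the counter is back to zero.
    assert_eq!(client.take_and_reset_usage(), TokenUsage::default());
}
```

A default no-op `take_and_reset_usage` on the `LlmClient` trait, as the commit describes, lets clients that do not track usage skip implementing it entirely.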

Token usage: prompt=3199, completion=7202, total=10401
@MikeGarde MikeGarde linked an issue Apr 9, 2026 that may be closed by this pull request
@MikeGarde MikeGarde requested a review from Copilot April 9, 2026 04:13

Copilot AI left a comment


Pull request overview

Copilot reviewed 10 out of 11 changed files in this pull request and generated 6 comments.



Comment threads: src/main.rs, src/llm/openai.rs, src/llm/ollama.rs, Cargo.toml, commitbot-macros/Cargo.toml, src/config.rs.
@MikeGarde MikeGarde merged commit 7e8353a into main Apr 9, 2026
6 checks passed
@MikeGarde MikeGarde deleted the fix/25-redact-keys-from-logs branch April 9, 2026 04:23


Development

Successfully merging this pull request may close these issues:

  • Redact keys from logs

2 participants