
feat(session-sync): show RTK token reduction metrics #268

Merged

skulidropek merged 3 commits into ProverCoderAI:main from konard:issue-266-1722651e92ba on May 12, 2026

Conversation


@konard konard commented May 11, 2026

Fixes #266

Summary

  • Adds RTK token reduction metrics to the session backup dry-run output, the backup README, and PR comments.
  • Keeps the metric in the session-sync functional core as a deterministic estimate: transcript bytes are converted to approximate tokens, then compared against the retained RTK budget.
  • Adds a regression test showing a concrete reduction example: 12 KB of transcript payload is reported as ~3000 -> ~512 tokens (-~2488, 83%).

Mathematical Guarantees

  • Invariant: retainedTokens(files) <= sourceTokens(files).
  • Invariant: reducedTokens(files) = sourceTokens(files) - retainedTokens(files).
  • Boundary: no GitHub/API side effects are introduced into the metric calculation; shell code only renders and transports the computed summary.
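The invariants above can be sketched as a pure function. This is a hypothetical illustration, not the package's actual code: the identifiers (`estimateTokenReduction`, `BYTES_PER_TOKEN`) are made up for this sketch, and the ~4 bytes-per-token ratio is an assumption chosen to be consistent with the 12 KB → ~3000 token example in the summary.

```typescript
// Hypothetical sketch of the deterministic metric core described above.
// Assumption: transcript bytes convert to approximate tokens at roughly
// 4 bytes per token; the real conversion in session-sync may differ.
const BYTES_PER_TOKEN = 4;

interface TokenReduction {
  sourceTokens: number;
  retainedTokens: number;
  reducedTokens: number;
  reductionPercent: number;
}

function estimateTokenReduction(
  transcriptBytes: number,
  retainedTokenBudget: number,
): TokenReduction {
  const sourceTokens = Math.round(transcriptBytes / BYTES_PER_TOKEN);
  // Invariant: retainedTokens(files) <= sourceTokens(files).
  const retainedTokens = Math.min(retainedTokenBudget, sourceTokens);
  // Invariant: reducedTokens(files) = sourceTokens(files) - retainedTokens(files).
  const reducedTokens = sourceTokens - retainedTokens;
  const reductionPercent =
    sourceTokens === 0 ? 0 : Math.round((reducedTokens / sourceTokens) * 100);
  return { sourceTokens, retainedTokens, reducedTokens, reductionPercent };
}

// Example close to the regression test: 12 KB of transcript, 512-token budget.
console.log(estimateTokenReduction(12 * 1024, 512));
```

Because the function only maps numbers to numbers, the "no side effects" boundary holds by construction; rendering and transport stay in the shell.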

Verification

  • bun run --filter @prover-coder-ai/docker-git-session-sync test
  • bun run --filter @prover-coder-ai/docker-git-session-sync typecheck
  • bun run check
  • bun run build
  • bun test packages/docker-git-session-sync/tests/session-files.test.ts -t "RTK token reduction summary"
  • bun run test

Demo Output

The focused test verifies the visible example requested in the issue:

RTK token reduction estimate: ~3000 -> ~512 tokens (-~2488, 83%)
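For illustration, a minimal formatter reproduces the line above from the rounded figures. The function name is hypothetical; the package's real renderer may differ.

```typescript
// Hypothetical formatter for the summary line shown above.
function renderReductionLine(sourceTokens: number, retainedTokens: number): string {
  const reducedTokens = sourceTokens - retainedTokens;
  const percent = Math.round((reducedTokens / sourceTokens) * 100);
  return `RTK token reduction estimate: ~${sourceTokens} -> ~${retainedTokens} tokens (-~${reducedTokens}, ${percent}%)`;
}

console.log(renderReductionLine(3000, 512));
// -> RTK token reduction estimate: ~3000 -> ~512 tokens (-~2488, 83%)
```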

Adding .gitkeep for PR creation (default mode).
This file will be removed when the task is complete.

Issue: ProverCoderAI#266

konard commented May 11, 2026

⏳ Usage Limit Reached

The automated solution draft was interrupted because the OpenAI Codex usage limit was reached.

📊 Limit Information

  • Tool: OpenAI Codex
  • Limit Type: Usage limit exceeded
  • Reset Time: in 12h 56m (May 12, 1:12 AM UTC)
  • Session ID: 019e16f6-babd-7ca2-a52d-362bb0978c45

🔄 How to Continue

Auto-resume is enabled. The session will automatically resume (with context preserved) when the limit resets.

🤖 Models used:

  • Tool: OpenAI Codex
  • Requested: gpt-5.5
  • Model: GPT-5.5 (gpt-5.5)

📎 Execution log uploaded as Gist (231KB)


This session was interrupted due to usage limits. The session will automatically resume when the limit resets.


konard commented May 12, 2026

Auto Resume (on limit reset)

Auto-resuming automated work session at 2026-05-12T01:26:10.119Z

This session automatically resumed after the usage limit reset, continuing with the previous context preserved.

The PR has been converted to draft mode while work is in progress.

This is an auto-resumed session. Please wait for the session to finish, and provide your feedback.

@konard konard changed the title from "[WIP] Внедрить rtk в проект" ("Integrate rtk into the project") to "feat(session-sync): show RTK token reduction metrics" May 12, 2026
@konard konard marked this pull request as ready for review May 12, 2026 01:31

konard commented May 12, 2026

Working session summary

Done: the PR has been updated and marked ready for review.

PR: #268
Commit: cbb5aef43d249a7e4cfa2553138150ef91dcf473

What changed:

  • Added an RTK token reduction estimate to docker-git-session-sync: it is shown in the backup/dry-run output, the README snapshot, and the PR comment.
  • The metric is factored into the pure core: sourceTokens, retainedTokens, reducedTokens, reductionPercent.
  • Added a test with a concrete example: ~3000 -> ~512 tokens (-~2488, 83%).

Checks:

  • bun run --filter @prover-coder-ai/docker-git-session-sync test
  • bun run --filter @prover-coder-ai/docker-git-session-sync typecheck
  • bun run check
  • bun run build
  • bun test packages/docker-git-session-sync/tests/session-files.test.ts -t "RTK token reduction summary"
  • bun run test
  • GitHub Actions on the PR: all checks passed.

The working tree is clean.


This summary was automatically extracted from the AI working session output.


konard commented May 12, 2026

🔄 Solution Draft Log (auto resume on limit reset)

This log file contains the complete execution trace of the AI solution draft process.

💰 Cost estimation:

  • Model: GPT-5.5
  • Provider: OpenAI
  • Public pricing estimate: $8.633877

📊 Context and tokens usage:

  • 112.0K / 1.1M (11%) input tokens, 12.0K / 128K (9%) output tokens

Total: (112.0K + 7.0M cached) input tokens, 12.0K output tokens, $8.633877 cost

🤖 Models used:

  • Tool: OpenAI Codex
  • Requested: gpt-5.5
  • Model: GPT-5.5 (gpt-5.5)

Note: This session was automatically resumed after a usage limit reset, with the previous context preserved.

📎 Log file uploaded as Gist (22572KB)


Now working session is ended, feel free to review and add any feedback on the solution draft.


konard commented May 12, 2026

✅ Ready to merge

This pull request is now ready to be merged:

  • All CI checks have passed
  • No merge conflicts
  • No pending changes

Monitored by hive-mind with --auto-restart-until-mergeable flag

@skulidropek skulidropek merged commit b09b4aa into ProverCoderAI:main May 12, 2026
15 checks passed