
[codex] decouple Codex usage sync from CC statusline#264

Merged
AnnatarHe merged 2 commits into main from codex/decouple-codex-usage-sync
Apr 6, 2026

Conversation

@AnnatarHe (Contributor) commented Apr 6, 2026

@AnnatarHe AnnatarHe marked this pull request as ready for review April 6, 2026 15:28
@codecov

codecov Bot commented Apr 6, 2026

Codecov Report

❌ Patch coverage is 62.74510% with 57 lines in your changes missing coverage. Please review.

Files with missing lines     Patch %   Lines
daemon/codex_ratelimit.go    52.63%    21 Missing and 6 partials ⚠️
cmd/daemon/main.go           0.00%     15 Missing ⚠️
daemon/codex_usage_sync.go   81.25%    10 Missing and 5 partials ⚠️

Flag        Coverage Δ
unittests   40.17% <62.74%> (?)

Flags with carried forward coverage won't be shown. Click here to find out more.

Files with missing lines     Coverage Δ
daemon/cc_info_timer.go      77.56% <100.00%> (+12.12%) ⬆️
cmd/daemon/main.go           0.00% <0.00%> (ø)
daemon/codex_usage_sync.go   81.25% <81.25%> (ø)
daemon/codex_ratelimit.go    27.52% <52.63%> (+18.59%) ⬆️

... and 2 files with indirect coverage changes



@gemini-code-assist Bot left a comment


Code Review

This pull request refactors the Codex usage synchronization logic by moving it out of CCInfoTimerService into a dedicated CodexUsageSyncService, which periodically fetches Codex usage data and syncs it to the server. Feedback covers several improvements for the new service:

  • propagate the service context to background sync operations so they can be cancelled,
  • add the missing imports for error handling and OS checks,
  • restore the platform restriction (macOS and Linux only), and
  • start the service only when a valid token is configured, to avoid unnecessary background activity.

Comment on lines +3 to +11

```go
import (
	"context"
	"fmt"
	"log/slog"
	"sync"
	"time"

	"github.com/malamtime/cli/model"
)
```

medium

Add missing imports for errors, os, and runtime to support context propagation, OS-specific checks, and better error handling.

Suggested change

```go
// before
import (
	"context"
	"fmt"
	"log/slog"
	"sync"
	"time"

	"github.com/malamtime/cli/model"
)
```

```go
// after
import (
	"context"
	"errors"
	"fmt"
	"log/slog"
	"os"
	"runtime"
	"sync"
	"time"

	"github.com/malamtime/cli/model"
)
```

Comment on lines +39 to +50

```go
s.sync()

for {
	select {
	case <-s.ticker.C:
		s.sync()
	case <-s.stopChan:
		return
	case <-ctx.Done():
		return
	}
}
```

medium

Propagate the service context to the sync function to ensure that background operations can be cancelled during daemon shutdown.

Suggested change

```go
// before
s.sync()

for {
	select {
	case <-s.ticker.C:
		s.sync()
	case <-s.stopChan:
		return
	case <-ctx.Done():
		return
	}
}
```

```go
// after
s.sync(ctx)

for {
	select {
	case <-s.ticker.C:
		s.sync(ctx)
	case <-s.stopChan:
		return
	case <-ctx.Done():
		return
	}
}
```

Comment on lines +67 to +75

```go
func (s *CodexUsageSyncService) sync() {
	if s.config.Token == "" {
		return
	}

	if err := syncCodexUsage(context.Background(), s.config); err != nil {
		slog.Warn("Failed to sync codex usage", slog.Any("err", err))
	}
}
```

medium

Update the sync method to accept a context and use it when calling syncCodexUsage. Also, avoid logging a warning if the Codex authentication file is missing, as this is a common scenario for users who do not use Codex.

```go
func (s *CodexUsageSyncService) sync(ctx context.Context) {
	if s.config.Token == "" {
		return
	}

	if err := syncCodexUsage(ctx, s.config); err != nil {
		if !errors.Is(err, os.ErrNotExist) {
			slog.Warn("Failed to sync codex usage", slog.Any("err", err))
		}
	}
}
```

Comment on lines +77 to +80

```go
func syncCodexUsage(ctx context.Context, config model.ShellTimeConfig) error {
	if config.Token == "" {
		return nil
	}
```

medium

Restore the OS check to restrict Codex usage syncing to macOS and Linux, maintaining parity with the previous implementation in CCInfoTimerService.

Suggested change

```go
// before
func syncCodexUsage(ctx context.Context, config model.ShellTimeConfig) error {
	if config.Token == "" {
		return nil
	}
```

```go
// after
func syncCodexUsage(ctx context.Context, config model.ShellTimeConfig) error {
	if runtime.GOOS != "darwin" && runtime.GOOS != "linux" {
		return nil
	}
	if config.Token == "" {
		return nil
	}
```

Comment thread on cmd/daemon/main.go (outdated)
Comment on lines +151 to +157

```go
codexUsageSyncService := daemon.NewCodexUsageSyncService(cfg)
if err := codexUsageSyncService.Start(ctx); err != nil {
	slog.Error("Failed to start Codex usage sync service", slog.Any("err", err))
} else {
	slog.Info("Codex usage sync service started")
	defer codexUsageSyncService.Stop()
}
```

medium

Only start the CodexUsageSyncService if a ShellTime token is configured. This avoids unnecessary background activity and misleading log messages when the service cannot perform any syncing.

Suggested change

```go
// before
codexUsageSyncService := daemon.NewCodexUsageSyncService(cfg)
if err := codexUsageSyncService.Start(ctx); err != nil {
	slog.Error("Failed to start Codex usage sync service", slog.Any("err", err))
} else {
	slog.Info("Codex usage sync service started")
	defer codexUsageSyncService.Stop()
}
```

```go
// after
if cfg.Token != "" {
	codexUsageSyncService := daemon.NewCodexUsageSyncService(cfg)
	if err := codexUsageSyncService.Start(ctx); err != nil {
		slog.Error("Failed to start Codex usage sync service", slog.Any("err", err))
	} else {
		slog.Info("Codex usage sync service started")
		defer codexUsageSyncService.Stop()
	}
}
```


@chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 77cd44be2e


```go
		return err
	}

	return sendCodexUsageToServer(runCtx, config, usage)
```

P2: Give upload its own timeout budget

syncCodexUsage reuses runCtx for both fetchCodexUsageFunc and sendCodexUsageToServer, so any slow fetch consumes the same 10s deadline and can leave the POST with little or no time to run. In practice, when the usage API call is slow (but still successful), the subsequent upload is canceled and usage never reaches /api/v1/codex-usage; this is a regression from the previous flow where upload had a separate timeout window.


devin-ai-integration[bot]: this comment was marked as resolved.

@AnnatarHe

@claude review it

@claude

claude Bot commented Apr 6, 2026

Claude finished @AnnatarHe's task in 1m 25s


Code Review: Decouple Codex Usage Sync from CC Statusline

  • Gather context and read changed files
  • Review daemon/codex_usage_sync.go (new service)
  • Review daemon/codex_usage_sync_test.go (new tests)
  • Review daemon/codex_ratelimit.go (modifications)
  • Review daemon/cc_info_timer.go (removed code)
  • Review cmd/daemon/main.go (service wiring)
  • Post review findings

Good refactoring overall — extracting the Codex usage sync into its own service with proper lifecycle management, sentinel errors, and a startup guard via CodexInstallationStatus(). The test coverage is solid. A few issues to address:


1. context.Background() in sync() — cancellation leak (medium)

daemon/codex_usage_sync.go:71 — The sync() method creates a context.Background() for syncCodexUsage, meaning in-flight HTTP requests won't be cancelled during daemon shutdown even though the goroutine in Start() exits on ctx.Done(). The ctx from Start() should be propagated.

```go
// Store ctx as a field, or pass it through:
func (s *CodexUsageSyncService) sync(ctx context.Context) {
	if s.config.Token == "" {
		return
	}
	if err := syncCodexUsage(ctx, s.config); err != nil {
		// ...
	}
}
```

And update the call sites in Start():

```go
s.sync(ctx)
for {
	select {
	case <-s.ticker.C:
		s.sync(ctx)
	// ...
```



2. Shared timeout budget for fetch + upload (medium)

daemon/codex_usage_sync.go:85-101 — A single 10s context.WithTimeout covers both fetchCodexUsageFunc and sendCodexUsageToServer. If the fetch takes 8s, the upload only gets 2s. Consider either:

  • Separate timeouts for each operation, or
  • Note that sendCodexUsageToServer already sets its own 5s timeout internally via HTTPRequestOptions.Timeout (line 146), so the outer 10s context may cancel it prematurely. Pick one timeout strategy and be consistent.



3. Missing OS platform guard (low)

daemon/codex_usage_sync.go:80 — The old code in cc_info_timer.go had a runtime.GOOS guard restricting to darwin and linux. The startup guard in main.go checks for ~/.codex existence which implicitly handles unsupported platforms, but an explicit OS check in syncCodexUsage would be more defensive and document the supported platforms clearly.


4. codexUsageCacheTTL is now unused (low)

daemon/codex_ratelimit.go:14 — The constant codexUsageCacheTTL (10 minutes) was used by the old codexRateLimitCache in CCInfoTimerService. The cache struct type still exists in this file but no longer appears to be instantiated for Codex usage (only anthropicRateLimitCache is used). If codexRateLimitCache and codexUsageCacheTTL are dead code after this refactor, they should be removed.


5. t.Helper() misuse in test functions (nit)

daemon/codex_usage_sync_test.go:19,80,104 — t.Helper() is designed for test helper functions (called by other tests), not for test functions themselves. It has no effect in TestXxx functions and is misleading. Safe to remove.


What looks good

  • Startup guard in main.go:151-168: The CodexInstallationStatus() check before starting the service is the right approach — it avoids noisy logs for non-Codex users while keeping the service simple. This addresses the concern other reviewers raised about unconditional startup.
  • Sentinel errors + CodexSyncSkipReason: Clean pattern for distinguishing expected skip conditions from real errors, with appropriate log levels (slog.Info for skips, slog.Warn for failures).
  • Test coverage: Good coverage of the sync flow, auth errors, installation status, and service lifecycle.
  • Function var injection for testing: Using loadCodexAuthFunc/fetchCodexUsageFunc vars allows clean test stubbing without interfaces.

@AnnatarHe merged commit d2fab94 into main on Apr 6, 2026
3 checks passed
@AnnatarHe deleted the codex/decouple-codex-usage-sync branch on April 6, 2026 at 15:46