refactor: rework decoders logic for cleaner code #431
Conversation
Walkthrough

The diff genericizes decoders and key normalization: NormalizeKeys and DecodeEnv/DecodeFlags are rewritten as generics driven by a reflected type for field names/tags. OAuthServiceConfig field struct tags were changed (some …).

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Caller
    participant App as app_utils.go
    participant Decoder as Decoder (DecodeEnv / DecodeFlags)
    participant Normalizer as normalizeKeys[C]
    participant Reflect as Reflection (getKnownKeys)
    participant Parser as parser.Decode
    Caller->>App: request config load
    App->>Decoder: DecodeEnv[T,C](env, "providers")
    Decoder->>Normalizer: normalizeKeys[C](env, "providers", "_")
    Normalizer->>Reflect: getKnownKeys[C]()
    Reflect-->>Normalizer: field names + `field` tags
    Normalizer->>Normalizer: build normalized keys (tinyauth.{subName}.field)
    Normalizer-->>Decoder: normalized map
    Decoder->>Parser: parser.Decode("tinyauth.{subName}", &result)
    Parser-->>Decoder: decoded result (type T)
    Decoder-->>App: (T, error)
    App-->>Caller: configured providers (T)
```
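As a rough illustration of the normalization step the diagram describes, the key rewrite can be sketched in isolation. Note that `normalizeKey` and this `kebabToCamel` are simplified stand-ins for the real decoder helpers, not the project's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// kebabToCamel turns "client-id" into "clientId".
func kebabToCamel(s string) string {
	parts := strings.Split(s, "-")
	out := parts[0]
	for _, p := range parts[1:] {
		if p == "" {
			continue
		}
		out += strings.ToUpper(p[:1]) + p[1:]
	}
	return out
}

// normalizeKey sketches how an env var such as
// TINYAUTH_PROVIDERS_MY_GITHUB_CLIENT_ID becomes the dotted path
// tinyauth.providers.myGithub.clientId, given a known field suffix.
func normalizeKey(envKey, root, sep, suffix string) string {
	key := strings.ReplaceAll(strings.ToLower(envKey), sep, "-")
	key = strings.TrimPrefix(key, "tinyauth-"+root+"-")
	id := strings.TrimSuffix(key, "-"+suffix)
	return "tinyauth." + kebabToCamel(root) + "." + kebabToCamel(id) + "." + kebabToCamel(suffix)
}

func main() {
	fmt.Println(normalizeKey("TINYAUTH_PROVIDERS_MY_GITHUB_CLIENT_ID", "providers", "_", "client-id"))
}
```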
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
⛔ Files ignored due to path filters (1)
Codecov Report
❌ Patch coverage is … Additional details and impacted files

```
@@ Coverage Diff @@
##             main     #431      +/-   ##
==========================================
- Coverage   23.54%   23.15%   -0.40%
==========================================
  Files          36       36
  Lines        2862     2237     -625
==========================================
- Hits          674      518     -156
+ Misses       2153     1684     -469
  Partials       35       35
```

☔ View full report in Codecov by Sentry.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
internal/utils/decoders/decoders.go (1)
12-61: Env key normalization is incorrect; id extraction trims the wrong prefix.

For env vars like `TINYAUTH_PROVIDERS_GOOGLE_CLIENT_ID` you lower-case and hyphenate to `tinyauth-providers-google-client-id`, but then do `TrimPrefix(key, root+"-")`, which fails (the actual prefix is `tinyauth-providers-...`). The resulting ids contain `tinyauth-providers-...`, producing wrong paths like `tinyauth.providers.tinyauthProvidersGoogle.clientId`.
Also, suffix selection is ambiguous (`client-secret` vs `client-secret-file`). Prefer longest-match and require a dash-delimited suffix.
Apply:
```diff
-func normalizeKeys[T any](input map[string]string, root string, sep string) map[string]string {
-	knownKeys := getKnownKeys[T]()
-	normalized := make(map[string]string)
-
-	for k, v := range input {
-		parts := []string{"tinyauth"}
-
-		key := strings.ToLower(k)
-		key = strings.ReplaceAll(key, sep, "-")
-
-		suffix := ""
-
-		for _, known := range knownKeys {
-			if strings.HasSuffix(key, known) {
-				suffix = known
-				break
-			}
-		}
-
-		if suffix == "" {
-			continue
-		}
-
-		parts = append(parts, root)
-
-		id := strings.TrimPrefix(key, root+"-")
-		id = strings.TrimSuffix(id, "-"+suffix)
-
-		if id == "" {
-			continue
-		}
-
-		parts = append(parts, id)
-		parts = append(parts, suffix)
-
-		final := ""
-
-		for i, part := range parts {
-			if i == 0 {
-				final += kebabToCamel(part)
-				continue
-			}
-			final += "." + kebabToCamel(part)
-		}
-
-		normalized[final] = v
-	}
-
-	return normalized
-}
+func normalizeKeys[T any](input map[string]string, root string, sep string) map[string]string {
+	knownKeys := getKnownKeys[T]() // longest-first
+	normalized := make(map[string]string)
+
+	for k, v := range input {
+		key := strings.ToLower(k)
+		key = strings.ReplaceAll(key, sep, "-")
+
+		var idSuffix string
+		switch {
+		case strings.HasPrefix(key, "tinyauth-"+root+"-"):
+			idSuffix = strings.TrimPrefix(key, "tinyauth-"+root+"-")
+		case strings.HasPrefix(key, root+"-"):
+			idSuffix = strings.TrimPrefix(key, root+"-")
+		default:
+			continue
+		}
+
+		suffix := ""
+		for _, known := range knownKeys { // already length-sorted
+			if strings.HasSuffix(idSuffix, "-"+known) {
+				suffix = known
+				break
+			}
+		}
+		if suffix == "" {
+			continue
+		}
+
+		id := strings.TrimSuffix(idSuffix, "-"+suffix)
+		if id == "" {
+			continue
+		}
+
+		var b strings.Builder
+		b.WriteString("tinyauth.")
+		b.WriteString(kebabToCamel(root))
+		b.WriteByte('.')
+		b.WriteString(kebabToCamel(id))
+		b.WriteByte('.')
+		b.WriteString(kebabToCamel(suffix))
+		normalized[b.String()] = v
+	}
+	return normalized
+}
```
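For intuition, the longest-match suffix selection recommended above can be sketched on its own. `pickSuffix` and the sample key set are hypothetical; the real `getKnownKeys` could instead return a pre-sorted slice:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// pickSuffix returns the longest known key that matches as a
// dash-delimited suffix, so "client-secret-file" is not shadowed
// by "client-secret". knownKeys is sorted longest-first here.
func pickSuffix(key string, knownKeys []string) string {
	sorted := append([]string(nil), knownKeys...)
	sort.Slice(sorted, func(i, j int) bool { return len(sorted[i]) > len(sorted[j]) })
	for _, known := range sorted {
		if strings.HasSuffix(key, "-"+known) {
			return known
		}
	}
	return ""
}

func main() {
	keys := []string{"client-id", "client-secret", "client-secret-file"}
	fmt.Println(pickSuffix("providers-google-client-secret-file", keys))
}
```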
🧹 Nitpick comments (6)
internal/utils/decoders/decoders.go (1)
94-107: kebabToCamel OK; minor efficiency/style tweaks possible.

Optional: use a strings.Builder and drop the heavy x/text cases by capitalizing via unicode.ToUpper on the first rune per segment. The current approach is fine if binary size isn't a concern.
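A minimal sketch of that suggestion, assuming dash-separated segments (this is not the decoder's current implementation):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
	"unicode/utf8"
)

// kebabToCamel without x/text: upper-case the first rune of each
// segment after the first, accumulating into a strings.Builder.
func kebabToCamel(s string) string {
	var b strings.Builder
	for i, seg := range strings.Split(s, "-") {
		if seg == "" {
			continue
		}
		if i == 0 {
			b.WriteString(seg)
			continue
		}
		r, size := utf8.DecodeRuneInString(seg)
		b.WriteRune(unicode.ToUpper(r))
		b.WriteString(seg[size:])
	}
	return b.String()
}

func main() {
	fmt.Println(kebabToCamel("user-info-url"))
}
```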
internal/utils/decoders/flags_decoder_test.go (1)
12-18: Test covers flags path; add env variant to prevent regressions.

This validates flag normalization and camelized provider ids (myGithub). Please also add an env test for keys like TINYAUTH_PROVIDERS_MY_GITHUB_CLIENT_ID to catch the prefix bug class.
Example:
```diff
@@ func TestDecodeEnv_Providers(t *testing.T) {
-	// TODO
+	env := map[string]string{
+		"TINYAUTH_PROVIDERS_GOOGLE_CLIENT_ID":        "id",
+		"TINYAUTH_PROVIDERS_GOOGLE_CLIENT_SECRET":    "secret",
+		"TINYAUTH_PROVIDERS_MY_GITHUB_CLIENT_ID":     "gid",
+		"TINYAUTH_PROVIDERS_MY_GITHUB_CLIENT_SECRET": "gsecret",
+	}
+	got, err := decoders.DecodeEnv[config.Providers, config.OAuthServiceConfig](env, "providers")
+	assert.NilError(t, err)
+	assert.Equal(t, got.Providers["google"].ClientID, "id")
+	assert.Equal(t, got.Providers["myGithub"].ClientSecret, "gsecret")
 }
```

Also applies to: 22-29, 33-37
internal/utils/decoders/flags_decoder.go (3)
9-11: Clarify generic contract: T should be a non-pointer struct

Passing T as a pointer makes &result a **T, which parser.Decode likely won't handle.
- Add a doc comment: “T must be a non-pointer struct; C is the element struct for value normalization.”
- Optionally add a runtime guard (reflect) to fail fast if T is not a struct.
Ensure all call sites use concrete struct types for T (not pointers).
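One possible shape for such a guard, assuming a hypothetical helper named `mustBeStruct` (the `providers` struct is illustrative only):

```go
package main

import (
	"fmt"
	"reflect"
)

// mustBeStruct sketches the suggested fail-fast check: it rejects
// pointer and other non-struct type parameters before decoding.
func mustBeStruct[T any]() error {
	t := reflect.TypeOf((*T)(nil)).Elem()
	if t.Kind() != reflect.Struct {
		return fmt.Errorf("decoders: T must be a non-pointer struct, got %s", t.Kind())
	}
	return nil
}

type providers struct{ Name string }

func main() {
	fmt.Println(mustBeStruct[providers]())  // no error for a struct type
	fmt.Println(mustBeStruct[*providers]()) // error for a pointer type
}
```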
15-16: Simplify Decode prefixes (optional)

Passing both "tinyauth" and "tinyauth."+subName is redundant if normalized keys are rooted at "tinyauth."+subName. Prefer a single, consistent prefix to reduce ambiguity.
```diff
-err := parser.Decode(normalized, &result, "tinyauth", "tinyauth."+subName)
+err := parser.Decode(normalized, &result, "tinyauth")
```
24-29: Preallocate map capacity in filterFlags

Minor perf/alloc improvement.
```diff
-func filterFlags(flags map[string]string) map[string]string {
-	filtered := make(map[string]string)
+func filterFlags(flags map[string]string) map[string]string {
+	filtered := make(map[string]string, len(flags))
 	for k, v := range flags {
 		filtered[strings.TrimPrefix(k, "--")] = v
 	}
 	return filtered
 }
```

internal/utils/decoders/env_decoder_test.go (1)
33-36: Add test cases for hyphenated field tags and slice decoding

The current TestDecodeEnv only covers simple string fields (ClientID, ClientSecret). Extend the test to verify:

- Hyphenated field tags work end-to-end: `PROVIDERS_GOOGLE_REDIRECT_URL`, `PROVIDERS_GOOGLE_USER_INFO_URL`
- Slice fields with delimiters: `PROVIDERS_GOOGLE_SCOPES="openid,email"` → `Scopes: []string{"openid", "email"}`

Then add a companion TestDecodeFlags with the same field coverage to confirm delimiter behavior is consistent across both decoder types.
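For reference, the delimiter behavior such a test would pin down looks like this in isolation. `splitScopes` is a hypothetical stand-in; the real decoder delegates the splitting to parser.Decode:

```go
package main

import (
	"fmt"
	"strings"
)

// splitScopes shows the expected input/output shape for a
// comma-separated env value decoded into a slice, trimming
// whitespace and dropping empty entries.
func splitScopes(v string) []string {
	var out []string
	for _, s := range strings.Split(v, ",") {
		if s = strings.TrimSpace(s); s != "" {
			out = append(out, s)
		}
	}
	return out
}

func main() {
	fmt.Printf("%q\n", splitScopes("openid,email"))
}
```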
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (8)
- `internal/config/config.go` (1 hunks)
- `internal/utils/app_utils.go` (2 hunks)
- `internal/utils/decoders/decoders.go` (2 hunks)
- `internal/utils/decoders/decoders_test.go` (0 hunks)
- `internal/utils/decoders/env_decoder.go` (1 hunks)
- `internal/utils/decoders/env_decoder_test.go` (1 hunks)
- `internal/utils/decoders/flags_decoder.go` (1 hunks)
- `internal/utils/decoders/flags_decoder_test.go` (1 hunks)
💤 Files with no reviewable changes (1)
- internal/utils/decoders/decoders_test.go
🧰 Additional context used
🧬 Code graph analysis (3)
internal/utils/decoders/env_decoder_test.go (2)
- internal/config/config.go (2)
  - Providers (176-178)
  - OAuthServiceConfig (55-66)
- internal/utils/decoders/env_decoder.go (1)
  - DecodeEnv (7-19)

internal/utils/app_utils.go (3)
- internal/utils/decoders/env_decoder.go (1)
  - DecodeEnv (7-19)
- internal/config/config.go (2)
  - Providers (176-178)
  - OAuthServiceConfig (55-66)
- internal/utils/decoders/flags_decoder.go (1)
  - DecodeFlags (9-22)

internal/utils/decoders/flags_decoder_test.go (2)
- internal/config/config.go (2)
  - Providers (176-178)
  - OAuthServiceConfig (55-66)
- internal/utils/decoders/flags_decoder.go (1)
  - DecodeFlags (9-22)
🔇 Additional comments (4)
internal/utils/app_utils.go (1)
150-150: Generic decoder calls look good; ensure env path works after normalize fix.

Calls now align with the generic API. Once normalizeKeys is fixed to handle the tinyauth- prefix, GetOAuthProvidersConfig will correctly merge env and flags.
Add/keep an integration test that feeds TINYAUTH_PROVIDERS_GOOGLE_CLIENT_ID/SECRET and asserts providers["google"] is populated via env.
Also applies to: 170-170
internal/utils/decoders/env_decoder.go (1)
7-19: LGTM; relies on normalizeKeys semantics.

Generic signature and parse path are correct. Once normalizeKeys is fixed, decoding should work consistently for both env and flags.
internal/config/config.go (1)
56-66: The tag migration concern is unsubstantiated; verification shows the implementation is correct.

The decoder implementation in `internal/utils/decoders/decoders.go` explicitly reads the `field` struct tag via `Tag.Get("field")` and properly handles fields without tags using fallback kebab-case generation. No legacy `key:` or `auth:` tags exist in the codebase. The migration is complete and working correctly.

Likely an incorrect or invalid review comment.
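A sketch of the tag-plus-fallback lookup described above (the `oauthConfig` struct and `camelToKebab` helper are illustrative, not the project's real types):

```go
package main

import (
	"fmt"
	"reflect"
	"strings"
)

// camelToKebab turns "ClientSecret" into "client-secret".
func camelToKebab(s string) string {
	var b strings.Builder
	for i, r := range s {
		if r >= 'A' && r <= 'Z' {
			if i > 0 {
				b.WriteByte('-')
			}
			b.WriteRune(r + ('a' - 'A'))
			continue
		}
		b.WriteRune(r)
	}
	return b.String()
}

// fieldKeys reads the `field` struct tag for each field and falls
// back to a kebab-cased field name when the tag is absent.
func fieldKeys(t reflect.Type) []string {
	keys := make([]string, 0, t.NumField())
	for i := 0; i < t.NumField(); i++ {
		f := t.Field(i)
		if tag := f.Tag.Get("field"); tag != "" {
			keys = append(keys, tag)
			continue
		}
		keys = append(keys, camelToKebab(f.Name))
	}
	return keys
}

type oauthConfig struct {
	ClientID     string `field:"client-id"`
	ClientSecret string // no tag: falls back to "client-secret"
}

func oauthKeys() []string { return fieldKeys(reflect.TypeOf(oauthConfig{})) }

func main() {
	fmt.Println(oauthKeys())
}
```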
internal/utils/decoders/flags_decoder.go (1)
13-16: Incorrect review comment. Flags use hyphens, not dots.

The test evidence shows flags are hyphen-delimited (e.g., `--providers-google-client-id`), not dot-delimited as claimed. The normalizeKeys function normalizes its input separator to hyphens; since actual flags already use hyphens, the `sep` parameter is a no-op regardless of whether it's `"_"` or `"."`. The code works correctly as-is, as proven by the passing test.

Likely an incorrect or invalid review comment.
Summary by CodeRabbit
Refactor
Tests
Chores