Adds backend foundations for mobile auth without UI changes:

- API endpoints: /api/v1/auth (registrations, sessions, OAuth via Apple/Google, OTP 2FA challenges) and /api/v1/users (destroy, two_factor)
- OAuth services: find_or_create_oauth_user, verify_apple_token, verify_google_token
- JWT issuers/verifiers for OTP challenges and account-link emails
- Web account-link controller for email-based provider linking
- subscription_source enum on User (paddle/apple/google/none) with pending_payment status
- Active Record encryption keys moved to application.rb so they load before devise-two-factor's `encrypts :otp_secret`
- Rate limits for login, OAuth, OTP, and 2FA endpoints
- Partial unique index on (provider, uid) for OAuth providers
Mobile auth backend (PR-A)
Add Traccar client support
⚠️ Caution: Review failed. The pull request is closed.
📝 Walkthrough

Adds mobile OAuth (Apple/Google) sign-in and account-link flows, OTP-based 2FA, account-destroy confirmation tokens, a Traccar points ingestion endpoint with parsing/upsert logic, Rack::Attack throttles, supporting services/migrations/mailers/views, many tests, and bumps the version to 1.7.1.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client App
    participant AuthCtrl as OAuth Controller\n(Apple/Google)
    participant TokenVerify as Token Verifier\n(Apple/Google)
    participant FindOrCreate as FindOrCreateOAuthUser
    participant DB as Database
    participant Mailer as Mailer Job
    participant EmailService as Email Service
    Client->>AuthCtrl: POST /api/v1/auth/apple (id_token)
    AuthCtrl->>TokenVerify: Verify token
    TokenVerify-->>AuthCtrl: Claims (sub,email,email_verified)
    AuthCtrl->>FindOrCreate: Call with provider,claims,email_verified
    FindOrCreate->>DB: Find user by (provider,uid)
    alt Existing Identity
        DB-->>FindOrCreate: User found
        FindOrCreate-->>AuthCtrl: Return user, created=false
        AuthCtrl-->>Client: 200 OK (api_key)
    else Email Collision
        FindOrCreate->>DB: Find user by email
        DB-->>FindOrCreate: Existing user
        alt email_verified=true
            FindOrCreate->>Mailer: Enqueue account link email (with JWT)
            Mailer->>EmailService: Send confirmation email
            FindOrCreate-->>AuthCtrl: Raise LinkVerificationSent
            AuthCtrl-->>Client: 202 Accepted (verification_sent)
        else email_verified≠true
            FindOrCreate-->>AuthCtrl: Raise UnverifiedEmail
            AuthCtrl-->>Client: 403 Forbidden (email_not_verified)
        end
    else New User
        FindOrCreate->>DB: Create user
        FindOrCreate-->>AuthCtrl: Return user, created=true
        AuthCtrl-->>Client: 201 Created (api_key)
    end
```

```mermaid
sequenceDiagram
    participant Client as Client App
    participant SessionCtrl as Sessions Controller
    participant DB as Database
    participant OTPIssuer as OTP Challenge Issuer
    participant OTPCtrl as OTP Challenges Controller
    participant OTPVerify as OTP Verifier
    Client->>SessionCtrl: POST /api/v1/auth/login (email,password)
    SessionCtrl->>DB: Find user by email
    DB-->>SessionCtrl: User record
    SessionCtrl->>SessionCtrl: Constant-time password check
    alt Password Invalid
        SessionCtrl-->>Client: 401 Unauthorized
    else Password Valid
        alt 2FA Required
            SessionCtrl->>OTPIssuer: Issue challenge token (5m)
            OTPIssuer-->>SessionCtrl: challenge_token
            SessionCtrl-->>Client: 202 Accepted (challenge_token, ttl)
            Client->>OTPCtrl: POST /api/v1/auth/otp_challenge (token, code)
            OTPCtrl->>OTPVerify: Verify token and TOTP/backup code
            alt Valid
                OTPVerify-->>OTPCtrl: success
                OTPCtrl-->>Client: 200 OK (api_key)
            else Invalid
                OTPCtrl-->>Client: 401 Unauthorized
            end
        else No 2FA
            SessionCtrl-->>Client: 200 OK (api_key)
        end
    end
```

```mermaid
sequenceDiagram
    participant Device as Traccar Device
    participant PointCtrl as Points Controller
    participant ParamParser as Traccar::Params
    participant MotionExtract as MotionDataExtractor
    participant PointCreator as PointCreator
    participant DB as Database
    participant AnomalyJob as Anomaly Filter Job
    participant Broadcaster as LiveBroadcaster
    Device->>PointCtrl: POST /api/v1/traccar/points (payload)
    PointCtrl->>ParamParser: Parse payload
    ParamParser->>MotionExtract: Extract motion/activity
    MotionExtract-->>ParamParser: motion_data
    ParamParser-->>PointCreator: Validated params
    PointCreator->>DB: Upsert points (lonlat,timestamp,user_id)
    DB-->>PointCreator: Upsert results (xmax,...)
    PointCreator->>DB: Update user counters if inserts
    PointCreator->>AnomalyJob: Enqueue anomaly job (timestamp range)
    PointCreator->>Broadcaster: Broadcast live points
    PointCreator-->>PointCtrl: Return result array
    PointCtrl-->>Device: 200 OK ([])
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (1 warning, 1 inconclusive)
Actionable comments posted: 15
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
config/initializers/rack_attack.rb (1)
51-55: ⚠️ Potential issue | 🟠 Major: Include Traccar in the high-volume ingest bucket.
`/api/v1/traccar/points` is missing from `POINTS_CREATION_PATHS`, so it falls back to the generic API throttle instead of the 10,000 req/hr ingest limit. That will 429 normal tracker uploads much earlier than the other point ingestion endpoints.

🚦 Suggested fix
```diff
 POINTS_CREATION_PATHS = %w[
   /api/v1/points
   /api/v1/owntracks/points
   /api/v1/overland/batches
+  /api/v1/traccar/points
 ].freeze
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/initializers/rack_attack.rb` around lines 51 - 55, POINTS_CREATION_PATHS is missing the Traccar endpoint which causes /api/v1/traccar/points to be throttled by the generic API rule; update the POINTS_CREATION_PATHS constant to include the string '/api/v1/traccar/points' alongside the other entries (ensuring the array remains frozen and formatting consistent) so Traccar uses the high-volume ingest throttle like the other point ingestion endpoints.
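The suggested fix boils down to one more entry in the frozen path list. A standalone sketch of the membership check that a Rack::Attack throttle block would perform (the `ingest_request?` helper here is illustrative; the real initializer matches `req.path` inside a `throttle` block):

```ruby
# Frozen list of point-ingestion endpoints, with the Traccar path added
# so it shares the high-volume ingest throttle instead of the generic one.
POINTS_CREATION_PATHS = %w[
  /api/v1/points
  /api/v1/owntracks/points
  /api/v1/overland/batches
  /api/v1/traccar/points
].freeze

# Illustrative matcher: a throttle rule would use this to pick the bucket.
def ingest_request?(path)
  POINTS_CREATION_PATHS.include?(path)
end

puts ingest_request?('/api/v1/traccar/points') # true
puts ingest_request?('/api/v1/stats')          # false
```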
🧹 Nitpick comments (14)
spec/services/auth/verify_otp_challenge_token_spec.rb (1)
53-55: Drop the exception inheritance example. This is testing class structure rather than behavior. The replay example above already proves the public contract for a reused token.
Based on learnings (applies to spec/**/*.rb): Test behavior, not implementation. Never mock the object under test; never test private methods via `send()`; never use `receive_message_chain`; avoid over-stubbing; don't test wiring without outcomes; prefer `have_enqueued_job` over `expect(Job).to receive(:perform_later)`; don't assert on cache key formats or internal metric JSON shapes; use real factory data over allow stubs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/services/auth/verify_otp_challenge_token_spec.rb` around lines 53 - 55, Remove the example that asserts class inheritance (the it block creating described_class::TokenReplayed and expecting it to be a described_class::InvalidToken); delete that spec example entirely from verify_otp_challenge_token_spec.rb so the suite tests behavior only (the prior replay test already covers the public contract) and do not replace it with another implementation-structure assertion.

spec/services/auth/verify_account_link_token_spec.rb (1)
67-76: Assert the replay outcome, not the cache key. These examples are coupled to the current cache key format, so a harmless key rename will fail the suite. It would be stronger to consume a real token and assert that verification rejects it, and keep the blank-`jti` case as a no-raise expectation.

♻️ Example refactor
```diff
 describe '.mark_consumed!' do
-  it 'writes the jti to cache so replays are rejected' do
-    described_class.mark_consumed!('test-jti')
-    expect(Rails.cache.exist?('oauth_account_link:consumed:test-jti')).to be(true)
-  end
+  it 'causes a consumed token to be rejected' do
+    token = issue
+    payload = JWT.decode(token, ENV.fetch('JWT_SECRET_KEY', 'test_secret'), true, algorithm: 'HS256').first
+
+    described_class.mark_consumed!(payload['jti'])
+
+    expect { described_class.new(token).call }.to raise_error(described_class::TokenReplayed)
+  end

-  it 'is a no-op for a blank jti' do
-    described_class.mark_consumed!('')
-    expect(Rails.cache.exist?('oauth_account_link:consumed:')).to be(false)
-  end
+  it 'ignores a blank jti' do
+    expect { described_class.mark_consumed!('') }.not_to raise_error
+  end
 end
```

Based on learnings (applies to spec/**/*.rb): Test behavior, not implementation. Never mock the object under test; never test private methods via `send()`; never use `receive_message_chain`; avoid over-stubbing; don't test wiring without outcomes; prefer `have_enqueued_job` over `expect(Job).to receive(:perform_later)`; don't assert on cache key formats or internal metric JSON shapes; use real factory data over allow stubs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/services/auth/verify_account_link_token_spec.rb` around lines 67 - 76, The tests should assert behavior, not implementation: instead of checking Rails.cache keys after calling described_class.mark_consumed!('test-jti'), create or obtain a real token/jti, call described_class.mark_consumed!(jti) and then assert that the public verification API (e.g., described_class.verify or described_class.verify!) rejects that same token/jti (returns false or raises the expected error) to prove replays are blocked; likewise change the blank-jti example to assert that calling described_class.mark_consumed!('') is a no-op by expecting no error (e.g., expect { described_class.mark_consumed!('') }.not_to raise_error) rather than inspecting cache keys. Ensure you reference described_class.mark_consumed! and the class's public verify/verify! method in the updated examples.

app/controllers/api/v1/users/two_factor_controller.rb (2)
24-34: Consider adding `drift_ahead` for clock skew tolerance. The ROTP verification uses `drift_behind: 30` but no `drift_ahead`. If the user's device clock is slightly ahead of the server, valid codes may be rejected. TOTP apps typically expect both directions of drift tolerance.

♻️ Add forward drift tolerance
```diff
 if current_api_user.otp_secret.present? &&
-   ROTP::TOTP.new(current_api_user.otp_secret).verify(params[:otp_code].to_s, drift_behind: 30)
+   ROTP::TOTP.new(current_api_user.otp_secret).verify(params[:otp_code].to_s, drift_behind: 30, drift_ahead: 30)
   current_api_user.otp_required_for_login = true
```

Also apply the same change to `valid_otp?` on line 77.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/api/v1/users/two_factor_controller.rb` around lines 24 - 34, The confirm action currently calls ROTP::TOTP.new(current_api_user.otp_secret).verify(..., drift_behind: 30) without a forward tolerance; update this call in the confirm method to pass a symmetric drift_ahead (e.g., drift_ahead: 30) alongside drift_behind so verification accepts small device-ahead clock skew, and make the identical change in the valid_otp? method to use the same drift_ahead (and drift_behind) values for consistent behavior.
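To see what the drift options buy you, note that TOTP verification just checks the submitted code against a window of 30-second time-steps around "now"; `drift_behind`/`drift_ahead` widen that window backwards and forwards. A plain-Ruby sketch of the window arithmetic (the `accepted_timesteps` helper and hard-coded interval are illustrative, not ROTP's API):

```ruby
# Which 30-second TOTP time-steps are accepted at a given server time,
# mirroring the effect of ROTP's drift_behind / drift_ahead options.
INTERVAL = 30

def accepted_timesteps(now, drift_behind: 0, drift_ahead: 0)
  # Integer division maps a Unix time to its TOTP time-step index.
  ((now - drift_behind) / INTERVAL..(now + drift_ahead) / INTERVAL).to_a
end

now = 59 # one second before the step boundary at t = 60
puts accepted_timesteps(now, drift_behind: 30).inspect                  # [0, 1]
puts accepted_timesteps(now, drift_behind: 30, drift_ahead: 30).inspect # [0, 1, 2]
```

With only `drift_behind`, a device whose clock has already ticked into step 2 generates a code the server rejects; adding `drift_ahead: 30` admits it.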
73-78: Extract duplicate ROTP verification logic. The ROTP verification pattern is duplicated between `confirm` (line 26) and `valid_otp?`. Consider extracting to a shared helper.

♻️ Extract verification helper
```diff
+def verify_otp(code)
+  return false if current_api_user.otp_secret.blank?
+
+  ROTP::TOTP.new(current_api_user.otp_secret)
+    .verify(code.to_s, drift_behind: 30, drift_ahead: 30)
+    .present?
+end
+
 def valid_otp?
   return false if params[:otp_code].blank?
-  return false if current_api_user.otp_secret.blank?

-  ROTP::TOTP.new(current_api_user.otp_secret).verify(params[:otp_code].to_s, drift_behind: 30).present?
+  verify_otp(params[:otp_code])
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/api/v1/users/two_factor_controller.rb` around lines 73 - 78, The ROTP verification is duplicated between the confirm action and the valid_otp? method; extract that logic into a single helper (e.g., a private method or controller concern) such as verify_otp_for(user, code) that encapsulates ROTP::TOTP.new(user.otp_secret).verify(code.to_s, drift_behind: 30) while keeping the same nil/blank guards; replace the inline calls in confirm and valid_otp? with calls to this new helper (and ensure current_api_user is passed where needed).spec/requests/api/v1/auth/registrations_spec.rb (1)
42-46: Consider removing redundant queue adapter configuration. In modern Rails/RSpec setups, `ActiveJob::Base.queue_adapter = :test` is typically configured globally in `rails_helper.rb` or via `config.active_job.queue_adapter = :test` in test environment config. Setting it per-test is harmless but unnecessary.

♻️ Simplified test
```diff
 it 'does not send welcome emails (trial not started)' do
-  ActiveJob::Base.queue_adapter = :test
   post '/api/v1/auth/register', params: valid_params

   expect(Users::MailerSendingJob).not_to have_been_enqueued
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/api/v1/auth/registrations_spec.rb` around lines 42 - 46, Remove the redundant per-test queue adapter setup by deleting the line setting ActiveJob::Base.queue_adapter = :test from the spec (inside the example 'does not send welcome emails (trial not started)'); rely on the global test configuration for ActiveJob instead and keep the rest of the assertions that check Users::MailerSendingJob has not been enqueued.

spec/services/traccar/point_creator_spec.rb (1)
84-88: Consider consolidating assertions to avoid redundant service call. The service is called twice in this test: once in the `expect` block and once for the return value check. Since `call_service` may have side effects, you could capture the result in the first call.

♻️ Consolidated test
```diff
 it 'does not raise and creates no point' do
-  expect { call_service }.not_to(change { Point.where(user:).count })
-  expect(call_service).to eq([])
+  result = nil
+  expect { result = call_service }.not_to(change { Point.where(user:).count })
+  expect(result).to eq([])
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/services/traccar/point_creator_spec.rb` around lines 84 - 88, The test currently invokes call_service twice; capture the service output once and assert side effects around the single invocation to avoid duplicate calls. Change the test to assign the result (e.g., result = nil; expect { result = call_service }.not_to(change { Point.where(user:).count })) and then assert expect(result).to eq([]). Keep references to call_service and Point.where(user:) as the targets to update.

spec/services/auth/find_or_create_oauth_user_spec.rb (1)
10-17: Rename this helper so it doesn't shadow FactoryBot's `build`. Overriding `build` at the example-group level makes this spec harder to extend and easy to misread once someone needs `build(:user)` in the same file. A name like `build_service` or `oauth_service` keeps the intent clear.

Based on learnings (applies to spec/**/*_spec.rb): Use FactoryBot/FFaker for test data generation in RSpec tests.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/services/auth/find_or_create_oauth_user_spec.rb` around lines 10 - 17, The helper method named build in the spec shadows FactoryBot.build; rename this helper (e.g., to build_service or oauth_service) and update all usages in the spec to the new name so FactoryBot.build remains available; specifically change the method definition that currently reads def build(...) in the example group for FindOrCreateOauthUser (and any calls to build(...)) to def build_service(...) (or def oauth_service(...)) and update call sites to use build_service(...) so the described_class.new(...) construction stays identical but no longer overrides FactoryBot's build.

spec/requests/api/v1/auth/otp_challenges_spec.rb (1)
6-14: Restore the original `Rack::Attack.cache.store` in teardown. This example group replaces the global cache store but only flips `Rack::Attack.enabled` back afterward. Keeping the memory store around can make later request specs depend on execution order instead of their own setup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/api/v1/auth/otp_challenges_spec.rb` around lines 6 - 14, The test setup replaces Rack::Attack.cache.store but never restores it; capture the original Rack::Attack.cache.store at the start of the before block (e.g. original_store = Rack::Attack.cache.store), then set Rack::Attack.cache.store to ActiveSupport::Cache::MemoryStore.new and in the after block restore it (Rack::Attack.cache.store = original_store) and call Rack::Attack.reset! as needed; reference the before/after hooks and Rack::Attack.cache.store so the teardown returns global state to its original value.

spec/requests/api/v1/users/two_factor_spec.rb (1)
9-13: Restore the previous `Rack::Attack` globals after each example. This spec swaps `Rack::Attack.cache.store` and toggles `Rack::Attack.enabled`, but only `enabled` gets restored. Leaving the memory store behind makes later throttling specs order-dependent and can mask production-only behavior differences. An `around` hook that captures and restores both values would keep the suite isolated.

Also applies to: 126-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/api/v1/users/two_factor_spec.rb` around lines 9 - 13, Wrap the example setup that mutates Rack::Attack in an around hook which captures and restores both Rack::Attack.cache.store and Rack::Attack.enabled, instead of only restoring enabled; specifically, save original_store = Rack::Attack.cache.store and original_enabled = Rack::Attack.enabled before the example, yield the example, and after it reassign Rack::Attack.cache.store = original_store and Rack::Attack.enabled = original_enabled so the memory store and enabled flag are both restored (references: Rack::Attack.cache.store, Rack::Attack.enabled, the existing before block in two_factor_spec.rb).

spec/requests/api/v1/auth/google_spec.rb (1)
75-96: Add one example for the `'true'` claim variant. The controller accepts both boolean `true` and string `'true'` for `email_verified`, but this spec only exercises the boolean path. A small request example for the string form would keep that compatibility from regressing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/api/v1/auth/google_spec.rb` around lines 75 - 96, Add a second example that mirrors the existing context but stubs verifier_double.call to return email_verified: 'true' (string) and then posts to '/api/v1/auth/google' to assert the same behavior: enqueues Users::MailerSendingJob with existing.id and provider_label 'Google', responds with :accepted and body['error'] == 'verification_sent', and that existing.reload.provider and existing.reload.uid remain nil; place this alongside the current example in the same context so the controller's handling of the string 'true' claim is exercised.

app/services/auth/find_or_create_oauth_user.rb (1)
61-71: Dead code path: `auto_link_allowed?` always returns false. The `if auto_link_allowed?` branch (lines 64-67) is currently unreachable since `auto_link_allowed?` returns `false`. The comment explains this is intentional pending a Flipper flag in a future PR. Consider adding a code comment directly in `handle_email_collision` referencing PR-B, or remove the dead branch until the flag is implemented.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/auth/find_or_create_oauth_user.rb` around lines 61 - 71, handle_email_collision contains an unreachable branch because auto_link_allowed? currently always returns false; add an inline comment above the if auto_link_allowed? branch referencing the future PR (PR-B) and the Flipper flag that will re-enable this path so reviewers understand this is intentional and pending that PR (alternatively, remove the dead if block until PR-B lands) — target the handle_email_collision method and mention auto_link_allowed?, send_verification_email, UnverifiedEmail and LinkVerificationSent in the comment for clarity.

spec/requests/api/v1/auth/apple_spec.rb (1)
41-59: Consider adding `email_verified` to the returning user stub for completeness. The returning user stub (lines 45-48) omits `email_verified`. While this works because the lookup by `(provider, uid)` bypasses email collision logic, including it would make the test more explicit about real-world Apple token payloads.

This is a minor consistency nit; the test correctly validates the core behavior.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/api/v1/auth/apple_spec.rb` around lines 41 - 59, In the "returning Apple user" spec, the verifier_double stub returned in the before block should include an email_verified field to more accurately mimic real Apple ID token payloads; update the allow(verifier_double).to receive(:call).and_return(...) in that before block (used by the spec/context and verifier_double) to include email_verified: true while keeping sub and email unchanged so the existing lookup by provider/uid and expectations on User.count and existing.id remain valid.

app/services/auth/verify_account_link_token.rb (1)
48-59: Minor API asymmetry: `mark_consumed!` is a class method while `token_consumed?` is an instance method. This works correctly but creates a slight asymmetry in the API. Consider whether `mark_consumed!` should also be an instance method for consistency, or document why the class method is preferred (e.g., called from controller after verification).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/auth/verify_account_link_token.rb` around lines 48 - 59, The API is asymmetrical: mark_consumed! is a class method while token_consumed? is an instance method—make them consistent by converting one to match the other; either change mark_consumed! into an instance method or change token_consumed? into a class method. To fix, update the method definition for mark_consumed! (or token_consumed?) so both are defined with the same receiver (e.g., add self. to token_consumed? to make it a class method, or remove self. from mark_consumed! to make it an instance method), and adjust any call sites (e.g., places calling VerifyAccountLinkToken.mark_consumed! or instance.token_consumed?) accordingly to use the chosen form.

spec/requests/api/v1/auth/sessions_spec.rb (1)
52-62: Borderline implementation testing, but acceptable for timing-attack verification. The expectation on `BCrypt::Password.new` (line 58) tests implementation rather than behavior, which the coding guidelines discourage. However, timing-attack resistance is genuinely difficult to test behaviorally without measuring actual response times (which is flaky). This pragmatic approach provides some assurance that the constant-time path is exercised.

Consider adding a comment explaining why this implementation check is necessary here.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/api/v1/auth/sessions_spec.rb` around lines 52 - 62, The test for timing-attack resistance relies on an implementation-level expectation on BCrypt::Password.new (inside the "timing-attack resistance" example that posts to '/api/v1/auth/login') which can look like implementation testing; add a brief comment above the expect(BCrypt::Password).to receive(:new)... line that explains why this check is necessary (timing measurements are flaky, so asserting that BCrypt verification is invoked on unknown-email paths provides pragmatic assurance of the constant-time path), referencing BCrypt::Password.new and the spec example name so future readers understand the tradeoff.
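For readers unfamiliar with the pattern the spec is pinning down: the login path should do a password comparison even when the email is unknown, so response time doesn't reveal whether an account exists. A self-contained sketch, with `Digest::SHA256` and `OpenSSL.secure_compare` standing in for BCrypt (the real controller compares via `BCrypt::Password`; the helper and data names here are illustrative):

```ruby
require 'digest'
require 'openssl'

# Toy credential store: email => password digest.
USERS = { 'a@example.com' => Digest::SHA256.hexdigest('s3cret') }.freeze

# Digest compared against when the email is unknown, so both branches
# perform the same amount of comparison work.
DUMMY_DIGEST = Digest::SHA256.hexdigest('dummy-password').freeze

def constant_time_authenticate(email, password)
  stored    = USERS[email] || DUMMY_DIGEST
  candidate = Digest::SHA256.hexdigest(password)
  ok = OpenSSL.secure_compare(stored, candidate) # constant-time comparison
  USERS.key?(email) && ok                        # unknown email still fails
end

puts constant_time_authenticate('a@example.com', 's3cret')    # true
puts constant_time_authenticate('ghost@example.com', 's3cret') # false
```

A spec can only confirm the dummy comparison is reached by asserting the verifier is invoked on the unknown-email path, which is why the implementation-level expectation is tolerated here.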
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/controllers/api/v1/auth/apple_controller.rb`:
- Around line 11-12: The rescue block for Auth::VerifyAppleToken::InvalidToken
in AppleController currently returns the verifier's raw error message to
clients; change it to return a stable generic message (e.g.
render_auth_error("Apple token verification failed")) and move the detailed
information to server logs by logging e.message (and optionally e.backtrace)
using the controller logger (e.g. logger.error or Rails.logger.error) before
rendering; update the rescue handling in
app/controllers/api/v1/auth/apple_controller.rb around the rescue
Auth::VerifyAppleToken::InvalidToken => e to implement this.
In `@app/controllers/api/v1/auth/google_controller.rb`:
- Around line 11-12: The rescue block for Auth::VerifyGoogleToken::InvalidToken
in GoogleController should stop echoing the exception message to clients;
replace the interpolated e.message in the render_auth_error call with a generic
message (e.g., "Google token verification failed") and log the exception
server-side instead (use Rails.logger.error or the controller's logger to record
e and its backtrace). Locate the rescue handling of
Auth::VerifyGoogleToken::InvalidToken in the GoogleController and update the
render_auth_error invocation and add a logger.error line before returning the
generic error.
In `@app/controllers/api/v1/auth/sessions_controller.rb`:
- Around line 13-15: The email lookup currently only downcases the submitted
email in SessionsController, causing lookups to fail for inputs with surrounding
whitespace; update the normalization where User.find_by is called to strip then
downcase the incoming email (i.e., change the parameter passed to
User.find_by(email: ...) to use strip.downcase on params[:email]) so
constant_time_authenticate receives the correctly normalized user and
params[:password] remains handled as before.
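The normalization asked for above is a one-liner; a minimal sketch (the `normalize_email` helper is hypothetical, since the controller would inline `params[:email].to_s.strip.downcase` in the `User.find_by` call):

```ruby
# Strip surrounding whitespace before downcasing, so an input like
# "  User@Example.COM\n" matches a stored "user@example.com".
def normalize_email(raw)
  raw.to_s.strip.downcase
end

puts normalize_email("  User@Example.COM\n") # "user@example.com"
puts normalize_email(nil).inspect            # "" (nil-safe via to_s)
```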
In `@app/controllers/api/v1/traccar/points_controller.rb`:
- Around line 4-5: Add the write-API guard to the controller so point-creating
requests cannot be performed with read-only credentials: in the PointsController
ensure you call require_write_api! (or require_pro_api! if appropriate) as a
before_action for the create action alongside authenticate_active_api_user! and
validate_points_limit so that only write-enabled API keys can mutate user data;
locate the existing before_action declarations (authenticate_active_api_user!
and validate_points_limit) and add require_write_api! to that list.
In `@app/controllers/auth/account_links_controller.rb`:
- Around line 28-30: Wrap the token consumption and user update in a single
atomic operation: create a service method (e.g.,
AccountLinkService.link_account!(user, result)) that begins a DB transaction,
calls Auth::VerifyAccountLinkToken.mark_consumed!(result.jti) and then updates
the user (user.update!(provider: result.provider, uid: result.uid)) inside the
same transaction so either both succeed or both roll back; call sign_in(user)
from the controller only after the service returns successfully. Ensure
mark_consumed! is executed inside that transaction to prevent races and make the
operation atomic.
- Around line 11-33: The show action currently performs state changes
(user.update!, Auth::VerifyAccountLinkToken.mark_consumed!, sign_in) on a GET
which risks token theft/consumption by prefetchers; split it into a safe GET
confirmation and a POST that performs the mutation: keep a new confirmation
action (e.g., confirm or keep show) that only verifies token validity without
consuming it (use or add a non-consuming verifier on
Auth::VerifyAccountLinkToken or call a read-only method) and renders a
confirmation page, and move the mutation logic (verify token, handle
TokenReplayed/InvalidToken, update! provider/uid on user, call
Auth::VerifyAccountLinkToken.mark_consumed!(result.jti), sign_in(user), and
redirect with notice) into a create/update POST action (e.g.,
AccountLinksController#create) protected by CSRF; ensure error redirects for
TokenReplayed and InvalidToken remain on the POST and that the GET does not call
mark_consumed! or perform user.update!.
In `@app/controllers/users/registrations_controller.rb`:
- Around line 110-111: The store_gads_linker action currently saves the
user-controlled params[:_gl] directly into session[:gads_linker]; add a
defensive length check and only persist a bounded value (e.g., validate
params[:_gl].present? && params[:_gl].is_a?(String) && params[:_gl].length <=
MAX_LEN), otherwise ignore or truncate before assigning to
session[:gads_linker]; update the method store_gads_linker to use a defined
MAX_LEN constant (for example 255 or 1024) and either skip storing oversized
values or store params[:_gl].slice(0, MAX_LEN) so cookie-backed sessions cannot
exceed size limits.
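A minimal sketch of the bounded-storage idea from that prompt (the `bounded_gl` helper and the 255-byte limit are illustrative choices, not the app's code):

```ruby
# Cap the user-controlled _gl parameter before it goes into the session,
# so cookie-backed session stores can't be blown past their size limit.
MAX_GL_LEN = 255

def bounded_gl(value)
  return nil unless value.is_a?(String) && !value.empty?

  value.slice(0, MAX_GL_LEN) # truncate rather than store oversized input
end

puts bounded_gl('abc123')           # "abc123"
puts bounded_gl('x' * 1000)&.length # 255
puts bounded_gl(nil).inspect        # nil
```

Dropping oversized values instead of truncating is equally defensible; the key point is that nothing unbounded reaches `session[:gads_linker]`.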
In `@app/services/auth/verify_google_token.rb`:
- Around line 14-15: The code currently treats empty-string env vars as present
because it only uses compact; update the client_ids construction in
verify_google_token.rb so blank strings are treated as missing (e.g., build the
array from ENV['GOOGLE_IOS_CLIENT_ID'] and ENV['GOOGLE_ANDROID_CLIENT_ID'], map
to strings, strip and reject empty/blank values or use reject(&:blank?) in
Rails) and keep the existing InvalidToken raise if the resulting client_ids
array is empty; ensure you update the client_ids variable (and not the raise) so
GOOGLE_IOS_CLIENT_ID="" or GOOGLE_ANDROID_CLIENT_ID="" will be considered
missing.
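The blank-vs-nil distinction in that finding is easy to sketch in plain Ruby: `compact` only drops `nil`, so an empty-string env var slips through. A standalone version of the corrected construction (the `google_client_ids` helper is illustrative; the service reads `ENV` directly and uses Rails' `reject(&:blank?)`):

```ruby
# Build the allowed Google client ID list, treating "" and whitespace-only
# env values as missing, not just nil (compact alone would keep them).
def google_client_ids(env)
  [env['GOOGLE_IOS_CLIENT_ID'], env['GOOGLE_ANDROID_CLIENT_ID']]
    .map { |v| v.to_s.strip }
    .reject(&:empty?)
end

puts google_client_ids('GOOGLE_IOS_CLIENT_ID' => '', 'GOOGLE_ANDROID_CLIENT_ID' => nil).inspect
# [] -> the existing InvalidToken raise fires
puts google_client_ids('GOOGLE_IOS_CLIENT_ID' => ' ios-id ').inspect
# ["ios-id"]
```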
In `@app/services/traccar/params.rb`:
- Around line 59-65: The battery_level method currently converts battery[:level]
to a percent then returns nil for non-positive values, which discards a
legitimate 0% reading; update the logic in battery_level (the battery_level
method) to return value when it is zero or positive (e.g., use value >= 0 ?
value : nil) so 0 is preserved but negatives still map to nil; keep the existing
early nil return when battery[:level] is nil.
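The zero-vs-negative guard from that finding, as a standalone sketch. Assumption: `battery[:level]` arrives as a 0.0-1.0 fraction that gets scaled to a percent; the app's actual conversion may differ, but the guard logic is the point:

```ruby
# Keep legitimate 0% battery readings; drop only nil and negative values.
def battery_level(raw)
  return nil if raw.nil?

  value = (raw.to_f * 100).round # fraction -> percent (assumed conversion)
  value >= 0 ? value : nil       # `value.positive?` would discard 0%
end

puts battery_level(0.47).inspect # 47
puts battery_level(0).inspect    # 0   (previously lost)
puts battery_level(-1).inspect   # nil
puts battery_level(nil).inspect  # nil
```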
In
`@db/migrate/20260420190307_add_subscription_source_and_signup_variant_to_users.rb`:
- Around line 20-23: Update the comment above the remove_index call to
accurately describe index lifecycle: note that PostgreSQL will automatically
drop dependent indexes when a column is dropped (so DROP COLUMN is allowed), but
explicitly removing indexes (via remove_index :users, name:
SUBSCRIPTION_SOURCE_INDEX guarded by index_name_exists?(:users,
SUBSCRIPTION_SOURCE_INDEX)) is still good practice for clear rollbacks; also
mention that the index `index_users_on_subscription_source` is created by a
later migration (20260421230359) so the guard prevents errors if it doesn't
exist.
In `@db/migrate/20260421200001_add_partial_unique_index_on_users_provider_uid.rb`:
- Around line 8-21: Drop the old full unique index before adding the new partial
unique index to avoid redundant indexes: in the migration's up method, check for
and remove the existing index named "index_users_on_provider_and_uid" (using
index_name_exists? and remove_index / remove_index_concurrently) prior to
calling add_index for INDEX_NAME with where: 'provider IS NOT NULL AND uid IS
NOT NULL'; also ensure the down method reverses this by removing INDEX_NAME and
restoring the original index if needed. Use the existing symbols INDEX_NAME,
index_name_exists?, add_index, and remove_index/remove_index_concurrently to
locate and implement the changes.
In `@db/schema.rb`:
- Around line 564-565: Create a new Rails migration that removes the redundant
unconditional index by calling remove_index :users, name:
"index_users_on_provider_and_uid", algorithm: :concurrently; because algorithm:
:concurrently cannot run inside a transaction, add disable_ddl_transaction! at
the top of the migration class and ensure the migration only drops the index
(leaving the partial index "index_users_on_provider_and_uid_present" intact) so
uniqueness for non-NULL (provider, uid) pairs remains enforced.
In `@Gemfile`:
- Around line 17-20: The Gemfile violates Bundler/OrderedGems by not
alphabetizing gems; reorder the gem declarations so they are alphabetical within
their section — specifically move the gem 'apple_id' so gems read in
alphabetical order (e.g., ensure 'apple_id' appears before 'devise' and
'google-id-token' and after 'geocoder' if present); update the Gemfile gem list
accordingly and run bundle install / rubocop to confirm the Bundler/OrderedGems
offense is resolved.
In `@spec/requests/api/v1/traccar/points_spec.rb`:
- Around line 52-60: The test currently treats malformed payloads as success
(expecting HTTP 200) which hides dropped data; update the behavior so validation
errors return a 4xx with an error body and adjust the spec accordingly. Modify
the endpoint handling code (e.g., Api::V1::Traccar::PointsController#create or
the method that parses/ingests payloads such as Traccar::PointParser.parse or
similar) to detect invalid timestamps and respond with a 422 (or 400) and a
small JSON error payload when payload validation fails, and then change the spec
in spec/requests/api/v1/traccar/points_spec.rb to expect that 4xx response and
the error JSON instead of :ok while still asserting no Point was created.
In `@swagger/v1/swagger.yaml`:
- Around line 3200-3267: The OpenAPI requestBody for the POST
/api/v1/traccar/points operation is currently optional; update the operation's
requestBody object (the requestBody block under the POST /api/v1/traccar/points
definition) to include required: true directly under requestBody so clients and
docs require a body containing the mandatory device_id and location properties.
---
Outside diff comments:
In `@config/initializers/rack_attack.rb`:
- Around line 51-55: POINTS_CREATION_PATHS is missing the Traccar endpoint which
causes /api/v1/traccar/points to be throttled by the generic API rule; update
the POINTS_CREATION_PATHS constant to include the string
'/api/v1/traccar/points' alongside the other entries (ensuring the array remains
frozen and formatting consistent) so Traccar uses the high-volume ingest
throttle like the other point ingestion endpoints.
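A minimal sketch of the suggested constant change. Only `/api/v1/traccar/points` comes from the comment above; the other entries are illustrative placeholders, not the initializer's real contents:

```ruby
# Sketch only: the first two paths are invented placeholders for the existing
# high-volume ingest entries; the Traccar path is the actual addition.
POINTS_CREATION_PATHS = [
  '/api/v1/points',
  '/api/v1/overland/batches',
  '/api/v1/traccar/points' # added so Traccar shares the ingest throttle
].freeze

# A throttle condition would then match it, e.g.:
def points_creation_request?(path)
  POINTS_CREATION_PATHS.include?(path)
end
```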
---
Nitpick comments:
In `@app/controllers/api/v1/users/two_factor_controller.rb`:
- Around line 24-34: The confirm action currently calls
ROTP::TOTP.new(current_api_user.otp_secret).verify(..., drift_behind: 30)
without a forward tolerance; update this call in the confirm method to pass a
symmetric drift_ahead (e.g., drift_ahead: 30) alongside drift_behind so
verification accepts small device-ahead clock skew, and make the identical
change in the valid_otp? method to use the same drift_ahead (and drift_behind)
values for consistent behavior.
- Around line 73-78: The ROTP verification is duplicated between the confirm
action and the valid_otp? method; extract that logic into a single helper (e.g.,
a private method or controller concern) such as verify_otp_for(user, code) that
encapsulates ROTP::TOTP.new(user.otp_secret).verify(code.to_s, drift_behind: 30)
while keeping the same nil/blank guards; replace the inline calls in confirm and
valid_otp? with calls to this new helper (and ensure current_api_user is passed
where needed).
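The symmetric-drift idea can be shown with a self-contained TOTP implementation. In the app this is just `ROTP::TOTP#verify(code, drift_behind: 30, drift_ahead: 30)`; the code below is an illustration of why both directions matter, not the controller code:

```ruby
require 'openssl'

INTERVAL = 30 # TOTP time step in seconds

# Standard HOTP/TOTP code for a given Unix time (RFC 4226 truncation).
def totp(secret, unix_time)
  counter = unix_time / INTERVAL
  hmac = OpenSSL::HMAC.digest('SHA1', secret, [counter].pack('Q>'))
  offset = hmac[-1].ord & 0x0f
  format('%06d', (hmac[offset, 4].unpack1('N') & 0x7fffffff) % 1_000_000)
end

# Accept codes from slightly-behind AND slightly-ahead device clocks,
# mirroring ROTP's drift_behind/drift_ahead options.
def verify_otp(secret, code, now, drift_behind: 30, drift_ahead: 30)
  ((now - drift_behind)..(now + drift_ahead)).step(INTERVAL).any? do |t|
    totp(secret, t) == code
  end
end
```

With only `drift_behind`, a device whose clock runs 30 seconds ahead generates codes the server has not reached yet and every login fails; the symmetric window accepts both.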
In `@app/services/auth/find_or_create_oauth_user.rb`:
- Around line 61-71: handle_email_collision contains an unreachable branch
because auto_link_allowed? currently always returns false; add an inline comment
above the if auto_link_allowed? branch referencing the future PR (PR-B) and the
Flipper flag that will re-enable this path so reviewers understand this is
intentional and pending that PR (alternatively, remove the dead if block until
PR-B lands) — target the handle_email_collision method and mention
auto_link_allowed?, send_verification_email, UnverifiedEmail and
LinkVerificationSent in the comment for clarity.
In `@app/services/auth/verify_account_link_token.rb`:
- Around line 48-59: The API is asymmetrical: mark_consumed! is a class method
while token_consumed? is an instance method—make them consistent by converting
one to match the other; either change mark_consumed! into an instance method or
change token_consumed? into a class method. To fix, update the method definition
for mark_consumed! (or token_consumed?) so both are defined with the same
receiver (e.g., add self. to token_consumed? to make it a class method, or
remove self. from mark_consumed! to make it an instance method), and adjust any
call sites (e.g., places calling VerifyAccountLinkToken.mark_consumed! or
instance.token_consumed?) accordingly to use the chosen form.
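One way to resolve the asymmetry is to put both methods on the class over a shared key scheme. This is a hedged sketch: a plain Hash stands in for `Rails.cache` (which would also add a TTL), and the key prefix is invented:

```ruby
# Sketch: consume and check both live on the class and share one cache key.
class VerifyAccountLinkToken
  CONSUMED = {} # stand-in for Rails.cache

  def self.mark_consumed!(jti)
    return if jti.to_s.empty? # blank jti is a no-op

    CONSUMED[cache_key(jti)] = true
  end

  def self.token_consumed?(jti)
    CONSUMED.key?(cache_key(jti))
  end

  def self.cache_key(jti)
    "account_link:consumed:#{jti}"
  end
end
```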
In `@spec/requests/api/v1/auth/apple_spec.rb`:
- Around line 41-59: In the "returning Apple user" spec, the verifier_double
stub returned in the before block should include an email_verified field to more
accurately mimic real Apple ID token payloads; update the
allow(verifier_double).to receive(:call).and_return(...) in that before block
(used by the spec/context and verifier_double) to include email_verified: true
while keeping sub and email unchanged so the existing lookup by provider/uid and
expectations on User.count and existing.id remain valid.
In `@spec/requests/api/v1/auth/google_spec.rb`:
- Around line 75-96: Add a second example that mirrors the existing context but
stubs verifier_double.call to return email_verified: 'true' (string) and then
posts to '/api/v1/auth/google' to assert the same behavior: enqueues
Users::MailerSendingJob with existing.id and provider_label 'Google', responds
with :accepted and body['error'] == 'verification_sent', and that
existing.reload.provider and existing.reload.uid remain nil; place this
alongside the current example in the same context so the controller's handling
of the string 'true' claim is exercised.
In `@spec/requests/api/v1/auth/otp_challenges_spec.rb`:
- Around line 6-14: The test setup replaces Rack::Attack.cache.store but never
restores it; capture the original Rack::Attack.cache.store at the start of the
before block (e.g. original_store = Rack::Attack.cache.store), then set
Rack::Attack.cache.store to ActiveSupport::Cache::MemoryStore.new and in the
after block restore it (Rack::Attack.cache.store = original_store) and call
Rack::Attack.reset! as needed; reference the before/after hooks and
Rack::Attack.cache.store so the teardown returns global state to its original
value.
In `@spec/requests/api/v1/auth/registrations_spec.rb`:
- Around line 42-46: Remove the redundant per-test queue adapter setup by
deleting the line setting ActiveJob::Base.queue_adapter = :test from the spec
(inside the example 'does not send welcome emails (trial not started)'); rely on
the global test configuration for ActiveJob instead and keep the rest of the
assertions that check Users::MailerSendingJob has not been enqueued.
In `@spec/requests/api/v1/auth/sessions_spec.rb`:
- Around line 52-62: The test for timing-attack resistance relies on an
implementation-level expectation on BCrypt::Password.new (inside the
"timing-attack resistance" example that posts to '/api/v1/auth/login') which can
look like implementation testing; add a brief comment above the
expect(BCrypt::Password).to receive(:new)... line that explains why this check
is necessary (timing measurements are flaky, so asserting that BCrypt
verification is invoked on unknown-email paths provides pragmatic assurance of
the constant-time path), referencing BCrypt::Password.new and the spec example
name so future readers understand the tradeoff.
In `@spec/requests/api/v1/users/two_factor_spec.rb`:
- Around line 9-13: Wrap the example setup that mutates Rack::Attack in an
around hook which captures and restores both Rack::Attack.cache.store and
Rack::Attack.enabled, instead of only restoring enabled; specifically, save
original_store = Rack::Attack.cache.store and original_enabled =
Rack::Attack.enabled before the example, yield the example, and after it
reassign Rack::Attack.cache.store = original_store and Rack::Attack.enabled =
original_enabled so the memory store and enabled flag are both restored
(references: Rack::Attack.cache.store, Rack::Attack.enabled, the existing before
block in two_factor_spec.rb).
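Both Rack::Attack spec comments reduce to the same capture-and-restore shape. Modeled here with a plain struct so the restore-on-exception behavior runs without rack-attack; in RSpec this body would live in an `around` hook:

```ruby
# Generic save/restore shape for the specs above; FakeAttack stands in for
# Rack::Attack so the ensure-based restore is exercised directly.
FakeAttack = Struct.new(:store, :enabled)

def with_attack_overrides(attack, store:, enabled:)
  original_store = attack.store
  original_enabled = attack.enabled
  attack.store = store
  attack.enabled = enabled
  yield
ensure
  # restore BOTH pieces of global state, even if the example raised
  attack.store = original_store
  attack.enabled = original_enabled
end
```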
In `@spec/services/auth/find_or_create_oauth_user_spec.rb`:
- Around line 10-17: The helper method named build in the spec shadows
FactoryBot.build; rename this helper (e.g., to build_service or oauth_service)
and update all usages in the spec to the new name so FactoryBot.build remains
available; specifically change the method definition that currently reads def
build(...) in the example group for FindOrCreateOauthUser (and any calls to
build(...)) to def build_service(...) (or def oauth_service(...)) and update
call sites to use build_service(...) so the described_class.new(...)
construction stays identical but no longer overrides FactoryBot's build.
In `@spec/services/auth/verify_account_link_token_spec.rb`:
- Around line 67-76: The tests should assert behavior, not implementation:
instead of checking Rails.cache keys after calling
described_class.mark_consumed!('test-jti'), create or obtain a real token/jti,
call described_class.mark_consumed!(jti) and then assert that the public
verification API (e.g., described_class.verify or described_class.verify!)
rejects that same token/jti (returns false or raises the expected error) to
prove replays are blocked; likewise change the blank-jti example to assert that
calling described_class.mark_consumed!('') is a no-op by expecting no error
(e.g., expect { described_class.mark_consumed!('') }.not_to raise_error) rather
than inspecting cache keys. Ensure you reference described_class.mark_consumed!
and the class's public verify/verify! method in the updated examples.
In `@spec/services/auth/verify_otp_challenge_token_spec.rb`:
- Around line 53-55: Remove the example that asserts class inheritance (the it
block creating described_class::TokenReplayed and expecting it to be a
described_class::InvalidToken); delete that spec example entirely from
verify_otp_challenge_token_spec.rb so the suite tests behavior only (the prior
replay test already covers the public contract) and do not replace it with
another implementation-structure assertion.
In `@spec/services/traccar/point_creator_spec.rb`:
- Around line 84-88: The test currently invokes call_service twice; capture the
service output once and assert side effects around the single invocation to
avoid duplicate calls. Change the test to assign the result (e.g., result = nil;
expect { result = call_service }.not_to(change { Point.where(user:).count }))
and then assert expect(result).to eq([]). Keep references to call_service and
Point.where(user:) as the targets to update.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 37432dab-0b28-41c8-a75e-83ec555ec6cd
⛔ Files ignored due to path filters (1)
`Gemfile.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (65)
`.app_version`, `.env.test`, `.gitignore`, `CHANGELOG.md`, `Gemfile`, `app/controllers/api/v1/auth/apple_controller.rb`, `app/controllers/api/v1/auth/base_controller.rb`, `app/controllers/api/v1/auth/google_controller.rb`, `app/controllers/api/v1/auth/otp_challenges_controller.rb`, `app/controllers/api/v1/auth/registrations_controller.rb`, `app/controllers/api/v1/auth/sessions_controller.rb`, `app/controllers/api/v1/subscriptions_controller.rb`, `app/controllers/api/v1/traccar/points_controller.rb`, `app/controllers/api/v1/users/destroy_controller.rb`, `app/controllers/api/v1/users/two_factor_controller.rb`, `app/controllers/api/v1/users_controller.rb`, `app/controllers/api_controller.rb`, `app/controllers/auth/account_links_controller.rb`, `app/controllers/users/registrations_controller.rb`, `app/mailers/users_mailer.rb`, `app/models/user.rb`, `app/services/auth/find_or_create_oauth_user.rb`, `app/services/auth/issue_account_link_token.rb`, `app/services/auth/issue_otp_challenge_token.rb`, `app/services/auth/verify_account_link_token.rb`, `app/services/auth/verify_apple_token.rb`, `app/services/auth/verify_google_token.rb`, `app/services/auth/verify_otp_challenge_token.rb`, `app/services/points/motion_data_extractor.rb`, `app/services/traccar/params.rb`, `app/services/traccar/point_creator.rb`, `app/views/layouts/application.html.erb`, `app/views/users_mailer/oauth_account_link.html.erb`, `app/views/users_mailer/oauth_account_link.text.erb`, `config/application.rb`, `config/initializers/rack_attack.rb`, `config/routes.rb`, `db/migrate/20260420190307_add_subscription_source_and_signup_variant_to_users.rb`, `db/migrate/20260421200001_add_partial_unique_index_on_users_provider_uid.rb`, `db/schema.rb`, `spec/config/encryption_keys_spec.rb`, `spec/models/user_spec.rb`, `spec/requests/api/v1/auth/apple_spec.rb`, `spec/requests/api/v1/auth/google_spec.rb`, `spec/requests/api/v1/auth/otp_challenges_spec.rb`, `spec/requests/api/v1/auth/registrations_spec.rb`, `spec/requests/api/v1/auth/sessions_spec.rb`, `spec/requests/api/v1/subscriptions_spec.rb`, `spec/requests/api/v1/traccar/points_spec.rb`, `spec/requests/api/v1/users/destroy_spec.rb`, `spec/requests/api/v1/users/two_factor_spec.rb`, `spec/requests/api/v1/users_spec.rb`, `spec/requests/auth/account_links_spec.rb`, `spec/services/auth/find_or_create_oauth_user_spec.rb`, `spec/services/auth/issue_account_link_token_spec.rb`, `spec/services/auth/issue_otp_challenge_token_spec.rb`, `spec/services/auth/verify_account_link_token_spec.rb`, `spec/services/auth/verify_apple_token_spec.rb`, `spec/services/auth/verify_google_token_spec.rb`, `spec/services/auth/verify_otp_challenge_token_spec.rb`, `spec/services/points/motion_data_extractor_spec.rb`, `spec/services/traccar/params_spec.rb`, `spec/services/traccar/point_creator_spec.rb`, `spec/swagger/api/v1/traccar/points_controller_spec.rb`, `swagger/v1/swagger.yaml`
```ruby
rescue Auth::VerifyAppleToken::InvalidToken => e
  return render_auth_error("Apple token verification failed: #{e.message}")
```
Don't return raw verifier errors to the client.
Interpolating e.message here leaks token-verification internals and makes failures depend on exception wording from the verifier stack. Return a stable generic error and log the detailed cause server-side instead.
Minimal hardening:

```diff
- rescue Auth::VerifyAppleToken::InvalidToken => e
-   return render_auth_error("Apple token verification failed: #{e.message}")
+ rescue Auth::VerifyAppleToken::InvalidToken => e
+   Rails.logger.info("Apple token verification failed: #{e.message}")
+   return render_auth_error('Apple token verification failed')
  end
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| rescue Auth::VerifyAppleToken::InvalidToken => e | |
| return render_auth_error("Apple token verification failed: #{e.message}") | |
| rescue Auth::VerifyAppleToken::InvalidToken => e | |
| Rails.logger.info("Apple token verification failed: #{e.message}") | |
| return render_auth_error('Apple token verification failed') | |
| end |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/api/v1/auth/apple_controller.rb` around lines 11 - 12, The
rescue block for Auth::VerifyAppleToken::InvalidToken in AppleController
currently returns the verifier's raw error message to clients; change it to
return a stable generic message (e.g. render_auth_error("Apple token
verification failed")) and move the detailed information to server logs by
logging e.message (and optionally e.backtrace) using the controller logger (e.g.
logger.error or Rails.logger.error) before rendering; update the rescue handling
in app/controllers/api/v1/auth/apple_controller.rb around the rescue
Auth::VerifyAppleToken::InvalidToken => e to implement this.
```ruby
rescue Auth::VerifyGoogleToken::InvalidToken => e
  return render_auth_error("Google token verification failed: #{e.message}")
```
Don't echo verifier exception text back to the client.
e.message can expose JWT/parsing/certificate details and makes the auth API contract depend on internal exception wording. Return a generic failure here and log the exception server-side instead.
Minimal hardening:

```diff
- rescue Auth::VerifyGoogleToken::InvalidToken => e
-   return render_auth_error("Google token verification failed: #{e.message}")
+ rescue Auth::VerifyGoogleToken::InvalidToken => e
+   Rails.logger.info("Google token verification failed: #{e.message}")
+   return render_auth_error('Google token verification failed')
  end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/api/v1/auth/google_controller.rb` around lines 11 - 12, The
rescue block for Auth::VerifyGoogleToken::InvalidToken in GoogleController
should stop echoing the exception message to clients; replace the interpolated
e.message in the render_auth_error call with a generic message (e.g., "Google
token verification failed") and log the exception server-side instead (use
Rails.logger.error or the controller's logger to record e and its backtrace).
Locate the rescue handling of Auth::VerifyGoogleToken::InvalidToken in the
GoogleController and update the render_auth_error invocation and add a
logger.error line before returning the generic error.
```ruby
user = User.find_by(email: params[:email]&.downcase)

authenticated = constant_time_authenticate(user, params[:password].to_s)
```
Strip surrounding whitespace from the submitted email.
downcase alone makes " user@example.com " miss the lookup, which is easy to hit with mobile autofill or copy/paste. Normalizing with strip.downcase avoids false “invalid email or password” responses.
✂️ Proposed fix:

```diff
- user = User.find_by(email: params[:email]&.downcase)
+ normalized_email = params[:email].to_s.strip.downcase
+ user = User.find_by(email: normalized_email)
```

📝 Committable suggestion
```ruby
normalized_email = params[:email].to_s.strip.downcase
user = User.find_by(email: normalized_email)
authenticated = constant_time_authenticate(user, params[:password].to_s)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/api/v1/auth/sessions_controller.rb` around lines 13 - 15, The
email lookup currently only downcases the submitted email in SessionsController,
causing lookups to fail for inputs with surrounding whitespace; update the
normalization where User.find_by is called to strip then downcase the incoming
email (i.e., change the parameter passed to User.find_by(email: ...) to use
strip.downcase on params[:email]) so constant_time_authenticate receives the
correctly normalized user and params[:password] remains handled as before.
```ruby
before_action :authenticate_active_api_user!, only: %i[create]
before_action :validate_points_limit, only: %i[create]
```
Add the write-API guard before accepting point writes.
This endpoint mutates user data, but it only authenticates the caller. Without require_write_api!, read-only API credentials can still ingest points.
🔐 Suggested fix:

```diff
 class Api::V1::Traccar::PointsController < ApiController
   before_action :authenticate_active_api_user!, only: %i[create]
+  before_action :require_write_api!, only: %i[create]
   before_action :validate_points_limit, only: %i[create]
```

Based on learnings: Applies to `{app/models/**/*.rb,app/controllers/**/*.rb,app/queries/**/*.rb}`: Use require_pro_api! or require_write_api! before_actions in API controllers.
📝 Committable suggestion
```ruby
before_action :authenticate_active_api_user!, only: %i[create]
before_action :require_write_api!, only: %i[create]
before_action :validate_points_limit, only: %i[create]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/api/v1/traccar/points_controller.rb` around lines 4 - 5, Add
the write-API guard to the controller so point-creating requests cannot be
performed with read-only credentials: in the PointsController ensure you call
require_write_api! (or require_pro_api! if appropriate) as a before_action for
the create action alongside authenticate_active_api_user! and
validate_points_limit so that only write-enabled API keys can mutate user data;
locate the existing before_action declarations (authenticate_active_api_user!
and validate_points_limit) and add require_write_api! to that list.
```ruby
def show
  result =
    begin
      Auth::VerifyAccountLinkToken.new(params[:token]).call
    rescue Auth::VerifyAccountLinkToken::TokenReplayed
      return redirect_to(new_user_session_path, alert: 'This link has already been used.')
    rescue Auth::VerifyAccountLinkToken::InvalidToken
      return redirect_to(new_user_session_path, alert: 'Link invalid or expired.')
    end

  user = result.user

  if user.provider.present? && (user.provider != result.provider || user.uid != result.uid)
    return redirect_to(new_user_session_path,
                       alert: "This account is already linked to a different #{user.provider} identity.")
  end

  user.update!(provider: result.provider, uid: result.uid)
  Auth::VerifyAccountLinkToken.mark_consumed!(result.jti)
  sign_in(user)

  redirect_to root_path, notice: "#{provider_label(result.provider)} is now linked to your account."
end
```
Avoid performing the link on a GET request.
This action mutates the account, consumes the token, and signs the user in from a show endpoint. Mail scanners and browser/link preview prefetchers routinely follow GET links, so they can burn the one-time token or link the account before the user actually interacts. Please split this into a safe GET confirmation page and a POST that performs the merge.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/auth/account_links_controller.rb` around lines 11 - 33, The
show action currently performs state changes (user.update!,
Auth::VerifyAccountLinkToken.mark_consumed!, sign_in) on a GET which risks token
theft/consumption by prefetchers; split it into a safe GET confirmation and a
POST that performs the mutation: keep a new confirmation action (e.g., confirm
or keep show) that only verifies token validity without consuming it (use or add
a non-consuming verifier on Auth::VerifyAccountLinkToken or call a read-only
method) and renders a confirmation page, and move the mutation logic (verify
token, handle TokenReplayed/InvalidToken, update! provider/uid on user, call
Auth::VerifyAccountLinkToken.mark_consumed!(result.jti), sign_in(user), and
redirect with notice) into a create/update POST action (e.g.,
AccountLinksController#create) protected by CSRF; ensure error redirects for
TokenReplayed and InvalidToken remain on the POST and that the GET does not call
mark_consumed! or perform user.update!.
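The GET/POST split reduces to one invariant: the GET confirmation may only peek at the token, and only the CSRF-protected POST consumes it. A framework-free sketch of that contract (class and method names are invented):

```ruby
# Hypothetical model of the two-step flow: `peek` is what the GET page calls
# (idempotent, safe for mail-scanner prefetches); `consume!` is what the POST
# action calls exactly once to perform the link.
class AccountLinkToken
  def initialize
    @consumed = false
  end

  def peek
    @consumed ? :replayed : :valid
  end

  def consume!
    return :replayed if @consumed

    @consumed = true
    :linked
  end
end
```

Because `peek` never mutates state, a prefetcher can hit the GET URL any number of times without burning the one-time token.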
```ruby
def up
  # The existing `index_users_on_provider_and_uid` index is unique across ALL rows
  # (including NULL provider/uid rows, which PostgreSQL treats as distinct NULLs —
  # so uniqueness is effectively not enforced for rows without OAuth).
  # Replace it with a partial unique index that only covers rows where both
  # provider and uid are present, making the semantic intent explicit and
  # avoiding the edge case where NULL,NULL rows slip through.
  return if index_name_exists?(:users, INDEX_NAME)

  add_index :users, %i[provider uid],
            unique: true,
            where: 'provider IS NOT NULL AND uid IS NOT NULL',
            algorithm: :concurrently,
            name: INDEX_NAME
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify whether both the old and new provider/uid indexes coexist.
rg -n "index_users_on_provider_and_uid(_present)?|add_index\s+:users,\s*%i\[provider uid\]" db/schema.rb db/migrate
```

Repository: Freika/dawarich
Length of output: 902
Drop the old unique index when adding the partial unique index.
The migration adds index_users_on_provider_and_uid_present but leaves the existing index_users_on_provider_and_uid in place. Both indexes now coexist on the same column pair, creating redundant write overhead. Either drop the old index in the up method before adding the new one, or combine them into a single migration step. The partial unique index already provides the needed uniqueness constraint for populated OAuth identities.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@db/migrate/20260421200001_add_partial_unique_index_on_users_provider_uid.rb`
around lines 8 - 21, Drop the old full unique index before adding the new
partial unique index to avoid redundant indexes: in the migration's up method,
check for and remove the existing index named "index_users_on_provider_and_uid"
(using index_name_exists? and remove_index / remove_index_concurrently) prior to
calling add_index for INDEX_NAME with where: 'provider IS NOT NULL AND uid IS
NOT NULL'; also ensure the down method reverses this by removing INDEX_NAME and
restoring the original index if needed. Use the existing symbols INDEX_NAME,
index_name_exists?, add_index, and remove_index/remove_index_concurrently to
locate and implement the changes.
```ruby
t.index ["provider", "uid"], name: "index_users_on_provider_and_uid", unique: true
t.index ["provider", "uid"], name: "index_users_on_provider_and_uid_present", unique: true, where: "((provider IS NOT NULL) AND (uid IS NOT NULL))"
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Search for any code that might depend on the original index name
rg -n 'index_users_on_provider_and_uid[^_]' --type ruby
```

Repository: Freika/dawarich
Length of output: 326

🏁 Script executed:

```shell
cat db/migrate/20260421200001_add_partial_unique_index_on_users_provider_uid.rb
```

Repository: Freika/dawarich
Length of output: 1135
Remove the redundant unconditional unique index on (provider, uid).
The migration that added the partial unique index (line 565) was intended to "replace" the unconditional index (line 564), but only added the new one without removing the old. The unconditional index provides no meaningful uniqueness enforcement—PostgreSQL treats NULL values as distinct, so it doesn't prevent duplicate (NULL, NULL) pairs. The partial index (line 565) effectively replaces its functionality by enforcing uniqueness only for non-NULL provider/uid pairs (OAuth users).
Create a follow-up migration to remove the redundant index:

```ruby
remove_index :users, name: "index_users_on_provider_and_uid", algorithm: :concurrently
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@db/schema.rb` around lines 564 - 565, Create a new Rails migration that
removes the redundant unconditional index by calling remove_index :users, name:
"index_users_on_provider_and_uid", algorithm: :concurrently; because algorithm:
:concurrently cannot run inside a transaction, add disable_ddl_transaction! at
the top of the migration class and ensure the migration only drops the index
(leaving the partial index "index_users_on_provider_and_uid_present" intact) so
uniqueness for non-NULL (provider, uid) pairs remains enforced.
```ruby
gem 'devise'
gem 'apple_id', '~> 1.2'
gem 'devise-two-factor'
gem 'google-id-token', '~> 1.4'
```
Fix gem ordering to satisfy RuboCop.
The pipeline is failing due to Bundler/OrderedGems violation. Gems should be sorted alphabetically within their section.
🔧 Proposed fix for gem ordering:

```diff
+gem 'apple_id', '~> 1.2'
 gem 'devise'
-gem 'apple_id', '~> 1.2'
 gem 'devise-two-factor'
+gem 'fit4ruby', '~> 3.13'
+gem 'flipper', '~> 1.3'
+gem 'flipper-active_record', '~> 1.3'
+gem 'flipper-ui', '~> 1.3'
+gem 'foreman'
+gem 'geocoder', github: 'Freika/geocoder', branch: 'master'
 gem 'google-id-token', '~> 1.4'
-gem 'fit4ruby', '~> 3.13'
```

Move `apple_id` before `devise` and `google-id-token` to its alphabetical position (after `geocoder`).
🧰 Tools
🪛 GitHub Check: rubocop
[failure] 18-18:
Bundler/OrderedGems: Gems should be sorted in an alphabetical order within their section of the Gemfile. Gem apple_id should appear before devise.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@Gemfile` around lines 17 - 20, The Gemfile violates Bundler/OrderedGems by
not alphabetizing gems; reorder the gem declarations so they are alphabetical
within their section — specifically move the gem 'apple_id' so gems read in
alphabetical order (e.g., ensure 'apple_id' appears before 'devise' and
'google-id-token' and after 'geocoder' if present); update the Gemfile gem list
accordingly and run bundle install / rubocop to confirm the Bundler/OrderedGems
offense is resolved.
```ruby
context 'when payload is malformed' do
  before { payload[:location][:timestamp] = 'not-a-date' }

  it 'returns ok and does not create a point' do
    expect do
      post "/api/v1/traccar/points?api_key=#{user.api_key}", params: payload, as: :json
    end.not_to change(Point, :count)

    expect(response).to have_http_status(:ok)
```
Don’t codify silent drops as success.
Expecting 200 OK here tells the client the fix was accepted even though nothing is persisted. For an ingestion endpoint, that turns malformed payloads into permanent data loss because the sender will not retry. This should fail with a 4xx and a small error payload instead.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@spec/requests/api/v1/traccar/points_spec.rb` around lines 52 - 60, The test
currently treats malformed payloads as success (expecting HTTP 200) which hides
dropped data; update the behavior so validation errors return a 4xx with an
error body and adjust the spec accordingly. Modify the endpoint handling code
(e.g., Api::V1::Traccar::PointsController#create or the method that
parses/ingests payloads such as Traccar::PointParser.parse or similar) to detect
invalid timestamps and respond with a 422 (or 400) and a small JSON error
payload when payload validation fails, and then change the spec in
spec/requests/api/v1/traccar/points_spec.rb to expect that 4xx response and the
error JSON instead of :ok while still asserting no Point was created.
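The validation step the prompt describes can be sketched as a parse-or-422 function. The controller and parser named in the prompt are the real targets; this standalone function only shows the shape, with an invented error body:

```ruby
require 'time'
require 'json'

# Reject unparseable timestamps with a 422 + small JSON error body instead of
# silently returning 200; nil means the payload may proceed to ingestion.
def timestamp_error(raw)
  Time.iso8601(raw.to_s)
  nil
rescue ArgumentError
  [422, { error: 'invalid_timestamp', value: raw.to_s }.to_json]
end
```

Returning a 4xx here matters because tracker clients retry on failure; a 200 tells the sender the fix was stored and the data is lost for good.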
```yaml
requestBody:
  content:
    application/json:
      schema:
        type: object
        properties:
          device_id:
            type: string
            description: Tracker / device identifier
          location:
            type: object
            properties:
              timestamp:
                type: string
                description: ISO 8601 timestamp of the location fix
              latitude:
                type: number
                description: Latitude in decimal degrees
              longitude:
                type: number
                description: Longitude in decimal degrees
              accuracy:
                type: number
                description: Horizontal accuracy in meters
              speed:
                type: number
                description: Speed in meters per second
              heading:
                type: number
                description: Bearing in degrees (0-360)
              altitude:
                type: number
                description: Altitude in meters
              is_moving:
                type: boolean
                description: Whether the device was moving at the time of the fix
              odometer:
                type: number
                description: Cumulative distance traveled in meters
              event:
                type: string
                description: Event type that produced the fix (e.g. motionchange, heartbeat)
            required:
              - latitude
              - longitude
              - timestamp
          battery:
            type: object
            properties:
              level:
                type: number
                description: Battery level as a 0-1 fraction
              is_charging:
                type: boolean
                description: Whether the device is currently charging
          activity:
            type: object
            properties:
              type:
                type: string
                description: Detected activity (walking, running, in_vehicle, ...)
        required:
          - device_id
          - location
      examples:
```
🧩 Analysis chain
🏁 Script executed:
```shell
# Check if the file exists and read the specific lines mentioned
if [ -f "swagger/v1/swagger.yaml" ]; then
  echo "=== File content (lines 3200-3267) ==="
  sed -n '3200,3267p' swagger/v1/swagger.yaml
  echo ""
  echo "=== Full context around requestBody (lines 3180-3290) ==="
  sed -n '3180,3290p' swagger/v1/swagger.yaml
else
  echo "File swagger/v1/swagger.yaml not found"
fi
```

Repository: Freika/dawarich
Length of output: 6534
🌐 Web query:
OpenAPI 3.0 requestBody required field specification
💡 Result:
In OpenAPI 3.0, the requestBody object has a 'required' field that is a boolean indicating whether the request body is required in the request. It defaults to false. The 'content' field is REQUIRED and specifies the media types and schemas. Required fields within the request body schema (e.g., specific properties of an object) are specified using the 'required' array in the Schema Object under the relevant content's schema, not in requestBody itself. Example:

```yaml
paths:
  /example:
    post:
      requestBody:
        required: true # Makes the entire body required
        content:
          application/json:
            schema:
              type: object
              required: ['name', 'email'] # Makes these properties required
              properties:
                name:
                  type: string
                email:
                  type: string
```
Citations:
- 1: https://github.com/OAI/OpenAPI-Specification/blob/main/versions/3.0.3.md
- 2: https://swagger.io/docs/specification/v3_0/describing-request-body/describing-request-body/
- 3: https://spec.openapis.org/oas/v3.0.0.html
Add required: true to the requestBody.
In OpenAPI 3, requestBody is optional by default. Without setting required: true, generated clients and documentation will treat POST /api/v1/traccar/points as callable without a body, even though device_id and location are mandatory properties.
Location to add the field
At line 3200, after requestBody:, insert required: true before content::
```yaml
requestBody:
  required: true
  content:
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@swagger/v1/swagger.yaml` around lines 3200 - 3267, The OpenAPI requestBody
for the POST /api/v1/traccar/points operation is currently optional; update the
operation's requestBody object (the requestBody block under the POST
/api/v1/traccar/points definition) to include required: true directly under
requestBody so clients and docs require a body containing the mandatory
device_id and location properties.
Actionable comments posted: 11
♻️ Duplicate comments (6)
app/services/traccar/params.rb (1)
73-79: ⚠️ Potential issue | 🟡 Minor

Preserve 0% battery instead of converting it to `nil`.

On Line 78, `value.positive?` drops a valid `0` reading. That loses useful device-state data.

🔧 Proposed fix

```diff
 def battery_level
   level = battery[:level]
   return nil if level.nil?

   value = (level.to_f * 100).to_i
-  value.positive? ? value : nil
+  value >= 0 ? value : nil
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/traccar/params.rb` around lines 73 - 79, The battery_level method currently converts a legitimate 0% reading to nil because it uses value.positive?; change the check so 0 is preserved—return value when it is zero or positive and only return nil for negative or missing values (keep the existing nil-on-missing behavior for battery[:level]). Update the conditional in battery_level (the value variable calculation path) to allow value == 0 (e.g., use value >= 0 or !value.negative?) instead of value.positive?.

app/services/auth/verify_google_token.rb (1)
15-16: ⚠️ Potential issue | 🟡 Minor

Treat blank Google client IDs as missing before validation.

Using `.compact` still allows `""` values through, so misconfiguration can bypass the explicit "not configured" guard.

Proposed fix

```diff
- client_ids = [ENV['GOOGLE_IOS_CLIENT_ID'], ENV['GOOGLE_ANDROID_CLIENT_ID']].compact
+ client_ids = [ENV['GOOGLE_IOS_CLIENT_ID'], ENV['GOOGLE_ANDROID_CLIENT_ID']].filter_map(&:presence)
  raise InvalidToken, 'Google client IDs not configured' if client_ids.empty?
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/auth/verify_google_token.rb` around lines 15 - 16, The current client_ids assignment uses .compact which doesn't remove empty strings; update the client_ids construction in verify_google_token (the client_ids variable) to normalize to strings and reject blank values (e.g. map(&:to_s).reject(&:empty?) or, if ActiveSupport is available, map(&:to_s).reject(&:blank?)) so that empty-string ENV values are treated as missing and the existing raise InvalidToken, 'Google client IDs not configured' guard will trigger correctly.

app/controllers/api/v1/auth/google_controller.rb (1)
11-12: ⚠️ Potential issue | 🟠 Major

Do not expose verifier exception text in API responses.

Return a generic message to clients and log the detailed exception server-side instead.

Proposed hardening

```diff
 begin
   Auth::VerifyGoogleToken.new(params[:id_token], nonce: params[:nonce]).call
 rescue Auth::VerifyGoogleToken::InvalidToken => e
-  return render_auth_error("Google token verification failed: #{e.message}")
+  Rails.logger.info("Google token verification failed: #{e.message}")
+  return render_auth_error('Google token verification failed')
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/api/v1/auth/google_controller.rb` around lines 11 - 12, The rescue in GoogleController currently returns the verifier exception text to clients; change the rescue for Auth::VerifyGoogleToken::InvalidToken to render a generic error message via render_auth_error (e.g., "Google token verification failed") and instead log the full exception server-side (use Rails.logger.error or the controller's logger to record e.message and backtrace) so clients don't see internal exception details; keep the rescue block handling in the same place and preserve returning the auth error response.

app/controllers/auth/account_links_controller.rb (2)
16-27: ⚠️ Potential issue | 🟠 Major

Make token consumption and account linking atomic.

`consume!` happens before `user.update!`, so a failed write burns the one-time token with no way to retry. Wrap consumption and the provider/uid update in one transactional step, then sign in only after it commits.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/auth/account_links_controller.rb` around lines 16 - 27, The current flow calls Auth::VerifyAccountLinkToken.consume!(result.jti) before updating the user, which can burn the one‑time token if user.update! fails; change AccountLinksController so consumption and the provider/uid assignment happen inside a single DB transaction (e.g., wrap Auth::VerifyAccountLinkToken.consume!(result.jti) and user.update!(provider: result.provider, uid: result.uid) in a transaction block), ensure you re-check the provider/uid mismatch inside the transaction (and fail/rollback if mismatched), and only perform the sign-in/redirect after the transaction successfully commits so token consumption is atomic with the user update.
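The ordering bug above can be illustrated without Rails. The toy classes below are purely illustrative (in the real controller the equivalent fix is wrapping both steps in `ActiveRecord::Base.transaction`): consuming the token before the user write burns it on failure, while undoing consumption on failure keeps the link retryable.

```ruby
# Toy one-time-token store; stands in for the Auth::VerifyAccountLinkToken
# consume! bookkeeping in the review comment.
class TokenStore
  def initialize
    @consumed = {}
  end

  def consume!(jti)
    @consumed[jti] = true
  end

  def unconsume!(jti)
    @consumed.delete(jti)
  end

  def consumed?(jti)
    @consumed.key?(jti)
  end
end

def link_account(store, jti, update_succeeds:)
  store.consume!(jti)
  raise 'update failed' unless update_succeeds # stands in for user.update!

  :linked
rescue StandardError
  store.unconsume!(jti) # what a transaction rollback provides for free
  :failed
end

store = TokenStore.new
puts link_account(store, 'abc', update_succeeds: false) # => failed
puts store.consumed?('abc')                             # => false, token still usable
```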
6-38: ⚠️ Potential issue | 🟠 Major

Move the linking side effects off GET /auth/account_link.

This `show` action consumes the token, mutates the user, and may sign them in. Mail scanners and browser prefetchers can follow the link and burn or apply it before the user explicitly confirms.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/auth/account_links_controller.rb` around lines 6 - 38, The show action currently performs destructive side effects (Auth::VerifyAccountLinkToken.consume!, user.update!, and sign_in) on a GET request; change it so GET /auth/account_link (AccountLinksController#show) only verifies the token non-destructively and renders a confirmation page, and move the token-consuming and mutating logic (calls to Auth::VerifyAccountLinkToken.consume!, user.update!(provider:, uid:), and sign_in(user)) into a new POST action (e.g., AccountLinksController#create or confirm) that the confirmation page POSTs to; preserve the same error/redirect branches for invalid/replayed tokens by checking token validity in show and perform the actual consume!/update/sign_in in the POST handler.

app/controllers/api/v1/auth/apple_controller.rb (1)
10-13: ⚠️ Potential issue | 🟠 Major

Return a stable auth failure message here.

Exposing `e.message` leaks token-verification details to clients and makes failures depend on verifier exception text. Log the detailed cause server-side and return a fixed authentication error instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/api/v1/auth/apple_controller.rb` around lines 10 - 13, The controller is returning the verifier's exception message to clients; change Auth::VerifyAppleToken rescue block to log the full exception server-side (e.g. Rails.logger.error with e.message and e.backtrace) and call render_auth_error with a stable, non-sensitive message such as "Apple token verification failed" (omit e.message); keep the rescue targeting Auth::VerifyAppleToken::InvalidToken and do not expose verifier internals to the response.
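The hardening both the Apple and Google comments ask for follows one pattern: keep the verifier detail in the server log, hand the client a fixed message. A plain-Ruby sketch (the helper name and payload shape are illustrative, not the app's actual API):

```ruby
require 'logger'

LOGGER = Logger.new($stdout)

# Hypothetical helper: detailed_error never reaches the response body, only
# the server-side log; the client sees a stable, provider-prefixed message.
def auth_failure_body(provider, detailed_error)
  LOGGER.info("#{provider} token verification failed: #{detailed_error}")
  { error: "#{provider} token verification failed" }
end

body = auth_failure_body('Apple', 'kid mismatch in JWT header')
puts body[:error] # => Apple token verification failed
```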
🧹 Nitpick comments (2)
app/services/traccar/params.rb (1)
36-36: Use `Point.altitude_decimal_supported?` instead of per-call column lookup.

Line 36 can reuse the model's memoized capability helper for better consistency and lower overhead.

♻️ Proposed refactor

```diff
- attrs[:altitude_decimal] = altitude_value if Point.column_names.include?('altitude_decimal')
+ attrs[:altitude_decimal] = altitude_value if Point.altitude_decimal_supported?
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/services/traccar/params.rb` at line 36, Replace the runtime column lookup using Point.column_names.include?('altitude_decimal') with the model helper Point.altitude_decimal_supported? to use the memoized capability check; update the assignment in params (where attrs[:altitude_decimal] = altitude_value is guarded) to call Point.altitude_decimal_supported? instead so we avoid repeated column name queries and ensure consistent capability checks across code.

app/controllers/api/v1/users/destroy_controller.rb (1)
29-31: Use a request-aware host fallback for confirmation URLs.

Line 30 falling back to `'localhost'` can generate unusable links when host config is missing. Prefer `request.host` as fallback (similar to the web controller) to keep links valid in real deployments.

Proposed refactor

```diff
- ActionMailer::Base.default_url_options[:host] || ENV.fetch('APP_HOST', 'localhost')
+ ActionMailer::Base.default_url_options[:host] || ENV['APP_HOST'].presence || request.host
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/api/v1/users/destroy_controller.rb` around lines 29 - 31, The default_mailer_host method should use the controller request host as a fallback instead of hardcoding 'localhost'; update default_mailer_host to return ActionMailer::Base.default_url_options[:host] || request&.host || ENV.fetch('APP_HOST', 'localhost') so it uses the current request host when available (ensure this is an instance method in the destroy controller so request is accessible).
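The fallback chain the nitpick describes can be sketched outside Rails. ActiveSupport's `#presence` is simulated here with a blank check, and the method name is illustrative:

```ruby
# Hypothetical host resolution: take the first non-blank candidate in priority
# order (configured mailer host, APP_HOST env var, request host), else the
# 'localhost' last resort.
def mailer_host(configured_host, app_host_env, request_host)
  [configured_host, app_host_env, request_host]
    .find { |h| h && !h.to_s.strip.empty? } || 'localhost'
end

puts mailer_host(nil, '', 'dawarich.example.com')              # => dawarich.example.com
puts mailer_host('mail.example.com', '', 'ignored.example.com') # => mail.example.com
```

The point of the comment is the middle case: a blank `APP_HOST` should fall through to the request host rather than win the chain.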
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/controllers/api_controller.rb`:
- Around line 127-128: The current lookup uses params[:api_key] ||
extract_bearer_token which treats an empty api_key as present and prevents
falling back to the Authorization header; change the logic to use presence so
blank strings are ignored (use params[:api_key].presence ||
extract_bearer_token) so extract_bearer_token can be used when api_key is empty;
update the line where params[:api_key] is used and keep extract_bearer_token as
the fallback.
In `@app/controllers/api/v1/users/destroy_controller.rb`:
- Around line 10-25: The API destroy action issues confirmation links for any
authenticated user, bypassing the web flow's can_delete_account? check; update
the destroy method in the API controller to first call the same eligibility
check (e.g., current_api_user.can_delete_account? or the shared helper used by
RegistrationsController) and, if false, render an appropriate error response
(for example status :forbidden with a clear JSON error) without calling
Users::IssueDestroyToken or enqueueing Users::MailerSendingJob; only issue the
token and enqueue the mail when the eligibility check passes.
In `@app/controllers/api/v1/users/two_factor_controller.rb`:
- Around line 37-40: The backup_codes action currently calls
current_api_user.generate_otp_backup_codes! unconditionally; add a guard in the
backup_codes method that checks current_api_user.otp_required_for_login (or
whatever flag/method indicates 2FA is enabled) and reject the request if false
(render a JSON error and non-2xx status such as 403 or 422) so backup codes are
only generated and saved when 2FA is active; only call
generate_otp_backup_codes! and current_api_user.save! after that check.
- Around line 4-6: Add a before_action call to require_write_api! for the
mutating 2FA endpoints so they cannot be used via read-only API keys;
specifically, add a before_action :require_write_api!, only: %i[setup confirm
backup_codes destroy] alongside the existing before_actions
(ensure_two_factor_available, ensure_password_provided,
ensure_credential_provided) to protect the methods that rotate OTP secrets,
return recovery codes, or disable 2FA.
In `@app/controllers/users/destroy_confirmations_controller.rb`:
- Around line 6-35: The show action currently consumes the token and performs
deletion/enqueue/sign-out on a GET; change it so show only verifies the token
(call Users::VerifyDestroyToken.new(params[:token]).call) but does NOT call
Users::VerifyDestroyToken.consume!, user.mark_as_deleted_atomically!,
Users::DestroyJob.perform_later, or sign_out; instead render a confirmation view
with a form that POSTs to a new destroy action (or use a DELETE action) which
will perform the token consumption
(Users::VerifyDestroyToken.consume!(result.jti)), check
user.can_delete_account?, call user.mark_as_deleted_atomically!, enqueue
Users::DestroyJob.perform_later(user.id), and sign_out as necessary—move all
side effects into that POST/destroy endpoint.
- Around line 16-35: Move the call to
Users::VerifyDestroyToken.consume!(result.jti) so the token is only burned after
the deletion transition succeeds: first locate the user (result.user), check
user.can_delete_account?, then attempt user.mark_as_deleted_atomically! and only
if that returns true call Users::VerifyDestroyToken.consume!(result.jti),
enqueue Users::DestroyJob.perform_later(user.id), sign_out(user) as needed, and
redirect to the success notice; otherwise leave the token intact and redirect
with the appropriate alert.
In `@app/services/auth/find_or_create_oauth_user.rb`:
- Around line 56-62: The rate-limit key currently uses only
LINK_EMAIL_RATE_LIMIT_KEY_PREFIX + existing_user.id which conflates different
provider/link targets; change the cache_key construction in
find_or_create_oauth_user (where cache_key is defined and used with
Rails.cache.write and LINK_EMAIL_RATE_LIMIT_WINDOW) to include `@provider` and
`@uid` (or a stable fingerprint of them) so each link attempt (e.g., Apple vs
Google) gets its own throttle; ensure this updated cache_key is used for the
acquired check and does not affect handle_email_collision or the raised
LinkVerificationSent behavior.
In `@app/services/auth/verify_account_link_token.rb`:
- Around line 19-28: Require the temporal claims in JWT.decode and remove the
manual iat age check: when decoding `@token` in VerifyAccountLinkToken, pass
options to JWT.decode to require 'exp' and 'iat' (e.g. required_claims:
['exp','iat'] and enable verify_expiration/verify_iat) instead of treating them
as optional, then delete the explicit manual iat age check that compares
Time.now to decoded['iat'] (the block referencing
Auth::IssueAccountLinkToken::TTL); keep the existing purpose and jti
validations.
In `@config/initializers/apple_id.rb`:
- Around line 3-7: The initializer should not rescue NameError which can hide
boot-time bugs; instead guard the assignment by checking the constant exists
before wiring. Replace the current rescue-based pattern in
Rails.application.config.after_initialize with an explicit guard such as using
defined?(AppleID::JWKS) or defined?(AppleID) && defined?(AppleID::JWKS) and only
attempt to set AppleID::JWKS.cache = Rails.cache when the guard passes; remove
the broad NameError rescue and keep or narrow any rescue (e.g., only LoadError)
and log via Rails.logger.warn mentioning the specific error if one is caught.
In `@config/initializers/rack_attack.rb`:
- Around line 147-152: The throttle block for Rack::Attack ('users/exist')
returns nil when X-Webhook-Secret is blank so the rule is skipped; change the
block in Rack::Attack.throttle('users/exist', ...) to compute the SHA256 digest
when the header is present and otherwise return a deterministic fallback key
based on the request IP (e.g., use req.ip or "ip:#{req.ip}") so the throttle
always yields a non-nil discriminator; update references to
req.get_header('HTTP_X_WEBHOOK_SECRET') and req.ip inside the block to implement
this fallback.
In `@spec/services/users/issue_destroy_token_spec.rb`:
- Line 16: Replace Time.now with Time.current in the expectation line inside the
spec so it uses Rails timezone-aware time; update the expectation that compares
decoded['exp'] to be_within(5).of((Time.current + described_class::TTL).to_i) to
satisfy the RuboCop Rails/TimeZone rule while keeping the described_class::TTL
reference unchanged.
---
Duplicate comments:
In `@app/controllers/api/v1/auth/apple_controller.rb`:
- Around line 10-13: The controller is returning the verifier's exception
message to clients; change Auth::VerifyAppleToken rescue block to log the full
exception server-side (e.g. Rails.logger.error with e.message and e.backtrace)
and call render_auth_error with a stable, non-sensitive message such as "Apple
token verification failed" (omit e.message); keep the rescue targeting
Auth::VerifyAppleToken::InvalidToken and do not expose verifier internals to the
response.
In `@app/controllers/api/v1/auth/google_controller.rb`:
- Around line 11-12: The rescue in GoogleController currently returns the
verifier exception text to clients; change the rescue for
Auth::VerifyGoogleToken::InvalidToken to render a generic error message via
render_auth_error (e.g., "Google token verification failed") and instead log the
full exception server-side (use Rails.logger.error or the controller's logger to
record e.message and backtrace) so clients don't see internal exception details;
keep the rescue block handling in the same place and preserve returning the auth
error response.
In `@app/controllers/auth/account_links_controller.rb`:
- Around line 16-27: The current flow calls
Auth::VerifyAccountLinkToken.consume!(result.jti) before updating the user,
which can burn the one‑time token if user.update! fails; change
AccountLinksController so consumption and the provider/uid assignment happen
inside a single DB transaction (e.g., wrap
Auth::VerifyAccountLinkToken.consume!(result.jti) and user.update!(provider:
result.provider, uid: result.uid) in a transaction block), ensure you re-check
the provider/uid mismatch inside the transaction (and fail/rollback if
mismatched), and only perform the sign-in/redirect after the transaction
successfully commits so token consumption is atomic with the user update.
- Around line 6-38: The show action currently performs destructive side effects
(Auth::VerifyAccountLinkToken.consume!, user.update!, and sign_in) on a GET
request; change it so GET /auth/account_link (AccountLinksController#show) only
verifies the token non-destructively and renders a confirmation page, and move
the token-consuming and mutating logic (calls to
Auth::VerifyAccountLinkToken.consume!, user.update!(provider:, uid:), and
sign_in(user)) into a new POST action (e.g., AccountLinksController#create or
confirm) that the confirmation page POSTs to; preserve the same error/redirect
branches for invalid/replayed tokens by checking token validity in show and
perform the actual consume!/update/sign_in in the POST handler.
In `@app/services/auth/verify_google_token.rb`:
- Around line 15-16: The current client_ids assignment uses .compact which
doesn't remove empty strings; update the client_ids construction in
verify_google_token (the client_ids variable) to normalize to strings and reject
blank values (e.g. map(&:to_s).reject(&:empty?) or, if ActiveSupport is
available, map(&:to_s).reject(&:blank?)) so that empty-string ENV values are
treated as missing and the existing raise InvalidToken, 'Google client IDs not
configured' guard will trigger correctly.
In `@app/services/traccar/params.rb`:
- Around line 73-79: The battery_level method currently converts a legitimate 0%
reading to nil because it uses value.positive?; change the check so 0 is
preserved—return value when it is zero or positive and only return nil for
negative or missing values (keep the existing nil-on-missing behavior for
battery[:level]). Update the conditional in battery_level (the value variable
calculation path) to allow value == 0 (e.g., use value >= 0 or !value.negative?)
instead of value.positive?.
---
Nitpick comments:
In `@app/controllers/api/v1/users/destroy_controller.rb`:
- Around line 29-31: The default_mailer_host method should use the controller
request host as a fallback instead of hardcoding 'localhost'; update
default_mailer_host to return ActionMailer::Base.default_url_options[:host] ||
request&.host || ENV.fetch('APP_HOST', 'localhost') so it uses the current
request host when available (ensure this is an instance method in the destroy
controller so request is accessible).
In `@app/services/traccar/params.rb`:
- Line 36: Replace the runtime column lookup using
Point.column_names.include?('altitude_decimal') with the model helper
Point.altitude_decimal_supported? to use the memoized capability check; update
the assignment in params (where attrs[:altitude_decimal] = altitude_value is
guarded) to call Point.altitude_decimal_supported? instead so we avoid repeated
column name queries and ensure consistent capability checks across code.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 64aa7bc5-1bbc-44ee-a469-09a9aacab84a
📒 Files selected for processing (37)
- app/controllers/api/v1/auth/apple_controller.rb
- app/controllers/api/v1/auth/google_controller.rb
- app/controllers/api/v1/traccar/points_controller.rb
- app/controllers/api/v1/users/destroy_controller.rb
- app/controllers/api/v1/users/two_factor_controller.rb
- app/controllers/api_controller.rb
- app/controllers/auth/account_links_controller.rb
- app/controllers/users/destroy_confirmations_controller.rb
- app/controllers/users/registrations_controller.rb
- app/jobs/users/mailer_sending_job.rb
- app/mailers/users_mailer.rb
- app/services/auth/find_or_create_oauth_user.rb
- app/services/auth/verify_account_link_token.rb
- app/services/auth/verify_apple_token.rb
- app/services/auth/verify_google_token.rb
- app/services/auth/verify_otp_challenge_token.rb
- app/services/traccar/params.rb
- app/services/users/issue_destroy_token.rb
- app/services/users/verify_destroy_token.rb
- app/views/layouts/application.html.erb
- app/views/users_mailer/account_destroy_confirmation.html.erb
- app/views/users_mailer/account_destroy_confirmation.text.erb
- config/initializers/01_constants.rb
- config/initializers/apple_id.rb
- config/initializers/rack_attack.rb
- config/routes.rb
- db/migrate/20260428200000_drop_redundant_users_provider_uid_index.rb
- db/schema.rb
- spec/requests/api/v1/users/destroy_spec.rb
- spec/requests/api/v1/users/two_factor_spec.rb
- spec/requests/auth/account_links_spec.rb
- spec/requests/users/destroy_confirmations_spec.rb
- spec/requests/users/registrations_spec.rb
- spec/services/auth/verify_apple_token_spec.rb
- spec/services/auth/verify_google_token_spec.rb
- spec/services/users/issue_destroy_token_spec.rb
- spec/services/users/verify_destroy_token_spec.rb
✅ Files skipped from review due to trivial changes (2)
- app/views/users_mailer/account_destroy_confirmation.text.erb
- spec/requests/auth/account_links_spec.rb
🚧 Files skipped from review as they are similar to previous changes (4)
- app/services/auth/verify_otp_challenge_token.rb
- app/services/auth/verify_apple_token.rb
- app/controllers/api/v1/traccar/points_controller.rb
- app/views/layouts/application.html.erb
```ruby
  params[:api_key] || extract_bearer_token
end
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's examine the api_controller.rb file around lines 127-128 and get broader context
cd /tmp && find . -name "api_controller.rb" -path "*/app/controllers/*" 2>/dev/null | head -5
```

Repository: Freika/dawarich
Length of output: 41
🏁 Script executed:
```shell
# Let's look for the file in common locations
fd -type f "api_controller.rb" | head -10
```

Repository: Freika/dawarich
Length of output: 230
🏁 Script executed:
```shell
# List files in app/controllers
ls -la app/controllers/ 2>/dev/null || echo "Path not found, listing root structure"
ls -la
```

Repository: Freika/dawarich
Length of output: 5041
🏁 Script executed:
```shell
# Read the api_controller.rb file with line numbers
cat -n app/controllers/api_controller.rb
```

Repository: Freika/dawarich
Length of output: 6319
🏁 Script executed:
```shell
# Verify .presence is used elsewhere in the codebase (Rails convention)
rg "\.presence" app/controllers/api_controller.rb app/models/ | head -20
```

Repository: Freika/dawarich
Length of output: 123
🏁 Script executed:
```shell
# Check if there are tests for the api_key/auth logic
find spec -name "*api*" -type f | head -10
```

Repository: Freika/dawarich
Length of output: 132
🏁 Script executed:
```shell
# Read the api_controller_spec.rb to check for existing tests
cat -n spec/controllers/api_controller_spec.rb
```

Repository: Freika/dawarich
Length of output: 2242
Use .presence to allow Bearer token fallback when api_key param is blank.
At Line 127, an empty api_key query param still wins over a valid Authorization: Bearer ... header (empty strings are truthy in Ruby), which can cause false 401s.
Proposed fix

```diff
- params[:api_key] || extract_bearer_token
+ params[:api_key].presence || extract_bearer_token
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/api_controller.rb` around lines 127 - 128, The current lookup
uses params[:api_key] || extract_bearer_token which treats an empty api_key as
present and prevents falling back to the Authorization header; change the logic
to use presence so blank strings are ignored (use params[:api_key].presence ||
extract_bearer_token) so extract_bearer_token can be used when api_key is empty;
update the line where params[:api_key] is used and keep extract_bearer_token as
the fallback.
```ruby
before_action :ensure_two_factor_available
before_action :ensure_password_provided, only: %i[setup confirm backup_codes]
before_action :ensure_credential_provided, only: %i[destroy]
```
Add require_write_api! before these mutating 2FA actions.
These endpoints rotate OTP secrets, return recovery codes, and disable 2FA, so they should not be reachable through a read-only API capability.
Suggested hardening
```diff
 class Api::V1::Users::TwoFactorController < ApiController
+  before_action :require_write_api!
   before_action :ensure_two_factor_available
   before_action :ensure_password_provided, only: %i[setup confirm backup_codes]
   before_action :ensure_credential_provided, only: %i[destroy]
```

As per coding guidelines "Use require_pro_api! or require_write_api! before_actions in API controllers."
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ruby
before_action :require_write_api!
before_action :ensure_two_factor_available
before_action :ensure_password_provided, only: %i[setup confirm backup_codes]
before_action :ensure_credential_provided, only: %i[destroy]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/api/v1/users/two_factor_controller.rb` around lines 4 - 6,
Add a before_action call to require_write_api! for the mutating 2FA endpoints so
they cannot be used via read-only API keys; specifically, add a before_action
:require_write_api!, only: %i[setup confirm backup_codes destroy] alongside the
existing before_actions (ensure_two_factor_available, ensure_password_provided,
ensure_credential_provided) to protect the methods that rotate OTP secrets,
return recovery codes, or disable 2FA.
```ruby
def backup_codes
  codes = current_api_user.generate_otp_backup_codes!
  current_api_user.save!
  render json: { backup_codes: codes }
```
Reject backup-code regeneration until 2FA is actually enabled.
generate_otp_backup_codes! runs even when otp_required_for_login is still false, so a caller can mint recovery codes before enrollment is complete. That leaves backup codes attached to an account with no active second factor.
Suggested guard
```diff
 def backup_codes
+  unless current_api_user.otp_required_for_login? && current_api_user.otp_secret.present?
+    return render json: { error: 'two_factor_not_enabled' }, status: :conflict
+  end
+
   codes = current_api_user.generate_otp_backup_codes!
   current_api_user.save!
   render json: { backup_codes: codes }
 end
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ruby
def backup_codes
  unless current_api_user.otp_required_for_login? && current_api_user.otp_secret.present?
    return render json: { error: 'two_factor_not_enabled' }, status: :conflict
  end

  codes = current_api_user.generate_otp_backup_codes!
  current_api_user.save!
  render json: { backup_codes: codes }
end
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/api/v1/users/two_factor_controller.rb` around lines 37 - 40,
The backup_codes action currently calls
current_api_user.generate_otp_backup_codes! unconditionally; add a guard in the
backup_codes method that checks current_api_user.otp_required_for_login (or
whatever flag/method indicates 2FA is enabled) and reject the request if false
(render a JSON error and non-2xx status such as 403 or 422) so backup codes are
only generated and saved when 2FA is active; only call
generate_otp_backup_codes! and current_api_user.save! after that check.
```ruby
def show
  result =
    begin
      Users::VerifyDestroyToken.new(params[:token]).call
    rescue Users::VerifyDestroyToken::TokenReplayed
      return redirect_to(new_user_session_path, alert: 'This deletion link has already been used.')
    rescue Users::VerifyDestroyToken::InvalidToken
      return redirect_to(new_user_session_path, alert: 'Deletion link invalid or expired.')
    end

  unless Users::VerifyDestroyToken.consume!(result.jti)
    return redirect_to(new_user_session_path, alert: 'This deletion link has already been used.')
  end

  user = result.user

  unless user.can_delete_account?
    return redirect_to(
      new_user_session_path,
      alert: 'Cannot delete account while you own a family with other members. ' \
             'Transfer ownership or remove members first.'
    )
  end

  Users::DestroyJob.perform_later(user.id) if user.mark_as_deleted_atomically!

  sign_out(user) if user_signed_in? && current_user&.id == user.id

  redirect_to new_user_session_path,
              notice: 'Your account has been scheduled for deletion. We are sorry to see you go.'
end
```
Don't schedule account deletion from a GET confirmation link.
This action consumes the token, soft-deletes the account, enqueues the destroy job, and signs the user out from show. Mail scanners and link prefetchers can trigger that without an explicit user action.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/controllers/users/destroy_confirmations_controller.rb` around lines 6 -
35, The show action currently consumes the token and performs
deletion/enqueue/sign-out on a GET; change it so show only verifies the token
(call Users::VerifyDestroyToken.new(params[:token]).call) but does NOT call
Users::VerifyDestroyToken.consume!, user.mark_as_deleted_atomically!,
Users::DestroyJob.perform_later, or sign_out; instead render a confirmation view
with a form that POSTs to a new destroy action (or use a DELETE action) which
will perform the token consumption
(Users::VerifyDestroyToken.consume!(result.jti)), check
user.can_delete_account?, call user.mark_as_deleted_atomically!, enqueue
Users::DestroyJob.perform_later(user.id), and sign_out as necessary—move all
side effects into that POST/destroy endpoint.
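The verify-on-GET, consume-on-POST split that the prompt describes can be sketched outside Rails as a two-step token flow. `DestroyTokenStore` and its method names are illustrative stand-ins (not the app's actual classes), and a plain Hash replaces `Rails.cache`, so this sketch is not concurrency-safe:

```ruby
require 'securerandom'

# Minimal stand-in for a one-time-use token store: `verify` is safe to
# repeat (the GET confirmation page), while `consume!` succeeds exactly
# once (the explicit POST/DELETE confirmation).
class DestroyTokenStore
  def initialize
    @used = {}
  end

  # Read-only check: does not burn the token, so mail scanners and link
  # prefetchers hitting the GET page cause no side effects.
  def verify(jti)
    !@used.key?(jti)
  end

  # Explicit confirmation: marks the token used; returns true only once.
  def consume!(jti)
    return false if @used.key?(jti)

    @used[jti] = true
    true
  end
end

store = DestroyTokenStore.new
jti = SecureRandom.uuid

store.verify(jti)    # GET /confirm can call this any number of times
store.verify(jti)    # still valid: nothing was consumed
store.consume!(jti)  # POST: first confirmation wins
store.consume!(jti)  # replay: rejected
```

The point of the split is that only the state-changing endpoint burns the JTI; the GET remains idempotent.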
```ruby
cache_key = "#{LINK_EMAIL_RATE_LIMIT_KEY_PREFIX}#{existing_user.id}"
acquired = Rails.cache.write(
  cache_key, true,
  expires_in: LINK_EMAIL_RATE_LIMIT_WINDOW,
  unless_exist: true
)
return unless acquired
```
Don't rate-limit different link targets under the same cache key.
This key is scoped only by existing_user.id. If the same account tries Apple and then Google within the hour, the second email is suppressed but handle_email_collision still raises LinkVerificationSent, so the client gets a 202 without a usable email for the second provider. Include at least @provider and @uid in the key.
📫 Scope the throttle to the actual link attempt

```diff
- cache_key = "#{LINK_EMAIL_RATE_LIMIT_KEY_PREFIX}#{existing_user.id}"
+ cache_key = [
+   LINK_EMAIL_RATE_LIMIT_KEY_PREFIX,
+   existing_user.id,
+   @provider,
+   @uid
+ ].join(':')
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/services/auth/find_or_create_oauth_user.rb` around lines 56 - 62, The
rate-limit key currently uses only LINK_EMAIL_RATE_LIMIT_KEY_PREFIX +
existing_user.id which conflates different provider/link targets; change the
cache_key construction in find_or_create_oauth_user (where cache_key is defined
and used with Rails.cache.write and LINK_EMAIL_RATE_LIMIT_WINDOW) to include
`@provider` and `@uid` (or a stable fingerprint of them) so each link attempt (e.g.,
Apple vs Google) gets its own throttle; ensure this updated cache_key is used
for the acquired check and does not affect handle_email_collision or the raised
LinkVerificationSent behavior.
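The behavior difference between the two key shapes can be shown with a Hash-backed stand-in for `Rails.cache.write(..., unless_exist: true)`. The class, helper names, and key prefix below are illustrative, not the app's actual code:

```ruby
# A Hash-backed stand-in for Rails.cache.write(..., unless_exist: true):
# the write succeeds only the first time a key is seen.
class FakeCache
  def initialize
    @store = {}
  end

  def write_unless_exist(key)
    return false if @store.key?(key)

    @store[key] = true
    true
  end
end

PREFIX = 'oauth_link_email'

def user_scoped_key(user_id)
  "#{PREFIX}:#{user_id}"
end

def attempt_scoped_key(user_id, provider, uid)
  [PREFIX, user_id, provider, uid].join(':')
end

cache = FakeCache.new

# Keyed only by user id, the Google attempt is suppressed by the earlier
# Apple attempt inside the same window:
cache.write_unless_exist(user_scoped_key(42))  # true  (Apple email sent)
cache.write_unless_exist(user_scoped_key(42))  # false (Google email suppressed)

# Keyed by user + provider + uid, each link target gets its own window:
cache.write_unless_exist(attempt_scoped_key(42, 'apple', 'a1'))   # true
cache.write_unless_exist(attempt_scoped_key(42, 'google', 'g1'))  # true
cache.write_unless_exist(attempt_scoped_key(42, 'google', 'g1'))  # false (genuine repeat)
```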
```ruby
decoded, = JWT.decode(@token, ENV.fetch('JWT_SECRET_KEY'), true, algorithm: 'HS256')
raise InvalidToken, 'wrong purpose' unless decoded['purpose'] == 'oauth_account_link'

jti = decoded['jti'].to_s
raise InvalidToken, 'missing jti' if jti.blank?

if decoded['iat'].present? &&
   (Time.now.to_i - decoded['iat'].to_i) > Auth::IssueAccountLinkToken::TTL.to_i
  raise InvalidToken, 'token too old'
end
```
🧩 Analysis chain

Verification scripts run against the Freika/dawarich repository located `Auth::IssueAccountLinkToken`, `Auth::VerifyAccountLinkToken`, and `Subscription::DecodeJwtToken`, reviewed the existing `JWT.decode` call sites, the verifier's spec, and `Gemfile.lock`, and confirmed that no other decode call in the codebase currently passes `required_claims`.

🌐 Web query: ruby-jwt `required_claims` parameter syntax
💡 Result: `required_claims` is passed inside the options hash (the fourth argument) to `JWT.decode`, e.g. `JWT.decode(token, hmac_secret, true, { required_claims: ['exp'], algorithm: 'HS256' })`. If any required claim is missing, `JWT::MissingRequiredClaim` is raised.

🌐 Web query: ruby-jwt automatic expiration verification default behavior
💡 Result: with signature verification enabled, ruby-jwt automatically verifies `exp` when the claim is present, raising `JWT::ExpiredSignature` for expired tokens (honoring any configured leeway, default 0). If `exp` is absent the check is skipped, since the claim is optional per RFC 7519; `verify_expiration: false` disables it.

Citations:
- https://github.com/jwt/ruby-jwt
- https://ruby-jwt.org/JWT/MissingRequiredClaim.html
Defensively require temporal claims on JWT decode.
Auth::IssueAccountLinkToken always includes iat and exp claims, but the verifier treats them as optional. While ruby-jwt automatically verifies exp when present by default, explicitly requiring these claims follows defensive coding practices and matches the existing pattern in Subscription::DecodeJwtToken. Additionally, the manual iat age check can be simplified.
Tighten claim validation

```diff
-      decoded, = JWT.decode(@token, ENV.fetch('JWT_SECRET_KEY'), true, algorithm: 'HS256')
+      decoded, = JWT.decode(
+        @token,
+        ENV.fetch('JWT_SECRET_KEY'),
+        true,
+        { algorithm: 'HS256', required_claims: %w[exp iat jti] }
+      )
       raise InvalidToken, 'wrong purpose' unless decoded['purpose'] == 'oauth_account_link'

       jti = decoded['jti'].to_s
       raise InvalidToken, 'missing jti' if jti.blank?

-      if decoded['iat'].present? &&
-         (Time.now.to_i - decoded['iat'].to_i) > Auth::IssueAccountLinkToken::TTL.to_i
+      iat = Integer(decoded['iat'], exception: false)
+      raise InvalidToken, 'invalid iat' if iat.nil?
+
+      if (Time.now.to_i - iat) > Auth::IssueAccountLinkToken::TTL.to_i
         raise InvalidToken, 'token too old'
       end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/services/auth/verify_account_link_token.rb` around lines 19 - 28, Require
the temporal claims in JWT.decode and remove the manual iat age check: when
decoding `@token` in VerifyAccountLinkToken, pass options to JWT.decode to require
'exp' and 'iat' (e.g. required_claims: ['exp','iat'] and enable
verify_expiration/verify_iat) instead of treating them as optional, then delete
the explicit manual iat age check that compares Time.now to decoded['iat'] (the
block referencing Auth::IssueAccountLinkToken::TTL); keep the existing purpose
and jti validations.
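The tightened checks can be restated in plain Ruby over an already-decoded payload hash. `TTL`, the purpose string, and `InvalidToken` below are stand-ins for the app's `Auth::IssueAccountLinkToken::TTL` and error class, not the real constants:

```ruby
# Claim validation on a decoded payload: required claims present, iat
# parseable as an integer, and token age bounded by TTL.
TTL = 15 * 60

class InvalidToken < StandardError; end

def validate_claims!(decoded, now: Time.now.to_i)
  %w[exp iat jti].each do |claim|
    raise InvalidToken, "missing #{claim}" unless decoded.key?(claim)
  end
  raise InvalidToken, 'wrong purpose' unless decoded['purpose'] == 'oauth_account_link'

  # Integer(..., exception: false) returns nil for unparseable values
  # instead of raising, mirroring the suggested iat handling.
  iat = Integer(decoded['iat'], exception: false)
  raise InvalidToken, 'invalid iat' if iat.nil?
  raise InvalidToken, 'token too old' if (now - iat) > TTL

  decoded['jti']
end

now = Time.now.to_i
good = { 'purpose' => 'oauth_account_link', 'jti' => 'abc',
         'iat' => now, 'exp' => now + TTL }

validate_claims!(good)  # returns "abc"

begin
  validate_claims!(good.merge('iat' => now - TTL - 1))
rescue InvalidToken => e
  e.message  # "token too old"
end
```

In the real service these presence checks would be delegated to `required_claims` on `JWT.decode`; the sketch only shows the semantics being enforced.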
```ruby
Rails.application.config.after_initialize do
  AppleID::JWKS.cache = Rails.cache
rescue NameError, LoadError => e
  Rails.logger.warn "AppleID JWKS cache wiring skipped: #{e.message}"
end
```
🧩 Analysis chain

Verification scripts searched the repository for AppleID usage (the gem declaration, `AppleID::` references, `verify_apple_token`, `APPLE_BUNDLE_ID`), inspected the Gemfile to confirm `apple_id` is an unconditional dependency outside any group, compared guard patterns used in other initializers, and reviewed `config/initializers/apple_id.rb` itself.
Avoid rescuing broad NameError here; it can mask real boot-time errors.
Since apple_id is an unconditional gem dependency in your Gemfile, AppleID::JWKS should always be available. Rescuing NameError silently hides typos, missing requires, and other wiring bugs. Use an explicit constant guard instead.
♻️ Suggested change

```diff
 Rails.application.config.after_initialize do
-  AppleID::JWKS.cache = Rails.cache
-rescue NameError, LoadError => e
-  Rails.logger.warn "AppleID JWKS cache wiring skipped: #{e.message}"
+  if defined?(AppleID::JWKS)
+    AppleID::JWKS.cache = Rails.cache
+  else
+    Rails.logger.warn 'AppleID JWKS cache wiring skipped: AppleID::JWKS is unavailable'
+  end
+rescue LoadError => e
+  Rails.logger.warn "AppleID JWKS cache wiring skipped: #{e.class}: #{e.message}"
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@config/initializers/apple_id.rb` around lines 3 - 7, The initializer should
not rescue NameError which can hide boot-time bugs; instead guard the assignment
by checking the constant exists before wiring. Replace the current rescue-based
pattern in Rails.application.config.after_initialize with an explicit guard such
as using defined?(AppleID::JWKS) or defined?(AppleID) && defined?(AppleID::JWKS)
and only attempt to set AppleID::JWKS.cache = Rails.cache when the guard passes;
remove the broad NameError rescue and keep or narrow any rescue (e.g., only
LoadError) and log via Rails.logger.warn mentioning the specific error if one is
caught.
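The reason `defined?` is the safer guard can be shown in isolation. `AppleStub` below is a made-up module standing in for the gem's namespace:

```ruby
# A direct reference to a missing constant raises NameError, while
# defined?(...) returns nil without raising, letting an initializer branch
# instead of rescuing broadly (and masking real boot-time bugs).
module AppleStub
  module JWKS; end
end

defined?(AppleStub::JWKS)     # "constant" — safe to wire up
defined?(AppleStub::Missing)  # nil — skip wiring, no exception raised

raised = begin
  AppleStub::Missing  # direct reference to the missing constant raises
  false
rescue NameError
  true
end
raised  # true
```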
```ruby
Rack::Attack.throttle('users/exist', limit: 600, period: 1.hour) do |req|
  next unless req.path == '/api/v1/users/exist' && req.post?

  secret = req.get_header('HTTP_X_WEBHOOK_SECRET').to_s
  Digest::SHA256.hexdigest(secret)[0, 32] if secret.present?
end
```
Add a fallback throttle key when webhook secret is missing.
Right now, requests to /api/v1/users/exist with a blank/missing X-Webhook-Secret return nil discriminator and skip this throttle completely. Add an IP fallback so the endpoint is still rate-limited.
Proposed fix

```diff
 Rack::Attack.throttle('users/exist', limit: 600, period: 1.hour) do |req|
   next unless req.path == '/api/v1/users/exist' && req.post?

   secret = req.get_header('HTTP_X_WEBHOOK_SECRET').to_s
-  Digest::SHA256.hexdigest(secret)[0, 32] if secret.present?
+  if secret.present?
+    "users_exist:secret:#{Digest::SHA256.hexdigest(secret)[0, 32]}"
+  else
+    "users_exist:ip:#{req.ip}"
+  end
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@config/initializers/rack_attack.rb` around lines 147 - 152, The throttle
block for Rack::Attack ('users/exist') returns nil when X-Webhook-Secret is
blank so the rule is skipped; change the block in
Rack::Attack.throttle('users/exist', ...) to compute the SHA256 digest when the
header is present and otherwise return a deterministic fallback key based on the
request IP (e.g., use req.ip or "ip:#{req.ip}") so the throttle always yields a
non-nil discriminator; update references to
req.get_header('HTTP_X_WEBHOOK_SECRET') and req.ip inside the block to implement
this fallback.
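The discriminator logic from the proposed fix can be isolated as a pure function. The key prefixes come from the suggestion rather than existing code, and `secret.to_s.empty?` stands in for Rails' `present?`:

```ruby
require 'digest'

# Always return a throttle key: hash the shared secret when present,
# otherwise fall back to a per-IP key so the block never yields nil
# (nil would skip the throttle entirely).
def users_exist_discriminator(secret, ip)
  if secret.to_s.empty?
    "users_exist:ip:#{ip}"
  else
    "users_exist:secret:#{Digest::SHA256.hexdigest(secret)[0, 32]}"
  end
end

users_exist_discriminator('hunter2', '203.0.113.9')
# "users_exist:secret:" plus the first 32 hex chars of SHA-256("hunter2")

users_exist_discriminator(nil, '203.0.113.9')
# "users_exist:ip:203.0.113.9" — a missing header still yields a key
```

Hashing the secret keeps the raw credential out of the Rack::Attack cache store while still giving each distinct secret its own counter.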
Actionable comments posted: 1
♻️ Duplicate comments (2)
config/initializers/rack_attack.rb (1)

**150-156:** ⚠️ Potential issue | 🟠 Major

Add a fallback throttle key when webhook secret is missing.

When the `X-Webhook-Secret` header is blank/missing, this throttle returns `nil` and skips rate limiting entirely, allowing unlimited requests to `/api/v1/users/exist` from any IP.

Proposed fix

```diff
 Rack::Attack.throttle('users/exist', limit: 600, period: 1.hour) do |req|
   next if DawarichSettings.self_hosted?
   next unless req.path == '/api/v1/users/exist' && req.post?

   secret = req.get_header('HTTP_X_WEBHOOK_SECRET').to_s
-  Digest::SHA256.hexdigest(secret)[0, 32] if secret.present?
+  if secret.present?
+    "users_exist:secret:#{Digest::SHA256.hexdigest(secret)[0, 32]}"
+  else
+    "users_exist:ip:#{req.ip}"
+  end
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@config/initializers/rack_attack.rb` around lines 150 - 156: the throttle currently returns nil when HTTP_X_WEBHOOK_SECRET is missing, which disables rate limiting; change the block so it always returns a string key — compute secret = req.get_header('HTTP_X_WEBHOOK_SECRET').to_s and, if present, return Digest::SHA256.hexdigest(secret)[0, 32], otherwise return a deterministic fallback such as "no-secret:#{req.ip}" so requests without the header are still rate-limited.

app/controllers/api/v1/users/destroy_controller.rb (1)

**10-12:** ⚠️ Potential issue | 🟠 Major

Missing account-deletion eligibility check.

The API destroy action proceeds without calling `can_delete_account?`, unlike the web flow in `RegistrationsController` (lines 49-53). Family owners with members could bypass the restriction via the API.

Proposed fix

```diff
 def destroy
+  unless current_api_user.can_delete_account?
+    return render json: { error: 'cannot_delete_account',
+                          message: 'Cannot delete your account while you own a family with other members.' },
+                  status: :unprocessable_entity
+  end
+
   DawarichSettings.self_hosted? ? destroy_self_hosted : destroy_cloud
 end
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/controllers/api/v1/users/destroy_controller.rb` around lines 10 - 12: before calling destroy_self_hosted or destroy_cloud, perform the same can_delete_account? check used by RegistrationsController and short-circuit with the matching error response when the user is not allowed to delete; only invoke destroy_self_hosted or destroy_cloud when can_delete_account? returns true.
🧹 Nitpick comments (2)
spec/services/users/request_account_destroy_spec.rb (1)
31-31: Consider verifying JWT signature in test.
`JWT.decode` with `false` as the third argument skips signature verification. While acceptable for checking payload structure, verifying the signature would provide stronger assurance.

Optional: Verify signature

```diff
- decoded = JWT.decode(token, ENV.fetch('JWT_SECRET_KEY'), false).first
+ decoded = JWT.decode(token, ENV.fetch('JWT_SECRET_KEY'), true, algorithm: 'HS256').first
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/services/users/request_account_destroy_spec.rb` at line 31: the test currently calls JWT.decode(token, ENV.fetch('JWT_SECRET_KEY'), false), which disables signature verification; change it to pass true for the verify flag with the expected algorithm option (e.g. JWT.decode(token, ENV.fetch('JWT_SECRET_KEY'), true, algorithm: 'HS256')) so the spec validates both payload and signature when decoding the token.

spec/requests/users/destroy_confirmations_spec.rb (1)
**40-49:** Assert the user-facing alert for wrong-purpose tokens.

This example currently checks redirect and no deletion, but not the failure message contract returned by the controller.

Suggested change

```diff
 it 'rejects a token with the wrong purpose' do
   payload = {
     user_id: user.id, purpose: 'something_else', jti: SecureRandom.uuid,
     iat: Time.now.to_i, exp: 1.hour.from_now.to_i
   }
   token = JWT.encode(payload, ENV.fetch('JWT_SECRET_KEY'), 'HS256')

   get '/users/me/destroy/confirm', params: { token: token }

   expect(user.reload.deleted_at).to be_nil
   expect(response).to redirect_to(new_user_session_path)
+  expect(flash[:alert]).to match(/invalid|expired/i)
 end
```

As per coding guidelines, `spec/**/*.rb`: "don't test wiring without outcomes".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@spec/requests/users/destroy_confirmations_spec.rb` around lines 40 - 49: after the GET to '/users/me/destroy/confirm', add an expectation on the controller flash (e.g. expect(flash[:alert]).to be_present, or assert the specific message used by the controller) in addition to the existing checks of user.reload.deleted_at and the redirect to new_user_session_path, so the spec verifies the visible failure outcome.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@spec/requests/users/destroy_confirmations_spec.rb`:
- Line 9: Replace the global cache wipe in the before hook (the line with before
{ Rails.cache.clear } in destroy_confirmations_spec.rb) with a scoped cleanup:
either remove the hook entirely if JWT/JTI guarantees uniqueness for this spec,
or delete only the token-consumption keys used by these tests (use
Rails.cache.delete(key) for known keys or
Rails.cache.delete_matched("token_consumption*") to remove the relevant
namespace). Ensure the change is made inside the same before block so the test
still cleans up only the keys related to token consumption and does not clear
the global Rails cache.
---
Duplicate comments:
In `@app/controllers/api/v1/users/destroy_controller.rb`:
- Around line 10-12: The destroy action in destroy_controller.rb skips the
account-deletion eligibility check; before calling destroy_self_hosted or
destroy_cloud, call the same can_delete_account? check used by
RegistrationsController (e.g., current_user.can_delete_account?) and
short-circuit with the same error response if the user is not allowed to delete
(matching the RegistrationsController behavior). Place the check in destroy and
only invoke destroy_self_hosted or destroy_cloud when can_delete_account?
returns true, otherwise return the appropriate error status/message used
elsewhere.
In `@config/initializers/rack_attack.rb`:
- Around line 150-156: The throttle currently returns nil when
HTTP_X_WEBHOOK_SECRET is missing (in the Rack::Attack.throttle block defined for
'users/exist'), which disables rate limiting; change the block so it always
returns a string key: compute secret =
req.get_header('HTTP_X_WEBHOOK_SECRET').to_s and, if present, return
Digest::SHA256.hexdigest(secret)[0,32], otherwise return a deterministic
fallback string such as "no-secret:#{req.ip}" (or similar using req.ip or
REMOTE_ADDR) so requests without the header are still rate-limited.
---
Nitpick comments:
In `@spec/requests/users/destroy_confirmations_spec.rb`:
- Around line 40-49: The test misses asserting the user-facing alert when a
token with the wrong purpose is used; after the GET to
'/users/me/destroy/confirm' (the block that builds token and calls get), add an
expectation for the controller flash (e.g. expect(flash[:alert]).to be_present
or assert the specific message used by the controller) in addition to the
existing checks of user.reload.deleted_at and redirect to new_user_session_path
so the spec verifies the visible failure outcome.
In `@spec/services/users/request_account_destroy_spec.rb`:
- Line 31: The test currently calls JWT.decode(token,
ENV.fetch('JWT_SECRET_KEY'), false) which disables signature verification;
change it to verify the signature by passing true for the verify flag and supply
the expected algorithm options (e.g. use JWT.decode(token,
ENV.fetch('JWT_SECRET_KEY'), true, algorithm: 'HS256') or the appropriate
algorithm used by your app) so the spec validates both payload and signature
when decoding the token.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 398ebac8-3a0d-426b-9f66-e28e8a0a07cc
📒 Files selected for processing (14)
- app/controllers/api/v1/users/destroy_controller.rb
- app/controllers/auth/account_links_controller.rb
- app/controllers/users/destroy_confirmations_controller.rb
- app/controllers/users/registrations_controller.rb
- app/services/users/request_account_destroy.rb
- config/initializers/rack_attack.rb
- spec/requests/api/v1/auth/otp_challenges_spec.rb
- spec/requests/api/v1/auth/sessions_spec.rb
- spec/requests/api/v1/users/destroy_spec.rb
- spec/requests/api/v1/users/two_factor_spec.rb
- spec/requests/auth/account_links_spec.rb
- spec/requests/users/destroy_confirmations_spec.rb
- spec/requests/users/registrations_spec.rb
- spec/services/users/request_account_destroy_spec.rb
✅ Files skipped from review due to trivial changes (1)
- spec/requests/api/v1/users/two_factor_spec.rb
🚧 Files skipped from review as they are similar to previous changes (4)
- app/controllers/auth/account_links_controller.rb
- app/controllers/users/destroy_confirmations_controller.rb
- spec/requests/api/v1/auth/sessions_spec.rb
- spec/requests/auth/account_links_spec.rb
```ruby
describe 'GET /users/me/destroy/confirm' do
  let(:user) { create(:user) }

  before { Rails.cache.clear }
```
Avoid global Rails.cache.clear in request specs.
Line 9 clears the entire cache, which can cause cross-spec interference and flakiness when cache is shared across workers/processes. Scope cleanup to token-consumption keys (or remove cleanup if uniqueness is guaranteed by jti).
Suggested change

```diff
- before { Rails.cache.clear }
+ # Avoid global cache flush; token JTIs are unique per example.
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ruby
# Avoid global cache flush; token JTIs are unique per example.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@spec/requests/users/destroy_confirmations_spec.rb` at line 9, Replace the
global cache wipe in the before hook (the line with before { Rails.cache.clear }
in destroy_confirmations_spec.rb) with a scoped cleanup: either remove the hook
entirely if JWT/JTI guarantees uniqueness for this spec, or delete only the
token-consumption keys used by these tests (use Rails.cache.delete(key) for
known keys or Rails.cache.delete_matched("token_consumption*") to remove the
relevant namespace). Ensure the change is made inside the same before block so
the test still cleans up only the keys related to token consumption and does not
clear the global Rails cache.
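The scoped-cleanup alternative the prompt mentions can be sketched with a Hash-backed cache whose `delete_matched` mimics the glob form of `Rails.cache.delete_matched("token_consumption*")`. The class and key names are illustrative:

```ruby
# Scoping cleanup to a key namespace leaves unrelated cache entries intact,
# unlike a global Rails.cache.clear.
class FakeCache
  def initialize(entries)
    @store = entries
  end

  # Translate a trailing-* glob into an anchored regex and delete matches.
  def delete_matched(glob)
    regex = /\A#{Regexp.escape(glob).gsub('\*', '.*')}\z/
    @store.delete_if { |key, _| key.match?(regex) }
  end

  def keys
    @store.keys
  end
end

cache = FakeCache.new(
  'token_consumption:abc' => true,
  'token_consumption:def' => true,
  'views/stats_panel'     => '<cached html>'
)

cache.delete_matched('token_consumption*')
cache.keys  # only "views/stats_panel" remains — unrelated entries survive
```

Note that the real `delete_matched` is not supported by every Rails cache store (e.g. it raises on `MemCacheStore`), which is another reason to prefer deleting known keys when possible.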