
fix: Concurrent signup with same authentication creates duplicate users#10149

Merged
mtrezza merged 14 commits into parse-community:alpha from mtrezza:fix/GHSA-c297-cj4p-3f3m-9
Mar 9, 2026

Conversation

@mtrezza
Member

@mtrezza mtrezza commented Mar 8, 2026

Pull Request

Issue

Concurrent signup with same authentication creates duplicate users.

Tasks

  • Add tests
  • Add changes to documentation (guides, repository pages, code comments)
  • Add security check
  • Add new Parse Error codes to Parse JS SDK

Summary by CodeRabbit

  • New Features

    • Automatic creation of per-provider unique indexes on authData (includes anonymous) to enforce credential uniqueness at signup; storage adapters support creating these indexes.
  • Bug Fixes

    • Prevents race conditions creating duplicate users from concurrent signups; conflicting signups now return 208 "this auth is already used".
    • Improved duplicate-auth error mapping for clearer client errors.
  • Configuration

    • New server option to enable/disable automatic authData index creation (default: true).
  • Documentation

    • Updated option help and docs for provider naming and index behavior.
  • Tests

    • Comprehensive tests added for concurrent/sequential signup, anonymous flows, and login behavior.

@parse-github-assistant

parse-github-assistant bot commented Mar 8, 2026

🚀 Thanks for opening this pull request! We appreciate your effort in improving the project. Please let us know once your pull request is ready for review.

Note

Please respond to review comments from AI agents just like you would to comments from a human reviewer. Let the reviewer resolve their own comments, unless they have reviewed and accepted your commit, or agreed with your explanation for why the feedback was incorrect.

Caution

Pull requests must be written using an AI agent with human supervision. Pull requests written entirely by a human will likely be rejected, because of lower code quality, higher review effort and the higher risk of introducing bugs. Please note that AI review comments on this pull request alone do not satisfy this requirement.

@parseplatformorg
Contributor

parseplatformorg commented Mar 8, 2026

Snyk checks have passed. No issues have been found so far.

Scanner: Open Source Security — Critical: 0, High: 0, Medium: 0, Low: 0 — Total: 0 issues


@coderabbitai

coderabbitai bot commented Mar 8, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Creates per-provider unique indexes on _User.authData.<provider>.id (at startup and on demand); surfaces duplicate-authData database errors as ACCOUNT_ALREADY_LINKED (208, "this auth is already used"); adds the DatabaseOptions.createIndexAuthDataUniqueness option; and adds comprehensive tests for concurrent, batch, and anonymous signup races and index behavior.
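As a hedged illustration of the new option, a databaseOptions fragment might look like this; the option name, environment variable, and default come from this PR, the rest is assumed:

```javascript
// Hypothetical configuration fragment: only `createIndexAuthDataUniqueness`
// (and its env variable) come from this PR; treat this as a sketch, not the
// canonical setup.
const databaseOptions = {
  // Automatically create a unique index on _User.authData.<provider>.id for
  // each configured auth provider at server start. Default: true. Set to
  // false to manage the indexes manually.
  createIndexAuthDataUniqueness: true,
};

// The equivalent environment variable per src/Options/Definitions.js:
// PARSE_SERVER_DATABASE_CREATE_INDEX_AUTH_DATA_UNIQUENESS=true
console.log(databaseOptions.createIndexAuthDataUniqueness); // → true
```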

Changes

  • Tests (spec/AuthDataUniqueIndex.spec.js, spec/DatabaseController.spec.js, spec/ParseUser.spec.js): Adds tests for concurrent/batch/anonymous authData signups, login-after-signup, and index creation/idempotency/warnings, and updates expectations to assert authData-duplicate errors.
  • Mongo Adapter (src/Adapters/Storage/Mongo/MongoStorageAdapter.js): Adds ensureAuthDataUniqueness(provider) and authData-duplicate parsing for MongoDB E11000 (duplicate key) errors; creates a sparse unique index on _auth_data_<provider>.id and annotates duplicate errors with duplicated_field/underlyingError.
  • Postgres Adapter (src/Adapters/Storage/Postgres/PostgresStorageAdapter.js): Adds ensureAuthDataUniqueness(provider) and authData-first unique-constraint parsing; creates the per-provider unique index _User_unique_authData_<provider>_id and handles existing-index/duplicate cases.
  • Initialization (src/Controllers/DatabaseController.js): Runs per-provider ensureAuthDataUniqueness during DB initialization when createIndexAuthDataUniqueness is enabled; runs after TTL/index setup and before schema updates; logs per-provider failures.
  • Application Logic (src/RestWrite.js): Adds RestWrite.prototype._throwIfAuthDataDuplicate(error) and routes DB duplicate errors on _auth_data_<provider>.id to ACCOUNT_ALREADY_LINKED (208, "this auth is already used") in create/update/authData flows.
  • Options & Docs (src/Options/Definitions.js, src/Options/index.js, src/Options/docs.js): Adds the createIndexAuthDataUniqueness DatabaseOptions boolean (env PARSE_SERVER_DATABASE_CREATE_INDEX_AUTH_DATA_UNIQUENESS, default true) and documents provider naming/index behavior.
  • Defaults (src/defaults.js): Inserts 'createIndexAuthDataUniqueness' into ParseServerDatabaseOptions.
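The initialization behavior summarized above (per-provider ensureAuthDataUniqueness, gated by the option, with failures logged rather than fatal) can be sketched as follows; ensureAuthDataUniqueness and createIndexAuthDataUniqueness are named in the PR, while the helper name and the adapter/logger shapes are assumptions:

```javascript
// Sketch of per-provider index setup at DB initialization; names follow the
// PR summary, the adapter and logger objects are stand-ins.
async function ensureAuthDataIndexes(adapter, authProviders, options, logger) {
  if (!options.createIndexAuthDataUniqueness) {
    return; // index creation disabled; the operator manages indexes manually
  }
  await Promise.all(
    authProviders.map(provider =>
      adapter.ensureAuthDataUniqueness(provider).catch(error => {
        // Per the PR summary, per-provider failures are logged, not fatal.
        logger.warn(`authData unique index failed for ${provider}: ${error.message}`);
      })
    )
  );
}
```

Note that one of the review comments below argues these failures should instead abort startup, like the other index setups.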

Sequence Diagram

sequenceDiagram
    autonumber
    actor Client
    participant RestWrite
    participant DatabaseController
    participant StorageAdapter
    participant DB

    Client->>RestWrite: POST /1/users (signup with authData)
    RestWrite->>DatabaseController: ensure provider index readiness
    DatabaseController->>StorageAdapter: ensureAuthDataUniqueness(provider)
    alt first-time for provider
        StorageAdapter->>DB: create sparse unique index on _auth_data_<provider>.id
        DB-->>StorageAdapter: index created / exists / error
        StorageAdapter-->>DatabaseController: cached result
    else provider cached
        StorageAdapter-->>DatabaseController: cached OK
    end
    DatabaseController-->>RestWrite: index ready or warning
    RestWrite->>DB: insert user document with authData
    alt insert succeeds
        DB-->>RestWrite: user created
        RestWrite-->>Client: 201 Created (user)
    else duplicate on _auth_data_<provider>.id
        DB-->>RestWrite: duplicate key error (annotated)
        RestWrite-->>Client: 208 ACCOUNT_ALREADY_LINKED ("this auth is already used")
    end
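The "provider cached" branch in the diagram can be sketched as a small in-memory cache in front of index creation; the method name and index shape follow the PR summary, while the class and the `db` client are stand-ins:

```javascript
// Sketch of the cached ensureAuthDataUniqueness flow; `db.createIndex` is a
// stand-in for the MongoDB driver call the real adapter uses.
class AuthDataIndexManager {
  constructor(db) {
    this.db = db;
    this._authDataUniqueIndexes = new Set(); // providers whose index exists
  }

  async ensureAuthDataUniqueness(provider) {
    if (this._authDataUniqueIndexes.has(provider)) {
      return; // cache hit: no DDL round-trip
    }
    // Sparse, so users without this provider's authData are not indexed.
    await this.db.createIndex(
      '_User',
      { [`_auth_data_${provider}.id`]: 1 },
      { unique: true, sparse: true }
    );
    this._authDataUniqueIndexes.add(provider);
  }
}
```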

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • Moumouls
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)

  • Description check — ❓ Inconclusive. Explanation: The PR description lacks substantive details in the Issue and Approach sections, leaving the rationale and implementation approach unclear. Resolution: Expand the Issue section to explain the concurrent signup vulnerability and its impact, and add an Approach section detailing the technical solution implemented (e.g., unique indexes on authData fields, error handling changes).
✅ Passed checks (2 passed)
  • Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage; skipping the docstring coverage check.
  • Title check — ✅ Passed. The title directly addresses the main change: fixing concurrent signups with identical authData creating duplicate users. It is concise, clear, and accurately summarizes the primary objective of the changeset.




@codecov

codecov bot commented Mar 8, 2026

Codecov Report

❌ Patch coverage is 59.70149% with 27 lines in your changes missing coverage. Please review.
✅ Project coverage is 92.16%. Comparing base (2766f4f) to head (ac64d55).
⚠️ Report is 3 commits behind head on alpha.

Files with missing lines Patch % Lines
...dapters/Storage/Postgres/PostgresStorageAdapter.js 38.46% 16 Missing ⚠️
src/Adapters/Storage/Mongo/MongoStorageAdapter.js 60.86% 9 Missing ⚠️
src/RestWrite.js 80.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##            alpha   #10149      +/-   ##
==========================================
- Coverage   92.71%   92.16%   -0.56%     
==========================================
  Files         192      192              
  Lines       16120    16181      +61     
  Branches      183      183              
==========================================
- Hits        14946    14913      -33     
- Misses       1162     1252      +90     
- Partials       12       16       +4     

☔ View full report in Codecov by Sentry.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (1)
src/Adapters/Storage/Mongo/MongoStorageAdapter.js (1)

830-842: Add ensureAuthDataUniqueness to the StorageAdapter interface.

Both MongoStorageAdapter and PostgresStorageAdapter implement this method, but it's not declared in the StorageAdapter interface. This forces RestWrite.js to use a runtime type check (typeof adapter.ensureAuthDataUniqueness === 'function') instead of relying on the formal contract. Adding it as an explicit method to the interface—either as a required method or optional with a default no-op—ensures that current and future adapters have visibility into this capability and prevents accidental omissions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Adapters/Storage/Mongo/MongoStorageAdapter.js` around lines 830 - 842,
Add the ensureAuthDataUniqueness method to the StorageAdapter interface so
adapters formally declare this capability: update the StorageAdapter type to
include ensureAuthDataUniqueness(provider: string): Promise<void> (or make it
optional with a default no-op implementation), then implement or forward that
signature in MongoStorageAdapter.ensureAuthDataUniqueness and
PostgresStorageAdapter.ensureAuthDataUniqueness, and remove the runtime typeof
check in RestWrite.js in favor of calling adapter.ensureAuthDataUniqueness
directly.
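The "optional with a default no-op" variant suggested above might look like this; StorageAdapter and ensureAuthDataUniqueness are named in the PR, but this base-class shape and the subclass are hypothetical:

```javascript
// Sketch of declaring ensureAuthDataUniqueness on the adapter interface with
// a default no-op, so callers can drop the runtime `typeof` check.
class StorageAdapter {
  // Default no-op keeps older/custom adapters valid while making the
  // capability part of the formal contract.
  ensureAuthDataUniqueness(provider) {
    return Promise.resolve();
  }
}

class FakeMongoAdapter extends StorageAdapter {
  constructor() {
    super();
    this.createdIndexes = [];
  }
  // Override records the per-provider index it would create.
  async ensureAuthDataUniqueness(provider) {
    this.createdIndexes.push(`_auth_data_${provider}.id`);
  }
}
```

With this in place, RestWrite.js could call adapter.ensureAuthDataUniqueness(provider) unconditionally.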
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@spec/AuthDataUniqueIndex.spec.js`:
- Around line 143-154: The test currently only checks that successes reference
one user and one persisted user; also assert that the losing requests returned
the expected duplicate-auth failure shape by extracting failures from results
(e.g., failures = results.filter(r => !r.success)) and verifying each failure
has the duplicate auth error code/message (compare against the project's
duplicate-auth identifier) and includes the anonymousId in the error context;
update the block after computing results/successes to validate failures
accordingly so the test fails if losers returned 500s or other unexpected
errors.

In `@src/Adapters/Storage/Mongo/MongoStorageAdapter.js`:
- Around line 585-591: The existing error parser in MongoStorageAdapter (the
block that sets err.userInfo using error.message.match) uses a too-restrictive
regex that only allows [A-Za-z0-9_] and thus misses provider keys with
characters like '-' used by ensureAuthDataUniqueness; update the regex used to
extract the index name (the authDataMatch in the code that runs when
!err.userInfo) to match the same provider charset that ensureAuthDataUniqueness
builds (e.g. use a lazy/wider match such as /index:\s+(_auth_data_.+?_id)/ or
/index:\s+(_auth_data_[^ ]+_id)/) so duplicated_field is correctly populated and
ACCOUNT_ALREADY_LINKED is returned when appropriate.

In `@src/Adapters/Storage/Postgres/PostgresStorageAdapter.js`:
- Around line 2064-2090: The _authDataUniqueIndexes cache
(this._authDataUniqueIndexes) doesn't prevent concurrent callers from all
running the CREATE UNIQUE INDEX DDL because the provider is only added after
awaiting this._client.none; to fix, reserve/mark the provider before the await
or track in-flight operations: create a map like
this._authDataUniqueIndexPromises keyed by provider, store the promise returned
by the index creation (the this._client.none call) immediately, have other
callers await that promise instead of issuing their own CREATE, and on final
resolution either add provider to this._authDataUniqueIndexes or remove the
in-flight entry on error; reference the symbols provider, indexName,
this._client.none, this._authDataUniqueIndexes, and consider cleaning the
in-flight map on rejection to allow retries.

In `@src/Controllers/DatabaseController.js`:
- Around line 1865-1873: The current auth uniqueness setup swallows errors by
catching and only logging in the authProviders Promise.all, which lets the
server boot without the uniqueness guarantee; change the behavior in the same
block so failures abort startup like the other index setups: remove the
per-provider .catch that suppresses the error (or rethrow after logging) so that
Promise.all rejects on any ensureAuthDataUniqueness failure; reference the
ensureAuthDataUniqueness calls created from authProviders and the
createIndexAuthDataUniqueness feature flag to locate and update this logic.

In `@src/Options/index.js`:
- Around line 809-811: Update the help text for the option
createIndexAuthDataUniqueness in src/Options/index.js to explicitly state that
indexes are created for each configured auth provider and that when
enableAnonymousUsers !== false an "anonymous" auth provider index is also
automatically included (mirror the behavior seen in DatabaseController.js around
the auth provider handling). After updating the description text, regenerate the
definitions with npm run definitions so the change propagates to Definitions.js
and docs.js.

In `@src/RestWrite.js`:
- Around line 533-538: The lazy index-creation call in RestWrite (the block that
calls adapter.ensureAuthDataUniqueness for each provider) must be skipped when
the database config disables automatic DDL; before computing providers and
calling adapter.ensureAuthDataUniqueness, check the same config flag used at
startup (e.g. this.config.database.createIndexAuthDataUniqueness) and only run
the provider mapping/Promise.all when that flag is true (and
adapter.ensureAuthDataUniqueness exists), so deployments with
createIndexAuthDataUniqueness:false do not attempt runtime index creation.
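The in-flight promise map suggested for the Postgres adapter above can be sketched as follows; `_client.none` mimics the pg-promise API the adapter uses and the index name follows the PR, while the class itself is illustrative:

```javascript
// Sketch: concurrent callers share a single CREATE UNIQUE INDEX instead of
// each racing the DDL; a failed attempt clears the in-flight entry so it can
// be retried.
class PostgresIndexEnsurer {
  constructor(client) {
    this._client = client;
    this._authDataUniqueIndexes = new Set();
    this._authDataUniqueIndexPromises = new Map(); // provider -> pending DDL
  }

  ensureAuthDataUniqueness(provider) {
    if (this._authDataUniqueIndexes.has(provider)) {
      return Promise.resolve(); // already ensured
    }
    let inFlight = this._authDataUniqueIndexPromises.get(provider);
    if (!inFlight) {
      const indexName = `_User_unique_authData_${provider}_id`;
      inFlight = this._client
        .none(
          `CREATE UNIQUE INDEX IF NOT EXISTS "${indexName}" ON "_User" ((("authData"->'${provider}'->>'id')))`
        )
        .then(() => {
          this._authDataUniqueIndexes.add(provider);
        })
        .finally(() => {
          // Clear the in-flight entry whether it succeeded or failed.
          this._authDataUniqueIndexPromises.delete(provider);
        });
      this._authDataUniqueIndexPromises.set(provider, inFlight);
    }
    return inFlight;
  }
}
```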

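The lazy-match regex suggested for the Mongo adapter above can be exercised in isolation; the message below mimics MongoDB's E11000 duplicate-key error text, and the helper name is hypothetical:

```javascript
// The widened index-name extraction: a lazy match between the
// "_auth_data_" prefix and the "_id" suffix tolerates provider names
// containing '-' and other non-word characters.
function parseAuthDataDuplicateIndex(message) {
  const match = message.match(/index:\s+(_auth_data_.+?_id)/);
  return match ? match[1] : null;
}
```

A provider named `my-provider` would slip past a `[A-Za-z0-9_]`-only pattern; the lazy match recovers it.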

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 43da608d-0e19-4b0a-85e2-b7ef3b6f6743

📥 Commits

Reviewing files that changed from the base of the PR and between 6d84496 and e2b0e5c.

📒 Files selected for processing (8)
  • spec/AuthDataUniqueIndex.spec.js
  • src/Adapters/Storage/Mongo/MongoStorageAdapter.js
  • src/Adapters/Storage/Postgres/PostgresStorageAdapter.js
  • src/Controllers/DatabaseController.js
  • src/Options/Definitions.js
  • src/Options/docs.js
  • src/Options/index.js
  • src/RestWrite.js


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
spec/AuthDataUniqueIndex.spec.js (1)

196-207: ⚠️ Potential issue | 🟡 Minor

Assert the anonymous failure shape too.

This still passes if the losing anonymous requests return 500s or some other unexpected error, as long as only one user wins the race. Mirror the earlier duplicate-auth assertions here as well.

Suggested tightening
     const results = await Promise.all(concurrentRequests);
     const successes = results.filter(r => r.success);
+    const failures = results.filter(r => !r.success);

     // All successes should reference the same user
     const uniqueObjectIds = new Set(successes.map(r => r.data.objectId));
     expect(uniqueObjectIds.size).toBe(1);

+    for (const failure of failures) {
+      expect(failure.error.code).toBe(208);
+      expect(failure.error.error).toBe('this auth is already used');
+    }
+
     // Verify only one user exists in the database with this authData
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@spec/AuthDataUniqueIndex.spec.js` around lines 196 - 207, The test currently
only asserts successful responses but can hide unexpected failure types; after
computing results and successes (variables results and successes) add assertions
for the non-success entries (results.filter(r => !r.success)) to validate they
match the expected duplicate-auth failure shape used earlier in the file (e.g.,
check r.error.code, r.error.message or specific duplicate-auth indicators),
ensuring any losing anonymous requests return the expected Parse duplicate-auth
error rather than 500s or other errors; keep the existing uniqueObjectIds and
database query (new Parse.Query('_User') using anonymousId) checks intact.
🧹 Nitpick comments (1)
spec/AuthDataUniqueIndex.spec.js (1)

139-147: This doesn't actually prove the second call was a cache hit.

The assertion only checks that _authDataUniqueIndexes was populated. The test still passes if both calls reach the database. Please assert that the underlying index-creation path runs once across the two calls.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@spec/AuthDataUniqueIndex.spec.js` around lines 139 - 147, The test currently
only checks that _authDataUniqueIndexes was populated but doesn't verify the DB
call count; update the test to spy on the underlying index-creation method and
assert it was invoked exactly once across the two ensureAuthDataUniqueness
calls. Specifically, create a Jest spy (e.g., jest.spyOn(adapter, '...') ) on
the adapter method that performs the DB index creation (the actual method used
by ensureAuthDataUniqueness — for example the internal
_createAuthDataUniqueIndex or the DB client's createIndex call) before calling
adapter.ensureAuthDataUniqueness twice, then assert the spy was called once and
that adapter._authDataUniqueIndexes.has('fakeAuthProvider') is true. Ensure you
restore the spy after the test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ff1c4f72-8d48-4e78-8d00-346c70b67040

📥 Commits

Reviewing files that changed from the base of the PR and between e2b0e5c and 781c659.

📒 Files selected for processing (2)
  • spec/AuthDataUniqueIndex.spec.js
  • src/defaults.js


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
src/Options/index.js (1)

809-811: ⚠️ Potential issue | 🟡 Minor

Mention the implicit anonymous provider in this help text.

Line 809 says the indexes are created for configured auth providers, but startup also includes anonymous when enableAnonymousUsers !== false. That makes the manual-index guidance incomplete for users who manage these indexes themselves.

✏️ Suggested wording
-  /* Set to `true` to automatically create unique indexes on the authData fields of the _User collection for each configured auth provider on server start. These indexes prevent race conditions during concurrent signups with the same authData. Set to `false` to skip index creation. Default is `true`.<br><br>⚠️ When setting this option to `false` to manually create the indexes, keep in mind that the otherwise automatically created indexes may change in the future to be optimized for the internal usage by Parse Server.
+  /* Set to `true` to automatically create unique indexes on the authData fields of the _User collection for each configured auth provider on server start. If `enableAnonymousUsers !== false`, the `anonymous` provider is included as well. These indexes prevent race conditions during concurrent signups with the same authData. Set to `false` to skip index creation. Default is `true`.<br><br>⚠️ When setting this option to `false` to manually create the indexes, keep in mind that the otherwise automatically created indexes may change in the future to be optimized for the internal usage by Parse Server.

Based on learnings: update src/Options/index.js first, then run npm run definitions to sync src/Options/Definitions.js and src/Options/docs.js.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Options/index.js` around lines 809 - 811, Update the help text for the
createIndexAuthDataUniqueness option to explicitly mention the implicit
"anonymous" provider is also included at startup when enableAnonymousUsers !==
false, so users creating indexes manually are aware they must handle the
anonymous provider too; change the description in src/Options/index.js for the
createIndexAuthDataUniqueness symbol accordingly, then run `npm run definitions`
to propagate the updated text into src/Options/Definitions.js and
src/Options/docs.js.
🧹 Nitpick comments (1)
spec/AuthDataUniqueIndex.spec.js (1)

139-148: Consider verifying the cache prevents DB calls.

The test verifies the cache contains the provider after two calls, but doesn't confirm the second call skipped the database operation. To strengthen this:

💡 Optional: Verify cache prevents DB interaction
   it('should handle calling ensureAuthDataUniqueness multiple times via cache', async () => {
     const config = Config.get('test');
     const adapter = config.database.adapter;
 
     // First call creates the index
     await adapter.ensureAuthDataUniqueness('fakeAuthProvider');
+    
+    // Spy on internal method to verify no DB call on second invocation
+    const collection = await adapter._adaptiveCollection('_User');
+    const createIndexSpy = spyOn(collection._mongoCollection, 'createIndex').and.callThrough();
+    
     // Second call should be a cache hit (no DB call)
     await adapter.ensureAuthDataUniqueness('fakeAuthProvider');
     expect(adapter._authDataUniqueIndexes.has('fakeAuthProvider')).toBe(true);
+    expect(createIndexSpy).not.toHaveBeenCalled();
   });

This would explicitly verify the caching behavior rather than just the cache state.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@spec/AuthDataUniqueIndex.spec.js` around lines 139 - 148, The test should
assert the second ensureAuthDataUniqueness call does not hit the DB: add a
spy/mock for the underlying DB operation that ensureAuthDataUniqueness invokes
(e.g., whatever method on adapter performs index creation) before the first
call, call adapter.ensureAuthDataUniqueness('fakeAuthProvider') twice, and
expect the DB-method spy to have been called only once (or not called on the
second invocation) while still asserting
adapter._authDataUniqueIndexes.has('fakeAuthProvider') is true; use jest.spyOn
or a mock function to track the DB interaction and restore the spy after the
test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b352365f-d288-44e0-8f16-514e68b60057

📥 Commits

Reviewing files that changed from the base of the PR and between 781c659 and 5610243.

📒 Files selected for processing (4)
  • spec/AuthDataUniqueIndex.spec.js
  • src/Options/Definitions.js
  • src/Options/docs.js
  • src/Options/index.js


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/RestWrite.js`:
- Around line 1616-1627: Centralize the duplicate-key translation for authData
by extracting the duplicated_field check (error && error.userInfo &&
error.userInfo.duplicated_field.startsWith('_auth_data_')) into a shared helper
(e.g., mapAuthDataDuplicateError or translateAuthDataDuplicate) that converts
that case into throwing new Parse.Error(Parse.Error.ACCOUNT_ALREADY_LINKED,
'this auth is already used'); then call that helper from the create path (where
database.create() currently has the inline check), the _User update path (the
code around the update handling), and the login-time authData refresh path so
all `_auth_data_` unique-index races produce ACCOUNT_ALREADY_LINKED instead of
DUPLICATE_VALUE.
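The shared translation helper suggested above might look like this; the code 208 (ACCOUNT_ALREADY_LINKED) and the `duplicated_field` annotation come from the PR, while this helper returns the mapped error rather than throwing, so non-authData duplicates can still fall through to the username/email handlers:

```javascript
// Sketch of a centralized duplicate-key translator for authData races.
const ACCOUNT_ALREADY_LINKED = 208; // Parse error code for linked auth

function mapAuthDataDuplicateError(error) {
  const field = error && error.userInfo && error.userInfo.duplicated_field;
  if (field && field.startsWith('_auth_data_')) {
    const mapped = new Error('this auth is already used');
    mapped.code = ACCOUNT_ALREADY_LINKED;
    return mapped;
  }
  return error; // untouched: USERNAME_TAKEN / EMAIL_TAKEN logic still applies
}
```

Each call site (create, _User update, login-time authData refresh) would then throw the returned error only when it differs from the original.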

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: a430da67-8abf-4275-ba90-c1eaa0988695

📥 Commits

Reviewing files that changed from the base of the PR and between 5610243 and b8ca1b1.

📒 Files selected for processing (4)
  • src/Options/Definitions.js
  • src/Options/docs.js
  • src/Options/index.js
  • src/RestWrite.js
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/Options/docs.js

@mtrezza
Member Author

mtrezza commented Mar 8, 2026

@coderabbitai review

@coderabbitai

coderabbitai bot commented Mar 8, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/RestWrite.js`:
- Around line 1632-1633: The call to _throwIfAuthDataDuplicate(error) always
throws (either ACCOUNT_ALREADY_LINKED or rethrows the original), making the
later username/email duplicate handling unreachable; replace that call with an
inline check: detect whether the incoming error is an authData duplicate
(matching the logic inside _throwIfAuthDataDuplicate that yields
ACCOUNT_ALREADY_LINKED) and only throw ACCOUNT_ALREADY_LINKED in that case,
otherwise allow execution to continue so the existing username/email-specific
logic (which produces USERNAME_TAKEN/EMAIL_TAKEN) can run; reference the same
symbols used now (_throwIfAuthDataDuplicate, ACCOUNT_ALREADY_LINKED,
USERNAME_TAKEN, EMAIL_TAKEN) when implementing the inline conditional so the
duplicate-authData case still errors early but non-authData duplicate errors
fall through to the username/email handlers.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: b1dff9bd-5b3d-441a-879a-767abf50cb83

📥 Commits

Reviewing files that changed from the base of the PR and between 5266bfa and a863c0b.

📒 Files selected for processing (1)
  • src/RestWrite.js


@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
spec/AuthDataUniqueIndex.spec.js (1)

140-149: Consider spying to verify the cache actually prevents DB calls.

The test verifies the cache state after two calls but doesn't confirm the second call was truly a no-op. You could strengthen this by counting actual index-creation attempts.

💡 Optional improvement
   it('should handle calling ensureAuthDataUniqueness multiple times via cache', async () => {
     const config = Config.get('test');
     const adapter = config.database.adapter;
+    // Clear cache to ensure clean state
+    if (adapter._authDataUniqueIndexes) {
+      adapter._authDataUniqueIndexes.delete('fakeAuthProvider');
+    }
+
+    // Spy on the underlying collection/client to count actual DB calls
+    const collectionSpy = spyOn(adapter, '_adaptiveCollection').and.callThrough();
 
     // First call creates the index
     await adapter.ensureAuthDataUniqueness('fakeAuthProvider');
+    const callCountAfterFirst = collectionSpy.calls.count();
+
     // Second call should be a cache hit (no DB call)
     await adapter.ensureAuthDataUniqueness('fakeAuthProvider');
     expect(adapter._authDataUniqueIndexes.has('fakeAuthProvider')).toBe(true);
+    expect(collectionSpy.calls.count()).toBe(callCountAfterFirst);
   });

Note: The spy target depends on the adapter type (Mongo vs Postgres). For Postgres, you'd spy on _client.none instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@spec/AuthDataUniqueIndex.spec.js` around lines 140 - 149, The test should
assert the DB index creation is only called once by spying on the adapter's
underlying DB client: add a spy on the DB method used by your adapter (e.g., for
Mongo spy on adapter._client.createIndex or for Postgres spy on
adapter._client.none), call adapter.ensureAuthDataUniqueness('fakeAuthProvider')
twice, then assert the spy was called exactly once and that
adapter._authDataUniqueIndexes.has('fakeAuthProvider') is true; make sure to
restore the spy after the test.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 3219a878-fd84-4c87-bc6a-aacb715c270e

📥 Commits

Reviewing files that changed from the base of the PR and between ce3c6d1 and db17c00.

📒 Files selected for processing (1)
  • spec/AuthDataUniqueIndex.spec.js

coderabbitai[bot]
coderabbitai bot previously approved these changes Mar 8, 2026
Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/Adapters/Storage/Postgres/PostgresStorageAdapter.js`:
- Around line 1485-1494: The updateObjectsByQuery() error handling needs the
same authData unique-constraint translation as createObject(): when catching a
Postgres unique-violation (23505) inside updateObjectsByQuery(), inspect
error.constraint with the same regex used for authData
(/_User_unique_authData_([a-zA-Z0-9_]+)_id/) and, if it matches, set
err.userInfo = { duplicated_field: `_auth_data_${match[1]}` }; otherwise fall
back to the generic unique regex (/unique_([a-zA-Z]+)/) and set err.userInfo = {
duplicated_field: matches[1] } so the error is upgraded to
ACCOUNT_ALREADY_LINKED the same way createObject() does.
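The constraint-name translation the prompt asks for can be isolated in a small pure function. This is a sketch: `duplicatedFieldFromConstraint` is a hypothetical helper name, but the two regexes are the ones quoted in the review.

```javascript
// Translate a Postgres unique-constraint name into the
// duplicated_field value that RestWrite's error mapping expects.
function duplicatedFieldFromConstraint(constraint) {
  // Per-provider authData index, e.g. "_User_unique_authData_facebook_id".
  const authDataMatch = constraint.match(/_User_unique_authData_([a-zA-Z0-9_]+)_id/);
  if (authDataMatch) {
    return `_auth_data_${authDataMatch[1]}`;
  }
  // Generic unique field index, e.g. "_User_unique_username".
  const generic = constraint.match(/unique_([a-zA-Z]+)/);
  return generic ? generic[1] : null;
}
```

Calling this from both createObject() and updateObjectsByQuery() when `error.code === 23505` would keep the two paths producing identical `err.userInfo` values.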

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 2bfa2344-cc3b-4c81-b796-07b640915bc0

📥 Commits

Reviewing files that changed from the base of the PR and between db17c00 and a63a413.

📒 Files selected for processing (2)
  • spec/ParseUser.spec.js
  • src/Adapters/Storage/Postgres/PostgresStorageAdapter.js


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
src/Adapters/Storage/Postgres/PostgresStorageAdapter.js (1)

1821-1844: Extract the unique-violation translation into one helper.

This block now duplicates the createObject() constraint parsing almost verbatim. A shared helper would keep insert/update behavior aligned the next time the constraint format changes.

♻️ Small consolidation
-    const promise = (transactionalSession ? transactionalSession.t : this._client)
-      .any(qs, values)
-      .catch(error => {
-        if (error.code === PostgresUniqueIndexViolationError) {
-          const err = new Parse.Error(
-            Parse.Error.DUPLICATE_VALUE,
-            'A duplicate value for a field with unique values was provided'
-          );
-          err.underlyingError = error;
-          if (error.constraint) {
-            const authDataMatch = error.constraint.match(/_User_unique_authData_([a-zA-Z0-9_]+)_id/);
-            if (authDataMatch) {
-              err.userInfo = { duplicated_field: `_auth_data_${authDataMatch[1]}` };
-            } else {
-              const matches = error.constraint.match(/unique_([a-zA-Z]+)/);
-              if (matches && Array.isArray(matches)) {
-                err.userInfo = { duplicated_field: matches[1] };
-              }
-            }
-          }
-          throw err;
-        }
-        throw error;
-      });
+    const promise = (transactionalSession ? transactionalSession.t : this._client)
+      .any(qs, values)
+      .catch(error => {
+        throw this._transformUniqueViolation(error);
+      });
_transformUniqueViolation(error: any) {
  if (error.code !== PostgresUniqueIndexViolationError) {
    return error;
  }

  const err = new Parse.Error(
    Parse.Error.DUPLICATE_VALUE,
    'A duplicate value for a field with unique values was provided'
  );
  err.underlyingError = error;

  if (error.constraint) {
    const authDataMatch = error.constraint.match(/_User_unique_authData_([a-zA-Z0-9_]+)_id/);
    if (authDataMatch) {
      err.userInfo = { duplicated_field: `_auth_data_${authDataMatch[1]}` };
    } else {
      const matches = error.constraint.match(/unique_([a-zA-Z]+)/);
      if (matches && Array.isArray(matches)) {
        err.userInfo = { duplicated_field: matches[1] };
      }
    }
  }

  return err;
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Adapters/Storage/Postgres/PostgresStorageAdapter.js` around lines 1821 -
1844, The duplicate Postgres unique-violation parsing logic should be extracted
into a single helper (e.g. _transformUniqueViolation) and used wherever unique
constraint errors are handled (replace the inline code in the catch block that
checks PostgresUniqueIndexViolationError and the similar block in
createObject()). Implement _transformUniqueViolation(error) to detect
PostgresUniqueIndexViolationError, build and return a Parse.Error with
underlyingError and userInfo. Then call this helper from the catch (or
error-path) and throw the returned Parse.Error (or rethrow the original error if
the helper returns it) so insert/update behavior stays consistent.
src/Adapters/Storage/Mongo/MongoStorageAdapter.js (1)

580-591: Extract the duplicate-key parser into one helper.

The new authData fallback is duplicated in both duplicate-key handlers. Keeping the Parse.Error construction and duplicated_field parsing in one place keeps the insert and update paths in sync.

♻️ Possible refactor
+function buildDuplicateValueError(error: any) {
+  const err = new Parse.Error(
+    Parse.Error.DUPLICATE_VALUE,
+    'A duplicate value for a field with unique values was provided'
+  );
+  err.underlyingError = error;
+  if (error.message) {
+    const matches = error.message.match(/index:[\sa-zA-Z0-9_\-\.]+\$?([a-zA-Z_-]+)_1/);
+    if (matches && Array.isArray(matches)) {
+      err.userInfo = { duplicated_field: matches[1] };
+    }
+    if (!err.userInfo) {
+      const authDataMatch = error.message.match(/index:\s+(_auth_data_[a-zA-Z0-9_]+_id)/);
+      if (authDataMatch) {
+        err.userInfo = { duplicated_field: authDataMatch[1] };
+      }
+    }
+  }
+  return err;
+}
...
-          const err = new Parse.Error(
-            Parse.Error.DUPLICATE_VALUE,
-            'A duplicate value for a field with unique values was provided'
-          );
-          err.underlyingError = error;
-          if (error.message) {
-            const matches = error.message.match(/index:[\sa-zA-Z0-9_\-\.]+\$?([a-zA-Z_-]+)_1/);
-            if (matches && Array.isArray(matches)) {
-              err.userInfo = { duplicated_field: matches[1] };
-            }
-            if (!err.userInfo) {
-              const authDataMatch = error.message.match(/index:\s+(_auth_data_[a-zA-Z0-9_]+_id)/);
-              if (authDataMatch) {
-                err.userInfo = { duplicated_field: authDataMatch[1] };
-              }
-            }
-          }
-          throw err;
+          throw buildDuplicateValueError(error);

Also applies to: 673-685

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Adapters/Storage/Mongo/MongoStorageAdapter.js` around lines 580 - 591,
Extract the duplicate-key parsing logic into a single helper (e.g.,
parseDuplicateKey or buildDuplicateKeyError) and call it from both duplicate-key
handlers in MongoStorageAdapter so the Parse.Error construction and
duplicated_field extraction are centralized; the helper should accept the
original Mongo error, run the existing regexes (the primary index regex and the
authData fallback), and return a standardized object (or the constructed
Parse.Error) with userInfo. Replace the duplicated parsing code in the block
shown (currently matching /index:[\sa-zA-Z0-9_\-\.]+\$?([a-zA-Z_-]+)_1/ and
/index:\s+(_auth_data_[a-zA-Z0-9_]+_id)/) and the similar logic at the other
occurrence so both paths call this single helper. Ensure callers use the
helper's returned Parse.Error or set err.userInfo from the helper result to
preserve existing behavior.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/Adapters/Storage/Mongo/MongoStorageAdapter.js`:
- Around line 846-877: The ensureAuthDataUniqueness cache
(_authDataUniqueIndexes) can become stale when the _User collection or its
indexes are dropped; update the adapter to invalidate that cache: in
ensureAuthDataUniqueness (function ensureAuthDataUniqueness and the
_authDataUniqueIndexes set) add handling to clear the provider entry when the
index/collection is absent (detect Mongo "NamespaceNotFound"/collection missing
errors or index-not-found codes) and, more importantly, add cache invalidation
in the adapter methods that drop collections or indexes (e.g., the methods that
implement collection/index deletion/remove operations) so they remove the
corresponding provider from this._authDataUniqueIndexes or reset the set when
_User or its indexes are removed.

In `@src/Adapters/Storage/Postgres/PostgresStorageAdapter.js`:
- Line 2089: The function ensureAuthDataUniqueness and other
TypeScript/Flow-style annotations in this .js file are causing Biome parse
errors; fix by either converting the file to TypeScript (rename to .ts and
ensure any TS-only imports/exports and tsconfig updates are applied), or remove
all type annotations from ensureAuthDataUniqueness(provider: string) and other
Flow/TS syntax to make the file valid plain JavaScript, or explicitly add this
file to biome.json's exclude list; pick one approach and apply it consistently
across the file so Biome no longer reports parse errors.

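The cache-invalidation concern raised for the Mongo adapter can be sketched as below. Everything here is hypothetical (`AdapterSketch` and its method bodies are stand-ins, with the actual DB calls reduced to comments); only the caching and invalidation logic is the point.

```javascript
// Sketch: memoize ensureAuthDataUniqueness per provider, and
// invalidate the cache when the _User collection is dropped,
// since dropping the collection also drops its indexes.
class AdapterSketch {
  constructor() {
    this._authDataUniqueIndexes = new Set();
  }

  async ensureAuthDataUniqueness(provider) {
    if (this._authDataUniqueIndexes.has(provider)) {
      return; // cached no-op: index already ensured by this process
    }
    // Real adapter would run something like:
    // await collection.createIndex({ [`_auth_data_${provider}.id`]: 1 }, { unique: true, sparse: true });
    this._authDataUniqueIndexes.add(provider);
  }

  async deleteClass(className) {
    // Real adapter would drop the collection here.
    if (className === '_User') {
      // Indexes vanished with the collection; forget they were created.
      this._authDataUniqueIndexes.clear();
    }
  }
}
```

Without the `clear()` in `deleteClass`, a dropped-and-recreated `_User` collection would be served from a stale cache and never get its unique indexes back.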

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 337730ae-ff22-4a90-bfac-f349baa9a848

📥 Commits

Reviewing files that changed from the base of the PR and between a63a413 and 6d4e91d.

📒 Files selected for processing (2)
  • src/Adapters/Storage/Mongo/MongoStorageAdapter.js
  • src/Adapters/Storage/Postgres/PostgresStorageAdapter.js

@mtrezza
Member Author

mtrezza commented Mar 9, 2026

@coderabbitai review

@coderabbitai

coderabbitai bot commented Mar 9, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (2)
src/Adapters/Storage/Postgres/PostgresStorageAdapter.js (1)

1818-1841: Extract the duplicate-key translation into one helper.

updateObjectsByQuery() now copies the same authData / generic unique-constraint parsing that already exists in createObject(). Keeping these regex branches in two places makes it easy for the ACCOUNT_ALREADY_LINKED mapping to drift apart again.

♻️ Suggested shape
+  _transformUniqueViolation(error: any) {
+    if (error.code !== PostgresUniqueIndexViolationError) {
+      return error;
+    }
+
+    const err = new Parse.Error(
+      Parse.Error.DUPLICATE_VALUE,
+      'A duplicate value for a field with unique values was provided'
+    );
+    err.underlyingError = error;
+    if (error.constraint) {
+      const authDataMatch = error.constraint.match(/_User_unique_authData_([a-zA-Z0-9_]+)_id/);
+      if (authDataMatch) {
+        err.userInfo = { duplicated_field: `_auth_data_${authDataMatch[1]}` };
+      } else {
+        const matches = error.constraint.match(/unique_([a-zA-Z]+)/);
+        if (matches && Array.isArray(matches)) {
+          err.userInfo = { duplicated_field: matches[1] };
+        }
+      }
+    }
+    return err;
+  }
+
-      .catch(error => {
-        if (error.code === PostgresUniqueIndexViolationError) {
-          const err = new Parse.Error(
-            Parse.Error.DUPLICATE_VALUE,
-            'A duplicate value for a field with unique values was provided'
-          );
-          err.underlyingError = error;
-          if (error.constraint) {
-            const authDataMatch = error.constraint.match(/_User_unique_authData_([a-zA-Z0-9_]+)_id/);
-            if (authDataMatch) {
-              err.userInfo = { duplicated_field: `_auth_data_${authDataMatch[1]}` };
-            } else {
-              const matches = error.constraint.match(/unique_([a-zA-Z]+)/);
-              if (matches && Array.isArray(matches)) {
-                err.userInfo = { duplicated_field: matches[1] };
-              }
-            }
-          }
-          throw err;
-        }
-        throw error;
-      });
+      .catch(error => {
+        throw this._transformUniqueViolation(error);
+      });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Adapters/Storage/Postgres/PostgresStorageAdapter.js` around lines 1818 -
1841, Extract the duplicate-key translation logic into a shared helper (e.g.,
buildDuplicateKeyError) and use it from both updateObjectsByQuery (the code
shown) and createObject so the authData / generic unique-constraint regex
parsing is centralized; the helper should accept the raw Postgres error (the one
compared to PostgresUniqueIndexViolationError), construct and return a
Parse.Error with DUPLICATE_VALUE, set underlyingError, and populate userInfo.
Replace the duplicated branches in updateObjectsByQuery and createObject to call
this helper and throw its result when a unique-index violation is detected.
spec/AuthDataUniqueIndex.spec.js (1)

140-147: Assert the no-op, not just the absence of an error.

Both calls can still hit the database and this test will pass, so it won't catch a missing _authDataUniqueIndexes cache. Spy on the underlying createIndex / _client.none path and assert the second ensureAuthDataUniqueness('fakeAuthProvider') call does not issue another DDL request.

Based on learnings, ensureAuthDataUniqueness(provider) is supposed to become a cached no-op on the request path once startup has pre-created configured provider indexes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@spec/AuthDataUniqueIndex.spec.js` around lines 140 - 147, The test currently
only verifies absence of an error but must assert the idempotent no-op behavior:
instrument the adapter's underlying DDL call (e.g., spy/stub on
adapter.createIndex or adapter._client.none used by ensureAuthDataUniqueness)
before calling adapter.ensureAuthDataUniqueness('fakeAuthProvider') twice, then
assert the underlying DDL method was called exactly once (or not called on the
second invocation) and that the adapter's _authDataUniqueIndexes cache contains
'fakeAuthProvider' after the first call; update the test to use the spy/stub and
an assertion on its call count to ensure the second call does not issue another
DDL request.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/Adapters/Storage/Mongo/MongoStorageAdapter.js`:
- Around line 844-868: The ensureAuthDataUniqueness(provider) function always
calls createIndex on every request causing repeated metadata DDL; memoize
successful index creation per provider to make subsequent calls a no-op: add a
Set or map field on the adapter (e.g., this._authDataIndexesCreated) and at the
top of ensureAuthDataUniqueness(provider) return immediately if provider is
already in the set, otherwise proceed to call
this._adaptiveCollection(...)._mongoCollection.createIndex(...) and on
successful completion add provider to the set; preserve existing error handling
(duplicate/index exists checks and this.handleError) so behavior is unchanged on
failures.

In `@src/Adapters/Storage/Postgres/PostgresStorageAdapter.js`:
- Around line 2084-2107: ensureAuthDataUniqueness currently issues CREATE INDEX
on every call; make it a cached no-op by checking and setting a per-adapter
cache. Add/ensure a Set or Map property (e.g. this._authDataUniqueIndexes) is
initialized (constructor) and at the start of ensureAuthDataUniqueness(provider)
return immediately if this._authDataUniqueIndexes.has(provider); after the
CREATE UNIQUE INDEX succeeds or when you determine the index already exists
(including the duplicate-relation or unique-violation branches), add
this._authDataUniqueIndexes.add(provider) so subsequent calls skip the DDL; keep
existing error handling but ensure the cache is updated in the branches that
represent a successful/harmless outcome.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 5199ae6a-3fff-4694-8254-1bb979612632

📥 Commits

Reviewing files that changed from the base of the PR and between 6d4e91d and 3988fa7.

📒 Files selected for processing (3)
  • spec/AuthDataUniqueIndex.spec.js
  • src/Adapters/Storage/Mongo/MongoStorageAdapter.js
  • src/Adapters/Storage/Postgres/PostgresStorageAdapter.js

@mtrezza mtrezza changed the title from "fix: GHSA-c297-cj4p-3f3m-9" to "fix: Concurrent signup with same authentication creates duplicate users" on Mar 9, 2026
@mtrezza mtrezza merged commit 853bfe1 into parse-community:alpha Mar 9, 2026
20 of 24 checks passed
@mtrezza mtrezza deleted the fix/GHSA-c297-cj4p-3f3m-9 branch March 9, 2026 02:06
parseplatformorg pushed a commit that referenced this pull request Mar 9, 2026
## [9.5.2-alpha.11](9.5.2-alpha.10...9.5.2-alpha.11) (2026-03-09)

### Bug Fixes

* Concurrent signup with same authentication creates duplicate users ([#10149](#10149)) ([853bfe1](853bfe1))
@parseplatformorg
Copy link
Contributor

🎉 This change has been released in version 9.5.2-alpha.11

Labels: state:released-alpha (Released as alpha version)