Feat: Memory adapter#860

Merged
abnegate merged 19 commits into main from feat-memory-adapter
Apr 29, 2026

Conversation

Contributor

Meldiron commented Apr 20, 2026

Summary by CodeRabbit

  • New Features

    • Added an in-memory database adapter with transactions, document CRUD, rich querying, ordering, tenant-aware isolation, permissions enforcement, and basic aggregations.
  • Tests

    • Added end-to-end tests covering transactions, unique indexes, JSON round-trips, tenant isolation, error semantics, attribute/index migrations, and bulk operations.
  • Chores

    • CI updated to run adapter tests for the new in-memory adapter.
    • Container healthchecks improved for more reliable MySQL readiness checks.

Contributor

coderabbitai Bot commented Apr 20, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough

Adds a new in-memory database adapter Utopia\Database\Adapter\Memory, a corresponding E2E test suite, and a CI workflow entry to run Memory adapter tests; also tweaks MySQL container healthchecks.

Changes

  • In-Memory Adapter Implementation (src/Database/Adapter/Memory.php): new Utopia\Database\Adapter\Memory, a full in-memory store for databases/collections/attributes/indexes/documents/permissions, with transaction snapshot/restore, attribute/index management and propagation, unique-index enforcement and normalization, in-memory query evaluation (filters, ordering, cursors), aggregations, capability flags, and explicit unsupported-feature errors.
  • Adapter Test Suite (tests/e2e/Adapter/MemoryTest.php): adds Tests\E2E\Adapter\MemoryTest: wiring for the Memory adapter plus a Redis cache, an isolated freshDatabase helper, many skipped inherited tests where features are unsupported, and extensive regression tests for transactions, upserts (throws), JSON/array round-trips, unique-index semantics and normalization, schema mutation effects, bulk ops/permission cleanup, tenant/shared-table behavior, sequence lookup, and collection/document error semantics.
  • CI Workflow Update (.github/workflows/tests.yml): extends the adapter_test GitHub Actions matrix with a Memory adapter entry so Memory E2E tests run in CI.
  • Container healthchecks (docker-compose.yml): updates the MySQL and MySQL-mirror healthchecks to run mysqladmin ping against 127.0.0.1 on the configured port, authenticating as root with $$MYSQL_ROOT_PASSWORD and passing --silent.
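A healthcheck of that shape would look roughly like the following Compose fragment (a sketch: the interval, timeout, and retry values are assumptions, not the PR's actual settings; note that `$$` is how Compose escapes a literal `$` so the variable is expanded inside the container, not at compose time):

```yaml
healthcheck:
  # mysqladmin ping exits 0 once the server accepts connections;
  # --silent suppresses output so only the exit code matters.
  test: ["CMD-SHELL", "mysqladmin ping -h 127.0.0.1 -P 3306 -u root -p$$MYSQL_ROOT_PASSWORD --silent"]
  interval: 5s
  timeout: 5s
  retries: 10
```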

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • fogelito

Poem

🐇 I hide small rows in cozy stacks,

Snapshots saved in carrot-packed backs.
Indexes twitch, permissions hop,
Tests nibble bugs until they stop.
A rabbit grins — the memory pops!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage (⚠️ Warning): docstring coverage is 28.64%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (4)

  • Description Check (✅ Passed): check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check (✅ Passed): the title 'Feat: Memory adapter' directly and clearly summarizes the main change: introducing a new Memory adapter for the database library.
  • Linked Issues Check (✅ Passed): check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check (✅ Passed): check skipped because no linked issues were found for this pull request.


abnegate marked this pull request as ready for review April 29, 2026 00:06
Contributor

greptile-apps Bot commented Apr 29, 2026

Greptile Summary

This PR introduces a full in-memory database adapter backed by PHP arrays, with journal-based nested transactions, inverted permission indexes, unique-index hash tables, and the complete query/ordering/cursor surface. The CI matrix and MySQL healthcheck are also improved.

  • TYPE_NOT_EQUAL includes null-valued rows (src/Database/Adapter/Memory.php line 2631): looseEquals(null, 'value') returns false, so the loop completes and the predicate returns true for any null attribute — documents with null field values match notEqual queries. MariaDB/Postgres exclude them (NULL != x evaluates to NULL). This is the same class of null-inclusion divergence affecting NOT_BETWEEN, NOT_STARTS_WITH, and NOT_ENDS_WITH (previously flagged), now also present in NOT_EQUAL.
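The divergence follows from SQL's three-valued logic: `NULL != x` evaluates to NULL, which is not true, so SQL adapters exclude null rows from notEqual results. A minimal Python sketch of the SQL-compatible predicate (illustrative names only; the adapter itself is PHP):

```python
# SQL-compatible notEqual: a NULL attribute never satisfies a negation,
# because NULL != x is NULL (unknown), not true, under three-valued logic.
def matches_not_equal(attribute_value, operand):
    if attribute_value is None:
        return False  # exclude null rows, as MariaDB/Postgres do
    return attribute_value != operand

rows = [{"name": "a"}, {"name": None}, {"name": "b"}]
result = [r for r in rows if matches_not_equal(r.get("name"), "a")]
assert result == [{"name": "b"}]  # the null row is excluded
```

The same guard applies to every negation operator (NOT_BETWEEN, NOT_STARTS_WITH, NOT_ENDS_WITH) for parity with the SQL adapters.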

Confidence Score: 3/5

Safe to merge for non-production / test-fixture use cases, but the null-handling divergence in negation operators means the Memory adapter does not faithfully replicate SQL adapter semantics — cross-adapter test results will diverge.

Score reflects one P1 finding (NOT_EQUAL null inclusion) that adds to the existing set of previously-flagged null-divergence issues in negation operators. The adapter is large and the pattern recurs across multiple query types, increasing the risk of unexpected test divergence between Memory and the SQL adapters.

src/Database/Adapter/Memory.php — the matches() method's negation-operator cases all need a null-exclusion guard consistent with SQL semantics.

Important Files Changed

  • src/Database/Adapter/Memory.php: new 3,681-line in-memory adapter with a well-structured journal-based transaction system, but TYPE_NOT_EQUAL includes null rows (diverging from SQL semantics), several other negation operators share the same null-inclusion issue (previously flagged), and there is dead code in removePermissionsForDocument.
  • tests/e2e/Adapter/MemoryTest.php: new 871-line test class inheriting the standard adapter scopes; covers transactions, unique indexes, tenant isolation, and bulk ops, but has no explicit coverage for negation operators against null-valued attributes.
  • .github/workflows/tests.yml: adds Memory to the adapter test matrix; a straightforward one-line change.
  • docker-compose.yml: improves the MySQL healthcheck to use CMD-SHELL with root credentials and 127.0.0.1; unrelated to the Memory adapter but a solid reliability fix.

Reviews (12): Last reviewed commit: "test(memory): tighten regression tests f..."

Contributor

coderabbitai Bot left a comment

Actionable comments posted: 8

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/Database/Adapter/Memory.php`:
- Around line 239-248: The updateAttribute implementation must not return early
when $newKey is provided: modify updateAttribute (and its call to
renameAttribute) so the attribute metadata (type/size/signed/array/required) is
applied to the new key after renaming instead of skipping the rest of the
update; additionally, ensure renameAttribute and deleteAttribute both rewrite
any index definitions and uniqueness metadata that reference the old attribute
name to point to the new name (or remove the attribute from indexes on delete)
so indexed queries and unique checks remain consistent — update the logic that
mutates $this->data[$key]['indexes'] and any stored unique maps inside
renameAttribute, deleteAttribute, and updateAttribute to reflect the new
attribute name and/or remove references.
- Around line 1148-1153: documentToRow() JSON-encodes array attribute values but
rowToDocument() only decodes the special _permissions field, so normal array
fields return as JSON strings from getDocument()/find(); update rowToDocument()
to mirror documentToRow() by detecting and json_decode()-ing attribute values
that are JSON-encoded arrays (skip/keep existing handling of _permissions), or
otherwise reverse the encoding logic so attributes encoded in documentToRow()
are decoded in rowToDocument() to restore proper array types for fields returned
by getDocument()/find().
- Around line 338-356: createIndex currently registers Database::INDEX_UNIQUE
without checking existing rows; update createIndex to, when $type ===
Database::INDEX_UNIQUE, iterate the existing rows stored under
$this->data[$key]['rows'], compute the index key for each row using the same
attribute list (and apply $this->filter/$this->collation logic if used
elsewhere), detect any duplicate index keys and throw a DatabaseException if
duplicates exist; only if no duplicates are found, proceed to set
$this->data[$key]['indexes'][$id] and return true. Ensure you reference/create
the same normalization used on writes so enforcement is consistent.
- Around line 28-30: The Memory adapter stores documents keyed only by
$document->getId(), causing shared-table tenants to clobber each other; change
the documents structure to include tenant isolation (e.g.,
$data[...]['documents'][$tenant][$id] or a composite key like "$tenant|$id") and
update all code that writes, reads, updates, deletes, checks existence, and
iterates documents (the sections handling documents and the methods around the
documents/indexes manipulation) to use the tenant-aware key or nested array;
also ensure sequence handling and index maintenance use the same tenant scoping
so tenants cannot overwrite or read each other's records.
- Around line 598-612: The bulk-delete loop removes documents by matching
$seqSet but only cleans permissions when $permissionIds is provided, causing
stale permission rows; update the method that contains the shown loop to always
remove permission entries for documents that were deleted by sequence: after the
foreach that unsets $this->data[$key]['documents'][$uid], collect the deleted
document ids (from $row['_id'] or $uid), build a set (like $deletedSet) and then
filter $this->permissions[$key] to drop any permission where $p['document']
exists in either $deletedSet or the existing $permSet derived from
$permissionIds, ensuring $this->permissions is pruned regardless of whether
$permissionIds was populated.
- Around line 1577-1578: The Memory adapter currently throws DuplicateException
when $rowSignature === $signature, but that should only happen for duplicate
_uid collisions; change the error to throw UniqueException for user-defined
unique-index collisions. Update the branch around $rowSignature === $signature
to inspect the unique index fields (e.g., the index or key definition used to
compute $signature) and: if the index is the _uid index then throw
DuplicateException, otherwise throw UniqueException; use the existing symbols
$rowSignature, $signature, DuplicateException and UniqueException and adjust the
logic where the signature is compared to perform this distinction.
- Around line 79-113: rollbackTransaction() currently pops and restores the most
recent snapshot, but the contract requires rolling back all active transactions;
change rollbackTransaction() to restore the earliest snapshot (the initial state
captured by startTransaction(), e.g. $this->snapshots[0] or
reset($this->snapshots)) instead of the last one, then clear $this->snapshots
and set $this->inTransaction = 0; ensure you still restore both $this->data and
$this->permissions from that snapshot and return true/false as before.
- Around line 501-540: Batch updates in updateDocuments() bypass unique-index
checks because enforceUniqueIndexes() is never invoked; modify updateDocuments()
to call enforceUniqueIndexes() for each document (or once for the whole batch)
before applying writes to $this->data so duplicate unique values are rejected.
Specifically, use the existing $key, $uid, $attrs and $updates context: validate
the proposed attribute changes against enforceUniqueIndexes($key, $uid, $attrs
or computed new row) and abort/throw if enforcement fails, ensuring no partial
writes occur (rollback or validate-before-write) and preserve existing behavior
for createdAt/updatedAt/permissions handling.
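The bulk-delete cleanup requested above can be sketched in a few lines (Python for illustration; the adapter is PHP and all names here are hypothetical): permissions referencing any deleted document are pruned unconditionally, not only when explicit permission ids were supplied.

```python
# After removing documents, always drop permission rows that reference a
# deleted document, regardless of whether the caller passed permission ids.
def delete_documents(documents, permissions, ids_to_delete):
    deleted = set()
    for doc_id in ids_to_delete:
        if documents.pop(doc_id, None) is not None:
            deleted.add(doc_id)
    permissions[:] = [p for p in permissions if p["document"] not in deleted]
    return deleted

docs = {"d1": {"v": 1}, "d2": {"v": 2}}
perms = [{"document": "d1"}, {"document": "d2"}]
delete_documents(docs, perms, ["d1"])
assert docs == {"d2": {"v": 2}}
assert perms == [{"document": "d2"}]  # no stale row for d1
```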
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: babcac22-6b47-43f5-8a95-53fb25d8ea93

📥 Commits

Reviewing files that changed from the base of the PR and between bbeaf38 and 7e02eb5.

📒 Files selected for processing (3)
  • .github/workflows/tests.yml
  • src/Database/Adapter/Memory.php
  • tests/e2e/Adapter/MemoryTest.php

- rollbackTransaction now decrements inTransaction and pops a single
  snapshot, matching the SQL adapter and the nested-transaction contract
- updateAttribute applies new metadata after a rename instead of
  returning early and silently dropping type/size/required changes
- renameAttribute and deleteAttribute keep index attribute lists in sync
  so indexed queries and uniqueness checks remain consistent
- createIndex(INDEX_UNIQUE) refuses to register when existing rows
  already contain duplicate values for the indexed columns
- enforceUniqueIndexes skips signatures with NULLs so multiple null
  values do not collide on a unique index
- updateDocuments runs enforceUniqueIndexes per row so batch updates
  cannot introduce duplicate unique values
- deleteDocuments always purges permissions for documents removed by
  sequence, not only when explicit permissionIds are provided
- rowToDocument decodes JSON-encoded array attributes so array fields
  round-trip from getDocument/find
- introduce documentKey() and key documents by tenant|id under
  sharedTables so tenants no longer clobber each other on the same id

Adds regression coverage for each fix.
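The tenant|id keying introduced above can be illustrated with a short Python sketch (hypothetical names; the real documentKey() is PHP and, per a later commit, also lower-cases the id):

```python
# Composite "tenant|id" keys isolate tenants sharing one table: two tenants
# can each hold a document with the same id without overwriting each other.
def document_key(tenant, doc_id, shared_tables):
    return f"{tenant}|{doc_id.lower()}" if shared_tables else doc_id.lower()

documents = {}
documents[document_key(1, "Profile", True)] = {"plan": "free"}
documents[document_key(2, "Profile", True)] = {"plan": "pro"}

assert documents["1|profile"] == {"plan": "free"}
assert documents["2|profile"] == {"plan": "pro"}  # tenant 1 untouched
```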
Contributor

claude Bot commented Apr 29, 2026

CI Healing Report — Flaky (no fix committed)

Failing job: Adapter Tests (MySQL) — run 25084920551

Root cause: Infrastructure flake, not a code regression.

All 535 errors are the same:

PDOException: SQLSTATE[HY000] [2002] Connection refused
  /usr/src/code/src/Database/PDO.php:29
  /usr/src/code/tests/e2e/Adapter/MySQLTest.php:38

Every MySQL E2E test failed at setUp() before any test logic ran — the MySQL service container was unreachable. The MariaDB / Postgres / SQLite / SQL Server matrix legs are unaffected by the same code, which confirms this is a service-availability issue in the MySQL container, not a defect on feat-memory-adapter.

Action: No code fix is appropriate. Recommend re-running the MySQL leg only.

— task: healing

abnegate and others added 2 commits April 29, 2026 12:50
A second pass over the Memory adapter resolves observable behavior
differences from MariaDB so the adapter can serve as a true drop-in.

- find/count/sum/deleteDocument(s)/updateDocuments throw NotFoundException
  when the collection itself is missing (mirrors MariaDB's PDO unknown-table
  exception). deleteDocument still returns false when only the document
  is absent — that path matches rowCount() == 0 on MariaDB.
- createIndex(INDEX_UNIQUE) raises DuplicateException (not the generic
  DatabaseException) when existing rows violate the constraint.
  Database::createIndex catches that and silently registers the index as
  an "orphan" — matching MariaDB+Database's end-state where the metadata
  exists but the physical index does not.
- Query::TYPE_SELECT now applies projection: find/getDocument restrict
  user attributes to the requested set while preserving internal columns
  ($id, $sequence, $createdAt, $updatedAt, $permissions, $tenant).
- deleteDocuments now skips rows that don't belong to the current tenant
  in shared-tables mode, preventing cross-tenant destruction when
  auto-incrementing sequences collide between tenants.
- getSequences uses each document's own tenant when building the lookup
  key (mirroring MariaDB's :_tenant_$i bind), instead of always using the
  adapter's current tenant. Documents with sequences already set are
  skipped to match the SQL contract.
- find sorts by _id ASC by default to mirror MariaDB's clustered-index
  ordering, so paginating without an explicit order is stable.
- count(max:0) and sum(max:0) honour zero (LIMIT 0 returns no rows on
  MariaDB) instead of treating any non-positive max as "no limit".
- increaseDocumentAttribute returns true on bound violations (silent
  no-op) — MariaDB encodes the bound check in WHERE so the UPDATE just
  matches zero rows but the call still returns true.
- rowToDocument no longer JSON-sniffs string values that happen to start
  with [ or { (which would inadvertently parse user-stored JSON-shaped
  strings on read). Array decoding is delegated to the Database layer's
  casting step; getSupportForCasting now returns true so arrays
  round-trip through Database::casting like the SQL adapters.
- enforceUniqueIndexes type-normalizes signatures (booleans → ints,
  numeric strings → numbers, arrays → JSON-encoded) so a doc value of
  `true` matches a stored row value of `1`.
- applyOrdering shuffles up-front when ORDER_RANDOM is requested instead
  of returning random_int(-1,1) inside usort — that comparator is
  non-transitive and breaks the sort.
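A Python sketch of that signature normalization (illustrative only, since the adapter is PHP): values are coerced to a canonical form before being hashed into a unique-index signature, and signatures containing NULL are skipped so multiple NULLs never collide.

```python
import json

# Canonicalize values before comparing unique-index signatures: booleans
# become ints, numeric strings become numbers, arrays become JSON text.
def normalize(value):
    if isinstance(value, bool):
        return int(value)
    if isinstance(value, str):
        try:
            return int(value) if value.lstrip("-").isdigit() else float(value)
        except ValueError:
            return value  # non-numeric strings pass through unchanged
    if isinstance(value, list):
        return json.dumps(value)
    return value

def check_unique(rows, attributes):
    seen = set()
    for row in rows:
        sig = tuple(normalize(row.get(a)) for a in attributes)
        if None in sig:
            continue  # unique indexes admit multiple NULLs
        if sig in seen:
            raise ValueError("duplicate values violate the unique index")
        seen.add(sig)

assert normalize(True) == normalize(1)   # true collides with a stored 1
assert normalize("5") == normalize(5)    # "5" collides with a stored 5
check_unique([{"a": None}, {"a": None}], ["a"])  # NULLs never collide
```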

Each behavior change ships with a regression test in MemoryTest.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
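On the ORDER_RANDOM point above: a comparator that returns a random result is not transitive, which sort implementations are entitled to assume, so the output "order" is undefined and can be biased. Shuffling up-front avoids the comparator entirely. A Python sketch:

```python
import random

# Shuffle once to produce a single unbiased permutation, instead of feeding
# a random comparator into a sort (whose contract requires transitivity).
def random_order(rows, rng=None):
    rows = list(rows)
    (rng or random).shuffle(rows)
    return rows

rows = list(range(10))
shuffled = random_order(rows, random.Random(7))
assert sorted(shuffled) == rows  # same elements, order randomized
```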
Replace the bespoke MemoryTest with one that extends Base, so the Memory
adapter is exercised against the same comprehensive scenarios as MariaDB,
MySQL, Postgres and friends. Inherits all 13 scope traits (Collection,
Document, Attribute, Index, Operator, Permission, Relationship, Spatial,
Schemaless, ObjectAttribute, Vector, General, CustomDocumentType); scope
tests gated by getSupportFor* flags self-skip for the unimplemented
features (relationships, operators, vectors, spatial, fulltext,
schemaless, object attributes). Seven inherited tests are explicitly
skipped via markTestSkipped where the surrounding trait does not gate on
a flag (relationship-specific permission/order tests, upsert-with-JSON,
attribute-name-with-dots, attribute structural type-coercion on resize,
varchar truncation on shrink, and a no-op keyword test).

Memory-specific regressions (transaction nesting, raw store layout after
attribute operations, tenant isolation under shared tables, sequence
resolution from per-document tenant) stay as direct tests, but route
through a `freshDatabase()` helper so they cannot leak collections into
the shared `getDatabase()` instance the inherited scopes use.

Adapter parity tweaks unblocked by the broader coverage:

- createDocument honours the per-call skipDuplicates flag (mirrors
  MariaDB INSERT IGNORE);
- delete(name) only purges the namespace if the database flag was
  actually set, so a stray drop of a sibling schema (e.g. testEvents'
  hellodb) no longer wipes METADATA;
- increase/decrease bound checks compare the pre-update column value
  (matching MariaDB's WHERE-clause semantics) instead of the post-update
  value;
- documentKey lower-cases the id, matching MariaDB's default
  case-insensitive collation;
- getDocument honours Query::select projection, dropping unselected
  user columns while preserving internals;
- getMaxIndexLength returns 1024 (Mongo's value) so callers that derive
  sizes via `getMaxIndexLength() - N` arithmetic don't go negative;
- getSupportNonUtfCharacters reports false (Memory does not actively
  reject non-UTF byte sequences);
- TYPE_CONTAINS / TYPE_CONTAINS_ANY against scalar strings use stripos
  for MariaDB-compatible case-insensitive substring matching;
- TYPE_CONTAINS_ANY and TYPE_CONTAINS_ALL are implemented for arrays.
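The contains semantics listed above can be sketched as follows (Python, illustrative names; stripos is PHP's case-insensitive substring search):

```python
# Case-insensitive substring matching for scalar strings (stripos-style),
# plain membership testing for array attributes.
def contains(value, needle):
    if isinstance(value, list):
        return needle in value
    return needle.lower() in value.lower()

def contains_any(value, needles):
    return any(contains(value, n) for n in needles)

def contains_all(value, needles):
    return all(contains(value, n) for n in needles)

assert contains("Hello World", "world")         # case-insensitive substring
assert contains_any(["a", "b"], ["b", "z"])
assert not contains_all(["a", "b"], ["a", "z"])
```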

647 tests now run against Memory, of which 25 are skipped (relationships,
operators, vectors, spatial, fulltext, regex, schemaless plus the seven
Memory-specific gaps); 2447 assertions all pass.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
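The JSON round-trip behavior described in the commit messages above (encode arrays on write, decode on read, without sniffing user strings that merely look like JSON) can be sketched as (Python, illustrative names):

```python
import json

# Arrays are JSON-encoded on write; the read path decodes only attributes the
# schema declares as arrays, never strings that merely look like JSON, so a
# user-stored value such as "[1]" survives untouched.
def document_to_row(document):
    return {k: json.dumps(v) if isinstance(v, list) else v
            for k, v in document.items()}

def row_to_document(row, array_attributes):
    return {k: json.loads(v) if k in array_attributes else v
            for k, v in row.items()}

doc = {"title": "[1]", "tags": ["x", "y"]}
row = document_to_row(doc)
assert isinstance(row["tags"], str)           # stored as a JSON string
assert row_to_document(row, {"tags"}) == doc  # arrays round-trip; "[1]" intact
```

Keying the decode off the schema rather than the value's shape is what makes delegating to the Database layer's casting step safe.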
Contributor

claude Bot commented Apr 29, 2026

CI run 25086657068 — flaky test, no fix committed

What failed
Tests\E2E\Adapter\SharedTables\MariaDBTest::testSingleDocumentDateOperations (DocumentTests.php:6029):

Failed asserting that '2026-04-29T01:40:33.519+00:00' is not equal to '2026-04-29T01:40:33.519+00:00'.

Why this is flaky
The doc11 sub-case (lines 6014–6029) creates a document with setPreserveDates(true) and only $createdAt set, so $updatedAt is auto-generated to "now". It then immediately calls updateDocument(...) with no dates and asserts the new $updatedAt differs from the original. With millisecond-precision datetimes, both operations can land in the same millisecond on a fast runner — exactly what happened here (...33.519 vs ...33.519).

The same test has been hardened against this flake before in other sub-cases — see commits e5d0d3f6 ("Flaky testSingleDocumentDateOperations timing issue"), f7467014 ("prevent flaky date test by using past timestamp for $updatedAt"), b5b87f9f, 451d5235. The doc11 block is the one spot that doesn't have a sleep or a past-timestamp guard. The earlier doc4 block at line 5934 already does sleep(1) for exactly this reason.

Why it isn't this PR's fault
This branch (feat-memory-adapter) only adds:

  • src/Database/Adapter/Memory.php
  • tests/e2e/Adapter/MemoryTest.php
  • one line in .github/workflows/tests.yml

It does not touch DocumentTests.php, Database.php, or any code path exercised by the MariaDB SharedTables date-operations test.

Action
Per the healing protocol for flakes: commenting and exiting without committing. Recommend re-running the failed job; if the flake recurs, the doc11 block at tests/e2e/Adapter/Scopes/DocumentTests.php:6028 should get the same sleep(1) (or past-timestamp) treatment as doc4 at line 5934.

Contributor

coderabbitai Bot left a comment

Actionable comments posted: 2

♻️ Duplicate comments (3)
src/Database/Adapter/Memory.php (2)

1880-1881: ⚠️ Potential issue | 🟠 Major

User-defined unique-index collisions should throw UniqueException, not DuplicateException.

This branch handles unique-index enforcement, not duplicate _uid primary identity collisions. Throwing DuplicateException here breaks adapter parity for uniqueness error semantics.

Proposed fix
 use Utopia\Database\Exception\Duplicate as DuplicateException;
 use Utopia\Database\Exception\NotFound as NotFoundException;
+use Utopia\Database\Exception\Unique as UniqueException;
 ...
                 if ($rowSignature === $signature) {
-                    throw new DuplicateException('Document with the requested unique attributes already exists');
+                    $mapped = \array_map(fn (string $a) => $this->mapAttribute($a), $attributes);
+                    $isUidIndex = \count($mapped) === 1 && $mapped[0] === '_uid';
+                    if ($isUidIndex) {
+                        throw new DuplicateException('Document already exists');
+                    }
+                    throw new UniqueException('Document with the requested unique attributes already exists');
                 }

Based on learnings: In src/Database/Adapter/MariaDB.php, only duplicate _uid violations should throw DuplicateException; other unique constraint violations should throw UniqueException.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Database/Adapter/Memory.php` around lines 1880 - 1881, The code currently
throws DuplicateException when a user-defined unique-index collision is detected
(the branch where $rowSignature === $signature); change this to throw
UniqueException instead so uniqueness violations use UniqueException while
keeping DuplicateException reserved for primary _uid duplicate cases (mirror the
behavior in MariaDB.php); update the branch handling $rowSignature ===
$signature to throw UniqueException and ensure any callers/tests expecting
DuplicateException for non-_uid conflicts are adjusted accordingly.

91-125: ⚠️ Potential issue | 🔴 Critical

Rollback semantics violate the adapter contract for nested transactions.

rollbackTransaction() currently restores only the latest snapshot and decrements by one. The adapter contract requires rollback to cancel all active transactions and reset depth to 0.

Proposed fix
 public function startTransaction(): bool
 {
-    $this->snapshots[] = [
-        'data' => $this->deepCopy($this->data),
-        'permissions' => $this->deepCopy($this->permissions),
-    ];
+    if ($this->inTransaction === 0) {
+        $this->snapshots[] = [
+            'data' => $this->deepCopy($this->data),
+            'permissions' => $this->deepCopy($this->permissions),
+        ];
+    }
     $this->inTransaction++;
     return true;
 }

 public function commitTransaction(): bool
 {
     if ($this->inTransaction === 0) {
         return false;
     }

-    \array_pop($this->snapshots);
     $this->inTransaction--;
+    if ($this->inTransaction === 0) {
+        $this->snapshots = [];
+    }
     return true;
 }

 public function rollbackTransaction(): bool
 {
     if ($this->inTransaction === 0) {
         return false;
     }

-    $snapshot = \array_pop($this->snapshots);
+    $snapshot = $this->snapshots[0] ?? null;
     if ($snapshot !== null) {
         $this->data = $snapshot['data'];
         $this->permissions = $snapshot['permissions'];
     }
-    $this->inTransaction--;
+    $this->inTransaction = 0;
+    $this->snapshots = [];
     return true;
 }
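The proposed semantics can be modeled end to end in a few lines (a Python sketch of the contract, not the adapter's PHP): snapshot once at the outermost begin, commit only finalizes at depth 0, and rollback cancels every active transaction.

```python
import copy

class MemoryStore:
    def __init__(self):
        self.data = {}
        self.depth = 0
        self.snapshot = None

    def start(self):
        if self.depth == 0:  # snapshot only at the outermost begin
            self.snapshot = copy.deepcopy(self.data)
        self.depth += 1
        return True

    def commit(self):
        if self.depth == 0:
            return False
        self.depth -= 1
        if self.depth == 0:
            self.snapshot = None  # outermost commit finalizes
        return True

    def rollback(self):
        if self.depth == 0:
            return False
        self.data = self.snapshot  # cancel ALL active transactions
        self.depth, self.snapshot = 0, None
        return True

s = MemoryStore()
s.start(); s.data["a"] = 1
s.start(); s.data["b"] = 2   # nested begin
assert s.rollback() is True
assert s.data == {} and s.depth == 0  # everything undone at once
```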
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Database/Adapter/Memory.php` around lines 91 - 125, rollbackTransaction()
currently only pops the latest snapshot and decrements inTransaction; change it
to cancel all active transactions by restoring the initial pre-transaction
snapshot, clearing the snapshots stack, and resetting inTransaction to 0.
Specifically, in the Memory adapter adjust rollbackTransaction() to: if
inTransaction === 0 return false; take the first snapshot from $this->snapshots
(restore its 'data' and 'permissions' into $this->data and $this->permissions),
clear $this->snapshots (or set to empty array), set $this->inTransaction = 0,
and return true; keep startTransaction() and commitTransaction() behavior
unchanged except ensure snapshots management remains consistent with nested
transactions created by startTransaction().
tests/e2e/Adapter/MemoryTest.php (1)

427-430: ⚠️ Potential issue | 🟠 Major

Unique-index collision tests should assert UniqueException instead of DuplicateException.

These assertions currently lock in the wrong error contract for user-defined unique constraints.

Proposed fix
 use Utopia\Database\Exception\Duplicate as DuplicateException;
 use Utopia\Database\Exception\NotFound as NotFoundException;
+use Utopia\Database\Exception\Unique as UniqueException;
 ...
-        $this->expectException(DuplicateException::class);
+        $this->expectException(UniqueException::class);
         $database->updateDocuments('handles', new Document(['handle' => 'taken']), [
             Query::equal('$id', ['h2']),
         ]);
 ...
-        $this->expectException(DuplicateException::class);
+        $this->expectException(UniqueException::class);
         $database->createDocument('flags', new Document([
             '$id' => 'second',
             '$permissions' => [Permission::read(Role::any())],
             'active' => true,
         ]));

Based on learnings: In src/Database/Adapter/MariaDB.php, only duplicate _uid violations should throw DuplicateException; all other unique violations should throw UniqueException.

Also applies to: 659-664

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/e2e/Adapter/MemoryTest.php` around lines 427 - 430, The test is
asserting the wrong exception for user-defined unique-index collisions; replace
expectException(DuplicateException::class) with
expectException(UniqueException::class) for the updateDocuments(...) cases that
trigger unique-index violations (e.g., the call using updateDocuments('handles',
new Document(['handle' => 'taken']), [ Query::equal('$id', ['h2']) ]) and the
similar case later around the second occurrence) so the tests match the intended
contract where only internal _uid duplicate errors throw DuplicateException
while all other unique constraint violations throw UniqueException.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/Database/Adapter/Memory.php`:
- Around line 406-426: The unique-index creation loop in createIndex (handling
Database::INDEX_UNIQUE over $this->data[$key]['documents']) builds signatures
from raw row values but must mirror write-time enforcement by normalizing each
attribute exactly as enforceUniqueIndexes does; update the inner loop to call
the same normalizeIndexValue (using $this->normalizeIndexValue(...) or the
equivalent method) on each $row[$this->mapAttribute($attribute)] value
(preserving null handling), then JSON-hash those normalized values to detect
duplicates and throw DuplicateException as before so index creation and runtime
enforcement behave identically.
- Around line 648-698: The current loop in Memory.php updates each document then
calls enforceUniqueIndexes, which allows earlier writes to persist if a later
document violates uniqueness; change this to a two-phase commit: first iterate
$documents and for each build the merged Document (use rowToDocument(),
documentKey(), and new Document(\array_merge(...))), call
enforceUniqueIndexes($key, $merged, $uid) for validation only and collect the
merged rows and permission changes into temporary arrays without mutating
$this->data or $this->permissions; if all validations pass, perform a second
loop that writes the prepared rows into $this->data[$key]['documents'][$docKey],
encodes arrays and permissions, and sets _createdAt/_updatedAt using $updates,
and finally update $this->permissions (use Database::PERMISSIONS and
$updates->getPermissionsByType) — this ensures atomic-like behavior for the
batch.
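The two-phase shape requested above can be sketched as follows (Python, hypothetical names): phase one validates every proposed row against the unique constraint without writing, phase two applies the writes, so a violation anywhere in the batch leaves the store untouched.

```python
# Phase 1 validates all updates against the unique attribute without writing;
# phase 2 applies them only if every check passed (atomic-like batch).
def update_documents(store, updates, unique_attr):
    taken = {row[unique_attr] for uid, row in store.items() if uid not in updates}
    staged = {}
    for uid, new_row in updates.items():
        if new_row[unique_attr] in taken:
            raise ValueError("unique violation; nothing was written")
        taken.add(new_row[unique_attr])
        staged[uid] = new_row
    store.update(staged)

store = {"u1": {"handle": "a"}, "u2": {"handle": "b"}}
try:
    update_documents(store, {"u1": {"handle": "c"}, "u2": {"handle": "c"}}, "handle")
except ValueError:
    pass
assert store == {"u1": {"handle": "a"}, "u2": {"handle": "b"}}  # untouched
```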

---

Duplicate comments:
In `@src/Database/Adapter/Memory.php`:
- Around line 1880-1881: The code currently throws DuplicateException when a
user-defined unique-index collision is detected (the branch where $rowSignature
=== $signature); change this to throw UniqueException instead so uniqueness
violations use UniqueException while keeping DuplicateException reserved for
primary _uid duplicate cases (mirror the behavior in MariaDB.php); update the
branch handling $rowSignature === $signature to throw UniqueException and ensure
any callers/tests expecting DuplicateException for non-_uid conflicts are
adjusted accordingly.
- Around line 91-125: rollbackTransaction() currently only pops the latest
snapshot and decrements inTransaction; change it to cancel all active
transactions by restoring the initial pre-transaction snapshot, clearing the
snapshots stack, and resetting inTransaction to 0. Specifically, in the Memory
adapter adjust rollbackTransaction() to: if inTransaction === 0 return false;
take the first snapshot from $this->snapshots (restore its 'data' and
'permissions' into $this->data and $this->permissions), clear $this->snapshots
(or set to empty array), set $this->inTransaction = 0, and return true; keep
startTransaction() and commitTransaction() behavior unchanged except ensure
snapshots management remains consistent with nested transactions created by
startTransaction().
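
The rollback contract this prompt describes (restore the oldest snapshot, clear the stack, reset the nesting counter) can be sketched as follows. The adapter itself is PHP; this Python sketch uses illustrative names, not the adapter's actual fields:

```python
import copy

class MemoryStore:
    """Sketch of a snapshot-stack transaction scheme. Illustrative only."""
    def __init__(self):
        self.data = {}
        self.snapshots = []
        self.in_transaction = 0

    def start_transaction(self) -> bool:
        self.snapshots.append(copy.deepcopy(self.data))
        self.in_transaction += 1
        return True

    def commit_transaction(self) -> bool:
        if self.in_transaction == 0:
            return False
        self.snapshots.pop()
        self.in_transaction -= 1
        return True

    def rollback_transaction(self) -> bool:
        # Cancel ALL active transactions: restore the oldest snapshot,
        # clear the stack, and reset the nesting counter.
        if self.in_transaction == 0:
            return False
        self.data = self.snapshots[0]
        self.snapshots = []
        self.in_transaction = 0
        return True

store = MemoryStore()
store.data["a"] = 1
store.start_transaction()
store.data["a"] = 2
store.start_transaction()
store.data["a"] = 3
store.rollback_transaction()  # unwinds both nested transactions
```

One rollback unwinds the whole nest, matching the behaviour the prompt asks for.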

In `@tests/e2e/Adapter/MemoryTest.php`:
- Around line 427-430: The test is asserting the wrong exception for
user-defined unique-index collisions; replace
expectException(DuplicateException::class) with
expectException(UniqueException::class) for the updateDocuments(...) cases that
trigger unique-index violations (e.g., the call using updateDocuments('handles',
new Document(['handle' => 'taken']), [ Query::equal('$id', ['h2']) ]) and the
similar case later around the second occurrence) so the tests match the intended
contract where only internal _uid duplicate errors throw DuplicateException
while all other unique constraint violations throw UniqueException.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 5072368a-4b6b-4e90-aebe-a5a8c4d7cf37

📥 Commits

Reviewing files that changed from the base of the PR and between 7e02eb5 and d4ba937.

📒 Files selected for processing (2)
  • src/Database/Adapter/Memory.php
  • tests/e2e/Adapter/MemoryTest.php

abnegate and others added 5 commits April 29, 2026 14:05
Track per-database collection ownership so delete($name) only wipes the
collections registered to that schema. Storage keys are prefixed with the
current database name so two databases under the same namespace stay
isolated, mirroring MariaDB's CREATE DATABASE behaviour.

Honour the metadata-collection NULL-tenant fallback under shared tables
(parity with SQL.php's `OR _tenant IS NULL` clause), and persist each
document's own tenant rather than the adapter's current tenant so
tenantPerDocument writes land on the right partition.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Apply Operator instances against the stored row before writing back, so
increment/decrement/multiply/divide/modulo/power, string concat/replace,
toggle, array append/prepend/insert/remove/unique/intersect/diff/filter,
and date add/sub/setNow operators all behave the same as in MariaDB.

Numeric overflow is detected via remaining-headroom comparison rather
than naive addition — preserves int-ness near PHP_INT_MAX so the Range
validator still accepts the result. Skip the upsert+operator scope tests
in MemoryTest because Memory does not implement upserts.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
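The remaining-headroom check described above avoids performing the addition first. A minimal Python sketch; it is purely illustrative, since Python integers never overflow, and `INT_MAX` stands in for `PHP_INT_MAX`:

```python
INT_MAX = 2**63 - 1  # stands in for PHP_INT_MAX on 64-bit builds

def safe_increment(current: int, delta: int, max_value: int = INT_MAX) -> int:
    """Detect positive overflow by comparing remaining headroom instead
    of adding first, so the result stays an exact int near the boundary.
    Only the positive path is shown; a real adapter also guards the
    negative side."""
    if delta > 0 and current > max_value - delta:
        raise OverflowError("increment would exceed max_value")
    return current + delta

overflowed = False
try:
    safe_increment(INT_MAX, 1)
except OverflowError:
    overflowed = True
```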
Implement Query::TYPE_SEARCH / TYPE_NOT_SEARCH / TYPE_REGEX in the
in-memory matcher: tokenize both haystack and needle on whitespace and
punctuation, return true if any needle token is present (matching
MariaDB MATCH AGAINST natural-language semantics) or if a quoted phrase
appears verbatim. Wildcard suffix (`term*`) collapses to a prefix scan
across tokens. Regex queries delegate to preg_match against a PCRE
delimiter-wrapped pattern.

Drop the throw on `INDEX_FULLTEXT` from createIndex — Memory has no
physical index, so the metadata is enough. Flip the corresponding
support flags so the inherited fulltext/regex scope tests run instead
of self-skipping.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
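The tokenised matching described above can be approximated as follows. This is an illustrative Python sketch of the semantics (any-token match, verbatim quoted phrase, `term*` prefix scan), not the adapter's actual PHP implementation:

```python
import re

def tokenize(text: str) -> list[str]:
    # Split on whitespace and punctuation, drop empty fragments.
    return [t for t in re.split(r"[^0-9a-z]+", text.lower()) if t]

def matches_search(haystack: str, needle: str) -> bool:
    # A quoted phrase must appear verbatim.
    if len(needle) >= 2 and needle[0] == '"' and needle[-1] == '"':
        return needle[1:-1].lower() in haystack.lower()
    hay_tokens = set(tokenize(haystack))
    for word in needle.lower().split():
        if word.endswith("*"):
            # Wildcard suffix collapses to a prefix scan across tokens.
            prefix = word.rstrip("*")
            if any(t.startswith(prefix) for t in hay_tokens):
                return True
        elif set(tokenize(word)) & hay_tokens:
            # Any needle token present is a match (natural-language OR).
            return True
    return False
```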
- getSupportForObject() / getSupportForObjectIndexes() now true
- Query, order, and cursor attribute references accept dotted paths
  (foo.bar.baz) and resolve through nested objects on read
- Storage keeps PHP arrays in place; resolveAttributeValue decodes
  legacy JSON-encoded blobs on demand for backwards compatibility
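The dotted-path resolution with an exact-name fallback might be sketched like this; `resolve_attribute` is a hypothetical Python stand-in for the adapter's `resolveAttributeValue`:

```python
def resolve_attribute(row: dict, path: str):
    """Resolve a possibly dotted attribute path (foo.bar.baz) against a
    row whose object attributes are nested dicts. A flattened column
    whose name itself contains dots wins over path traversal."""
    if path in row:  # exact (possibly dotted) column name first
        return row[path]
    value = row
    for part in path.split("."):
        if not isinstance(value, dict) or part not in value:
            return None
        value = value[part]
    return value
```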
Flip getSupportForRelationships() to true and implement createRelationship,
updateRelationship, and deleteRelationship by registering relationship fields
on the in-memory attribute list. Reads now surface registered relationship
keys as null when unpopulated, matching MariaDB's `ADD COLUMN ... DEFAULT
NULL` semantics so the wrapper can populate, traverse, and remove
relationships consistently across drivers.

updateDocument now applies a sparse merge against the existing row instead
of replacing it wholesale, so the wrapper's pattern of removing unchanged
relationship attributes before update no longer clobbers stored FKs.

resolveAttributeValue tries the fully-filtered column name before falling
back to dot-path traversal so relationship keys whose names contain dots
(e.g. `\$symbols_coll.ection3`) match the row's flattened column.

Cascade and metadata orchestration continue to live in the Database
wrapper; the adapter only mutates the underlying row maps.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@coderabbitai (Bot) left a comment

Actionable comments posted: 3

♻️ Duplicate comments (3)
src/Database/Adapter/Memory.php (3)

2583-2584: ⚠️ Potential issue | 🟠 Major

Throw UniqueException for user-defined unique collisions.

Lines 2583-2584 are enforcing a user-defined unique index, not a duplicate _uid. Returning DuplicateException here diverges from the other adapters and also bakes the wrong contract into the new Memory E2E assertions. Based on learnings, only duplicate _uid violations should throw DuplicateException; all other unique constraint violations should throw UniqueException.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Database/Adapter/Memory.php` around lines 2583 - 2584, The code in
Memory.php currently throws DuplicateException when a user-defined unique index
collides (the check comparing $rowSignature === $signature); change this to
throw UniqueException instead so that only duplicate _uid violations use
DuplicateException while all other unique constraint conflicts use
UniqueException; locate the collision handling around the $rowSignature ===
$signature comparison and replace the thrown exception class accordingly
(keeping or adapting the error message as needed).

952-1007: ⚠️ Potential issue | 🟠 Major

Validate the full batch before mutating any rows.

Lines 953-1006 still validate and write one document at a time. If a later document fails enforceUniqueIndexes(), earlier updates remain applied, which diverges from the statement-level behavior of the SQL adapters.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Database/Adapter/Memory.php` around lines 952 - 1007, Current code
applies updates per-document and calls enforceUniqueIndexes immediately, causing
partial commits if a later document fails; change to a two-phase update in the
same method: first iterate $documents to compute per-document $resolvedAttrs via
$this->applyOperators, build the merged Document (using $this->rowToDocument +
$resolvedAttrs + ['$id'=>$uid]) and call $this->enforceUniqueIndexes for every
merged candidate without mutating $this->data (collect these computed rows and
any permission/timestamp changes into a staging array), and only after all
documents pass validation perform a second loop that writes the staged changes
into $this->data (setting filtered attributes, _createdAt/_updatedAt,
_permissions and updating $this->permissions). Ensure you reference the same
symbols ($documents, $attrs, applyOperators, enforceUniqueIndexes, $this->data,
$updates, $this->permissions) so the validation happens fully before any
mutation.

678-699: ⚠️ Potential issue | 🟠 Major

Mirror write-time unique enforcement when backfilling a unique index.

Lines 685-693 hash raw row values from every stored row. enforceUniqueIndexes() normalizes values and scopes shared-table checks per tenant, so createIndex() can miss already-conflicting rows (true vs 1) and can also reject valid duplicates that only exist in other tenants.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/Database/Adapter/Memory.php` around lines 678 - 699, The
duplicate-detection loop in Memory.php currently hashes raw row values, which
can mismatch enforceUniqueIndexes() behavior; update the loop inside
createIndex() (the block using $this->data[$key]['documents'],
$this->mapAttribute, $seen and throwing DuplicateException) to normalize each
attribute value the same way enforceUniqueIndexes() does and to include the
tenant scope in the signature/hash (i.e., compute signature elements by calling
the same normalization helper used by enforceUniqueIndexes() for each mapped
attribute and prepend/append the row's tenant identifier when building the hash)
so comparisons treat equivalent values (true vs "1") consistently and only
detect duplicates within the same tenant.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/Database/Adapter/Memory.php`:
- Around line 2184-2196: The switch branch currently treats
Query::TYPE_CONTAINS, Query::TYPE_CONTAINS_ANY and Query::TYPE_CONTAINS_ALL
identically and returns true on the first matching candidate; change it so that
TYPE_CONTAINS_ANY and TYPE_CONTAINS (if intended as single-value contains) keep
the early-return-on-first-match behavior but Query::TYPE_CONTAINS_ALL uses an
all-match check: when $haystack is null return false, then for TYPE_CONTAINS_ALL
iterate $values and if any candidate is not contained (using
$this->jsonContains($haystack, $this->wrapScalarObjectValue($candidate))) return
false and only return true after the loop; keep existing early-true logic for
TYPE_CONTAINS_ANY (and TYPE_CONTAINS if appropriate) or split the cases into
separate case blocks to implement the differing behaviors.
- Around line 1258-1263: getMaxIndexLength() currently returns 1024 (1 KB) but
the comment and expected cap are 1024 KB (1,048,576 bytes); update the return
value in Memory::getMaxIndexLength() to 1048576 and adjust the inline comment to
state "1024 KB (1,048,576 bytes)" so callers receive the correct byte limit
matching MongoDB's 1024 KB index entry cap.
- Around line 172-182: The exists() method uses $this->key($collection) which
relies on the adapter's current database instead of the $database argument
passed in; update the code so collection existence checks use the provided
$database: modify the key(...) helper to accept an optional $database parameter
(or add a new keyFor(collection, database) helper) and call that from
exists(string $database, ?string $collection) so the lookup uses $database when
checking $this->databases[$database][...]; keep the original behavior when no
$database param is supplied.

---

Duplicate comments:
In `@src/Database/Adapter/Memory.php`:
- Around line 2583-2584: The code in Memory.php currently throws
DuplicateException when a user-defined unique index collides (the check
comparing $rowSignature === $signature); change this to throw UniqueException
instead so that only duplicate _uid violations use DuplicateException while all
other unique constraint conflicts use UniqueException; locate the collision
handling around the $rowSignature === $signature comparison and replace the
thrown exception class accordingly (keeping or adapting the error message as
needed).
- Around line 952-1007: Current code applies updates per-document and calls
enforceUniqueIndexes immediately, causing partial commits if a later document
fails; change to a two-phase update in the same method: first iterate $documents
to compute per-document $resolvedAttrs via $this->applyOperators, build the
merged Document (using $this->rowToDocument + $resolvedAttrs + ['$id'=>$uid])
and call $this->enforceUniqueIndexes for every merged candidate without mutating
$this->data (collect these computed rows and any permission/timestamp changes
into a staging array), and only after all documents pass validation perform a
second loop that writes the staged changes into $this->data (setting filtered
attributes, _createdAt/_updatedAt, _permissions and updating
$this->permissions). Ensure you reference the same symbols ($documents, $attrs,
applyOperators, enforceUniqueIndexes, $this->data, $updates, $this->permissions)
so the validation happens fully before any mutation.
- Around line 678-699: The duplicate-detection loop in Memory.php currently
hashes raw row values, which can mismatch enforceUniqueIndexes() behavior;
update the loop inside createIndex() (the block using
$this->data[$key]['documents'], $this->mapAttribute, $seen and throwing
DuplicateException) to normalize each attribute value the same way
enforceUniqueIndexes() does and to include the tenant scope in the
signature/hash (i.e., compute signature elements by calling the same
normalization helper used by enforceUniqueIndexes() for each mapped attribute
and prepend/append the row's tenant identifier when building the hash) so
comparisons treat equivalent values (true vs "1") consistently and only detect
duplicates within the same tenant.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9a5a2141-8ce5-4fde-9be4-0c790df75293

📥 Commits

Reviewing files that changed from the base of the PR and between d4ba937 and b8e383c.

📒 Files selected for processing (2)
  • src/Database/Adapter/Memory.php
  • tests/e2e/Adapter/MemoryTest.php

Exec-form healthcheck did not shell-expand $$MYSQL_USER and
$$MYSQL_PASSWORD, so mysqladmin received literal '$MYSQL_USER' as the
username. mysqladmin ping reports 'mysqld is alive' on connection
errors via stderr while still exiting non-zero with bad credentials,
making the check unstable; CI flakes (Adapter Tests
SharedTables/MySQL) appeared with all tests failing on 'Connection
refused' before MySQL was actually ready. Switch to CMD-SHELL so the
env vars expand at run time, ping over TCP on the configured port,
and use root/MYSQL_ROOT_PASSWORD which is always set.
- exists($database, $collection) now keys lookup by the $database
  argument and the filtered collection name, so callers can probe a
  database other than the adapter's currently-bound one. Storage map
  is now [database][collectionName] => storageKey.
- updateDocument / updateDocuments / deleteDocument / deleteDocuments
  guard permission cleanup with the current tenant under shared
  tables, so deleting a row in tenant A no longer wipes another
  tenant's permissions for a document that happens to share the same
  $id.
- updateDocuments validates every row (including unique-index checks)
  before any write lands, so a uniqueness violation on row N no
  longer leaves rows 0..N-1 partially committed.
- TYPE_CONTAINS_ALL on object attributes now requires every candidate
  to be present, instead of returning true on the first match (was
  behaving like CONTAINS_ANY).
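The CONTAINS_ALL versus CONTAINS_ANY distinction from the last bullet reduces to an all-match versus any-match check. A hedged sketch of the semantics (not the adapter's code), with null attributes never matching:

```python
def contains_all(haystack, candidates) -> bool:
    """TYPE_CONTAINS_ALL: every candidate must be present."""
    if haystack is None:
        return False
    return all(c in haystack for c in candidates)

def contains_any(haystack, candidates) -> bool:
    """TYPE_CONTAINS / TYPE_CONTAINS_ANY: the first hit is enough."""
    if haystack is None:
        return False
    return any(c in haystack for c in candidates)
```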
PHP's `==` treats `null == 0` and `null == ""` as true, which made
applyOrdering and applyCursor silently equate null with 0/empty
strings. Switch both call sites to `===` and explicitly place NULLs
first under ASC ordering to mirror SQL collation, so cursor
boundaries no longer drop rows whose values happen to coerce to the
cursor reference.
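A strict comparator with explicit null-first placement under ASC, as described above, might look like this sketch (Python for illustration):

```python
from functools import cmp_to_key

def compare_asc(a, b) -> int:
    """Strict three-way comparison that places NULLs first under ASC,
    instead of letting loose equality treat None like 0 or ''."""
    if a is None and b is None:
        return 0
    if a is None:
        return -1  # NULLs sort first when ascending
    if b is None:
        return 1
    return (a > b) - (a < b)

rows = [3, None, 1, 0]
ordered = sorted(rows, key=cmp_to_key(compare_asc))
```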
The two-phase validator only saw each row's pre-update state, so two
documents updated to the same unique value in one call both passed
phase 1 (each saw the other's old value, no conflict) and silently
violated the constraint at phase 2. Phase 1 now also checks every
candidate against the other pending merges in the same batch, so
sibling collisions are caught before any write lands.
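The two-phase batch update, including the sibling-collision check, can be sketched as follows. `batch_update` and `handle_is_unique` are hypothetical stand-ins for `updateDocuments` and `enforceUniqueIndexes`:

```python
def batch_update(store: dict, updates: dict, is_unique_ok) -> None:
    """Two-phase batch write: validate every merged row first, then
    apply all writes, so a failure on row N leaves rows 0..N-1
    untouched."""
    staged = {}
    # Phase 1: merge and validate without mutating the store.
    for doc_id, changes in updates.items():
        merged = {**store.get(doc_id, {}), **changes}
        if not is_unique_ok(doc_id, merged, staged):
            raise ValueError(f"unique violation on {doc_id}")
        staged[doc_id] = merged
    # Phase 2: every row validated; commit the whole batch.
    store.update(staged)

store = {"h1": {"handle": "a"}, "h2": {"handle": "b"}}

def handle_is_unique(doc_id, merged, staged):
    # Check committed rows and this batch's pending merges, so sibling
    # collisions inside one call are caught in phase 1 as well.
    for other_id, row in {**store, **staged}.items():
        if other_id != doc_id and row.get("handle") == merged.get("handle"):
            return False
    return True

failed = False
try:
    batch_update(store, {"h2": {"handle": "a"}}, handle_is_unique)
except ValueError:
    failed = True
```

Because phase 1 raised, `store` is left exactly as it was before the call.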
Closes the algorithmic hot spots flagged in the perf audit. Each
change moves a hot path from O(dataset) or O(N) to O(touched) or
O(1) per write, with the suite runtime dropping ~30% locally
(42s → 29s) and assertion count unchanged at 4623.

- Transactions: replace `unserialize(serialize($data))` snapshots with
  a per-transaction mutation journal. Every mutating entry point
  records its inverse op; `commit` discards the active journal,
  `rollback` walks it in reverse. Cost becomes O(touched rows)
  instead of O(dataset) per startTransaction, and nested transactions
  no longer multiply the copy.
- Permissions: maintain `permissionsByDocument` and
  `permissionsByPermission` inverted indexes alongside the flat list.
  `applyPermissions` now intersects the user's roles against the
  permission map (O(roles × matched-buckets)) instead of walking the
  whole list per query. Per-document deletion in updateDocument /
  updateDocuments / deleteDocument / deleteDocuments is O(|that
  doc's perms|) instead of O(P).
- Unique indexes: maintain `uniqueIndexHashes` keyed by index +
  signature. enforceUniqueIndexes is a hash probe, not a full scan;
  updateDocuments batch path drops the per-row sibling array_filter
  and detects sibling collisions while building a single pending-sig
  map (O(M·K) vs O(M·K·(N+K))).
- Find pipeline: fuse tenantFilter / applyQueries / applyPermissions
  into a single pass that allocates one output array. Ordering and
  cursoring stay separate (they need the full set materialised), but
  the no-order path skips sorting and trusts insertion order.
- Array attributes round-trip natively now — no more JSON
  encode-on-write / decode-per-access. resolveAttributeValue and
  decodeArrayValue collapse to pass-through for already-decoded
  arrays.
- applyOrdering precomputes column mapping outside the usort closure
  and uses a decorate-sort-undecorate transform so the comparator
  only sees small key tuples.
- Memory overrides Adapter::filter() with a per-instance cache so
  the preg_replace inside the find inner loop runs once per unique
  raw input, not once per row × per attribute.
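The journal-based transaction scheme from the first bullet can be sketched like this; class and method names are illustrative, not the adapter's API:

```python
_MISSING = object()  # sentinel: key did not exist before the write

class JournaledStore:
    """Each write records its inverse op; rollback replays the journal
    in reverse, so cost is O(touched rows) rather than a full deep copy
    per transaction."""
    def __init__(self):
        self.data = {}
        self.journal = None  # None = no active transaction

    def start(self):
        self.journal = []

    def set(self, key, value):
        if self.journal is not None:
            # Record the previous value (or absence) before mutating.
            self.journal.append((key, self.data.get(key, _MISSING)))
        self.data[key] = value

    def commit(self):
        self.journal = None  # writes are already in place; just discard

    def rollback(self):
        for key, old in reversed(self.journal or []):
            if old is _MISSING:
                self.data.pop(key, None)
            else:
                self.data[key] = old
        self.journal = None

store = JournaledStore()
store.set("a", 1)
store.start()
store.set("a", 2)
store.set("b", 9)
store.rollback()
```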
The hashed unique-index map keyed signatures by attribute values
only, so two tenants storing the same value in a unique-indexed
column collided and the second write threw DuplicateException.
MariaDB models this as a composite (attr, _tenant) unique index;
mirror that contract by prepending the row's tenant to the
signature when sharedTables is on. createIndex's initial scan
drops its tenant skip — every tenant's rows now contribute their
own per-tenant signature.
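The tenant-prefixed signature might be sketched as follows (Python for illustration; `_tenant` mirrors the shared-tables tenant column):

```python
import json

def unique_signature(row: dict, attributes: list[str], shared_tables: bool) -> str:
    """Prepend the row's tenant under shared tables so the unique index
    behaves like a composite (attrs, _tenant) index."""
    parts = [str(row.get("_tenant"))] if shared_tables else []
    parts += [row.get(a) for a in attributes]
    return json.dumps(parts, default=str)
```

With shared tables on, identical values in different tenants produce distinct signatures, so they no longer collide.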
@coderabbitai (Bot) left a comment

Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/e2e/Adapter/MemoryTest.php`:
- Around line 697-738: The test testUniqueIndexNormalizesBoolAndNumericString
currently inserts two boolean true values only; update it so it actually tests
the boolean vs numeric-string normalization path by creating one document with
'active' => true and another that uses a numeric string (e.g., 'active' => "1")
to trigger the coercion/normalization logic in createDocument for the 'flags'
collection, keeping the expectException(DuplicateException::class) assertion;
locate the test method by its name testUniqueIndexNormalizesBoolAndNumericString
and modify the second createDocument call to use the numeric-string fixture so
the unique-index dedupe path is exercised.
- Around line 452-455: The test currently only expects a DuplicateException from
updateDocuments('handles', new Document(['handle' => 'taken']),
[Query::equal('$id', ['h2'])]) but doesn't assert no partial updates occurred;
change the test to snapshot the original rows (fetch the documents for the
affected ids), then call updateDocuments inside a try block and call
$this->fail() if no exception is thrown, catch DuplicateException, and after
catching reload the same documents and assert their attributes match the
original snapshot (no partial application). Apply the same
try/catch+snapshot/assert pattern to the similar case around lines 501-504.
- Around line 685-694: Test currently creates a document with id 'a' in tenant 7
but then probes tenant 1 for id 'a', so a bug in getSequences() that ignores
probe.'$tenant' will still pass; fix by making the tenant-7 fixture use a
different id (e.g., change the Document passed to createDocument in the
setTenant(7) block to have '$id' => 'a7') so that [$probe] = new Document(['$id'
=> 'a', '$tenant' => 1]) truly requires tenant-aware lookup by getSequences(),
or alternatively switch the probe to a tenant with no matching row before
calling getSequences(); update the code around adapter->setTenant(7),
adapter->createDocument(... new Document(...)) and the $probe definition
accordingly.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9ce7d62f-48a1-4cd7-9f98-4219973cc292

📥 Commits

Reviewing files that changed from the base of the PR and between 1ab7016 and 30ae71f.

📒 Files selected for processing (2)
  • src/Database/Adapter/Memory.php
  • tests/e2e/Adapter/MemoryTest.php

The inherited scope test asserts that a document's $updatedAt after
an immediate update differs from the initial $updatedAt. The Memory
adapter's create+update sequence runs faster than a millisecond, so
both timestamps coincide and the assertion fails. Override the test
to wait 1.1ms between the two writes; semantics for slower adapters
are unchanged.
The class docblock was written when Memory was a CRUD-only stub and
relationships, operators, fulltext, regex, schemas, schemaless, and
object attributes all threw DatabaseException. All of those are now
implemented and exercised by the inherited adapter scope tests; the
docblock claimed otherwise. Only spatial types and vector search
still throw, since those genuinely need a real engine.
- testBatchUpdateEnforcesUniqueIndexes / testBatchUpdateRejectsSiblingCollision
  now wrap the call in try/catch and assert the original attribute
  values are still intact after the failure, so they fail if a partial
  batch lands.
- testGetSequencesUsesDocumentTenant uses different ids per tenant
  and probes the adapter while it is scoped to the *other* tenant, so
  the assertion fails if getSequences ignored the document's $tenant.
- testUniqueIndexNormalizesBoolAndNumericString split into
  testUniqueIndexNormalizesBool (the bool path the original test
  actually exercised) and testUniqueIndexNormalizesNumericString
  (writes through the adapter directly with string "3" then int 3 so
  the numeric-string normalisation path is exercised end-to-end).
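The normalisation these tests exercise (string "3" and int 3, or true and "1", collapsing to one signature) can be sketched as follows; function names are illustrative, not the adapter's:

```python
import json

def normalize_index_value(value):
    """Coerce equivalent scalars to one canonical form so True, 1 and
    "1" (or "3" and 3) hash to the same unique-index signature."""
    if value is None:
        return None
    if isinstance(value, bool):  # check bool before other numerics
        return "1" if value else "0"
    return str(value)

def signature(row: dict, attributes: list) -> str:
    return json.dumps([normalize_index_value(row.get(a)) for a in attributes])

def find_duplicates(rows: list, attributes: list) -> list:
    seen, dups = set(), []
    for row in rows:
        sig = signature(row, attributes)
        if sig in seen:
            dups.append(row)
        seen.add(sig)
    return dups
```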
Comment on lines +2631 to +2638
case Query::TYPE_NOT_EQUAL:
    foreach ($queryValues as $candidate) {
        if ($this->looseEquals($value, $candidate)) {
            return false;
        }
    }

    return true;

P1 TYPE_NOT_EQUAL includes null-valued rows, diverging from SQL semantics

When a document's attribute value is null and the query is notEqual('field', 'value'), looseEquals(null, 'value') returns false (neither === nor both numeric), so the loop completes and the method returns true — the null row is included. In all SQL adapters, WHERE col != 'value' excludes NULL rows (null comparisons evaluate to NULL, not true). This means queries like Query::notEqual('status', 'active') will return null-status documents on the Memory adapter but not on MariaDB/Postgres, causing test results to diverge between adapters.
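A fix matching SQL semantics would short-circuit on null before the candidate loop. A hedged Python sketch of the intended behaviour (the adapter itself is PHP):

```python
def matches_not_equal(value, candidates) -> bool:
    """SQL-style notEqual: a NULL attribute never matches, mirroring
    how WHERE col != 'x' excludes NULL rows on MariaDB/Postgres."""
    if value is None:
        return False  # NULL comparison is unknown, so exclude the row
    return all(value != c for c in candidates)
```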

@abnegate abnegate merged commit 7479a74 into main Apr 29, 2026
32 checks passed
@abnegate abnegate deleted the feat-memory-adapter branch April 29, 2026 06:30