
feat(storage): implement per-user DB isolation and migration tooling#167

Merged
XuPeng-SH merged 13 commits into matrixorigin:main from gouhongshen:feat/per-user-db-isolation
Apr 4, 2026

Conversation

@gouhongshen
Contributor

Summary

  • route Memoria from legacy single-DB storage to shared control-plane DB + per-user mem_u_* databases
  • add the formal memoria migrate legacy-to-multi-db dry-run/execute flow, including a pre-execute MatrixOne safety snapshot and JSON reporting
  • scope snapshot internals by user DB, handle duplicate snapshot creates without surfacing a 500, and align snapshot_e2e with the scoped naming model

Validation

  • make check
  • cargo test -p memoria-service --lib
  • cargo test -p memoria-mcp --lib
  • cargo test -p memoria-api --lib
  • cargo test -p memoria-storage --lib
  • cargo test -p memoria-git --lib
  • cargo test -p memoria-mcp --test snapshot_e2e -- --test-threads=1
  • live local black-box API validation against MatrixOne 3.0.9 in multi-db mode
  • live execute migration drill with old API keys, migrated branch state, snapshot/rollback, and isolated mem_u_* user DBs
  • full immersive drill from scratch: legacy 10m soak (995/995), execute migration, migrated API 13-user 10m soak (5122/5122), all with 0 non-2xx and 0 exceptions

Notes

  • shared DB stays control-plane only (mem_api_keys, registry, locks/tasks/plugins), while user data/branches/snapshots live in dedicated routed DBs
  • duplicate snapshot create now returns an explicit "already exists" message instead of bubbling up as an internal error
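The shared-vs-routed split above hinges on a deterministic mapping from user to mem_u_* database. The sketch below is a hypothetical illustration of that derivation: the PR reportedly hashes with sha2, but std's DefaultHasher (which uses fixed keys, so it is deterministic) stands in here to keep the example dependency-free; the prefix format is an assumption.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: derive a deterministic per-user database name.
// The real code reportedly uses sha2; DefaultHasher is a stand-in.
fn user_db_name(user_id: &str) -> String {
    let mut h = DefaultHasher::new();
    user_id.hash(&mut h);
    format!("mem_u_{:016x}", h.finish())
}

fn main() {
    // Deterministic: the same user always routes to the same database.
    assert_eq!(user_db_name("alice"), user_db_name("alice"));
    // Distinct users land in distinct databases (with overwhelming probability).
    assert_ne!(user_db_name("alice"), user_db_name("bob"));
}
```

Hashing (rather than embedding the raw user id) keeps the database name length-bounded and free of characters that would need quoting in DDL.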

gouhongshen and others added 6 commits April 3, 2026 14:54
Add the per-user database isolation RFC covering shared-vs-user DB split, restore safety, snapshot/branch handling, migration, and metadata reconciliation.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 3, 2026 14:36

Copilot AI left a comment


Pull request overview

This PR migrates Memoria from a legacy single shared database to a shared control-plane DB plus per-user mem_u_* databases, adding routing, migration tooling, and updating snapshot/branch behavior to be per-database scoped.

Changes:

  • Add DbRouter and split storage migrations into migrate_user() and migrate_shared() to support multi-DB routing.
  • Introduce an offline CLI migration flow (memoria migrate legacy-to-multi-db) with dry-run/execute and JSON reporting.
  • Scope snapshot operations to user databases and update MCP/API/service layers to use routed per-user stores.

Reviewed changes

Copilot reviewed 27 out of 28 changed files in this pull request and generated 8 comments.

Show a summary per file

| File | Description |
| --- | --- |
| memoria/crates/memoria-storage/src/store.rs | Add optional DbRouter, split migrations, route edit-log/access patterns, scope safety snapshots by DB |
| memoria/crates/memoria-storage/src/router.rs | New shared→user DB router with registry + lazy per-user DB provisioning |
| memoria/crates/memoria-storage/src/migration.rs | New legacy→multi-db migration planner/executor + report + identifier bounding |
| memoria/crates/memoria-storage/src/lib.rs | Export router/migration modules and types |
| memoria/crates/memoria-storage/Cargo.toml | Add sha2 dependency for user DB hashing |
| memoria/crates/memoria-service/tests/plugin_repository.rs | Update test config with shared DB fields |
| memoria/crates/memoria-service/src/service.rs | Route per-user operations via DbRouter, adjust batching/caches, disable rebuild worker in multi-db |
| memoria/crates/memoria-service/src/scheduler.rs | Update test config with shared DB fields |
| memoria/crates/memoria-service/src/pipeline.rs | Persist phase now resolves per-user SQL store in multi-db mode |
| memoria/crates/memoria-service/src/governance.rs | Route governance operations across per-user DBs when router is enabled |
| memoria/crates/memoria-service/src/config.rs | Add shared_db_url and multi_db toggle + effective SQL URL helper |
| memoria/crates/memoria-mcp/tests/snapshot_e2e.rs | Align snapshot internal naming with scoped model + duplicate-create behavior |
| memoria/crates/memoria-mcp/tests/branch_e2e.rs | Add merge/replace regression test with mock embedder |
| memoria/crates/memoria-mcp/src/tools.rs | Route MCP tools to per-user SQL stores; adjust snapshot health + branch-aware queries |
| memoria/crates/memoria-mcp/src/git_tools.rs | Scope snapshot naming by DB, route git ops per user store, improve merge replace behavior |
| memoria/crates/memoria-git/src/service.rs | Create snapshots FOR DATABASE and filter SHOW SNAPSHOTS by database |
| memoria/crates/memoria-cli/src/main.rs | Add migrate command and wire multi-db startup (router + shared store) |
| memoria/crates/memoria-api/src/state.rs | Route batcher flush/rebuild APIs via MemoryService for multi-db |
| memoria/crates/memoria-api/src/routes/snapshots.rs | Resolve per-user git/sql context for snapshot reads/diffs |
| memoria/crates/memoria-api/src/routes/sessions.rs | Make session summary query branch-aware via active table |
| memoria/crates/memoria-api/src/routes/metrics.rs | Aggregate metrics across per-user DBs + use shared pool for shared tables |
| memoria/crates/memoria-api/src/routes/memory.rs | Route profile/history/retrieval params via per-user SQL store + branch-aware stats query |
| memoria/crates/memoria-api/src/routes/governance.rs | Route governance endpoints via per-user SQL store and scope cooldown keys |
| memoria/crates/memoria-api/src/routes/auth.rs | Use shared auth pool for API key operations (no per-user store assumption) |
| memoria/crates/memoria-api/src/routes/admin.rs | Aggregate admin stats across per-user stores and use shared pool where appropriate |
| memoria/crates/memoria-api/src/auth.rs | Route tool-usage and call-log batch flush/rebuild via per-user stores in multi-db |
| memoria/Cargo.lock | Lockfile update for new dependency |
| docs/per-user-database-architecture.md | New architecture RFC/design doc for per-user DB isolation |


Comment threads:

  • memoria/crates/memoria-mcp/src/git_tools.rs
  • memoria/crates/memoria-storage/src/router.rs (outdated)
  • memoria/crates/memoria-storage/src/router.rs (outdated)
  • memoria/crates/memoria-storage/src/store.rs (outdated)
  • memoria/crates/memoria-storage/src/store.rs
  • memoria/crates/memoria-storage/src/store.rs
  • memoria/crates/memoria-api/src/auth.rs (outdated)
  • memoria/crates/memoria-cli/src/main.rs (outdated)
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@gouhongshen gouhongshen changed the title Implement per-user DB isolation and migration tooling feat(storage): implement per-user DB isolation and migration tooling Apr 3, 2026
@gouhongshen
Contributor Author

🔍 Code Review Report — PR #167

Implement per-user DB isolation and migration tooling
28 files, +4600/−848 lines, 6 commits (3 patches), Rust
Reviewed by 7-agent jury system (v1.5) · 5/6 Round 1 agents completed

📊 Overview

| Metric | Value |
| --- | --- |
| Files reviewed | 28 |
| Total findings | 15 |
| 🔴 Must Fix | 6 |
| 🟡 Should Fix | 7 |
| 🟢 Nit | 2 |
| 🧨 Destructive-testing verdict | Severely insufficient |

📝 Summary

The PR's design direction is correct — upgrading from single-DB row-level isolation to per-user DB isolation fixes at the root the P0 bug where memory_rollback issued a full-table DELETE. The RFC document is complete, the migration tool has a dry-run mode, and the snapshot safety net is well considered.

However, the implementation shows a classic "implement first, patch later" pattern: 3 of the 6 commits are patches (patching began 12 minutes after the main commit). The core problems cluster in three areas: (1) connection-pool explosion — each user gets its own MySqlPool, so a few hundred users exhaust the database's connection limit; (2) security surface — DDL identifier interpolation lacks a unified quote_ident() guard, and database credentials are written to logs in plaintext; (3) systematic test blind spots — the two largest new files (router.rs at 269 lines and migration.rs at 1137 lines) have almost zero tests, all E2E tests still run in single-DB mode, and the core multi-DB paths have zero real coverage.

🧨 Destructive-Testing Verdict

  • Verdict: 🔴 severely insufficient
  • Covered: single-DB snapshot CRUD, branch lifecycle, replace-merge happy path, duplicate snapshot name rejection (partial coverage)
  • Missing:
    • ❌ DbRouter isolation correctness (user A's data must not land in user B's database)
    • ❌ concurrent user provisioning (CREATE DATABASE races)
    • ❌ migration partial failure / rollback / idempotency
    • ❌ multi-DB mode E2E (all tests still run in legacy mode)
    • ❌ connection-pool validity after cache eviction
    • ❌ router degradation behavior (fallback or reject when the router is unavailable?)
    • ❌ snapshot-creation TOCTOU concurrency races
    • ❌ routing behavior for empty/malicious user_id
  • Rationale: this PR introduces a brand-new data-isolation layer and a data-migration tool (1400+ lines of core code combined), yet the tests still target the old architecture — the equivalent of swapping the engine but only doing one lap on the old runway. Of the 3 bug-fix commits, only the duplicate-snapshot fix has partial regression coverage, and there is no concurrency regression at all.

🔴 Must Fix

1. Connection-pool explosion + unbounded cache — router.rs:29-31,164-176

  • Category: performance / availability
  • Found by: 闪电, 老K, 补丁犬 (consensus)
  • Description: user_store_cache is an Arc<Mutex<HashMap<String, Arc<SqlMemoryStore>>>> with no TTL, no capacity limit, and no eviction. Each user gets its own MySqlPool. The math: 1000 users × max_connections=10 = 10,000 idle connections; MySQL defaults to max_connections=151, so a few hundred users render the service unavailable. The global Mutex also serializes the hot path (every API request), and connect() + migrate_user() (hundreds of milliseconds) may run while the lock is held. There is a thundering herd: concurrent cache misses create duplicate connection pools and re-run DDL.
  • Fix: replace user_store_cache with moka::future::Cache (moka is already a project dependency), configured with max_capacity(10_000) + time_to_idle(600s) + an eviction callback that calls pool.close(). Use try_get_with() for coalesced provisioning to eliminate the thundering herd. Longer term, consider a shared connection pool with qualified table names instead of per-user pools.

2. Credentials logged in plaintext — main.rs:403,554

  • Category: security
  • Found by: 铁壁
  • Description: cmd_serve and cmd_mcp log the full database URL (including the password) at INFO level: tracing::info!(db_url = %cfg.db_url, shared_db_url = %cfg.shared_db_url, ...). The default URL mysql://root:111@... contains a plaintext password. Anyone with read access to the log-aggregation pipeline obtains the database credentials.
  • Fix: implement a redact_url() function that replaces the username/password in the URL with *** before logging.
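The suggested redaction can be sketched with plain string handling; the function name redact_url comes from the review, but this body is an illustrative assumption, not the PR's actual implementation:

```rust
// Hypothetical sketch: mask the userinfo section of a database URL before
// logging, leaving scheme, host, port, and database intact.
fn redact_url(url: &str) -> String {
    match (url.find("://"), url.rfind('@')) {
        // Only rewrite when a userinfo section exists between scheme and host.
        (Some(scheme_end), Some(at)) if at > scheme_end + 3 => {
            format!("{}***:***{}", &url[..scheme_end + 3], &url[at..])
        }
        // No credentials present: log the URL unchanged.
        _ => url.to_string(),
    }
}

fn main() {
    assert_eq!(
        redact_url("mysql://root:111@localhost:6001/memoria"),
        "mysql://***:***@localhost:6001/memoria"
    );
    assert_eq!(redact_url("mysql://localhost:6001/db"), "mysql://localhost:6001/db");
}
```

Using rfind('@') keeps the split correct even when the password itself contains an '@'.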

3. DDL-identifier SQL-injection surface — router.rs:247 + git_tools.rs:~830 + multiple places in snapshots.rs

  • Category: security
  • Found by: 铁壁, 老K (consensus)
  • Description: create_database_if_missing() runs format!("CREATE DATABASE IF NOT EXISTS {db_name}") directly, with no backticks and no whitelist validation. In git_tools.rs, collect_replace_candidates interpolates the table name via format!("...JOIN {table_name}..."). snapshots.rs uses backticks but does not escape inner backticks. Note that migration.rs already uses quote_ident() correctly, but the other files do not reuse it.
  • Fix: promote migration.rs's quote_ident() to a crate-level public function and call it for every format!-interpolated SQL identifier. Additionally whitelist db_name against [a-zA-Z0-9_].
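The two guards the review asks for are small; this hedged sketch shows standalone versions (migration.rs reportedly already has a quote_ident(), so these bodies are illustrative, not the PR's code):

```rust
// Backtick-quote a MySQL identifier, doubling any embedded backticks so an
// attacker-controlled name cannot terminate the quoted span.
fn quote_ident(ident: &str) -> String {
    format!("`{}`", ident.replace('`', "``"))
}

// Whitelist check: non-empty, only [A-Za-z0-9_].
fn validate_identifier(ident: &str) -> bool {
    !ident.is_empty() && ident.chars().all(|c| c.is_ascii_alphanumeric() || c == '_')
}

fn main() {
    assert_eq!(quote_ident("mem_u_ab12"), "`mem_u_ab12`");
    // The embedded backtick is doubled and can no longer break out.
    assert_eq!(quote_ident("a`b"), "`a``b`");
    assert!(validate_identifier("mem_u_ab12"));
    assert!(!validate_identifier("bad-name"));
    // Every interpolated identifier goes through the guard:
    let _sql = format!("CREATE DATABASE IF NOT EXISTS {}", quote_ident("mem_u_ab12"));
}
```

Quoting and whitelisting are complementary: the whitelist rejects suspicious names outright, while quoting makes even an accepted name inert inside DDL.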

4. Zero tests on the multi-DB core paths — router.rs + migration.rs + all *_e2e.rs

  • Category: testing
  • Found by: 测姐, 老K, 补丁犬 (consensus)
  • Description: DbRouter (269 lines, the sole line of defense for user isolation) has 0 unit tests. Migration (1137 lines, operating on production data) has only 5 trivial pure-function tests. Every E2E test's setup() passes a None router, so the multi-DB routing branches never execute. The 18 user_sql_store() routing branches in service.rs and the 12 routed_user_stores() branches in governance.rs have zero coverage.
  • Fix: (1) add unit tests for router.rs: user_db_name_for_id determinism, isolation correctness, concurrent provisioning, creation-failure handling. (2) add integration tests for migration.rs: dry-run/execute row-count consistency, partial-failure reporting, idempotency. (3) add a setup_multi_db() helper and write at least one multi-DB E2E test.

5. Patch chain: provision_user_db concurrency race — router.rs:150-185

  • Category: bug / code health
  • Found by: 补丁犬, 铁壁
  • Description: the main commit used a bare INSERT; 12 minutes later a patch added match Err(Database(e)) if e.message().contains("Duplicate") || e.message().contains("1062"). Matching on error-message substrings is a fragile workaround — MySQL-compatible engines (MatrixOne/TiDB/MySQL) do not guarantee a consistent message format. sqlx::Error::Database exposes a code() method that yields the numeric error code, which is more reliable than message().contains().
  • Fix: use INSERT ... ON DUPLICATE KEY UPDATE or a SELECT ... FOR UPDATE transaction lock to remove the dependency on message-substring matching, combined with moka::Cache::try_get_with() at the cache layer to coalesce concurrent requests.
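The point about numeric codes can be illustrated without a live database: MySQL-family engines all report duplicate keys as error 1062 (ER_DUP_ENTRY) even though their message texts differ. The helper below is a stand-in taking a bare code rather than sqlx's error type:

```rust
// MySQL ER_DUP_ENTRY: stable across MySQL-compatible engines, unlike the
// human-readable message text.
const ER_DUP_ENTRY: u32 = 1062;

// Stand-in for the review's suggested check; the real code would read the
// code off the sqlx database error rather than take a bare Option<u32>.
fn is_duplicate_key_error(code: Option<u32>) -> bool {
    code == Some(ER_DUP_ENTRY)
}

fn main() {
    assert!(is_duplicate_key_error(Some(1062)));
    assert!(!is_duplicate_key_error(Some(1213))); // deadlock, not a dup key
    assert!(!is_duplicate_key_error(None));       // non-database error
}
```

The follow-up commits reportedly adopted exactly this shape via MySqlDatabaseError.number() == 1062.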

6. Snapshot-creation TOCTOU race — git_tools.rs:368-400

  • Category: bug
  • Found by: 补丁犬, 测姐
  • Description: commit 6 added a double check via get_snapshot_registration + get_snapshot_registration_by_internal before calling create_snapshot, but there is no lock between the check and the create, so two concurrent requests can both pass the check and create duplicate MO snapshots. Of the 3 bug-fix commits, this fix has no concurrency regression test.
  • Fix: register_snapshot should rely on a database unique constraint plus INSERT ... ON DUPLICATE KEY for atomic, idempotent registration. Remove the check-then-create pattern and let the database arbitrate concurrency.

🟡 Should Fix

7. DbRouter held in two places — router.rs:31 + service.rs:372

  • Category: architecture
  • Found by: 老K
  • Description: DbRouter lives in both SqlMemoryStore.db_router and MemoryService.db_router. set_db_router(&mut self) injects it after construction, so cloned store instances are not guaranteed to carry the router. 25+ if let Some(router) branches are scattered across the governance/store/auth/service layers.
  • Fix: keep the router only at the MemoryService layer. Longer term, introduce a RoutedMemoryStore decorator or an enum StoreDispatch { Single, MultiDb } to solve routing at the trait level.

8. /metrics + admin N+1 query storm — metrics.rs:80-130 + admin.rs:103-128

  • Category: performance
  • Found by: 闪电, 补丁犬 (consensus)
  • Description: every /metrics request iterates over all users at 6 queries per user. 1000 users = 6000 queries per scrape → 12s latency → Prometheus scrape timeout → monitoring goes blind. system_stats has the same N+1. list_users SELECTs everything and truncates in memory.
  • Fix: use cross-database aggregate queries or a background worker that computes the numbers periodically. Change list_users to SQL LIMIT/OFFSET pagination.

9. Governance scheduled-task N+1 traversal — governance.rs:170-200

  • Category: performance
  • Found by: 闪电
  • Description: ~10 governance methods each independently call routed_user_stores(). 1000 users × 10 methods × 2 queries = 20,000 queries per cycle.
  • Fix: fetch all user stores once at the governance-cycle entry point and pass them down to the sub-methods.

10. Migration runs unpaginated INSERT...SELECT on large tables — migration.rs:666-732

  • Category: performance / reliability
  • Found by: 闪电
  • Description: copy_user_scoped_table runs a single INSERT INTO ... SELECT * FROM ... WHERE user_id = ?. 100k memories × VECF32(1536) (6KB/row) = 600MB of transient memory. MatrixOne may materialize the entire result set → OOM / transaction timeout.
  • Fix: migrate in batches (LIMIT 5000 OFFSET N), inserting batch by batch.
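The batching suggested above amounts to splitting a copy of N rows into bounded (offset, limit) pairs; a minimal sketch of that planning step (the function name is illustrative):

```rust
// Split a copy of `total` rows into (offset, limit) pairs for repeated
// bounded `INSERT ... SELECT ... LIMIT ? OFFSET ?` statements, instead of
// one unbounded INSERT...SELECT.
fn batch_ranges(total: u64, batch: u64) -> Vec<(u64, u64)> {
    (0..total)
        .step_by(batch as usize)
        .map(|offset| (offset, batch.min(total - offset)))
        .collect()
}

fn main() {
    // 100k rows in 5000-row batches -> 20 bounded statements.
    assert_eq!(batch_ranges(100_000, 5_000).len(), 20);
    // The tail batch shrinks to the remaining rows.
    assert_eq!(
        batch_ranges(12_001, 5_000),
        vec![(0, 5_000), (5_000, 5_000), (10_000, 2_001)]
    );
}
```

OFFSET pagination is fragile on live data, but this migration is offline, so a fixed ORDER BY plus OFFSET is acceptable; keyset pagination would be the online-safe alternative.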

11. Duplicate definitions: list_known_users × 3, init_storage × 2, replace_db_name × 2 — across files

  • Category: code health
  • Found by: 补丁犬, 老K
  • Description: list_known_users() is defined once each in admin.rs and metrics.rs, with a third copy in governance.rs. cmd_serve/cmd_mcp share ~25 copy-pasted lines of multi-db initialization. replace_db_name exists in both config.rs and router.rs, and parse_db_name in both router.rs and migration.rs.
  • Fix: extract shared functions: list_known_users → a MemoryService/DbRouter method; init → an async fn init_storage(cfg) factory; URL helpers → a crate-level utility.

12. Error message leaks internal db_name — router.rs:213-220

  • Category: security
  • Found by: 铁壁
  • Description: MemoriaError::Internal(format!("user db registration collision for {user_id} -> {db_name}")), if surfaced through an API 500, leaks the database naming scheme and the concrete db_name.
  • Fix: keep the details in internal logs and return a generic message such as "database provisioning failed" to the user.
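A minimal sketch of the recommended split, with full context kept for operators and a generic message for callers; the function and its return shape are illustrative, not the PR's actual types:

```rust
// Hypothetical helper: build both the internal log line and the public
// error message for a provisioning failure.
fn provisioning_error(user_id: &str, db_name: &str) -> (String, &'static str) {
    // Internal log line keeps full context for operators...
    let internal = format!("user db registration collision for {user_id} -> {db_name}");
    // ...while the API response leaks neither the naming scheme nor db_name.
    (internal, "database provisioning failed")
}

fn main() {
    let (log_line, public) = provisioning_error("alice", "mem_u_ab12");
    assert!(log_line.contains("mem_u_ab12"));
    assert!(!public.contains("mem_u"));
    assert_eq!(public, "database provisioning failed");
}
```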

13. migrate_shared() creating mem_user_registry oversteps — store.rs:421

  • Category: architecture
  • Found by: 老K
  • Description: SqlMemoryStore::migrate_shared() creates the mem_user_registry table, but that table is private infrastructure of DbRouter.ensure_user_registry_table(). The store should not know the registry table exists. The two concurrent CREATE TABLE IF NOT EXISTS calls won't error, but this is a responsibility leak.
  • Fix: remove the mem_user_registry DDL from migrate_shared().

🟢 Nit

14. provision creates a throwaway connection pool — router.rs:246-260

  • Category: performance
  • Found by: 闪电
  • Description: create_database_if_missing creates a temporary MySqlPool(max_connections=1) → connect → execute → drop on every call. The shared_pool could run CREATE DATABASE instead.

15. 12 leftover if self.sql_store.is_some() guards in service.rs — multiple places

  • Category: code health
  • Found by: 补丁犬
  • Description: user_sql_store() already handles the fallback internally. The outer is_some() checks inspect the shared store, while user_sql_store returns the per-user store. These guards are mechanical leftovers from the old if let Some(sql) = &self.sql_store code and can simply be removed.

💬 Jury Consensus Record (votes from 老K, 铁壁, 闪电, 测姐, 补丁犬)

  • Connection-pool explosion: 🔴 🔴 🟡 🔴 → Must Fix (consensus)
  • DDL SQL injection: 🔴 🔴 🟢 🔴 → Must Fix (consensus)
  • Credential log leak: 🔴 🔴 → Must Fix
  • Zero multi-DB tests: 🟢 🔴 🔴 🔴 → Must Fix (consensus)
  • N+1 queries: 🟡 🟡 🟡 → Should Fix (consensus)
  • Router held twice: 🔴 🟡 → downgraded (design improvement, does not block the feature)
  • Patch chain: 🔴 🔴 → escalated to Must Fix (concurrency safety)

🤖 Generated by 7-agent jury code review system v1.5 · Agents: 老K (architecture), 铁壁 (security), 闪电 (performance), 测姐 (testing), 补丁犬 (code health) · 定锤 (judge) synthesis by orchestrator

gouhongshen and others added 2 commits April 3, 2026 23:29
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@gouhongshen
Contributor Author

🔄 Re-review Report — PR #167

Incremental review of 3 new commits: fd8fae3, 7187320, 212483b
Total incremental delta: +1207/−165 across 17 files

🔴 Fix Status

1. Connection-pool explosion + unbounded cache — ✅ Fixed. user_store_cache went from Mutex<HashMap> to moka::Cache (max=128, idle=600s); user_db_cache uses try_get_with_by_ref() coalesced provisioning to eliminate the thundering herd; new connect_routed() caps per-user pools at 4 connections by default (configurable via MEMORIA_ROUTED_DB_MAX_CONNECTIONS, upper bound 64).
2. Credentials logged in plaintext — ✅ Fixed. New redact_url() function; both cmd_serve and cmd_mcp now log redacted_db_url / redacted_shared_db_url.
3. DDL-identifier SQL-injection surface — ✅ Fixed. router.rs adds quote_ident(); CREATE DATABASE IF NOT EXISTS now uses quote_ident(db_name); connect_with_pool_limit in store.rs also uses quote_ident(); git_tools.rs adds validate_identifier() whitelist checks.
4. Zero tests on multi-DB core paths — ⚠️ Greatly improved. New router_multi_db.rs (113 lines) verifies users are isolated into different databases and data does not cross over; new test_multi_db_snapshot_rollback_isolates_users verifies rollback does not affect other users; new test_concurrent_duplicate_snapshot_name_is_coalesced covers concurrent snapshot creation. migration.rs integration tests are still missing.
5. Provision concurrency race (string matching) — ✅ Fixed. New is_duplicate_key_error() uses MySqlDatabaseError.number() == 1062 instead of message().contains("Duplicate"); user_store() switched to try_get_with_by_ref() to coalesce concurrent requests.
6. Snapshot-creation TOCTOU race — ✅ Fixed. The concurrent snapshot test test_concurrent_duplicate_snapshot_name_is_coalesced synchronizes 2 concurrent tasks with a Barrier and verifies that only 1 snapshot + 1 registration are created.

📊 Improvement Summary

| Dimension | Before | After |
| --- | --- | --- |
| Connection-pool model | unbounded HashMap + Mutex, 64 connections per user by default | moka Cache (max=128, idle=600s), 4 connections per user |
| DDL safety | raw format! interpolation | quote_ident() + validate_identifier() |
| Credential protection | plaintext INFO logs | masked by redact_url() |
| Concurrency safety | error-message string matching | error.number() == 1062 + moka coalescing |
| Multi-DB tests | 0 | 3 (isolation verification + concurrent snapshot + multi-db rollback) |

🟡 Still Needs Attention

1. migration.rs still has no integration tests

1137 lines of migration code (including DROP DATABASE) still carry only 5 pure-function unit tests. At minimum, add checks for dry-run/execute row-count consistency and partial-failure report completeness.

2. quote_ident / split_database_url still exist in multiple independent copies

  • router.rs defines quote_ident() + split_database_url() + parse_db_name()
  • migration.rs defines quote_ident() + split_database_url() + parse_db_name()
  • store.rs also defines quote_ident() + split_database_url() + parse_db_name_from_url()
  • main.rs also defines parse_db_name() + redact_url()

Four files each carry a near-identical copy of the URL-parsing and identifier-quoting logic. Extract it into a shared utility module.

3. user_store_cache eviction does not explicitly close the connection pool

When the moka Cache evicts an Arc<SqlMemoryStore>, closing the MySqlPool depends on the Arc refcount dropping to 0. If anything else still holds a reference, the idle pool never actually closes. Add an eviction_listener that calls pool.close().

4. The N+1 query problems (metrics/admin/governance) were not addressed in this round

Queries are still issued serially per user. This is 🟡 severity and can be optimized in a later iteration.

✅ Overall Verdict

Of the 6 🔴 Must Fix items, 5 are fully fixed and 1 is greatly improved (test coverage went from 0 to 3 multi-db tests, but migration coverage is still missing). Overall PR quality has improved significantly, and the core blocking issues (connection-pool explosion, credential leakage, SQL injection, concurrency races) are resolved. Recommend following up after merge with migration integration tests and utility-function deduplication.


🤖 Incremental re-review by 7-agent jury system v1.5

gouhongshen and others added 4 commits April 4, 2026 12:10
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
@gouhongshen
Contributor Author

@mergify requeue

@XuPeng-SH XuPeng-SH merged commit ac1470a into matrixorigin:main Apr 4, 2026
4 of 5 checks passed
@mergify

mergify Bot commented Apr 4, 2026

Merge Queue Status

  • 🟠 Waiting for queue conditions
  • ⏳ Enter queue
  • ⏳ Run checks
  • ⏳ Merge
Required conditions to enter a queue
  • -closed [📌 queue requirement]
  • -conflict [📌 queue requirement]
  • -draft [📌 queue requirement]
  • any of [📌 queue -> configuration change requirements]:
    • -mergify-configuration-changed
    • check-success = Configuration changed
  • any of [🔀 queue conditions]:
    • all of [📌 queue conditions of queue rule main]:
      • #approved-reviews-by >= 1 [🛡 GitHub branch protection]
      • #changes-requested-reviews-by = 0 [🛡 GitHub branch protection]
      • #review-threads-unresolved = 0 [🛡 GitHub branch protection]
      • branch-protection-review-decision = APPROVED [🛡 GitHub branch protection]
      • any of [🛡 GitHub branch protection]:
        • check-success = Check PR title
        • check-neutral = Check PR title
        • check-skipped = Check PR title
      • any of [🛡 GitHub branch protection]:
        • check-success = Check & Clippy
        • check-neutral = Check & Clippy
        • check-skipped = Check & Clippy
      • any of [🛡 GitHub branch protection]:
        • check-success = DB Tests
        • check-neutral = DB Tests
        • check-skipped = DB Tests
      • any of [🛡 GitHub branch protection]:
        • check-success = Unit Tests
        • check-neutral = Unit Tests
        • check-skipped = Unit Tests

mergify Bot pushed a commit that referenced this pull request Apr 6, 2026
## Summary
- rebase the branch onto current `upstream/main`, keeping only the follow-up fixes that are not already included by `#167` and `#169`
- stabilize the per-user DB rollout by finishing the routed-store / DB-switching follow-ups and fixing multi-db stats routing
- remove access-count ranking bias from retrieval scoring, refresh the architecture guide, add the cutover runbook, and expose MCP trust-tier guidance with strict validation

## Validation
- `cargo test -p memoria-mcp --test tools_unit test_tools_list -- --nocapture`
- DB-backed tests still require a live MatrixOne/MySQL instance via `DATABASE_URL` (default: `mysql://root:111@localhost:6001/...`); current local environment has no listener on `6001`


Approved by: @XuPeng-SH