feat: migrate driver-sql from objectql, implement standard SQL Driver plugin#951
Conversation
…mpliance

- Create packages/plugins/driver-sql with standard plugin structure
- Implement SqlDriver implementing DriverInterface from @objectstack/core
- Import types from @objectstack/spec/data (QueryInput, DriverOptions)
- Remove all @objectql/types and legacy Driver interface dependencies
- Support PostgreSQL, MySQL, SQLite (better-sqlite3) via Knex.js
- Include schema sync, introspection, aggregation, window functions
- Full CRUD with QueryAST and legacy filter format support
- 72 unit tests passing against in-memory SQLite

Co-authored-by: hotlong <50353452+hotlong@users.noreply.github.com>
Agent-Logs-Url: https://github.com/objectstack-ai/spec/sessions/74f3043d-7ad9-48fb-9380-5f2e25d083a8
Pull request overview
Migrates the Knex-based SQL driver into this repo as a first-class plugin (@objectstack/driver-sql) and wires it into the monorepo workspace, aiming to align with the current ObjectStack driver protocol and provide SQLite-backed test coverage.
Changes:
- Added new packages/plugins/driver-sql/ plugin package with SqlDriver implementation and exports.
- Added comprehensive Vitest suites for CRUD, schema sync, introspection, QueryAST compatibility, and advanced operations (SQLite via better-sqlite3).
- Updated workspace/build config and changelog entries to include the new driver and native dependency handling.
Reviewed changes
Copilot reviewed 12 out of 13 changed files in this pull request and generated 10 comments.
| File | Description |
|---|---|
| pnpm-workspace.yaml | Allows building better-sqlite3 as an approved native dependency. |
| pnpm-lock.yaml | Adds dependency graph/lock updates for the new driver and its deps. |
| packages/plugins/driver-sql/tsconfig.json | Adds per-package TS config consistent with other plugins. |
| packages/plugins/driver-sql/src/sql-driver.ts | Implements SqlDriver (Knex-based) including CRUD, filtering, schema sync, introspection, and advanced helpers. |
| packages/plugins/driver-sql/src/index.ts | Exports SqlDriver and registers the driver via plugin onEnable. |
| packages/plugins/driver-sql/package.json | Declares new workspace package, deps, build/test scripts. |
| packages/plugins/driver-sql/src/sql-driver.test.ts | Basic CRUD + count + _id mapping tests. |
| packages/plugins/driver-sql/src/sql-driver-schema.test.ts | Schema sync/table creation/migration behavior tests. |
| packages/plugins/driver-sql/src/sql-driver-queryast.test.ts | QueryAST + legacy query format compatibility tests. |
| packages/plugins/driver-sql/src/sql-driver-introspection.test.ts | SQLite introspection tests (tables/columns/PK/FK/nullable). |
| packages/plugins/driver-sql/src/sql-driver-advanced.test.ts | Aggregations, bulk ops, transactions, distinct, window fns, analyzeQuery tests. |
| package.json | Updates pnpm config (adds ignoredBuiltDependencies). |
| CHANGELOG.md | Documents addition of @objectstack/driver-sql plugin. |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```ts
const result = await builder.insert(formatted).returning('*');
return this.formatOutput(object, result[0]);
```
create() uses .returning('*') unconditionally. Knex does not support returning() on MySQL/MariaDB (and some other dialects), so this will throw at runtime despite the driver advertising MySQL support. Consider only using returning() for dialects that support it (e.g. Postgres/SQLite) and for MySQL fall back to inserting then selecting by the known id (you already generate/set id before insert).
Suggested change:

```ts
// Knex only supports `.returning()` on certain dialects (e.g. PostgreSQL, SQLite).
// For others (e.g. MySQL/MariaDB), perform an insert and then select by the known `id`.
const clientName = (this.knex.client as any).config?.client;
const supportsReturning =
  this.isSqlite ||
  clientName === 'pg' ||
  clientName === 'postgres' ||
  clientName === 'postgresql';
if (supportsReturning) {
  const result = await builder.insert(formatted).returning('*');
  return this.formatOutput(object, result[0]);
}
await builder.insert(formatted);
const created = await this.getBuilder(object, options).where('id', toInsert.id).first();
return this.formatOutput(object, created) || null;
```
```ts
async bulkCreate(object: string, data: any[], options?: DriverOptions): Promise<any> {
  const builder = this.getBuilder(object, options);
  return await builder.insert(data).returning('*');
```
bulkCreate() also calls .returning('*') unconditionally, which will fail on MySQL/MariaDB. If bulk returning isn’t supported, consider generating/collecting IDs before insert and then fetching the inserted rows with a whereIn('id', ids) query (or return an acknowledgement instead, but keep it consistent across dialects).
Suggested change:

```ts
const clientName = ((this.knex as any)?.client?.config?.client || '').toString().toLowerCase();
const supportsReturning =
  clientName.includes('pg') ||
  clientName.includes('postgres') ||
  clientName.includes('oracledb') ||
  clientName.includes('mssql') ||
  clientName.includes('sqlite');
if (supportsReturning) {
  return await builder.insert(data).returning('*');
}
// Dialects like MySQL/MariaDB do not support `.returning('*')`.
// Perform the insert first, then attempt to fetch inserted rows by ID if available.
await builder.insert(data);
const ids: Array<string | number> = [];
for (const row of data) {
  if (!row) continue;
  if (row.id !== undefined && row.id !== null) {
    ids.push(row.id);
  } else if (row._id !== undefined && row._id !== null) {
    ids.push(row._id);
  }
}
if (ids.length > 0) {
  const fetchBuilder = this.getBuilder(object, options);
  return await fetchBuilder.whereIn('id', ids);
}
// Fallback: insertion succeeded but we cannot reliably determine IDs; return empty result set.
return [];
```
```ts
if (query.fields) {
  builder.select((query.fields as string[]).map((f: string) => this.mapSortField(f)));
```
find() assumes query.fields is string[], but in the current QueryAST spec fields items can also be objects (nested field selections). Casting to string[] and passing non-strings into mapSortField() will break at runtime. Consider normalizing fields to only strings (and rejecting/ignoring non-string nodes) or implementing support for the structured FieldNode format.
Suggested change:

```ts
if (Array.isArray((query as any).fields) && (query as any).fields.length > 0) {
  const selectFields = (query as any).fields
    .filter((f: unknown) => typeof f === 'string')
    .map((f: string) => this.mapSortField(f))
    .filter((f: string) => !!f);
  if (selectFields.length > 0) {
    builder.select(selectFields);
  } else {
    builder.select('*');
  }
}
```
```ts
// ── Window function builder ─────────────────────────────────────────────────
private buildWindowFunction(spec: any): string {
  const func = spec.function.toUpperCase();
  let sql = `${func}()`;

  const overParts: string[] = [];

  if (spec.partitionBy && Array.isArray(spec.partitionBy) && spec.partitionBy.length > 0) {
    const partitionFields = spec.partitionBy.map((f: string) => this.mapSortField(f)).join(', ');
    overParts.push(`PARTITION BY ${partitionFields}`);
  }

  if (spec.orderBy && Array.isArray(spec.orderBy) && spec.orderBy.length > 0) {
    const orderFields = spec.orderBy
      .map((s: any) => {
        const field = this.mapSortField(s.field);
        const order = (s.order || 'asc').toUpperCase();
        return `${field} ${order}`;
```
buildWindowFunction() constructs SQL fragments by concatenating field names directly into a raw string (PARTITION BY ${partitionFields}, ORDER BY ${orderFields}). Since QueryAST can originate from user input, this creates a SQL injection risk through unvalidated identifiers. Prefer building the expression with Knex identifier bindings (??) / knex.ref() or validate field names against a strict identifier regex before including them in raw SQL.
Suggested change:

```ts
/**
 * Validate that a value is a safe SQL identifier.
 *
 * We intentionally enforce a strict pattern aligned with ObjectStack's
 * machine-name conventions (snake_case), to avoid any possibility of
 * injecting additional SQL tokens into identifier positions.
 */
private validateIdentifier(id: string): void {
  // Allow simple identifiers like snake_case_field; no spaces or punctuation.
  const IDENTIFIER_REGEX = /^[a-z_][a-z0-9_]*$/i;
  if (typeof id !== 'string' || !IDENTIFIER_REGEX.test(id)) {
    throw new Error(`Invalid SQL identifier: ${id}`);
  }
}

// ── Window function builder ─────────────────────────────────────────────────
private buildWindowFunction(spec: any): string {
  const funcName = String(spec.function || '').toLowerCase();
  this.validateIdentifier(funcName);
  const func = funcName.toUpperCase();
  let sql = `${func}()`;

  const overParts: string[] = [];

  if (spec.partitionBy && Array.isArray(spec.partitionBy) && spec.partitionBy.length > 0) {
    const partitionFields = spec.partitionBy
      .map((f: string) => {
        this.validateIdentifier(f);
        return this.mapSortField(f);
      })
      .join(', ');
    overParts.push(`PARTITION BY ${partitionFields}`);
  }

  if (spec.orderBy && Array.isArray(spec.orderBy) && spec.orderBy.length > 0) {
    const orderFields = spec.orderBy
      .map((s: any) => {
        const fieldName = String(s.field || '');
        this.validateIdentifier(fieldName);
        const field = this.mapSortField(fieldName);
        const rawOrder = (s.order || 'asc').toString().toLowerCase();
        const normalizedOrder = rawOrder === 'desc' ? 'DESC' : rawOrder === 'asc' ? 'ASC' : null;
        if (!normalizedOrder) {
          throw new Error(`Invalid sort direction for window function: ${s.order}`);
        }
        return `${field} ${normalizedOrder}`;
```
```ts
if (query.windowFunctions && Array.isArray(query.windowFunctions)) {
  for (const wf of query.windowFunctions) {
    const windowFunc = this.buildWindowFunction(wf);
    builder.select(this.knex.raw(`${windowFunc} as ??`, [wf.alias]));
  }
```
findWithWindowFunctions() injects the windowFunc string into knex.raw() (${windowFunc} as ??). Even if you tighten buildWindowFunction, consider returning a Knex.Raw with bindings (instead of a pre-rendered string) so identifiers/clauses are always escaped/parameterized end-to-end.
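To make the end-to-end binding approach concrete, here is a minimal sketch of assembling a window-function expression as a placeholder string plus a bindings array, so that identifiers are escaped by Knex's `??` identifier binding rather than interpolated into raw SQL. The `WindowSpec` shape and the `buildWindowRawParts` name are illustrative, not the driver's actual API:

```typescript
// Sketch: build the SQL with `??` identifier placeholders and collect the
// matching bindings, so every identifier is escaped by Knex when the result
// is passed to knex.raw(sql, bindings).
interface WindowSpec {
  function: string;                                   // e.g. 'row_number'
  partitionBy?: string[];                             // identifier list
  orderBy?: { field: string; order?: 'asc' | 'desc' }[];
  alias: string;                                      // output column alias
}

function buildWindowRawParts(spec: WindowSpec): { sql: string; bindings: string[] } {
  const bindings: string[] = [];
  const overParts: string[] = [];

  if (spec.partitionBy?.length) {
    // One `??` identifier placeholder per partition column.
    overParts.push(`PARTITION BY ${spec.partitionBy.map(() => '??').join(', ')}`);
    bindings.push(...spec.partitionBy);
  }
  if (spec.orderBy?.length) {
    const orderSql = spec.orderBy
      .map((s) => {
        bindings.push(s.field);
        // Direction is whitelisted, never taken verbatim from input.
        return `?? ${s.order === 'desc' ? 'DESC' : 'ASC'}`;
      })
      .join(', ');
    overParts.push(`ORDER BY ${orderSql}`);
  }

  // Function name is uppercased here; it should still be validated upstream.
  const sql = `${spec.function.toUpperCase()}() OVER (${overParts.join(' ')}) as ??`;
  bindings.push(spec.alias);
  return { sql, bindings };
}

// The result would be consumed as: builder.select(knex.raw(parts.sql, parts.bindings))
const parts = buildWindowRawParts({
  function: 'row_number',
  partitionBy: ['dept'],
  orderBy: [{ field: 'salary', order: 'desc' }],
  alias: 'rank',
});
// parts.sql → 'ROW_NUMBER() OVER (PARTITION BY ?? ORDER BY ?? DESC) as ??'
// parts.bindings → ['dept', 'salary', 'rank']
```

Because only `??` placeholders ever reach the raw string, a hostile field name such as `salary; DROP TABLE users` ends up quoted as an identifier instead of being executed.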
```ts
// ===================================
// Query Plan Analysis
// ===================================
```
The driver exposes analyzeQuery(), but the spec contract for query plan inspection is explain?(object, query, options) (IDataDriver). Consider adding an explain method (and/or aliasing it to analyzeQuery) so platform code can call the standard name without driver-specific branching.
Suggested change:

```ts
/**
 * Standard query plan inspection entry point as defined by IDataDriver.
 *
 * Delegates to {@link analyzeQuery} to avoid duplicating logic while
 * exposing the spec-compliant method name.
 */
async explain(object: string, query: any, options?: DriverOptions): Promise<any> {
  return this.analyzeQuery(object, query, options);
}
```
```sql
SELECT c.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.constraint_column_usage AS ccu
  ON tc.constraint_schema = ccu.constraint_schema
  AND tc.constraint_name = ccu.constraint_name
WHERE tc.constraint_type = 'UNIQUE'
  AND tc.table_name = ?
```
The Postgres unique-constraint introspection query selects c.column_name, but the only alias defined is ccu (constraint_column_usage AS ccu). This will error and then be silently swallowed by the surrounding try/catch, causing Postgres unique constraints to never be reported. Fix the alias (and consider using key_column_usage for unique columns) so Postgres introspection is accurate.
Suggested change:

```sql
SELECT kcu.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage AS kcu
  ON tc.constraint_schema = kcu.constraint_schema
  AND tc.constraint_name = kcu.constraint_name
  AND tc.table_name = kcu.table_name
WHERE tc.constraint_type = 'UNIQUE'
  AND tc.table_name = ?
  AND tc.table_schema = current_schema()
```
```ts
// ===================================

async syncSchema(object: string, schema: unknown, _options?: DriverOptions): Promise<void> {
  const objectDef = schema as { name: string; fields?: Record<string, any> };
```
syncSchema(object, schema) ignores the object parameter and instead relies on schema.name. The ObjectQL plugin calls driver.syncSchema(obj.name, obj); using the object argument as the authoritative table name would better match the contract and avoid accidental mismatches if schema.name differs or is missing.
Suggested change:

```ts
const schemaDef = schema as { name?: string; fields?: Record<string, any> } | null;
const objectDef = {
  ...(schemaDef || {}),
  name: object,
} as { name: string; fields?: Record<string, any> };
```
```ts
await this.knex.schema.dropTable(tableName);
exists = false;
```
initObjects() drops the entire table when it finds an _id column without id. Dropping tables during schema sync is destructive and can cause unexpected data loss in production. Prefer an in-place migration (e.g., add id, copy values from _id, rebuild constraints) or gate the destructive behavior behind an explicit option/flag.
Suggested change:

```ts
// Non-destructive migration: add `id` column and backfill from legacy `_id`
await this.knex.schema.alterTable(tableName, (table) => {
  table.string('id').nullable();
});
await this.knex(tableName)
  .whereNull('id')
  .update('id', this.knex.ref('_id'));
```
```ts
queryAggregations: true,
querySorting: true,
queryPagination: true,
queryWindowFunctions: true,
```
supports.queryWindowFunctions is set to true, but find() does not apply query.windowFunctions from QueryAST (window functions are only supported via the non-standard findWithWindowFunctions() method). Either handle windowFunctions inside find() when present, or set the capability flag to false to avoid misleading feature detection.
Suggested change:

```ts
queryWindowFunctions: false,
```
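Accurate capability flags matter because platform code typically gates query features on them before dispatching. A minimal sketch of that feature-detection pattern, where the `supports` keys mirror the flags above but the query and driver shapes (`QueryLike`, `unsupportedFeatures`) are hypothetical simplifications rather than the spec's actual types:

```typescript
// Sketch: before handing a query to a driver, check that every feature the
// query uses is advertised in the driver's `supports` map, and surface a
// clear list of missing capabilities instead of a dialect error deep in SQL.
type Supports = Record<string, boolean>;

interface QueryLike {
  aggregations?: unknown[];
  sort?: unknown[];
  windowFunctions?: unknown[];
}

function unsupportedFeatures(query: QueryLike, supports: Supports): string[] {
  // Map each query property to the capability flag that must be true.
  const required: Array<[keyof QueryLike, string]> = [
    ['aggregations', 'queryAggregations'],
    ['sort', 'querySorting'],
    ['windowFunctions', 'queryWindowFunctions'],
  ];
  return required
    .filter(([prop, flag]) => {
      const value = query[prop];
      return Array.isArray(value) && value.length > 0 && !supports[flag];
    })
    .map(([, flag]) => flag);
}

// A driver that honestly reports `queryWindowFunctions: false` is caught here,
// so callers can fall back (or fail fast) instead of hitting a runtime error.
const missing = unsupportedFeatures(
  { windowFunctions: [{ function: 'row_number', alias: 'rn' }] },
  { queryAggregations: true, querySorting: true, queryPagination: true, queryWindowFunctions: false },
);
// missing → ['queryWindowFunctions']
```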
Migrates the Knex-based SQL driver from objectstack-ai/objectql into packages/plugins/driver-sql/ as a first-class plugin, fully aligned with the current spec protocol.

Driver refactoring
- @objectql/types → @objectstack/core (DriverInterface) + @objectstack/spec/data (QueryInput, DriverOptions)
- Removes the legacy Driver interface and ObjectQLError — implements DriverInterface directly
- syncSchema() per DriverInterface contract, backed by initObjects() for batch table init
- isSqlite / isPostgres / isMysql getters handle both sqlite3 and better-sqlite3 clients

Plugin structure
Follows driver-memory layout exactly.

Test coverage
All tests use in-memory better-sqlite3:
- sql-driver.test.ts
- sql-driver-advanced.test.ts
- sql-driver-schema.test.ts
- sql-driver-queryast.test.ts
- sql-driver-introspection.test.ts

Usage