Migrate all plugin data storage to ObjectQL #250

hotlong merged 5 commits into copilot/address-technical-debt
Conversation
…ackages Co-authored-by: hotlong <50353452+hotlong@users.noreply.github.com>
Pull request overview
This pull request migrates all plugin data storage from in-memory implementations to ObjectQL-based persistence. Five packages (@objectos/permissions, @objectos/audit, @objectos/jobs, @objectos/automation, @objectos/workflow) now use ObjectQL for production data storage while maintaining backward compatibility with in-memory storage for testing.
Changes:
- Created ObjectQL storage adapters for five plugins with broker-based data persistence via context.broker.call()
- Added YAML object definitions for automatic ObjectQL schema loading (permission_set, audit_log, job, workflow_task, formula_field, automation_rule)
- Implemented a conditional plugin upgrade pattern that switches to ObjectQL storage when a broker is available, maintaining test compatibility
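The conditional upgrade pattern in the last bullet can be sketched as follows. This is a minimal illustration, not the real plugin code: the `PluginContext`, `Storage`, and `ExamplePlugin` shapes here are assumptions standing in for the actual package interfaces.

```typescript
// Sketch of the conditional storage upgrade: a plugin defaults to in-memory
// storage and swaps to an ObjectQL-backed adapter only when init() receives
// a context that exposes a broker. All names here are illustrative.
interface Broker {
  call(action: string, params: Record<string, unknown>): Promise<unknown>;
}

interface PluginContext {
  broker?: Broker;
}

interface Storage { kind: string }

class InMemoryStorage implements Storage { kind = 'memory'; }

class ObjectQLStorage implements Storage {
  kind = 'objectql';
  constructor(public context: PluginContext) {}
}

class ExamplePlugin {
  storage: Storage;
  constructor(private explicitStorage?: Storage) {
    this.storage = explicitStorage ?? new InMemoryStorage();
  }
  init(context: PluginContext): void {
    // Upgrade only when no storage was injected and a broker is present;
    // tests that construct the plugin without a broker keep in-memory storage.
    if (!this.explicitStorage && context.broker) {
      this.storage = new ObjectQLStorage(context);
    }
  }
}
```

Explicitly injected storage always wins, which is what keeps the existing in-memory test suites working unchanged.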
Reviewed changes
Copilot reviewed 18 out of 18 changed files in this pull request and generated 25 comments.
| File | Description |
|---|---|
| packages/permissions/src/objectql-storage.ts | New ObjectQL storage adapter for permission sets with broker-based persistence |
| packages/permissions/src/plugin.ts | Added conditional ObjectQL upgrade logic in init() when broker is available |
| packages/permissions/src/types.ts | Added storage config option to PermissionPluginConfig |
| packages/permissions/src/storage.ts | Re-exported ObjectQLPermissionStorage |
| packages/permissions/src/index.ts | Exported ObjectQLPermissionStorage in public API |
| packages/permissions/objects/permission_set.object.yml | Schema definition for permission_set object with name, object_name, description, permissions fields |
| packages/audit/src/objectql-storage.ts | New ObjectQL storage adapter for audit logs with field-level history tracking |
| packages/audit/src/plugin.ts | Added conditional ObjectQL upgrade logic in init() |
| packages/audit/objects/audit_log.object.yml | Schema definition for audit_log with event_type, object_name, record_id, user_id, timestamp, changes, metadata |
| packages/jobs/src/objectql-storage.ts | New ObjectQL storage adapter for jobs with retry logic and scheduling support |
| packages/jobs/src/plugin.ts | Added conditional ObjectQL upgrade with queue/scheduler reinitialization |
| packages/jobs/objects/job.object.yml | Schema definition for job with corrected field names (max_retries, retry_delay) matching Job interface |
| packages/automation/src/objectql-storage.ts | New ObjectQL storage adapter for automation rules and formula fields |
| packages/automation/src/plugin.ts | Added conditional ObjectQL upgrade logic in init() |
| packages/automation/objects/formula_field.object.yml | Schema definition for formula_field with object_name, name, formula fields |
| packages/automation/objects/automation_rule.object.yml | Schema definition for automation_rule with removed conditions field for type alignment |
| packages/workflow/src/plugin.ts | Enhanced existing ObjectQL upgrade logic with broker availability check |
| packages/workflow/objects/workflow_task.object.yml | New schema definition for workflow_task persistence |
```typescript
} catch (err: any) {
  // If not found, return null
  if (err.message && err.message.includes('not found')) return null;
```
The error detection relies on checking if the error message includes the string 'not found', which is fragile and language-dependent. If the broker implementation changes the error message format or if errors are localized, this check will fail. Consider using error codes or specific error types instead of string matching.
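One way to act on this is to centralize the check in a helper that prefers machine-readable error codes and falls back to the message string only as a last resort. The property names checked below (`code`, `status`, `statusCode`) are assumptions about the broker's error shape, to be replaced once the broker's documented error type is pinned down:

```typescript
// Sketch of a tolerant "not found" classifier. The exact properties a broker
// error carries are assumptions; the message check is kept only as a
// last-resort fallback for brokers that expose nothing else.
interface BrokerErrorLike {
  code?: string;
  status?: number;
  statusCode?: number;
  message?: string;
}

function isNotFoundError(err: unknown): boolean {
  const e = err as BrokerErrorLike | null;
  if (e?.code === 'NOT_FOUND' || e?.code === 'E_NOT_FOUND') return true;
  const status = e?.status ?? e?.statusCode;
  if (status === 404) return true;
  return typeof e?.message === 'string' && e.message.includes('not found');
}
```

All the `catch` blocks in this PR could then call `isNotFoundError(err)` instead of repeating the string match.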
```typescript
/**
 * Store an audit event
 */
async logEvent(entry: AuditLogEntry): Promise<void> {
  await (this.context as any).broker.call('data.create', {
    object: 'audit_log',
    doc: {
      event_type: entry.eventType,
      object_name: (entry as any).objectName,
      record_id: (entry as any).recordId,
      user_id: entry.userId,
      timestamp: entry.timestamp || new Date().toISOString(),
      ip_address: entry.ipAddress,
      user_agent: entry.userAgent,
      session_id: entry.sessionId,
      changes: (entry as any).changes,
```
The logEvent method casts entry to any to access properties (objectName, recordId, changes) that are defined in AuditTrailEntry but not in the base AuditLogEntry type. This type casting hides potential type mismatches. Consider using a type guard or having separate methods for different entry types, or ensure the method signature accepts the appropriate union type.
Suggested change:

```typescript
private isAuditTrailEntry(entry: AuditLogEntry | AuditTrailEntry): entry is AuditTrailEntry {
  return (
    typeof (entry as AuditTrailEntry).objectName === 'string' &&
    typeof (entry as AuditTrailEntry).recordId === 'string'
  );
}

/**
 * Store an audit event
 */
async logEvent(entry: AuditLogEntry | AuditTrailEntry): Promise<void> {
  const isTrail = this.isAuditTrailEntry(entry);
  const objectName = isTrail ? entry.objectName : undefined;
  const recordId = isTrail ? entry.recordId : undefined;
  const changes = isTrail ? entry.changes : undefined;
  await (this.context as any).broker.call('data.create', {
    object: 'audit_log',
    doc: {
      event_type: entry.eventType,
      object_name: objectName,
      record_id: recordId,
      user_id: entry.userId,
      timestamp: entry.timestamp || new Date().toISOString(),
      ip_address: entry.ipAddress,
      user_agent: entry.userAgent,
      session_id: entry.sessionId,
      changes: changes,
```
```typescript
export class ObjectQLPermissionStorage implements PermissionStorage {
  private context: PluginContext;

  constructor(context: PluginContext) {
    this.context = context;
  }

  /**
   * Store a permission set
   */
  async storePermissionSet(permissionSet: PermissionSet): Promise<void> {
    await (this.context as any).broker.call('data.create', {
      object: 'permission_set',
      doc: {
        _id: permissionSet.name,
        name: permissionSet.name,
        object_name: permissionSet.objectName,
        description: permissionSet.description,
        permissions: permissionSet,
      }
    });
  }

  /**
   * Get a permission set by name
   */
  async getPermissionSet(name: string): Promise<PermissionSet | null> {
    try {
      const result = await (this.context as any).broker.call('data.get', {
        object: 'permission_set',
        id: name
      });
      return result ? this.mapDocToPermissionSet(result) : null;
    } catch (err: any) {
      // If not found, return null
      if (err.message && err.message.includes('not found')) return null;
      throw err;
    }
  }

  /**
   * Get permission set for a specific object
   */
  async getPermissionSetForObject(objectName: string): Promise<PermissionSet | null> {
    try {
      const results = await (this.context as any).broker.call('data.find', {
        object: 'permission_set',
        query: { object_name: objectName },
        limit: 1
      });
      return results && results.length > 0 ? this.mapDocToPermissionSet(results[0]) : null;
    } catch (err: any) {
      if (err.message && err.message.includes('not found')) return null;
      throw err;
    }
  }

  /**
   * Get all permission sets
   */
  async getAllPermissionSets(): Promise<PermissionSet[]> {
    const results = await (this.context as any).broker.call('data.find', {
      object: 'permission_set',
      query: {}
    });
    return results.map((doc: any) => this.mapDocToPermissionSet(doc));
  }

  /**
   * Delete a permission set
   */
  async deletePermissionSet(name: string): Promise<void> {
    await (this.context as any).broker.call('data.delete', {
      object: 'permission_set',
      id: name
    });
  }

  /**
   * Clear all permission sets
   */
  async clear(): Promise<void> {
    const allSets = await this.getAllPermissionSets();
    for (const permSet of allSets) {
      await this.deletePermissionSet(permSet.name);
    }
  }

  /**
   * Map document to PermissionSet
   */
  private mapDocToPermissionSet(doc: any): PermissionSet {
    return doc.permissions || {
      name: doc.name,
      objectName: doc.object_name,
      description: doc.description,
    };
  }
}
```
The new ObjectQLPermissionStorage class has no test coverage. Existing storage tests only cover InMemoryPermissionStorage. The ObjectQL storage adapter should have tests that verify: 1) proper mapping between PermissionSet and database documents, 2) error handling for broker call failures, 3) correct behavior when broker.call returns expected/unexpected data structures, 4) the fallback logic in mapDocToPermissionSet.
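A test in the direction this comment asks for could stub the broker and assert on both the round-trip mapping and the not-found path. The sketch below uses a stand-in storage class mirroring `ObjectQLPermissionStorage`'s shape rather than importing the real one, and assumes the `data.create`/`data.get` action names used elsewhere in this PR:

```typescript
// Sketch of a broker-stub test for an ObjectQL-backed permission storage.
// MockBroker records every call so tests can assert on action names and
// payloads as well as on returned values.
interface PermissionSet { name: string; objectName?: string; description?: string }

class MockBroker {
  calls: Array<{ action: string; params: any }> = [];
  private docs = new Map<string, any>();
  async call(action: string, params: any): Promise<any> {
    this.calls.push({ action, params });
    if (action === 'data.create') {
      this.docs.set(params.doc._id, params.doc);
      return params.doc;
    }
    if (action === 'data.get') {
      const doc = this.docs.get(params.id);
      if (!doc) throw Object.assign(new Error('not found'), { code: 'NOT_FOUND' });
      return doc;
    }
    throw new Error(`unexpected action ${action}`);
  }
}

// Stand-in mirroring the adapter under review.
class PermissionStorageUnderTest {
  constructor(private broker: MockBroker) {}
  async store(ps: PermissionSet): Promise<void> {
    await this.broker.call('data.create', {
      object: 'permission_set',
      doc: { _id: ps.name, name: ps.name, object_name: ps.objectName, permissions: ps },
    });
  }
  async get(name: string): Promise<PermissionSet | null> {
    try {
      const doc = await this.broker.call('data.get', { object: 'permission_set', id: name });
      return doc.permissions ?? { name: doc.name, objectName: doc.object_name };
    } catch (err: any) {
      if (err?.code === 'NOT_FOUND') return null;
      throw err;
    }
  }
}
```

The same mock-broker shape would cover the audit, jobs, automation, and workflow adapters with only the action payloads changed.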
```typescript
 */
async storePermissionSet(permissionSet: PermissionSet): Promise<void> {
  await (this.context as any).broker.call('data.create', {
    object: 'permission_set',
    doc: {
      _id: permissionSet.name,
      name: permissionSet.name,
      object_name: permissionSet.objectName,
      description: permissionSet.description,
      permissions: permissionSet,
    }
  });
```
The storePermissionSet method uses 'data.create' which typically only creates new records. The InMemoryPermissionStorage.storePermissionSet() method uses Map.set() which performs an upsert. If storePermissionSet() is called on an existing permission set name, the ObjectQL implementation might fail with a duplicate key error, whereas the in-memory version would update the permission set. Consider using an upsert operation or checking if the permission set exists and calling 'data.update' accordingly.
Suggested change:

```typescript
 *
 * Uses upsert semantics to align with the in-memory implementation:
 * - If a permission set with the given name exists, it is updated.
 * - Otherwise, a new permission set record is created.
 */
async storePermissionSet(permissionSet: PermissionSet): Promise<void> {
  const broker = (this.context as any).broker;
  const doc = {
    _id: permissionSet.name,
    name: permissionSet.name,
    object_name: permissionSet.objectName,
    description: permissionSet.description,
    permissions: permissionSet,
  };
  try {
    // Try to update existing permission set first
    await broker.call('data.update', {
      object: 'permission_set',
      id: permissionSet.name,
      doc,
    });
  } catch (err: any) {
    // If it does not exist yet, create a new record
    if (err?.message && err.message.includes('not found')) {
      await broker.call('data.create', {
        object: 'permission_set',
        doc,
      });
      return;
    }
    // Propagate unexpected errors
    throw err;
  }
```
```typescript
  });
  return result ? this.mapDocToRule(result) : null;
} catch (err: any) {
  if (err.message && err.message.includes('not found')) return null;
```
Similar to other storage implementations, error detection relies on checking if the error message includes 'not found', which is fragile. Consider using error codes or specific error types instead of string matching for more robust error handling.
Suggested change:

```typescript
const errorCode = err?.code ?? err?.status ?? err?.statusCode ?? err?.httpStatus;
if (errorCode === 404 || errorCode === 'NOT_FOUND') {
  return null;
}
```
```typescript
// Upgrade storage to ObjectQL if not explicitly provided and broker is available
// We do this in init because we need the context
if (!this.config.storage && (context as any).broker) {
```
The broker availability check (context as any).broker uses type casting without validating the broker interface. This same issue appears in all plugin init methods. Consider adding runtime validation or a more robust type guard.
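A lightweight runtime guard could replace the bare cast in all five plugins. The shape checked here (an object exposing a callable `call` method) is an assumption about the minimal broker contract:

```typescript
// Sketch of a runtime type guard for broker availability. It narrows an
// untyped context to one carrying a usable broker, so call sites get type
// safety without an `as any` cast.
interface Broker {
  call(action: string, params: Record<string, unknown>): Promise<unknown>;
}

function hasBroker(context: unknown): context is { broker: Broker } {
  const broker = (context as { broker?: unknown } | null)?.broker;
  return (
    typeof broker === 'object' &&
    broker !== null &&
    typeof (broker as { call?: unknown }).call === 'function'
  );
}
```

The init check then reads `if (!this.config.storage && hasBroker(context))`, and inside that branch `context.broker.call(...)` type-checks without casting.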
```typescript
export class ObjectQLJobStorage implements JobStorage {
  private context: PluginContext;

  constructor(context: PluginContext) {
    this.context = context;
  }

  async save(job: Job): Promise<void> {
    await (this.context as any).broker.call('data.create', {
      object: 'job',
      doc: this.mapJobToDoc(job)
    });
  }

  async get(id: string): Promise<Job | null> {
    try {
      const result = await (this.context as any).broker.call('data.get', {
        object: 'job',
        id: id
      });
      return result ? this.mapDocToJob(result) : null;
    } catch (err: any) {
      if (err.message && err.message.includes('not found')) return null;
      throw err;
    }
  }

  async update(id: string, updates: Partial<Job>): Promise<void> {
    const docUpdates: any = {};

    if (updates.name !== undefined) docUpdates.name = updates.name;
    if (updates.status !== undefined) docUpdates.status = updates.status;
    if (updates.priority !== undefined) docUpdates.priority = updates.priority;
    if (updates.data !== undefined) docUpdates.data = updates.data;
    if (updates.result !== undefined) docUpdates.result = updates.result;
    if (updates.error !== undefined) docUpdates.error = updates.error;
    if (updates.attempts !== undefined) docUpdates.attempts = updates.attempts;
    if (updates.maxRetries !== undefined) docUpdates.max_retries = updates.maxRetries;
    if (updates.retryDelay !== undefined) docUpdates.retry_delay = updates.retryDelay;
    if (updates.timeout !== undefined) docUpdates.timeout = updates.timeout;
    if (updates.nextRun !== undefined) docUpdates.next_run = updates.nextRun;
    if (updates.cronExpression !== undefined) docUpdates.cron_expression = updates.cronExpression;
    if (updates.startedAt !== undefined) docUpdates.started_at = updates.startedAt;
    if (updates.completedAt !== undefined) docUpdates.completed_at = updates.completedAt;
    if (updates.createdAt !== undefined) docUpdates.created_at = updates.createdAt;

    await (this.context as any).broker.call('data.update', {
      object: 'job',
      id: id,
      doc: docUpdates
    });
  }

  async delete(id: string): Promise<void> {
    await (this.context as any).broker.call('data.delete', {
      object: 'job',
      id: id
    });
  }

  async query(options: JobQueryOptions = {}): Promise<Job[]> {
    const query: any = {};

    // Filter by name
    if (options.name) {
      query.name = options.name;
    }

    // Filter by status
    if (options.status) {
      const statuses = Array.isArray(options.status)
        ? options.status
        : [options.status];
      query.status = { $in: statuses };
    }

    // Filter by priority
    if (options.priority) {
      query.priority = options.priority;
    }

    // Sort
    let sort = '-created_at'; // default
    if (options.sortBy) {
      const sortOrder = options.sortOrder === 'desc' ? '-' : '';
      const field = options.sortBy === 'createdAt' ? 'created_at'
        : options.sortBy === 'nextRun' ? 'next_run'
        : options.sortBy === 'priority' ? 'priority'
        : 'created_at';
      sort = `${sortOrder}${field}`;
    }

    const results = await (this.context as any).broker.call('data.find', {
      object: 'job',
      query: query,
      sort: sort,
      limit: options.limit,
      skip: options.skip,
    });

    return results.map((doc: any) => this.mapDocToJob(doc));
  }

  async getStats(): Promise<JobQueueStats> {
    const allJobs = await this.query({});

    const stats: JobQueueStats = {
      total: allJobs.length,
      pending: 0,
      running: 0,
      completed: 0,
      failed: 0,
      cancelled: 0,
      scheduled: 0,
    };

    for (const job of allJobs) {
      switch (job.status) {
        case 'pending':
          stats.pending++;
          break;
        case 'running':
          stats.running++;
          break;
        case 'completed':
          stats.completed++;
          break;
        case 'failed':
          stats.failed++;
          break;
        case 'cancelled':
          stats.cancelled++;
          break;
        case 'scheduled':
          stats.scheduled++;
          break;
      }
    }

    return stats;
  }

  async getNextPending(): Promise<Job | null> {
    const pendingJobs = await this.query({
      status: 'pending',
      sortBy: 'priority',
      sortOrder: 'desc',
      limit: 1
    });

    return pendingJobs.length > 0 ? pendingJobs[0] : null;
  }

  async getScheduledDue(): Promise<Job[]> {
    const now = new Date();
    const allScheduled = await this.query({ status: 'scheduled' });

    return allScheduled.filter(job =>
      job.nextRun && job.nextRun <= now
    );
  }

  /**
   * Clear all jobs (for testing)
   */
  async clear(): Promise<void> {
    const allJobs = await this.query({});
    for (const job of allJobs) {
      await this.delete(job.id);
    }
  }

  /**
   * Get all jobs (for testing)
   */
  async getAll(): Promise<Job[]> {
    return this.query({});
  }

  /**
   * Map Job to document
   */
  private mapJobToDoc(job: Job): any {
    return {
      _id: job.id,
      id: job.id,
      name: job.name,
      status: job.status,
      priority: job.priority,
      data: job.data,
      result: job.result,
      error: job.error,
      attempts: job.attempts,
      max_retries: job.maxRetries,
      retry_delay: job.retryDelay,
      timeout: job.timeout,
      next_run: job.nextRun,
      cron_expression: job.cronExpression,
      started_at: job.startedAt,
      completed_at: job.completedAt,
      created_at: job.createdAt,
    };
  }

  /**
   * Map document to Job
   */
  private mapDocToJob(doc: any): Job {
    return {
      id: doc.id || doc._id,
      name: doc.name,
      status: doc.status,
      priority: doc.priority,
      data: doc.data,
      result: doc.result,
      error: doc.error,
      attempts: doc.attempts || 0,
      maxRetries: doc.max_retries || 3,
      retryDelay: doc.retry_delay || 1000,
      timeout: doc.timeout,
      nextRun: doc.next_run ? new Date(doc.next_run) : undefined,
      cronExpression: doc.cron_expression,
      startedAt: doc.started_at ? new Date(doc.started_at) : undefined,
      completedAt: doc.completed_at ? new Date(doc.completed_at) : undefined,
      createdAt: new Date(doc.created_at || Date.now()),
    } as Job;
  }
}
```
The new ObjectQLJobStorage class has no test coverage. Existing storage tests only cover InMemoryJobStorage. The ObjectQL storage adapter should have tests that verify: 1) proper mapping between Job interface (camelCase) and database documents (snake_case), 2) job query filtering and sorting, 3) getNextPending and getScheduledDue logic, 4) error handling for broker call failures.
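The camelCase-to-snake_case mapping is the part most worth pinning down first. A round-trip check like the sketch below would catch a renamed or dropped field; the `jobToDoc`/`docToJob` functions here are stand-ins mirroring the adapter's `mapJobToDoc`/`mapDocToJob`, trimmed to a few representative fields, not the real code:

```typescript
// Sketch of a round-trip test for the job field mapping between the Job
// interface (camelCase) and the stored document (snake_case).
interface Job {
  id: string;
  name: string;
  status: string;
  maxRetries: number;
  retryDelay: number;
  createdAt: Date;
  nextRun?: Date;
}

function jobToDoc(job: Job): any {
  return {
    _id: job.id,
    id: job.id,
    name: job.name,
    status: job.status,
    max_retries: job.maxRetries,
    retry_delay: job.retryDelay,
    created_at: job.createdAt,
    next_run: job.nextRun,
  };
}

function docToJob(doc: any): Job {
  return {
    id: doc.id || doc._id,
    name: doc.name,
    status: doc.status,
    maxRetries: doc.max_retries ?? 3,
    retryDelay: doc.retry_delay ?? 1000,
    createdAt: new Date(doc.created_at || Date.now()),
    nextRun: doc.next_run ? new Date(doc.next_run) : undefined,
  };
}
```

Note that the default-value fallbacks (`?? 3`, `?? 1000`) would silently mask a missing field in a naive round-trip test, so a thorough test should also assert on a job whose values differ from the defaults.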
```typescript
await (this.context as any).broker.call('data.create', {
  object: 'job',
  doc: this.mapJobToDoc(job)
```
The save method uses 'data.create' which typically only creates new records. The InMemoryJobStorage.save() method uses Map.set() which performs an upsert (create or update). If save() is called on an existing job ID, the ObjectQL implementation might fail or throw a duplicate key error, whereas the in-memory version would update the job. Consider using an upsert operation or checking if the job exists first and calling 'data.update' accordingly to maintain behavioral consistency with the in-memory implementation.
Suggested change:

```typescript
const doc = this.mapJobToDoc(job);
// If we have an ID, attempt an upsert by checking for existence first
if ((job as any).id) {
  const existing = await this.get((job as any).id as string);
  if (existing) {
    await (this.context as any).broker.call('data.update', {
      object: 'job',
      id: (job as any).id,
      doc,
    });
    return;
  }
}
// Fallback to create when no existing record is found or no ID is provided
await (this.context as any).broker.call('data.create', {
  object: 'job',
  doc,
```
```typescript
  });
  return result ? this.mapDocToJob(result) : null;
} catch (err: any) {
  if (err.message && err.message.includes('not found')) return null;
```
Similar to the permissions storage, error detection relies on checking if the error message includes 'not found', which is fragile. Consider using error codes or specific error types instead of string matching for more robust error handling.
Suggested change:

```typescript
const code = err?.code;
const status = err?.status ?? err?.statusCode;
const isNotFound =
  code === 'NOT_FOUND' ||
  code === 'E_NOT_FOUND' ||
  status === 404;
if (isNotFound) return null;
```
```typescript
 * Store a permission set
 */
async storePermissionSet(permissionSet: PermissionSet): Promise<void> {
  await (this.context as any).broker.call('data.create', {
```
The broker property is accessed via type casting (this.context as any).broker, which bypasses TypeScript's type safety. This pattern is repeated throughout all ObjectQL storage implementations. If the broker is not available or the API signature changes, these calls will fail at runtime with unclear error messages. Consider adding proper typing for the broker property in PluginContext or creating a typed wrapper method that validates broker availability before calling.
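One concrete shape for this suggestion is a small typed wrapper that validates the broker once at construction and exposes narrow data methods. The `data.create`/`data.get` action names below follow this PR; the wrapper class itself and its error message are illustrative, not part of the codebase:

```typescript
// Sketch of a typed broker wrapper that fails fast with a clear error when
// no usable broker is present, instead of an opaque failure deep inside a
// storage call.
interface Broker {
  call(action: string, params: Record<string, unknown>): Promise<unknown>;
}

class DataBroker {
  private broker: Broker;

  constructor(context: unknown) {
    const broker = (context as { broker?: Broker } | null)?.broker;
    if (!broker || typeof broker.call !== 'function') {
      throw new Error('DataBroker requires a context whose broker exposes call()');
    }
    this.broker = broker;
  }

  create(object: string, doc: Record<string, unknown>): Promise<unknown> {
    return this.broker.call('data.create', { object, doc });
  }

  get(object: string, id: string): Promise<unknown> {
    return this.broker.call('data.get', { object, id });
  }
}
```

Each storage adapter would then hold a `DataBroker` instead of reaching through `(this.context as any).broker`, so a missing or malformed broker surfaces at plugin init rather than on the first data call.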
All data storage must use ObjectQL per project requirements. Five packages (@objectos/permissions, @objectos/audit, @objectos/jobs, @objectos/automation, @objectos/workflow) were using in-memory storage.

Changes

ObjectQL Storage Implementations

Created storage adapters that delegate to ObjectQL via context.broker:

- ObjectQLPermissionStorage - permission sets
- ObjectQLAuditStorage - audit events with field-level history
- ObjectQLJobStorage - background jobs with retry logic
- ObjectQLAutomationStorage - automation rules and formulas
- ObjectQLWorkflowStorage - added task persistence

Object Definitions

Added YAML schemas for automatic ObjectQL loading via the packages/*/objects/*.object.yml pattern:

- permission_set.object.yml
- audit_log.object.yml
- job.object.yml (fields max_retries, retry_delay matching the Job interface)
- workflow_task.object.yml
- formula_field.object.yml
- automation_rule.object.yml (removed conditions field for type alignment)

Plugin Auto-Upgrade Pattern

Plugins conditionally upgrade to ObjectQL when a broker is available. This maintains backward compatibility with in-memory storage for testing.

Field name corrections:

- job.object.yml: maxAttempts → max_retries, added retry_delay
- automation_rule.object.yml: removed conditions, objectName (not in the AutomationRule type)

All packages build clean, existing tests pass (93 tests in permissions).