Co-authored-by: hotlong <50353452+hotlong@users.noreply.github.com>
Pull request overview
This pull request implements server-side support for batch operations, metadata caching with ETag support, and view storage APIs to align with the client-side specifications introduced in PR #380. The implementation adds protocol method definitions, concrete implementations, and HTTP route handlers across three packages.
Changes:
- Extended the ObjectStack protocol interface with 9 new methods for batch operations, cached metadata retrieval, and view storage
- Implemented batch data operations (create, update, delete, upsert) with configurable atomic transaction semantics
- Added ETag-based metadata caching using a browser-compatible hash function
- Created in-memory view storage with CRUD operations and filtering capabilities
- Fixed Hono adapter body parsing issue that was causing empty request bodies
- Registered 10 new HTTP endpoints for the added functionality
Reviewed changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 24 comments.
| File | Description |
|---|---|
| packages/spec/src/api/protocol.ts | Extended IObjectStackProtocol interface with method signatures for batch operations, cached metadata, and view storage |
| packages/objectql/src/protocol.ts | Implemented protocol methods including batch processing logic, ETag-based caching, and in-memory view storage with a simple hash function |
| packages/plugins/plugin-hono-server/src/hono-plugin.ts | Added HTTP route handlers for batch endpoints, enhanced metadata endpoint with caching headers, and view storage CRUD endpoints |
| packages/plugins/plugin-hono-server/src/adapter.ts | Fixed request body parsing to handle JSON first, preventing a double-parse issue that left empty bodies |
```ts
case 'upsert':
  // For upsert, try to update first, then create if not found
  if (record.id) {
    try {
      result = await this.engine.update(object, record.id, record.data);
      results.push({
        id: record.id,
        success: true,
        index: i,
        data: returnRecords ? result : undefined,
      });
      succeeded++;
    } catch (updateError) {
      // If update fails, try create
      result = await this.engine.insert(object, record.data);
      results.push({
        id: result._id || result.id,
        success: true,
        index: i,
        data: returnRecords ? result : undefined,
      });
      succeeded++;
    }
  } else {
    result = await this.engine.insert(object, record.data);
    results.push({
      id: result._id || result.id,
      success: true,
      index: i,
      data: returnRecords ? result : undefined,
    });
    succeeded++;
  }
  break;
```
The upsert operation doesn't handle the externalId field specified in the BatchRecordSchema. According to the schema and batch.zod.ts documentation, upsert should support matching records by externalId, but the implementation only checks for record.id.
The schema defines externalId as 'External ID for upsert matching', which is a common pattern in data integration (matching on business keys instead of internal IDs). The implementation should check for externalId and perform appropriate matching logic.
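A sketch of what externalId-first matching could look like. The engine methods and filter syntax below are assumptions modeled on the `IDataEngine` usage quoted above, not the project's confirmed API, and the mini engine exists only to make the sketch self-contained:

```typescript
// Sketch only: upsert that matches on externalId before falling back to id.
// MiniEngine and its find/insert/update signatures are hypothetical stand-ins
// for IDataEngine, included so the example runs on its own.
type Rec = Record<string, any>;

interface MiniEngine {
  find(object: string, filter: Rec): Promise<Rec[]>;
  insert(object: string, data: Rec): Promise<Rec>;
  update(object: string, id: string, data: Rec): Promise<Rec>;
}

function makeEngine(): MiniEngine {
  const store = new Map<string, Rec>();
  let seq = 0;
  return {
    async find(_object, filter) {
      return [...store.values()].filter(r =>
        Object.entries(filter).every(([k, v]) => r[k] === v)
      );
    },
    async insert(_object, data) {
      const rec = { id: `rec_${++seq}`, ...data };
      store.set(rec.id, rec);
      return rec;
    },
    async update(_object, id, data) {
      const existing = store.get(id);
      if (!existing) throw new Error(`not found: ${id}`);
      const next = { ...existing, ...data };
      store.set(id, next);
      return next;
    },
  };
}

// Upsert one batch record: prefer externalId matching, then id, then insert.
async function upsertOne(
  engine: MiniEngine,
  object: string,
  record: { id?: string; externalId?: string; data: Rec }
): Promise<Rec> {
  if (record.externalId) {
    const matches = await engine.find(object, { externalId: record.externalId });
    if (matches.length === 1) {
      return engine.update(object, matches[0].id, record.data);
    }
    if (matches.length > 1) {
      throw new Error(`externalId ${record.externalId} matched ${matches.length} records`);
    }
    // No match: create, preserving the business key on the new record.
    return engine.insert(object, { externalId: record.externalId, ...record.data });
  }
  if (record.id) {
    try {
      return await engine.update(object, record.id, record.data);
    } catch {
      return engine.insert(object, record.data);
    }
  }
  return engine.insert(object, record.data);
}
```

Ambiguous matches (more than one record sharing an externalId) are rejected rather than silently updating an arbitrary record; whether that is the right policy depends on the schema's uniqueness guarantees.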
```ts
private engine: IDataEngine;
private viewStorage: Map<string, SavedView> = new Map();

constructor(engine: IDataEngine) {
  this.engine = engine;
```
The view storage uses an in-memory Map which means all saved views will be lost when the server restarts. This violates the expectation of "saved views" being persistent.
While the PR description mentions this is "DB-agnostic interface" and uses in-memory storage temporarily, this should be clearly documented in the code. Consider adding a comment warning about the temporary nature and the need for future persistence implementation, or at minimum log a warning on startup indicating views are not persisted.
Suggested change:

```ts
private engine: IDataEngine;

/**
 * Temporary in-memory storage for SavedView definitions.
 *
 * IMPORTANT:
 * - This implementation is NON-PERSISTENT: all views will be lost when the
 *   process restarts.
 * - It exists only as a DB-agnostic placeholder for the protocol surface.
 * - Replace with a durable backing store (e.g. SQL/NoSQL) when wiring this
 *   protocol into a production environment.
 */
private viewStorage: Map<string, SavedView> = new Map();

constructor(engine: IDataEngine) {
  this.engine = engine;
  // Warn at runtime so operators understand that saved views are not durable.
  // This avoids surprising data loss when the server process is restarted.
  console.warn(
    '[ObjectStackProtocol] Saved views are stored in-memory only and will be lost on restart. ' +
    'Configure a persistent backing implementation before using this in production.'
  );
```
```ts
// Batch Operations
this.server.post('/api/v1/data/:object/batch', async (req, res) => {
  ctx.logger.info('Batch operation request', {
    object: req.params.object,
    operation: req.body?.operation,
    hasBody: !!req.body,
    bodyType: typeof req.body,
    bodyKeys: req.body ? Object.keys(req.body) : []
  });
  try {
    const result = await p.batchData(req.params.object, req.body);
    ctx.logger.info('Batch operation completed', {
      object: req.params.object,
      operation: req.body?.operation,
      total: result.total,
      succeeded: result.succeeded,
      failed: result.failed
    });
    res.json(result);
  } catch (e: any) {
    ctx.logger.error('Batch operation failed', e, { object: req.params.object });
    res.status(400).json({ error: e.message });
  }
});

this.server.post('/api/v1/data/:object/createMany', async (req, res) => {
  ctx.logger.debug('Create many request', { object: req.params.object, count: req.body?.length });
  try {
    const result = await p.createManyData(req.params.object, req.body || []);
    ctx.logger.info('Create many completed', { object: req.params.object, count: result.length });
    res.status(201).json(result);
  } catch (e: any) {
    ctx.logger.error('Create many failed', e, { object: req.params.object });
    res.status(400).json({ error: e.message });
  }
});

this.server.post('/api/v1/data/:object/updateMany', async (req, res) => {
  ctx.logger.debug('Update many request', { object: req.params.object, count: req.body?.records?.length });
  try {
    const result = await p.updateManyData(req.params.object, req.body);
    ctx.logger.info('Update many completed', {
      object: req.params.object,
      total: result.total,
      succeeded: result.succeeded,
      failed: result.failed
    });
    res.json(result);
  } catch (e: any) {
    ctx.logger.error('Update many failed', e, { object: req.params.object });
    res.status(400).json({ error: e.message });
  }
});

this.server.post('/api/v1/data/:object/deleteMany', async (req, res) => {
  ctx.logger.debug('Delete many request', { object: req.params.object, count: req.body?.ids?.length });
  try {
    const result = await p.deleteManyData(req.params.object, req.body);
    ctx.logger.info('Delete many completed', {
      object: req.params.object,
      total: result.total,
      succeeded: result.succeeded,
      failed: result.failed
    });
    res.json(result);
  } catch (e: any) {
    ctx.logger.error('Delete many failed', e, { object: req.params.object });
    res.status(400).json({ error: e.message });
  }
});

// Enhanced Metadata Route with ETag Support
this.server.get('/api/v1/meta/:type/:name', async (req, res) => {
  ctx.logger.debug('Meta item request with cache support', {
    type: req.params.type,
    name: req.params.name,
    ifNoneMatch: req.headers['if-none-match']
  });
  try {
    const cacheRequest = {
      ifNoneMatch: req.headers['if-none-match'] as string,
      ifModifiedSince: req.headers['if-modified-since'] as string,
    };

    const result = await p.getMetaItemCached(req.params.type, req.params.name, cacheRequest);

    if (result.notModified) {
      ctx.logger.debug('Meta item not modified (304)', { type: req.params.type, name: req.params.name });
      res.status(304).json({});
    } else {
      // Set cache headers
      if (result.etag) {
        const etagValue = result.etag.weak ? `W/"${result.etag.value}"` : `"${result.etag.value}"`;
        res.header('ETag', etagValue);
      }
      if (result.lastModified) {
        res.header('Last-Modified', new Date(result.lastModified).toUTCString());
      }
      if (result.cacheControl) {
        const directives = result.cacheControl.directives.join(', ');
        const maxAge = result.cacheControl.maxAge ? `, max-age=${result.cacheControl.maxAge}` : '';
        res.header('Cache-Control', directives + maxAge);
      }

      ctx.logger.debug('Meta item returned with cache headers', {
        type: req.params.type,
        name: req.params.name,
        etag: result.etag?.value
      });
      res.json(result.data);
    }
  } catch (e: any) {
    ctx.logger.warn('Meta item not found', { type: req.params.type, name: req.params.name });
    res.status(404).json({ error: e.message });
  }
});

// View Storage Routes
this.server.post('/api/v1/ui/views', async (req, res) => {
  ctx.logger.debug('Create view request', { name: req.body?.name, object: req.body?.object });
  try {
    const result = await p.createView(req.body);
    if (result.success) {
      ctx.logger.info('View created', { id: result.data?.id, name: result.data?.name });
      res.status(201).json(result);
    } else {
      ctx.logger.warn('View creation failed', { error: result.error });
      res.status(400).json(result);
    }
  } catch (e: any) {
    ctx.logger.error('View creation error', e);
    res.status(500).json({ success: false, error: { code: 'internal_error', message: e.message } });
  }
});

this.server.get('/api/v1/ui/views/:id', async (req, res) => {
  ctx.logger.debug('Get view request', { id: req.params.id });
  try {
    const result = await p.getView(req.params.id);
    if (result.success) {
      ctx.logger.debug('View retrieved', { id: req.params.id });
      res.json(result);
    } else {
      ctx.logger.warn('View not found', { id: req.params.id });
      res.status(404).json(result);
    }
  } catch (e: any) {
    ctx.logger.error('Get view error', e, { id: req.params.id });
    res.status(500).json({ success: false, error: { code: 'internal_error', message: e.message } });
  }
});

this.server.get('/api/v1/ui/views', async (req, res) => {
  ctx.logger.debug('List views request', { query: req.query });
  try {
    const request: any = {};
    if (req.query.object) request.object = req.query.object as string;
    if (req.query.type) request.type = req.query.type;
    if (req.query.visibility) request.visibility = req.query.visibility;
    if (req.query.createdBy) request.createdBy = req.query.createdBy as string;
    if (req.query.isDefault !== undefined) request.isDefault = req.query.isDefault === 'true';
    if (req.query.limit) request.limit = parseInt(req.query.limit as string);
    if (req.query.offset) request.offset = parseInt(req.query.offset as string);

    const result = await p.listViews(request);
    ctx.logger.debug('Views listed', { count: result.data?.length, total: result.pagination?.total });
    res.json(result);
  } catch (e: any) {
    ctx.logger.error('List views error', e);
    res.status(500).json({ success: false, error: { code: 'internal_error', message: e.message } });
  }
});

this.server.patch('/api/v1/ui/views/:id', async (req, res) => {
  ctx.logger.debug('Update view request', { id: req.params.id });
  try {
    const result = await p.updateView({ ...req.body, id: req.params.id });
    if (result.success) {
      ctx.logger.info('View updated', { id: req.params.id });
      res.json(result);
    } else {
      ctx.logger.warn('View update failed', { id: req.params.id, error: result.error });
      res.status(result.error?.code === 'resource_not_found' ? 404 : 400).json(result);
    }
  } catch (e: any) {
    ctx.logger.error('Update view error', e, { id: req.params.id });
    res.status(500).json({ success: false, error: { code: 'internal_error', message: e.message } });
  }
});

this.server.delete('/api/v1/ui/views/:id', async (req, res) => {
  ctx.logger.debug('Delete view request', { id: req.params.id });
  try {
    const result = await p.deleteView(req.params.id);
    if (result.success) {
      ctx.logger.info('View deleted', { id: req.params.id });
      res.json(result);
    } else {
      ctx.logger.warn('View deletion failed', { id: req.params.id });
      res.status(404).json(result);
    }
  } catch (e: any) {
    ctx.logger.error('Delete view error', e, { id: req.params.id });
    res.status(500).json({ success: false, error: { code: 'internal_error', message: e.message } });
  }
});
```
The error response format is inconsistent across routes. Some routes return structured ViewResponse/BatchUpdateResponse objects with success/error fields, while error cases return plain objects like {error: {code, message}} or just {error: e.message}.
This inconsistency makes it harder for clients to handle errors uniformly. According to the schemas in batch.zod.ts and view-storage.zod.ts, all responses should follow the standardized response format with success, error, and data fields.
```ts
this.server.get('/api/v1/ui/views', async (req, res) => {
  ctx.logger.debug('List views request', { query: req.query });
  try {
    const request: any = {};
    if (req.query.object) request.object = req.query.object as string;
    if (req.query.type) request.type = req.query.type;
    if (req.query.visibility) request.visibility = req.query.visibility;
    if (req.query.createdBy) request.createdBy = req.query.createdBy as string;
    if (req.query.isDefault !== undefined) request.isDefault = req.query.isDefault === 'true';
    if (req.query.limit) request.limit = parseInt(req.query.limit as string);
    if (req.query.offset) request.offset = parseInt(req.query.offset as string);

    const result = await p.listViews(request);
    ctx.logger.debug('Views listed', { count: result.data?.length, total: result.pagination?.total });
    res.json(result);
  } catch (e: any) {
    ctx.logger.error('List views error', e);
    res.status(500).json({ success: false, error: { code: 'internal_error', message: e.message } });
  }
});
```
The listViews endpoint doesn't validate or sanitize query parameters before using them in filters. Query parameters from the URL are directly used to filter views without validation against the ListViewsRequestSchema.
While the risk is low since this is filtering in-memory data, it's inconsistent with the schema-based validation approach used elsewhere. Consider validating the request object against ListViewsRequestSchema before processing, especially if this will eventually query a database where injection attacks could be a concern.
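In the real code this would be a `ListViewsRequestSchema.parse(...)` call against the zod schema; as a self-contained illustration of the same idea, a manual sanitizer could look like the sketch below. The allowed `type` and `visibility` values are hypothetical, not the schema's actual enums:

```typescript
// Sketch: sanitize listViews query parameters before building the filter,
// rather than copying raw strings into the request object. The enum values
// and the limit cap of 200 are assumptions for illustration only.
interface ListViewsRequest {
  object?: string;
  type?: string;
  visibility?: string;
  createdBy?: string;
  isDefault?: boolean;
  limit?: number;
  offset?: number;
}

function parseListViewsQuery(q: Record<string, string | undefined>): ListViewsRequest {
  const req: ListViewsRequest = {};
  if (q.object) req.object = q.object;
  // Hypothetical enums; the real values would come from the zod schema.
  const types = ['grid', 'kanban', 'calendar'];
  if (q.type && types.includes(q.type)) req.type = q.type;
  const visibilities = ['private', 'shared', 'system'];
  if (q.visibility && visibilities.includes(q.visibility)) req.visibility = q.visibility;
  if (q.createdBy) req.createdBy = q.createdBy;
  if (q.isDefault !== undefined) req.isDefault = q.isDefault === 'true';
  // Reject NaN and clamp to sane bounds instead of trusting parseInt blindly.
  const limit = Number(q.limit);
  if (Number.isInteger(limit) && limit > 0) req.limit = Math.min(limit, 200);
  const offset = Number(q.offset);
  if (Number.isInteger(offset) && offset >= 0) req.offset = offset;
  return req;
}
```

Dropping invalid values (instead of throwing) keeps the endpoint lenient; a strict 400 on invalid parameters would also be defensible and closer to schema-based validation.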
```ts
const updated: SavedView = {
  ...existing,
  ...updates,
  id, // Preserve ID
  updatedBy: 'system', // Placeholder
  updatedAt: new Date().toISOString(),
};
```
The updateView method spreads all updates into the existing view without validation. This could allow:
- Overwriting immutable fields like createdBy or createdAt
- Setting invalid values that don't conform to SavedViewSchema
- Changing the object or type to invalid values
The implementation should either:
- Validate updates against UpdateViewRequestSchema before applying
- Explicitly list which fields can be updated
- Preserve system-managed fields (createdBy, createdAt, isSystem)
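The whitelist approach could be sketched as follows. The `SavedView` field list here is an assumption for illustration (only `id`, `createdBy`, `createdAt`, and `isSystem` appear in the surrounding review), as is the choice of which fields are updatable:

```typescript
// Sketch: apply only an explicit whitelist of updatable fields, so system-
// managed fields (createdBy, createdAt, isSystem) and the id can never be
// overwritten by a PATCH body. The exact SavedView shape is hypothetical.
interface SavedView {
  id: string;
  name: string;
  object: string;
  filters?: unknown;
  columns?: string[];
  createdBy: string;
  createdAt: string;
  updatedBy: string;
  updatedAt: string;
  isSystem?: boolean;
}

const UPDATABLE_FIELDS = ['name', 'filters', 'columns'] as const;

function applyViewUpdates(
  existing: SavedView,
  updates: Partial<SavedView>,
  actor: string
): SavedView {
  const next: SavedView = { ...existing };
  for (const field of UPDATABLE_FIELDS) {
    if (updates[field] !== undefined) {
      // Copy only whitelisted fields; everything else in `updates` is ignored.
      (next as any)[field] = updates[field];
    }
  }
  next.updatedBy = actor;
  next.updatedAt = new Date().toISOString();
  return next;
}
```

An allow-list inverts the current spread semantics: new fields added to `SavedView` stay immutable by default until deliberately added to `UPDATABLE_FIELDS`.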
```ts
// Try to parse JSON body first if content-type is JSON
if (c.req.header('content-type')?.includes('application/json')) {
  try {
    body = await c.req.json();
  } catch (e) {
    // If JSON parsing fails, try parseBody
    try {
      body = await c.req.parseBody();
    } catch (e2) {}
  }
} else {
  // For non-JSON content types, use parseBody
  try {
    body = await c.req.parseBody();
  } catch (e) {}
```
The body parsing logic is fragile and may still fail for certain request types. The code attempts JSON parsing first, then falls back to parseBody, but both attempts catch and suppress errors without logging.
This could make debugging difficult when requests fail to parse. Consider:
- Logging parse failures at debug level to help troubleshoot issues
- Verifying this works correctly for multipart/form-data and other content types
- Only calling parseBody for content types that actually need it (form-data, urlencoded)
Suggested change:

```ts
const contentType = c.req.header('content-type') || '';
// Try to parse JSON body first if content-type is JSON
if (contentType.includes('application/json')) {
  try {
    body = await c.req.json();
  } catch (e) {
    // Log JSON parse failures at debug level for troubleshooting
    console.debug?.(
      '[HonoHttpServer] Failed to parse JSON body',
      { contentType, error: e }
    );
  }
} else if (
  contentType.includes('multipart/form-data') ||
  contentType.includes('application/x-www-form-urlencoded')
) {
  // For form-data and urlencoded content types, use parseBody
  try {
    body = await c.req.parseBody();
  } catch (e) {
    // Log parseBody failures at debug level for troubleshooting
    console.debug?.(
      '[HonoHttpServer] Failed to parse form body',
      { contentType, error: e }
    );
  }
```
```ts
async deleteView(id: string): Promise<{ success: boolean }> {
  const exists = this.viewStorage.has(id);

  if (!exists) {
    return { success: false };
  }

  this.viewStorage.delete(id);
  return { success: true };
}
```
The deleteView method returns a plain object with only success field, which doesn't match the pattern established by other view methods. According to the ViewStorageApiContracts in view-storage.zod.ts, deleteView should return an object conforming to the contract output type.
For consistency with getView, createView, updateView, and listViews, consider returning an error object when the view is not found, rather than just returning success: false. This would allow clients to distinguish between different failure scenarios.
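A version aligned with the other view methods could return the same success/error envelope; the `resource_not_found` code matches the one the update-view route already checks for, while the exact contract output type remains defined by view-storage.zod.ts:

```typescript
// Sketch: deleteView returning the same success/error envelope as the other
// view methods, so a missing view is distinguishable from other failures.
interface ViewDeleteResponse {
  success: boolean;
  error?: { code: string; message: string };
}

function deleteView(storage: Map<string, unknown>, id: string): ViewDeleteResponse {
  if (!storage.has(id)) {
    return {
      success: false,
      error: { code: 'resource_not_found', message: `View not found: ${id}` },
    };
  }
  storage.delete(id);
  return { success: true };
}
```

With a structured error, the HTTP layer can also reuse a single code-to-status mapping instead of hardcoding 404 in the route.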
```ts
return {
  data: item,
  etag,
  lastModified: new Date().toISOString(),
```
The lastModified timestamp always returns the current time (new Date().toISOString()) rather than the actual modification time of the metadata. This makes the Last-Modified header meaningless for caching purposes.
The lastModified should reflect when the metadata was actually last changed. Consider:
- Tracking modification timestamps when metadata is registered in SchemaRegistry
- Using a static timestamp if metadata is immutable after registration
- Omitting the lastModified field if not tracking actual modification times
Suggested change (remove the line):

```ts
lastModified: new Date().toISOString(),
```
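If the timestamp is kept rather than omitted, the first option (tracking it at registration time) could be sketched as below. The registry shape is an assumption, not the project's actual SchemaRegistry:

```typescript
// Sketch: record a real modification timestamp per metadata item when it is
// registered, instead of stamping every response with the current time.
// The class and its methods are hypothetical stand-ins for SchemaRegistry.
class TimestampedRegistry<T> {
  private items = new Map<string, { value: T; lastModified: string }>();

  register(key: string, value: T): void {
    // Capturing the time here makes Last-Modified reflect the actual
    // registration/update moment, so If-Modified-Since checks are meaningful.
    this.items.set(key, { value, lastModified: new Date().toISOString() });
  }

  get(key: string): { value: T; lastModified: string } | undefined {
    return this.items.get(key);
  }
}
```

If metadata is immutable after startup, a single timestamp captured at registration is sufficient and stable across requests, which is exactly what validators like If-Modified-Since need.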
The HTTP route handlers for batch operations, metadata caching, and view storage lack test coverage. Integration or unit tests should verify:
- Request parsing and validation
- Status code mapping (200, 201, 304, 400, 404, 500)
- Header handling (ETag, Cache-Control, If-None-Match)
- Error response formatting
- Edge cases like missing request bodies or invalid parameters
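The status-code mapping in particular can be covered without a full HTTP harness by extracting the handler logic behind a stubbed protocol. The sketch below is framework-free and only illustrates the shape such tests could take; `handleGetView` and the stub are hypothetical, not the project's code:

```typescript
// Sketch: a framework-free unit test target for status-code mapping on the
// GET /api/v1/ui/views/:id handler, with the protocol stubbed out.
interface ViewResult { success: boolean }
interface Protocol { getView(id: string): Promise<ViewResult> }

interface FakeRes { status: number; body: unknown }

async function handleGetView(p: Protocol, id: string): Promise<FakeRes> {
  try {
    const result = await p.getView(id);
    // Mirrors the route above: 200 on success, 404 on a failed lookup.
    return { status: result.success ? 200 : 404, body: result };
  } catch (e: any) {
    return {
      status: 500,
      body: { success: false, error: { code: 'internal_error', message: e.message } },
    };
  }
}

// Stub protocol covering found, not-found, and throwing cases.
const stub: Protocol = {
  async getView(id) {
    if (id === 'boom') throw new Error('db down');
    return { success: id === 'v1' };
  },
};
```

The same pattern extends to the batch routes (asserting 400 on thrown validation errors) and the metadata route (asserting 304 when the stub reports `notModified`).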
```ts
/**
 * Simple hash function for ETag generation (browser-compatible)
 * Uses a basic hash algorithm instead of crypto.createHash
 */
function simpleHash(str: string): string {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    const char = str.charCodeAt(i);
    hash = ((hash << 5) - hash) + char;
    hash = hash & hash; // Convert to 32bit integer
  }
  return Math.abs(hash).toString(16);
}
```
The simpleHash function lacks documentation about its limitations and why it was chosen. Given that this is a critical function for cache validation, it should include:
- A warning about collision probability
- Why crypto.createHash was avoided (browser compatibility)
- Recommendation to replace with stronger hash in production
- Link or reference to the algorithm being used
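A documented variant along those lines might read as follows. It is behaviorally identical to the original; the collision note is a general property of 32-bit hashes (birthday bound), not a measured figure for this codebase:

```typescript
/**
 * Sketch of a documented version of the ETag hash.
 *
 * Algorithm: the classic 31-based string hash (hash = hash * 31 + charCode,
 * the same recurrence as Java's String.hashCode), chosen here because it
 * needs no Node 'crypto' import and therefore also runs in browsers.
 *
 * WARNING: this is a 32-bit, non-cryptographic hash. By the birthday bound
 * on 2^32 values, collisions become likely around tens of thousands of
 * distinct inputs, so two different metadata payloads can share an ETag and
 * cause a stale 304. Prefer a stronger digest (e.g. SHA-256 via the Web
 * Crypto API, available in both browsers and Node) where correctness matters.
 */
function simpleHash(str: string): string {
  let hash = 0;
  for (let i = 0; i < str.length; i++) {
    // (hash << 5) - hash === hash * 31, using only 32-bit integer ops.
    hash = ((hash << 5) - hash) + str.charCodeAt(i);
    hash = hash & hash; // Force back into signed 32-bit range.
  }
  return Math.abs(hash).toString(16);
}
```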
Client aligned to PR #380 API spec but server implementation was missing. This adds the server-side protocol methods and HTTP routes.

Protocol Extensions

Interface (`packages/spec/src/api/protocol.ts`):
- Batch operations: `batchData`, `createManyData`, `updateManyData`, `deleteManyData`
- Cached metadata: `getMetaItemCached` with ETag support
- View storage: `createView`, `getView`, `listViews`, `updateView`, `deleteView`

Implementation (`packages/objectql/src/protocol.ts`)

HTTP Routes

Added 10 endpoints in `packages/plugins/plugin-hono-server/src/hono-plugin.ts`.

Critical Fixes

- Hono adapter body parsing (`adapter.ts`): Request body was parsed twice, leaving an empty object. Now parses JSON first and falls back to `parseBody` only for non-JSON content types.
- Browser compatibility (`protocol.ts`): Replaced `crypto.createHash()` with a simple string hash function to work in browser environments.

Example Usage

All responses conform to the schemas in `batch.zod.ts`, `cache.zod.ts`, and `view-storage.zod.ts`. Errors use `StandardErrorCode` from `errors.zod.ts`.