test: end-to-end upload integration test for presigned URL flow#974

Merged
pyramation merged 5 commits into main from devin/1775801555-upload-integration-test
Apr 11, 2026
Conversation

@pyramation
Contributor

@pyramation pyramation commented Apr 10, 2026

Summary

Adds a new integration test in graphql/server-test that exercises the full presigned URL upload pipeline against real MinIO. This is a prerequisite for the more comprehensive RLS-aware tests in constructive-db.

Test file: __tests__/upload.integration.test.ts

  • Public file: requestUploadUrl → PUT to presigned URL → confirmUpload (status → ready)
  • Private file: same flow with bucketKey: 'private'
  • Deduplication: re-requesting the same content hash returns deduplicated: true with no upload URL
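The deduplication behavior above can be sketched as a content-hash lookup. This is a minimal illustration only, assuming an in-memory store; the `requestUpload` helper and its shapes are hypothetical stand-ins, not the plugin's actual implementation:

```typescript
import { createHash } from "node:crypto";

// Hypothetical in-memory stand-in for the files table, keyed by content hash.
const filesByHash = new Map<string, { fileId: string }>();

interface UploadTicket {
  deduplicated: boolean;
  fileId: string;
  uploadUrl?: string; // omitted when the content already exists
}

// Sketch of the dedup decision: if this content hash is already known,
// return the existing file with deduplicated: true and no upload URL.
function requestUpload(content: string): UploadTicket {
  const hash = createHash("sha256").update(content).digest("hex");
  const existing = filesByHash.get(hash);
  if (existing) {
    return { deduplicated: true, fileId: existing.fileId };
  }
  const fileId = `file-${filesByHash.size + 1}`;
  filesByHash.set(hash, { fileId });
  return {
    deduplicated: false,
    fileId,
    uploadUrl: `https://example.invalid/upload/${hash}`, // placeholder URL
  };
}
```

The real flow performs this check server-side inside `requestUploadUrl`, which is why the test asserts that the second request carries no upload URL.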

Seed fixtures: __fixtures__/seed/simple-seed-storage/

  • setup.sql — storage-specific additions only (~70 lines): jwt_private schema + storage_module table definition. Loaded after simple-seed-services/setup.sql, which provides all metaschema/services/role infrastructure via pgpm.
  • schema.sql — creates the simple-storage-public schema with buckets, files, upload_requests tables (with @storageBuckets / @storageFiles smart tags)
  • test-data.sql — seeds a database, schema, table entries, services API, storage_module config row, and two buckets (public + private)

The test composes seed files from both simple-seed-services and simple-seed-storage via seed.sqlfile([...]), so metaschema infrastructure is reused rather than duplicated.

The test uses the services API mode (enableServicesApi: true) with X-Database-Id header so the middleware sets jwt.claims.database_id, which the presigned URL plugin reads via jwt_private.current_database_id() to resolve per-database storage config.
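The header-to-claim plumbing described above can be sketched as a pure mapping step. This is a hedged illustration of the idea, not the actual middleware; the helper name and shapes are assumptions:

```typescript
// Hypothetical sketch: map the X-Database-Id header onto pgSettings so that
// Postgres sees it as current_setting('jwt.claims.database_id') inside the
// request's transaction.
function headersToPgSettings(
  headers: Record<string, string | undefined>
): Record<string, string> {
  const settings: Record<string, string> = {};
  const databaseId = headers["x-database-id"];
  if (databaseId) {
    settings["jwt.claims.database_id"] = databaseId;
  }
  return settings;
}

// On the SQL side, jwt_private.current_database_id() would then be roughly:
//   SELECT nullif(current_setting('jwt.claims.database_id', true), '')::uuid;
// (illustrative, assuming the setting is applied per-transaction)
```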

Relies on lazy S3 bucket provisioning (PR #969) and per-database bucket naming (PR #968).

Production bug fixes discovered by the test

1. pgClient.query format mismatch (plugin.ts, storage-module-cache.ts, download-url-field.ts)

The @dataplan/pg adapter wraps pgClient with a custom query() method that destructures { text, values } from the argument. The presigned URL plugin was passing raw SQL strings (pgClient.query('BEGIN'), pgClient.query(sql, params)) — when the adapter destructured a string, text became undefined, causing "A query must have either text or a name" errors.

Fix: Converted all pgClient.query(string, params) calls to pgClient.query({ text, values }) format and replaced manual BEGIN/COMMIT/ROLLBACK with pgClient.withTransaction().
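The failure mode can be reproduced with a minimal stand-in for the adapter's `query()` wrapper. The code below is illustrative (names and shapes are assumptions, not the actual @dataplan/pg internals), but it shows why a raw string argument produces the "A query must have either text or a name" error:

```typescript
// Stand-in for an adapter that destructures { text, values } from its
// argument. Destructuring a raw SQL string yields text === undefined,
// since strings have no `text` property.
type QueryInput = string | { text: string; values?: unknown[] };

function adapterQuery(input: QueryInput): { text: string; values?: unknown[] } {
  const { text, values } = input as { text: string; values?: unknown[] };
  if (!text) {
    // Mirrors the error the test surfaced when raw strings were passed.
    throw new Error("A query must have either text or a name");
  }
  return { text, values };
}

// Broken (pre-fix) call shape:   adapterQuery("BEGIN")            → throws
// Fixed call shape:              adapterQuery({ text: "BEGIN" })  → works
```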

2. Bucket provisioner graceful degradation (provisioner.ts)

The BucketProvisioner.provision() flow calls PutPublicAccessBlock, PutBucketCors, and DeleteBucketPolicy — none of which are supported by older MinIO (edge-cicd). The provisioner threw ProvisionerError and the entire upload flow failed.

Fix: setPublicAccessBlock, setCors, and deleteBucketPolicy now catch XML parse / "not implemented" errors and skip gracefully. The bucket is still created and usable; these operations are best-effort for S3-compatible backends.
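The skip-gracefully pattern can be sketched as a predicate plus a best-effort wrapper. The substrings below mirror the ones quoted in the review checklist; the helper names are illustrative, and (as the checklist notes) string matching like this can over-match:

```typescript
// Hedged sketch: classify errors from older MinIO (XML parse failures,
// "not implemented" responses) as non-fatal for best-effort operations.
function isUnsupportedS3Operation(err: unknown): boolean {
  const message = err instanceof Error ? err.message : String(err);
  return (
    message.includes("not well-formed") ||
    message.includes("CORSConfiguration") ||
    message.toLowerCase().includes("not implemented")
  );
}

// Wrap an optional provisioning step: swallow only "backend doesn't
// support this" errors, rethrow anything else.
async function bestEffort(op: () => Promise<void>): Promise<void> {
  try {
    await op();
  } catch (err) {
    if (isUnsupportedS3Operation(err)) {
      return; // bucket is still created and usable
    }
    throw err; // real failure: propagate to the caller
  }
}
```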

Review & Testing Checklist for Human

  • pgClient.withTransaction availability: The fix replaces manual BEGIN/COMMIT/ROLLBACK with pgClient.withTransaction(callback). Verify this method exists on the @dataplan/pg-wrapped client in all execution paths (requestUploadUrl and confirmUpload). If withTransaction is not available in some edge case, the mutation will throw a runtime error.
  • Graceful degradation error matching is string-based: The provisioner catches errors by matching err.message?.includes('not well-formed'), err.message?.includes('CORSConfiguration'), etc. If a legitimate AWS S3 error happens to contain these substrings, it would be silently swallowed. Consider whether this is acceptable for production or if the check should be tighter (e.g. also checking err.$metadata?.httpStatusCode).
  • SQL fixture ↔ plugin query alignment: The STORAGE_MODULE_QUERY in storage-module-cache.ts joins storage_module → metaschema_public.table → metaschema_public.schema. Verify the seed data UUIDs (buckets_table_id, files_table_id, upload_requests_table_id → table rows → schema row) form a valid join path. A mismatch here means the plugin gets null config and all mutations fail.
  • Missing downloadUrl verification: The test confirms uploads succeed but does not query downloadUrl on the file afterward. The public-URL-prefix path and presigned-GET path are untested here.

Recommended manual test

  1. Check out this branch and run pnpm test -- --testPathPattern=upload.integration in graphql/server-test with MinIO running locally
  2. Verify that requestUploadUrl → PUT → confirmUpload works for both public and private buckets
  3. Optionally, deploy to a staging environment with real AWS S3 to confirm the withTransaction and graceful degradation paths work against a non-MinIO backend

Notes

  • No RLS is exercised — all tables grant full CRUD to anonymous. RLS-aware upload tests are planned for constructive-db.
  • Test ordering within each describe block is sequential and state-sharing (let uploadUrl, let fileId). This is consistent with existing tests in this package.
  • CI is green (47/47 checks pass).

Link to Devin session: https://app.devin.ai/sessions/e47513cf8b974ae6985c42c0a657e4d7
Requested by: @pyramation


Adds a new integration test that exercises the full upload pipeline
for both public and private files using real MinIO:

- requestUploadUrl → PUT to presigned URL → confirmUpload
- Tests public bucket (is_public=true) and private bucket (is_public=false)
- Tests content-hash deduplication
- Uses lazy S3 bucket provisioning (from PR #969)
- Uses per-database bucket naming (from PR #968)

Includes seed fixtures (simple-seed-storage) that create:
- jwt_private schema with current_database_id() function
- metaschema tables (database, schema, table, field)
- services tables (apis, domains, api_schemas)
- storage_module config row
- storage tables (buckets, files, upload_requests)
- Two buckets: public and private
@devin-ai-integration devin-ai-integration bot left a comment


✅ Devin Review: No Issues Found

Devin Review analyzed this PR and found no potential bugs to report.


…signed URL plugin

The @dataplan/pg adapter wraps pgClient with a custom query method that
expects { text, values } objects, not raw SQL strings. The plugin was
calling pgClient.query('BEGIN') etc. with raw strings, causing 'text'
to be undefined when destructured by the adapter.

Changes:
- Convert all pgClient.query(string, params) calls to
  pgClient.query({ text: string, values: params }) format
- Replace manual BEGIN/COMMIT/ROLLBACK with pgClient.withTransaction()
  since the adapter already manages transactions when pgSettings are present
- Update storage-module-cache.ts and download-url-field.ts similarly
@pyramation pyramation merged commit 5a77963 into main Apr 11, 2026
49 checks passed
@pyramation pyramation deleted the devin/1775801555-upload-integration-test branch April 11, 2026 00:22