Support attachments in room requests updates + multiregion s3 for assets bucket #367
Conversation
Walkthrough

Adds multi‑region S3 assets (buckets, lifecycle, tiering, CORS) and an IAM policy; exposes policy ARN. Adds presigned PUT/GET helpers and API endpoints to create/upload/download attachments. UI supports client‑side file upload/download with validation. Terraform Lambda modules accept and attach additional IAM policies; prod adds an SQS event source mapping.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant UI
    participant API
    participant S3
    participant DB as DynamoDB
    Note over User,DB: Create status update with attachment
    User->>UI: Select file + submit status (attachmentInfo)
    UI->>API: POST /:semesterId/:requestId/status (attachmentInfo)
    API->>API: compute attachmentS3key
    API->>S3: createPresignedPut(bucket, key) ▶ getSignedUrl
    S3-->>API: presigned PUT URL
    API->>DB: save status + attachmentS3key
    API-->>UI: 201 { uploadUrl }
    UI->>S3: PUT file to uploadUrl (uploadToS3PresignedUrl)
    S3-->>UI: 200/OK
    UI->>UI: refresh view
    Note over User,DB: Download attachment
    User->>UI: request download
    UI->>API: GET /.../attachmentDownloadUrl/...
    API->>DB: fetch attachmentS3key
    API->>S3: createPresignedGet(bucket, key)
    S3-->>API: presigned GET URL
    API-->>UI: { downloadUrl }
    UI->>S3: GET downloadUrl (downloadFromS3PresignedUrl)
    S3-->>UI: file blob
    UI->>User: trigger browser download
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~70 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning), ✅ Passed checks (2 passed)
💰 Infracost report: Monthly estimate increased by $5 📈
*Usage costs were estimated using infracost-usage.yml; see docs for other options.
Actionable comments posted: 2
🧹 Nitpick comments (5)
terraform/modules/assets/variables.tf (1)
1-14: Add validation to region variables. Consider adding validation blocks to ensure the region variables contain valid AWS region identifiers, preventing misconfiguration at apply time.
Example validation:
-variable "PrimaryRegion" { +variable "primary_region" { type = string default = "us-east-2" + description = "Primary AWS region for S3 bucket deployment" + validation { + condition = can(regex("^[a-z]{2}-[a-z]+-\\d{1}$", var.primary_region)) + error_message = "Region must be a valid AWS region format (e.g., us-east-2)." + } }terraform/modules/assets/main.tf (4)
2-2: Remove unused data source. Line 2 fetches `data.aws_region.current` but it is never referenced. Remove it unless needed for future extensibility.

```diff
 data "aws_caller_identity" "current" {}
-data "aws_region" "current" {}
```
8-13: Document module version pinning rationale. The module is pinned to a specific commit hash (good for reproducibility and safety). Add a comment explaining why this specific version is pinned or document the module versioning strategy for future updates.

```diff
 module "buckets" {
   source = "git::https://github.com/acm-uiuc/terraform-modules.git//multiregion-s3?ref=99de4c350d1e35931f94499e0c06cbf29d0d5b8a"
+  # Pinned to specific commit for reproducibility. Update via terraform init -upgrade
   Region1      = var.PrimaryRegion
   Region2      = var.SecondaryRegion
   BucketPrefix = local.asset_bucket_prefix
 }
```
15-45: Document lifecycle policy intent and consider configurability. The 3-day TTL for noncurrent versions and incomplete multipart uploads is relatively aggressive. Add comments explaining the intent and consider whether these values should be configurable variables for different asset retention requirements.

```diff
 resource "aws_s3_bucket_lifecycle_configuration" "expire_noncurrent" {
   for_each = module.buckets.buckets_info
   bucket   = each.value.id

+  # Lifecycle policies to manage object versions and cleanup incomplete uploads
   rule {
     id     = "expire-noncurrent-versions"
     status = "Enabled"

+    # Remove non-current versions after 3 days to prevent version accumulation
     noncurrent_version_expiration {
       noncurrent_days = 3
     }
   }
```

Alternatively, extract TTL values as variables (`noncurrent_version_days`, `incomplete_multipart_days`) for flexibility.
15-17: Document module output structure. The `for_each = module.buckets.buckets_info` loop assumes a specific structure from the external module. Add a comment clarifying what `buckets_info` contains and whether `each.value.id` is a bucket name or ARN, improving maintainability.

```diff
 resource "aws_s3_bucket_lifecycle_configuration" "expire_noncurrent" {
+  # Apply lifecycle rules to each bucket created by the multiregion-s3 module
+  # module.buckets.buckets_info is a map of { id = bucket_name, ... }
   for_each = module.buckets.buckets_info
   bucket   = each.value.id
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Cache: Disabled due to data retention organization setting
Knowledge base: Disabled due to data retention organization setting
📒 Files selected for processing (2)
- terraform/modules/assets/main.tf (1 hunks)
- terraform/modules/assets/variables.tf (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Run Unit Tests
- GitHub Check: Build Application
Actionable comments posted: 2
📒 Files selected for processing (3)
- src/api/routes/roomRequests.ts (2 hunks)
- src/common/types/roomRequest.ts (1 hunks)
- terraform/modules/assets/main.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- terraform/modules/assets/main.tf
🔇 Additional comments (2)
src/api/routes/roomRequests.ts (1)
58-69: Response schema defined but not fully implemented. The 201 response schema correctly defines an optional `uploadUrl`, but the actual response at line 191 doesn't populate this field. This creates an inconsistency between the OpenAPI schema and the runtime behavior. Ensure the implementation at line 191 is updated to include the `uploadUrl` when `attachmentFilename` is present, or clarify if this is intentional for a future implementation.

src/common/types/roomRequest.ts (1)
135-135: LGTM! The `attachmentFilename` field is properly defined as optional with a reasonable 100-character limit. This aligns with the attachment upload feature being added.
Actionable comments posted: 6
♻️ Duplicate comments (2)
terraform/modules/assets/main.tf (1)
5-5: Update variable references to snake_case. Lines 5, 10–12 reference `var.ProjectId`, `var.PrimaryRegion`, and `var.SecondaryRegion` using PascalCase. Update these to use snake_case (`var.project_id`, `var.primary_region`, `var.secondary_region`) to match the variables.tf convention.

Apply this diff:

```diff
 locals {
-  asset_bucket_prefix = "${data.aws_caller_identity.current.account_id}-${var.ProjectId}-assets"
+  asset_bucket_prefix = "${data.aws_caller_identity.current.account_id}-${var.project_id}-assets"
 }

 module "buckets" {
   source = "git::https://github.com/acm-uiuc/terraform-modules.git//multiregion-s3?ref=976e9eb8f8a29746b02d9e85d78d2952e3548bf9"
-  Region1      = var.PrimaryRegion
-  Region2      = var.SecondaryRegion
+  Region1      = var.primary_region
+  Region2      = var.secondary_region
   BucketPrefix = local.asset_bucket_prefix
 }
```

Also applies to: 10-12
src/common/types/roomRequest.ts (1)
142-145: Revert to `z.nativeEnum(RoomRequestStatus)`. `z.enum(RoomRequestStatus)` still passes the enum object to `z.enum`, which throws at runtime because it expects a tuple of literals. This was already flagged earlier and remains a blocking bug—please switch back to `z.nativeEnum(RoomRequestStatus)` (or spell out the literal array). Based on learnings
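For illustration, a minimal sketch of the distinction being drawn, assuming Zod 3 semantics where `z.enum` only accepts a tuple of string literals (the enum members shown are illustrative; note that Zod 4's `z.enum` does accept native enums, so the severity depends on the installed version):

```typescript
import { z } from "zod";

enum RoomRequestStatus {
  Created = "created",
  Approved = "approved",
}

// Zod 3: z.enum accepts only a tuple of string literals.
const fromLiterals = z.enum(["created", "approved"]);

// For a TypeScript enum object, use z.nativeEnum instead.
const fromEnum = z.nativeEnum(RoomRequestStatus);

fromEnum.parse("approved"); // passes
fromLiterals.parse("denied"); // throws ZodError
```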
⛔ Files ignored due to path filters (1)
`yarn.lock` is excluded by `!**/yarn.lock`, `!**/*.lock`
📒 Files selected for processing (8)
- src/api/functions/s3.ts (1 hunks)
- src/api/package.json (1 hunks)
- src/api/routes/roomRequests.ts (6 hunks)
- src/api/types.d.ts (2 hunks)
- src/common/config.ts (3 hunks)
- src/common/constants.ts (1 hunks)
- src/common/types/roomRequest.ts (2 hunks)
- terraform/modules/assets/main.tf (1 hunks)
🧰 Additional context used
🪛 ESLint
src/common/config.ts
[error] 147-147: Insert ,
(prettier/prettier)
[error] 180-180: Insert ,
(prettier/prettier)
src/api/functions/s3.ts
[error] 1-1: Resolve error: EACCES: permission denied, open '/MbKxcXRqjK'
at Object.writeFileSync (node:fs:2409:20)
at l (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:13685)
at createFilesMatcher (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:14437)
at Object.resolve (/home/jailuser/git/node_modules/eslint-import-resolver-typescript/lib/index.cjs:298:107)
at withResolver (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:180:23)
at fullResolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:201:22)
at relative (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:217:10)
at resolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:233:12)
at checkFileExtension (/home/jailuser/git/node_modules/eslint-plugin-import/lib/rules/extensions.js:205:53)
at checkSourceValue (/home/jailuser/git/node_modules/eslint-module-utils/moduleVisitor.js:32:5)
(import/extensions)
[error] 3-3: Unexpected use of file extension "js" for "common/errors/index.js"
(import/extensions)
src/api/routes/roomRequests.ts
[error] 36-36: Unexpected use of file extension "js" for "api/functions/s3.js"
(import/extensions)
src/common/types/roomRequest.ts
[error] 4-4: Replace 'application/pdf',·'image/jpeg',·'image/heic',·'image/pdf'] with ⏎··"application/pdf",⏎··"image/jpeg",⏎··"image/heic",⏎··"image/pdf",⏎];
(prettier/prettier)
[error] 139-139: Insert ,
(prettier/prettier)
[error] 140-140: Insert ;
(prettier/prettier)
🔇 Additional comments (1)
terraform/modules/assets/main.tf (1)
48-58: Verify the 1-day intelligent tiering setting is intentional. The intelligent tiering configuration transitions objects to ARCHIVE_ACCESS after just 1 day, which is very aggressive and may cause unexpected costs or performance issues if the bucket stores recently-created assets that are accessed frequently. Verify that this aligns with your asset lifecycle expectations and storage access patterns.
```typescript
      GithubIdpSyncEnabled: false,
      AssetsBucketId: `427040638965-infra-core-api-assets-${genericConfig.AwsRegion}`
    },
```
Fix Prettier-required trailing comma.
Prettier fails on this object because the new AssetsBucketId entry isn’t followed by a comma, so the lint step will block the build. Please add the trailing comma.
```diff
-  AssetsBucketId: `427040638965-infra-core-api-assets-${genericConfig.AwsRegion}`
+  AssetsBucketId: `427040638965-infra-core-api-assets-${genericConfig.AwsRegion}`,
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
      GithubIdpSyncEnabled: false,
      AssetsBucketId: `427040638965-infra-core-api-assets-${genericConfig.AwsRegion}`,
    },
```
🧰 Tools
🪛 ESLint
[error] 147-147: Insert ,
(prettier/prettier)
🤖 Prompt for AI Agents
In src/common/config.ts around lines 146 to 148, the object literal ending with
the AssetsBucketId property is missing a trailing comma which breaks Prettier;
add a trailing comma after the AssetsBucketId line so the object entries are
properly comma-separated and the file passes Prettier/lint checks.
Actionable comments posted: 1
♻️ Duplicate comments (1)
terraform/modules/assets/main.tf (1)
10-12: Update variable references to snake_case. Lines 10–12 reference `var.PrimaryRegion`, `var.SecondaryRegion`, and `var.ProjectId` using PascalCase. Once the variables.tf file is updated to use snake_case (as indicated in prior review feedback), update these references accordingly to `var.primary_region`, `var.secondary_region`, and `var.project_id`.

Apply this diff:

```diff
 module "buckets" {
   source = "git::https://github.com/acm-uiuc/terraform-modules.git//multiregion-s3?ref=v2.0.0"
-  Region1      = var.PrimaryRegion
-  Region2      = var.SecondaryRegion
+  Region1      = var.primary_region
+  Region2      = var.secondary_region
   BucketPrefix = local.asset_bucket_prefix
 }
```
🧹 Nitpick comments (1)
terraform/modules/assets/main.tf (1)
9-9: Consider using a commit hash instead of a semantic version tag for module source. The module source uses a git tag reference (`ref=v2.0.0`) rather than a commit hash. For better auditability and to mitigate supply-chain risks, consider pinning to a specific commit hash instead.

```diff
-  source = "git::https://github.com/acm-uiuc/terraform-modules.git//multiregion-s3?ref=v2.0.0"
+  source = "git::https://github.com/acm-uiuc/terraform-modules.git//multiregion-s3?ref=<commit-hash>"
```

Verify the commit hash associated with tag `v2.0.0` in the `terraform-modules` repository and update accordingly.
📒 Files selected for processing (4)
- terraform/envs/prod/main.tf (1 hunks)
- terraform/envs/qa/main.tf (1 hunks)
- terraform/modules/assets/main.tf (1 hunks)
- terraform/modules/assets/variables.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- terraform/envs/qa/main.tf
- terraform/modules/assets/variables.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/assets/main.tf
[medium] 8-13: Ensure Terraform module sources use a commit hash
(CKV_TF_1)
🔇 Additional comments (1)
terraform/envs/prod/main.tf (1)
130-134: Module instantiation looks good. The `module "assets"` is properly configured with correct variable references and source path. The CORS origins are appropriately restricted to the CorePublicDomain.
Actionable comments posted: 0
♻️ Duplicate comments (2)
src/common/types/roomRequest.ts (2)
4-5: Add const assertion to `validMimeTypes` for type safety. `z.enum()` on line 138 requires a readonly tuple, but `validMimeTypes` is currently typed as `string[]`. Without the `as const` assertion, TypeScript will reject this usage.

Apply this diff:

```diff
-export const validMimeTypes = ['application/pdf', 'image/jpeg', 'image/heic', 'image/png']
+export const validMimeTypes = [
+  "application/pdf",
+  "image/jpeg",
+  "image/heic",
+  "image/png",
+] as const;
```
142-142: Critical: Use `z.nativeEnum()` for TypeScript enum validation. `z.enum()` expects a tuple of string literals, not a TypeScript enum object. Since `RoomRequestStatus` is defined as a TypeScript enum (lines 126-133), this will cause runtime validation failures.

Apply this diff:

```diff
-  status: z.enum(RoomRequestStatus),
+  status: z.nativeEnum(RoomRequestStatus),
```
🧹 Nitpick comments (1)
src/ui/pages/roomRequest/ViewRoomRequest.page.tsx (1)
66-92: Remove type assertion once `validMimeTypes` has const assertion. Line 81 uses `as any` to work around the type mismatch. Once the const assertion is added to `validMimeTypes` in src/common/types/roomRequest.ts (as flagged earlier), this cast can be removed for better type safety.

Apply this diff after fixing `validMimeTypes`:

```diff
-    if (!validMimeTypes.includes(file.type as any)) {
+    if (!validMimeTypes.includes(file.type)) {
```
📒 Files selected for processing (7)
- src/api/functions/s3.ts (1 hunks)
- src/api/routes/roomRequests.ts (10 hunks)
- src/common/types/roomRequest.ts (2 hunks)
- src/ui/App.tsx (1 hunks)
- src/ui/package.json (1 hunks)
- src/ui/pages/roomRequest/ViewRoomRequest.page.tsx (6 hunks)
- src/ui/util/s3.ts (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- src/ui/App.tsx
🧰 Additional context used
🪛 ESLint
src/api/routes/roomRequests.ts
[error] 16-16: Unexpected use of file extension "js" for "common/errors/index.js"
(import/extensions)
[error] 37-37: Unexpected use of file extension "js" for "api/functions/s3.js"
(import/extensions)
src/common/types/roomRequest.ts
[error] 4-4: Replace 'application/pdf',·'image/jpeg',·'image/heic',·'image/png'] with ⏎··"application/pdf",⏎··"image/jpeg",⏎··"image/heic",⏎··"image/png",⏎];
(prettier/prettier)
[error] 138-138: Insert ,
(prettier/prettier)
[error] 139-139: Insert ;
(prettier/prettier)
[error] 144-144: Insert ,
(prettier/prettier)
[error] 150-150: Insert ,
(prettier/prettier)
src/api/functions/s3.ts
[error] 1-1: Resolve error: EACCES: permission denied, open '/ITxzzCUcMT'
at Object.writeFileSync (node:fs:2409:20)
at l (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:13685)
at createFilesMatcher (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:14437)
at Object.resolve (/home/jailuser/git/node_modules/eslint-import-resolver-typescript/lib/index.cjs:298:107)
at withResolver (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:180:23)
at fullResolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:201:22)
at relative (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:217:10)
at resolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:233:12)
at checkFileExtension (/home/jailuser/git/node_modules/eslint-plugin-import/lib/rules/extensions.js:205:53)
at checkSourceValue (/home/jailuser/git/node_modules/eslint-module-utils/moduleVisitor.js:32:5)
(import/extensions)
[error] 7-7: Unexpected use of file extension "js" for "common/errors/index.js"
(import/extensions)
🔇 Additional comments (16)
src/ui/package.json (1)
26-26: LGTM! The `@mantine/dropzone` dependency version is consistent with other Mantine packages in the project and aligns with the attachment upload feature introduced in this PR.

src/ui/util/s3.ts (2)
8-28: LGTM! The implementation correctly uses a PUT request with the appropriate Content-Type header for S3 presigned URL uploads. Error handling is clear and informative.
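For reference, a minimal sketch of what such an upload helper typically looks like; the function name and error message are assumptions, not the project's exact code:

```typescript
// Sketch of a presigned-URL upload helper (names assumed).
export async function uploadToS3PresignedUrl(
  uploadUrl: string,
  file: File,
): Promise<void> {
  // The Content-Type header must match the type used when the URL was signed.
  const response = await fetch(uploadUrl, {
    method: "PUT",
    headers: { "Content-Type": file.type },
    body: file,
  });
  if (!response.ok) {
    throw new Error(`Upload failed: ${response.status} ${response.statusText}`);
  }
}
```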
36-69: LGTM! The download implementation correctly handles blob creation, temporary anchor generation, and resource cleanup. The approach is standard for triggering browser downloads.
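Likewise, a minimal sketch of the blob-and-anchor pattern the comment describes (names assumed):

```typescript
// Sketch of a presigned-URL download helper (names assumed).
export async function downloadFromS3PresignedUrl(
  downloadUrl: string,
  filename: string,
): Promise<void> {
  const response = await fetch(downloadUrl);
  if (!response.ok) {
    throw new Error(`Download failed: ${response.status}`);
  }
  const blob = await response.blob();
  // Create a temporary object URL and hidden anchor to trigger the download.
  const objectUrl = URL.createObjectURL(blob);
  const anchor = document.createElement("a");
  anchor.href = objectUrl;
  anchor.download = filename;
  anchor.style.display = "none";
  document.body.appendChild(anchor);
  anchor.click();
  // Clean up the temporary DOM node and object URL.
  document.body.removeChild(anchor);
  URL.revokeObjectURL(objectUrl);
}
```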
src/common/types/roomRequest.ts (1)
148-150: LGTM!The schema updates use Zod 4's new standalone validators (
z.iso.datetime()andz.email()) correctly, and the optionalattachmentFilenamefield is appropriate.src/api/routes/roomRequests.ts (5)
15-15: LGTM! The new imports for S3 functionality and error handling are appropriate for the attachment feature.
Also applies to: 18-18, 37-38
85-87: LGTM! The S3 key structure appropriately namespaces attachments by request ID, status, and update ID for uniqueness.
100-119: LGTM! The presigned PUT URL generation correctly initializes the S3 client and passes the required parameters. The implementation appropriately handles the optional attachment scenario.
520-540: LGTM! The projection correctly includes the `attachmentS3key` field, and the filename derivation logic appropriately extracts the last path segment, which aligns with the S3 key structure defined on line 86.
562-700: LGTM! The new attachment download endpoint correctly:
- Uses GetItemCommand to fetch the specific status update by composite key
- Validates the existence of the attachment before generating a presigned URL
- Follows established authorization patterns
- Handles errors appropriately with NotFoundError
src/api/functions/s3.ts (2)
9-45: LGTM! The presigned PUT URL generation correctly constructs the PutObjectCommand with appropriate parameters. The optional MD5 hash is properly documented as requiring base64 encoding, and the default 15-minute expiration is reasonable.
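For orientation, a rough sketch of how such a helper is typically built with the AWS SDK v3 presigner; the parameter names follow the review discussion and are assumptions, not the file's exact contents:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

// Sketch of a server-side presigned PUT helper (names assumed).
export async function createPresignedPut(params: {
  s3client: S3Client;
  bucketName: string;
  key: string;
  length: number;
  mimeType: string;
  md5hash?: string; // must be base64-encoded for the Content-MD5 header
  expiresIn?: number;
}): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: params.bucketName,
    Key: params.key,
    ContentLength: params.length,
    ContentType: params.mimeType,
    ContentMD5: params.md5hash,
  });
  // Default to the 15-minute (900-second) expiration noted above.
  return getSignedUrl(params.s3client, command, {
    expiresIn: params.expiresIn ?? 900,
  });
}
```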
47-74: LGTM! The presigned GET URL generation is correctly implemented with appropriate error handling and a consistent 15-minute default expiration.
src/ui/pages/roomRequest/ViewRoomRequest.page.tsx (5)
15-25: LGTM! All new imports are appropriate and utilized in the attachment upload/download functionality.
Also applies to: 38-39, 48-51
57-57: LGTM! The state management for the uploaded file uses the correct type from Mantine Dropzone.
98-119: LGTM! The attachment download handler correctly fetches the presigned URL from the API and uses the S3 helper with appropriate error handling.
159-221: LGTM! The status update submission correctly handles the optional attachment with proper error handling. The logic appropriately distinguishes between status update success and file upload failure, ensuring the UI is updated even when the upload fails.
314-405: LGTM! The attachment UI components are well-implemented with:
- Proper Dropzone configuration matching server-side validation
- Clear visual states (accept/reject/idle)
- User-friendly file preview with size display
- Conditional download button in timeline
Also applies to: 456-471
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/ui/util/s3.test.ts (1)
176-210: Consolidate repeated mock setup to reduce duplication. The mock setup in the fourth test (lines 188–204) largely duplicates the pattern from the first test (lines 113–132). Extract this to a shared helper function or use a more DRY approach to avoid maintenance burden.

```diff
+const createMockAnchor = () => ({
+  href: "",
+  download: "",
+  style: { display: "" },
+  click: vi.fn(),
+});
+
 describe("downloadFromS3PresignedUrl", () => {
   test("successfully downloads a file and triggers browser download", async () => {
     const mockBlob = new Blob(["file content"], { type: "application/pdf" });
     const mockUrl = "https://s3.amazonaws.com/bucket/key?signature=xyz";
     const mockFilename = "downloaded-file.pdf";

     // Mock fetch response
     vi.mocked(fetch).mockResolvedValueOnce({
       ok: true,
       status: 200,
       statusText: "OK",
       blob: vi.fn().mockResolvedValue(mockBlob),
     } as unknown as Response);

     // Mock DOM APIs
-    const mockAnchor = {
-      href: "",
-      download: "",
-      style: { display: "" },
-      click: vi.fn(),
-    };
+    const mockAnchor = createMockAnchor();
     // ... rest of test remains the same
```

Apply the same pattern to lines 188–193 in the "handles different file types correctly" test.
tests/unit/s3.test.ts (2)
92-118: Consolidate redundant error assertions. The test sets up the mock rejection twice and runs essentially the same test twice. Lines 95-105 verify `InternalServerError` is thrown, then lines 107-117 verify the error message. These can be combined into a single assertion.

Apply this diff to consolidate the test:

```diff
 test("throws InternalServerError when URL generation fails", async () => {
   const mockS3Client = new S3Client({ region: "us-east-1" });
   vi.mocked(getSignedUrl).mockRejectedValueOnce(new Error("AWS Error"));

-  await expect(
-    createPresignedPut({
-      s3client: mockS3Client,
-      bucketName: "test-bucket",
-      key: "test-key",
-      length: 1024,
-      mimeType: "application/pdf",
-    }),
-  ).rejects.toThrow(InternalServerError);
-
-  vi.mocked(getSignedUrl).mockRejectedValueOnce(new Error("AWS Error"));
-
-  await expect(
-    createPresignedPut({
-      s3client: mockS3Client,
-      bucketName: "test-bucket",
-      key: "test-key",
-      length: 1024,
-      mimeType: "application/pdf",
-    }),
-  ).rejects.toThrow("Could not create S3 upload presigned url.");
+  const promise = createPresignedPut({
+    s3client: mockS3Client,
+    bucketName: "test-bucket",
+    key: "test-key",
+    length: 1024,
+    mimeType: "application/pdf",
+  });
+
+  await expect(promise).rejects.toThrow(InternalServerError);
+  await expect(promise).rejects.toThrow(
+    "Could not create S3 upload presigned url.",
+  );
 });
```
163-185: Consolidate redundant error assertions. Similar to the `createPresignedPut` error test, this test has redundant mock setup and assertions. Lines 166-174 verify `InternalServerError` is thrown, then lines 176-184 verify the error message.

Apply this diff to consolidate the test:

```diff
 test("throws InternalServerError when URL generation fails", async () => {
   const mockS3Client = new S3Client({ region: "us-east-1" });
   vi.mocked(getSignedUrl).mockRejectedValueOnce(new Error("AWS Error"));

-  await expect(
-    createPresignedGet({
-      s3client: mockS3Client,
-      bucketName: "test-bucket",
-      key: "test-key",
-    }),
-  ).rejects.toThrow(InternalServerError);
-
-  vi.mocked(getSignedUrl).mockRejectedValueOnce(new Error("AWS Error"));
-
-  await expect(
-    createPresignedGet({
-      s3client: mockS3Client,
-      bucketName: "test-bucket",
-      key: "test-key",
-    }),
-  ).rejects.toThrow("Could not create S3 download presigned url.");
+  const promise = createPresignedGet({
+    s3client: mockS3Client,
+    bucketName: "test-bucket",
+    key: "test-key",
+  });
+
+  await expect(promise).rejects.toThrow(InternalServerError);
+  await expect(promise).rejects.toThrow(
+    "Could not create S3 download presigned url.",
+  );
 });
```
📒 Files selected for processing (3)
- src/ui/util/s3.test.ts (1 hunks)
- tests/unit/roomRequests.test.ts (2 hunks)
- tests/unit/s3.test.ts (1 hunks)
🧰 Additional context used
🪛 ESLint
tests/unit/roomRequests.test.ts
[error] 558-558: Unexpected use of file extension "js" for "../../src/api/functions/s3.js"
(import/extensions)
tests/unit/s3.test.ts
[error] 1-1: Resolve error: EACCES: permission denied, open '/unrZdyxUio'
at Object.writeFileSync (node:fs:2409:20)
at l (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:13685)
at createFilesMatcher (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:14437)
at Object.resolve (/home/jailuser/git/node_modules/eslint-import-resolver-typescript/lib/index.cjs:298:107)
at withResolver (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:180:23)
at fullResolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:201:22)
at relative (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:217:10)
at resolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:233:12)
at checkFileExtension (/home/jailuser/git/node_modules/eslint-plugin-import/lib/rules/extensions.js:205:53)
at checkSourceValue (/home/jailuser/git/node_modules/eslint-module-utils/moduleVisitor.js:32:5)
(import/extensions)
[error] 11-11: Unexpected use of file extension "js" for "../../src/api/functions/s3.js"
(import/extensions)
[error] 12-12: Unexpected use of file extension "js" for "../../src/common/errors/index.js"
(import/extensions)
src/ui/util/s3.test.ts
[error] 1-1: Resolve error: EACCES: permission denied, open '/yUWEIUxWaD'
at Object.writeFileSync (node:fs:2409:20)
at l (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:13685)
at createFilesMatcher (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:14437)
at Object.resolve (/home/jailuser/git/node_modules/eslint-import-resolver-typescript/lib/index.cjs:298:107)
at withResolver (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:180:23)
at fullResolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:201:22)
at relative (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:217:10)
at resolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:233:12)
at checkFileExtension (/home/jailuser/git/node_modules/eslint-plugin-import/lib/rules/extensions.js:205:53)
at checkSourceValue (/home/jailuser/git/node_modules/eslint-module-utils/moduleVisitor.js:32:5)
(import/extensions)
[error] 2-2: Missing file extension for "./s3"
(import/extensions)
🔇 Additional comments (12)
src/ui/util/s3.test.ts (2)
7-14: Good test lifecycle management. The beforeEach/afterEach pattern correctly manages mock state between tests, ensuring test isolation.
17-95: Comprehensive uploadToS3PresignedUrl test coverage. All four tests cover the critical paths: successful upload, non-ok responses, network errors, and correct headers. The mocks are appropriately minimal and focused.
tests/unit/roomRequests.test.ts (5)
20-24: LGTM! Proper module mocking setup. The S3 functions module is correctly mocked at the top level, allowing individual tests to configure mock behavior as needed.
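A minimal sketch of that top-level mocking pattern (the module path is taken from the lint output below; treat the factory shape as an assumption):

```typescript
import { vi } from "vitest";

// Mock the S3 helper module once at the top level; individual tests then
// configure behavior via vi.mocked(createPresignedPut).mockResolvedValue(...).
vi.mock("../../src/api/functions/s3.js", () => ({
  createPresignedPut: vi.fn(),
  createPresignedGet: vi.fn(),
}));
```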
552-591: LGTM! Comprehensive attachment upload test. The test properly:
- Imports and configures the mocked S3 module
- Verifies the presigned URL is returned when attachment info is provided
- Confirms `createPresignedPut` is called exactly once

The ESLint warning on line 558 about the `.js` extension is a false positive. When using ES modules in TypeScript, the `.js` extension is often required for the compiled output to resolve correctly.
593-612: LGTM! Proper negative test case. Correctly verifies that no `uploadUrl` is returned when attachment information is not provided.
614-639: LGTM! File size validation test. Properly validates that excessively large file sizes are rejected with a 400 status code.
641-666: LGTM! Security-focused content type validation. Properly validates that dangerous content types (e.g., executables) are rejected with a 400 status code.
tests/unit/s3.test.ts (5)
14-19: Excellent explanatory comment! The comment clearly explains the rationale for the mocking approach, which will help future maintainers understand why `vi.mock` is used instead of `aws-sdk-client-mock`.
26-48: LGTM! Thorough test coverage. The test properly verifies:
- The returned presigned URL
- Default expiration value (900 seconds)
- Correct command type (PutObjectCommand)
- Mock called exactly once
50-71: LGTM! Custom expiration parameter test. Properly validates that custom expiration values are passed through correctly.
73-90: LGTM! MD5 hash parameter test. Validates that the optional MD5 hash parameter is handled correctly.
121-161: LGTM! Consistent test coverage for GET operations. The tests for `createPresignedGet` mirror the structure and thoroughness of the PUT tests, ensuring comprehensive coverage of both success paths with default and custom expirations.
Actionable comments posted: 3
♻️ Duplicate comments (5)
terraform/modules/lambdas/main.tf (1)
322-326: Resolve variable type before these resources are applied. These `for_each` attachments are blocked by the `AdditionalIamPolicies` variable type issue in variables.tf. Once the variable is changed from `set(string)` to `map(string)`, these resource definitions will be correct and the pipeline errors on lines 323 and 341 will be resolved.

Verify that once the variable type is corrected to `map(string)`, environment configurations (prod/main.tf, qa/main.tf) pass `AdditionalIamPolicies` as a map, e.g. `AdditionalIamPolicies = { "bucket-access" = module.assets.access_policy_arn }`.

terraform/modules/assets/main.tf (3)
17-17: Remove invalid `region` argument from S3 lifecycle configuration resource. The `aws_s3_bucket_lifecycle_configuration` resource does not accept a `region` argument. This will cause Terraform validation to fail. The region is automatically inherited from the S3 bucket's provider configuration.

```diff
 resource "aws_s3_bucket_lifecycle_configuration" "expire_noncurrent" {
   for_each = module.buckets.buckets_info
-  region   = each.key
   bucket   = each.value.id
```
63-63: Remove invalid `region` argument from S3 CORS configuration resource. The `aws_s3_bucket_cors_configuration` resource does not accept a `region` argument and will fail Terraform validation. Remove this line to allow the resource to inherit region from the bucket provider.

```diff
 resource "aws_s3_bucket_cors_configuration" "ui_uploads" {
   for_each = module.buckets.buckets_info
   bucket   = each.value.id
-  region   = each.key

   cors_rule {
```
51-51: Remove invalid `region` argument from S3 intelligent tiering configuration resource. The `aws_s3_bucket_intelligent_tiering_configuration` resource does not accept a `region` argument and will fail Terraform validation. The region is inherited from the bucket provider configuration.

```diff
 resource "aws_s3_bucket_intelligent_tiering_configuration" "tiering" {
   for_each = module.buckets.buckets_info
   bucket   = each.value.id
-  region   = each.key

   name = "EntireBucketIntelligentTiering"
```
102-119: Critical: MD5 hash not extracted or converted for S3 integrity check. The code destructures only `fileSizeBytes` and `contentType` from `attachmentInfo` (line 103), but does not extract or pass the `md5hash` field to `createPresignedPut`. S3 requires the Content-MD5 header in base64 format to verify upload integrity. The schema likely provides `md5hash` as a 32-character hex digest, which must be converted to base64 before signing.

Apply this diff to extract and convert the MD5 hash:

```diff
 if (request.body.attachmentInfo) {
-  const { fileSizeBytes, contentType } = request.body.attachmentInfo;
+  const { md5hash, fileSizeBytes, contentType } =
+    request.body.attachmentInfo;
   request.log.info(
     request.body.attachmentInfo,
     `Creating presigned URL to store attachment`,
   );
   if (!fastify.s3Client) {
     fastify.s3Client = new S3Client({
       region: genericConfig.AwsRegion,
     });
   }
+  // Convert hex MD5 to base64 for S3 Content-MD5 header
+  const contentMd5 =
+    md5hash.length === 32
+      ? Buffer.from(md5hash, "hex").toString("base64")
+      : md5hash;
   uploadUrl = await createPresignedPut({
     s3client: fastify.s3Client,
     key: attachmentS3key!,
     bucketName: fastify.environmentConfig.AssetsBucketId,
     length: fileSizeBytes,
     mimeType: contentType,
+    md5hash: contentMd5,
   });
 }
```
🧹 Nitpick comments (4)
terraform/modules/assets/main.tf (1)
8-13: Verify module source is pinned correctly and consider a static commit hash. The module source uses a git tag reference (`ref=v2.0.0`). While tags are more readable than commit hashes, they are mutable. For production infrastructure and compliance, prefer pinning to a specific commit hash to ensure reproducibility and prevent unexpected changes if the tag is reassigned.

```diff
-  source = "git::https://github.com/acm-uiuc/terraform-modules.git//multiregion-s3?ref=v2.0.0"
+  source = "git::https://github.com/acm-uiuc/terraform-modules.git//multiregion-s3?ref=<commit-hash>"
```

Verify the specific commit hash for the `v2.0.0` tag and update accordingly. This aligns with the Checkov best-practice hint (CKV_TF_1).

tests/unit/s3.test.ts (2)
90-116: Consider consolidating redundant error assertions. The test mocks rejection twice and asserts the same error condition twice—once for the error type (line 103) and once for the error message (line 115). Since `InternalServerError` with that specific message is thrown in both cases, the second assertion is redundant.

Apply this diff to streamline the test:

```diff
 test("throws InternalServerError when URL generation fails", async () => {
   const mockS3Client = new S3Client({ region: "us-east-1" });
   vi.mocked(getSignedUrl).mockRejectedValueOnce(new Error("AWS Error"));

   await expect(
     createPresignedPut({
       s3client: mockS3Client,
       bucketName: "test-bucket",
       key: "test-key",
       length: 1024,
       mimeType: "application/pdf",
     }),
   ).rejects.toThrow(InternalServerError);

-  vi.mocked(getSignedUrl).mockRejectedValueOnce(new Error("AWS Error"));
-
-  await expect(
-    createPresignedPut({
-      s3client: mockS3Client,
-      bucketName: "test-bucket",
-      key: "test-key",
-      length: 1024,
-      mimeType: "application/pdf",
-    }),
-  ).rejects.toThrow("Could not create S3 upload presigned url.");
+  expect(getSignedUrl).toHaveBeenCalledTimes(1);
 });
```
161-183: Consider consolidating redundant error assertions. Similar to the `createPresignedPut` error test, this test duplicates the same error assertion twice.

Apply this diff:

```diff
 test("throws InternalServerError when URL generation fails", async () => {
   const mockS3Client = new S3Client({ region: "us-east-1" });
   vi.mocked(getSignedUrl).mockRejectedValueOnce(new Error("AWS Error"));

   await expect(
     createPresignedGet({
       s3client: mockS3Client,
       bucketName: "test-bucket",
       key: "test-key",
     }),
   ).rejects.toThrow(InternalServerError);

-  vi.mocked(getSignedUrl).mockRejectedValueOnce(new Error("AWS Error"));
-
-  await expect(
-    createPresignedGet({
-      s3client: mockS3Client,
-      bucketName: "test-bucket",
-      key: "test-key",
-    }),
-  ).rejects.toThrow("Could not create S3 download presigned url.");
+  expect(getSignedUrl).toHaveBeenCalledTimes(1);
 });
```

src/api/routes/roomRequests.ts (1)
606-643: Extract duplicated request validation logic. Lines 607-643 duplicate the authorization check from the existing `GET /:semesterId/:requestId` endpoint (lines 478-504). This pattern will likely be needed for other attachment operations too.

Consider extracting the validation into a shared helper function:

```typescript
async function validateRoomRequestAccess(
  fastify: FastifyInstance,
  username: string | undefined,
  userRoles: Set<string> | undefined,
  semesterId: string,
  requestId: string,
): Promise<void> {
  const command = userRoles?.has(AppRoles.BYPASS_OBJECT_LEVEL_AUTH)
    ? new QueryCommand({
        TableName: genericConfig.RoomRequestsTableName,
        IndexName: "RequestIdIndex",
        KeyConditionExpression: "requestId = :requestId",
        FilterExpression: "semesterId = :semesterId",
        ExpressionAttributeValues: {
          ":requestId": { S: requestId },
          ":semesterId": { S: semesterId },
        },
        Limit: 1,
      })
    : new QueryCommand({
        TableName: genericConfig.RoomRequestsTableName,
        KeyConditionExpression:
          "semesterId = :semesterId AND #userIdRequestId = :userRequestId",
        ExpressionAttributeValues: {
          ":userRequestId": { S: `${username}#${requestId}` },
          ":semesterId": { S: semesterId },
        },
        ExpressionAttributeNames: {
          "#userIdRequestId": "userId#requestId",
        },
        Limit: 1,
      });

  const resp = await fastify.dynamoClient.send(command);
  if (!resp.Items || resp.Count !== 1) {
    throw new DatabaseFetchError({
      message: "Recieved no response.",
    });
  }
}
```

Then reuse it in both endpoints.
📒 Files selected for processing (8)
- src/api/routes/roomRequests.ts (10 hunks)
- src/ui/pages/roomRequest/ViewRoomRequest.page.tsx (6 hunks)
- terraform/envs/prod/main.tf (3 hunks)
- terraform/envs/qa/main.tf (3 hunks)
- terraform/modules/assets/main.tf (1 hunks)
- terraform/modules/lambdas/main.tf (1 hunks)
- terraform/modules/lambdas/variables.tf (1 hunks)
- tests/unit/s3.test.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- terraform/envs/qa/main.tf
- terraform/envs/prod/main.tf
🧰 Additional context used
🪛 Checkov (3.2.334)
terraform/modules/assets/main.tf
[medium] 8-13: Ensure Terraform module sources use a commit hash
(CKV_TF_1)
🪛 ESLint
tests/unit/s3.test.ts
[error] 1-1: Resolve error: EACCES: permission denied, open '/cwdaGkeAaA'
at Object.writeFileSync (node:fs:2409:20)
at l (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:13685)
at createFilesMatcher (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:14437)
at Object.resolve (/home/jailuser/git/node_modules/eslint-import-resolver-typescript/lib/index.cjs:298:107)
at withResolver (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:180:23)
at fullResolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:201:22)
at relative (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:217:10)
at resolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:233:12)
at checkFileExtension (/home/jailuser/git/node_modules/eslint-plugin-import/lib/rules/extensions.js:205:53)
at checkSourceValue (/home/jailuser/git/node_modules/eslint-module-utils/moduleVisitor.js:32:5)
(import/extensions)
[error] 11-11: Unexpected use of file extension "js" for "../../src/api/functions/s3.js"
(import/extensions)
[error] 12-12: Unexpected use of file extension "js" for "../../src/common/errors/index.js"
(import/extensions)
src/api/routes/roomRequests.ts
[error] 16-16: Unexpected use of file extension "js" for "common/errors/index.js"
(import/extensions)
[error] 38-38: Unexpected use of file extension "js" for "api/functions/s3.js"
(import/extensions)
🪛 GitHub Actions: QA deploy - @devksingh4
terraform/modules/lambdas/main.tf
[error] 323-323: Terraform Invalid for_each argument: var.AdditionalIamPolicies is a set of strings with a value that cannot be determined until apply. This prevents identifying keys for instances of aws_iam_role_policy_attachment. Consider using a static map instead of a set.
[error] 341-341: Terraform Invalid for_each argument: var.AdditionalIamPolicies is a set of strings with a value that cannot be determined until apply. This prevents identifying keys for instances of aws_iam_role_policy_attachment. Consider using a static map instead of a set.
🔇 Additional comments (12)
terraform/modules/lambdas/main.tf (1)
340-344: Consistent with api_attach_addl; verify after variables.tf is fixed. The SQS Lambda policy attachment mirrors the API attachment and will work correctly once the variable type issue is resolved in variables.tf.
terraform/modules/assets/main.tf (2)
73-104: IAM policy and bucket access configuration look sound. The policy document correctly grants PutObject, GetObject, DeleteObject, HeadObject, and ListBucket permissions scoped to the multi-region bucket ARNs. The output exposes the policy ARN for consumption by Lambda role attachments via the `AdditionalIamPolicies` variable.
10-11: The code is correct as-is. The review comment is based on a false premise. The variables in `terraform/modules/assets/variables.tf` are defined with PascalCase naming (`PrimaryRegion`, `SecondaryRegion`), and the references in main.tf lines 10-11 correctly match these definitions. The suggested diff would introduce a breaking change, as `var.primary_region` and `var.secondary_region` do not exist in the codebase.

If a prior review suggested updating variable names to snake_case, that change would need to occur in `variables.tf` itself—but it has not been applied. The current code is internally consistent and requires no modifications.

Likely an incorrect or invalid review comment.
tests/unit/s3.test.ts (3)
1-26: LGTM: Clean test setup. The imports, mock configuration, and test suite structure are well-organized. The mock uses `vi.mock` appropriately for the standalone `getSignedUrl` function, and `beforeEach` ensures test isolation.
29-88: LGTM: Comprehensive success path coverage. The tests properly validate presigned PUT URL generation with default expiration, custom expiration, and optional MD5 hash parameter. Assertions correctly verify command types and expiration values.
120-159: LGTM: Complete GET URL test coverage. The tests validate presigned GET URL generation with both default and custom expiration values, correctly verifying the `GetObjectCommand` usage and expiration parameters.

src/api/routes/roomRequests.ts (6)
15-15: LGTM: Required imports for attachment handling. The new imports support the S3 attachment upload/download functionality and DynamoDB operations for tracking attachment keys.
Also applies to: 18-18, 21-21, 38-39
62-73: LGTM: Clear response schema. The response schema appropriately defines the optional `uploadUrl` field, which is only present when an attachment is included in the status update.
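A hypothetical sketch of the shape such a response schema can take (field names other than `uploadUrl` are illustrative):

```typescript
import { z } from "zod";

// 201 response: uploadUrl is only present when attachment info was supplied.
const createStatusUpdateResponse = z.object({
  id: z.string(), // hypothetical identifier field
  uploadUrl: z.string().url().optional(),
});

type CreateStatusUpdateResponse = z.infer<typeof createStatusUpdateResponse>;
```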
86-88: Verify S3 key uniqueness strategy. The S3 key includes `request.id` (the Fastify HTTP request UUID), which means each status update with an attachment—even with identical `attachmentInfo`—will generate a distinct S3 key. Is this intentional for versioning, or should the key be based on content-addressable identifiers like the MD5 hash?
135-150: LGTM: Attachment key stored correctly. The item correctly includes `attachmentS3key` when an attachment is present, and `removeUndefinedValues` appropriately handles cases without attachments.
521-541: LGTM: Filename derived correctly from S3 key. The projection includes `attachmentS3key` and correctly extracts the filename from the S3 key path by taking the last path segment.
674-711: LGTM: Robust S3 verification with cleanup. The endpoint correctly verifies object existence in S3 before generating a presigned URL and removes stale DynamoDB references if the S3 object is missing. This prevents returning download URLs for non-existent files.
```hcl
variable "AdditionalIamPolicies" {
  type = set(string)
}
```
Change variable type from set(string) to map(string) to resolve for_each constraint.
The Terraform pipeline is failing with "Invalid for_each argument" errors (lines 323 and 341 in main.tf). When for_each iterates over a set(string) with values that cannot be determined until apply time (e.g., module outputs), Terraform cannot identify stable keys at plan time. The error message recommends using a static map instead.
Additionally, add a description to document the variable's purpose.
variable "AdditionalIamPolicies" {
- type = set(string)
+ type = map(string)
+ description = "Map of additional IAM policy ARNs to attach to Lambda roles"
+ default = {}
}This requires coordinated updates in main.tf (lines 323 and 341) and in environment configurations (prod/main.tf, qa/main.tf) that pass values to this variable—they should provide a map instead of a list/set.
📝 Committable suggestion
```hcl
variable "AdditionalIamPolicies" {
  type        = map(string)
  description = "Map of additional IAM policy ARNs to attach to Lambda roles"
  default     = {}
}
```
🤖 Prompt for AI Agents
In terraform/modules/lambdas/variables.tf around lines 41 to 43, the variable
"AdditionalIamPolicies" is declared as type = set(string) which causes "Invalid
for_each argument" when for_each needs stable keys; change the type to
map(string) and add a description documenting that this is a map of policy names
to policy ARNs (or similar), then update main.tf at the for_each sites (around
lines 323 and 341) to iterate over the map keys/entries accordingly and ensure
prod/main.tf, qa/main.tf and any other environment callers supply a map (e.g., {
name = "arn:..." }) instead of a list/set; make coordinated updates to the call
sites so all passed values are maps and for_each uses stable keys.
Actionable comments posted: 1
♻️ Duplicate comments (1)
terraform/modules/lambdas/variables.tf (1)
41-43: Add description and default value to complete the variable definition. The variable type is correctly changed to `map(string)` to support `for_each` iteration, but the definition is incomplete. Based on the previous feedback, add a description and default value.

Apply this diff:

```diff
 variable "AdditionalIamPolicies" {
   type        = map(string)
+  description = "Map of additional IAM policy ARNs to attach to Lambda roles"
+  default     = {}
 }
```

This makes the variable optional and self-documenting, aligning with the recommended fix from the prior review.
📒 Files selected for processing (3)
- terraform/envs/prod/main.tf (3 hunks)
- terraform/envs/qa/main.tf (3 hunks)
- terraform/modules/lambdas/variables.tf (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- terraform/envs/qa/main.tf
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Deploy to QA and Run Live Tests
🔇 Additional comments (3)
terraform/envs/prod/main.tf (3)
134-134: Confirm the CORS origin configuration supports all intended use cases. Line 134 hardcodes a single CORS origin: `["https://${var.CorePublicDomain}"]`. If the application needs to support multiple frontends, subdomains, or multi-environment deployments, this configuration may be too restrictive.

Please verify this aligns with your asset-serving requirements.
131-135: The assets module correctly exports the required `access_policy_arn` output. Verification confirms that `terraform/modules/assets/main.tf` properly defines the `access_policy_arn` output, which references `aws_iam_policy.bucket_access.arn`. The IAM policy attachment mechanism in lines 107 and 159 is correctly configured. No issues found.
131-160: No issues identified—cross-region asset access is correctly configured. Verification confirms all three concerns are properly addressed:

- ✓ The IAM policy grants S3 access (PutObject, GetObject, DeleteObject, HeadObject, ListBucket) to buckets in both us-east-2 and us-west-2
- ✓ The lambdas module correctly attaches additional policies via `for_each = var.AdditionalIamPolicies` to both `api_role` and `sqs_consumer_role`
- ✓ Both regional Lambda functions (us-east-2 and us-west-2) receive the policy attachment, enabling cross-region bucket access

S3 bucket resources are globally accessible by IAM role ARN, so both regional Lambda execution roles can read from and write to either bucket.
Actionable comments posted: 1
📒 Files selected for processing (1)
- src/api/routes/roomRequests.ts (10 hunks)
🧰 Additional context used
🪛 ESLint
src/api/routes/roomRequests.ts
[error] 16-16: Unexpected use of file extension "js" for "common/errors/index.js"
(import/extensions)
[error] 38-38: Unexpected use of file extension "js" for "api/functions/s3.js"
(import/extensions)
🔇 Additional comments (6)
src/api/routes/roomRequests.ts (6)
86-88: LGTM: S3 key format is well-structured. The S3 key format uses `request.id` (Fastify's unique request identifier) to prevent collisions and organizes attachments hierarchically by request, status, and request ID. This is a solid approach.
139-147: LGTM: Attachment metadata storage is correct. The code properly stores `attachmentS3key` in DynamoDB and uses `removeUndefinedValues` to ensure the field is omitted when no attachment is present.
538-540: LGTM: Filename derivation is clean. Deriving the filename from the S3 key using `.split("/").at(-1)` is concise and correct. Note that this is coupled to the S3 key format defined at line 87, so both should be updated together if the format changes.
679-711: Excellent defensive programming with S3 existence check. The `HeadObjectCommand` verification before generating a presigned URL is a best practice that prevents returning URLs for non-existent objects. The cleanup logic (removing `attachmentS3key` from DynamoDB when the S3 object is missing) ensures data consistency and handles edge cases like manual S3 deletions or failed uploads.
718-726: LGTM: NotFoundError handling correctly implemented. The error handling now properly distinguishes `NotFoundError` (404) from other errors (500) by rethrowing it unchanged. This addresses the past review comment about incorrect error wrapping.
102-119: No action required—the code is correct. The original review comment assumes `md5hash` exists in the `attachmentInfo` schema, but it does not. The `roomRequestStatusAttachmentInfo` schema (src/common/types/roomRequest.ts, lines 135-139) only defines `filename`, `fileSizeBytes`, and `contentType`. The code correctly extracts only these fields and passes them to `createPresignedPut`, which is the appropriate behavior given the schema definition.

Likely an incorrect or invalid review comment.
Actionable comments posted: 1
📒 Files selected for processing (3)
- src/api/routes/membership.ts (1 hunks)
- src/api/routes/mobileWallet.ts (0 hunks)
- src/api/routes/roomRequests.ts (12 hunks)
💤 Files with no reviewable changes (1)
- src/api/routes/mobileWallet.ts
🧰 Additional context used
🪛 ESLint
src/api/routes/membership.ts
[error] 15-15: Unexpected use of file extension "js" for "common/config.js"
(import/extensions)
src/api/routes/roomRequests.ts
[error] 1-1: Resolve error: EACCES: permission denied, open '/cMBYiilbzH'
at Object.writeFileSync (node:fs:2409:20)
at l (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:13685)
at createFilesMatcher (/home/jailuser/git/node_modules/get-tsconfig/dist/index.cjs:7:14437)
at Object.resolve (/home/jailuser/git/node_modules/eslint-import-resolver-typescript/lib/index.cjs:298:107)
at withResolver (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:180:23)
at fullResolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:201:22)
at relative (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:217:10)
at resolve (/home/jailuser/git/node_modules/eslint-module-utils/resolve.js:233:12)
at checkFileExtension (/home/jailuser/git/node_modules/eslint-plugin-import/lib/rules/extensions.js:205:53)
at checkSourceValue (/home/jailuser/git/node_modules/eslint-module-utils/moduleVisitor.js:32:5)
(import/extensions)
[error] 16-16: Unexpected use of file extension "js" for "common/errors/index.js"
(import/extensions)
[error] 39-39: Unexpected use of file extension "js" for "api/functions/s3.js"
(import/extensions)
🔇 Additional comments (1)
src/api/routes/membership.ts (1)
15-16: Clarify why membership route changes are in an S3 multiregion PR. The PR objectives state this is for setting up multiregion S3 for the assets bucket, yet this file contains only import statement modifications in a membership management route with no S3 or assets functionality. According to the AI summary, these are purely import cleanup changes with no runtime impact.
Please verify:
- Whether this file was included intentionally or by accident
- If intentional, explain how these changes relate to the S3 multiregion setup
- Consider moving unrelated cleanup to a separate housekeeping PR for clearer change tracking
```typescript
        if (error instanceof NotFound) {
          // Key doesn't exist in S3, delete the attribute from DynamoDB
          await fastify.dynamoClient.send(
            new UpdateItemCommand({
              TableName: genericConfig.RoomRequestsStatusTableName,
              Key: {
                requestId: { S: request.params.requestId },
                "createdAt#status": {
                  S: `${request.params.createdAt}#${request.params.status}`,
                },
              },
              UpdateExpression: "REMOVE #attachmentS3key",
              ExpressionAttributeNames: {
                "#attachmentS3key": "attachmentS3key",
              },
            }),
          );

          throw new NotFoundError({
            endpointName: request.url,
          });
        } else {
```
Don't delete attachment metadata on transient 404s
If the client asks for a download URL before the file is uploaded (or while replication is still catching up), HeadObject will return a 404. This block then removes `attachmentS3key` from DynamoDB, so even after the upload completes there is no longer any pointer—downloads stay broken until someone manually repopulates the record. Please stop mutating Dynamo on this path (or only clean up in a deliberate background sweep once you're certain the object is gone). (docs.aws.amazon.com)
🤖 Prompt for AI Agents
In src/api/routes/roomRequests.ts around lines 682 to 703, the current handler
deletes the DynamoDB attachmentS3key when HeadObject returns a NotFound, which
can remove the pointer for transient 404s; change this path to stop mutating
DynamoDB on a simple S3 NotFound: remove the UpdateItemCommand call and simply
throw the NotFoundError (or return the NotFound response) so transient states
don't wipe metadata; if you must perform cleanup, implement a separate
background sweep that only deletes attachmentS3key after verifying the object is
truly gone (e.g., repeated missing checks and an age threshold) or mark records
for async cleanup rather than deleting on first 404.
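A minimal sketch of the suggested behavior, using the `NotFound` exception exported by `@aws-sdk/client-s3`; the route would return a 404 when this helper reports false, without touching DynamoDB:

```typescript
import { HeadObjectCommand, NotFound, S3Client } from "@aws-sdk/client-s3";

// Returns true if the object exists, false on a (possibly transient) 404.
// Crucially, a 404 here does NOT mutate any metadata store.
async function attachmentExists(
  s3client: S3Client,
  bucket: string,
  key: string,
): Promise<boolean> {
  try {
    await s3client.send(new HeadObjectCommand({ Bucket: bucket, Key: key }));
    return true;
  } catch (error) {
    if (error instanceof NotFound) {
      // The object may simply not be uploaded (or replicated) yet; defer
      // any cleanup to a background sweep that verifies it is truly gone.
      return false;
    }
    throw error;
  }
}
```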