fix: extensionless key upload problems #2159
Conversation
Walkthrough
- Added getEventRuleArn and switched EventBridge rule handling to use derived rule-name constants; refactored the Lambda permission helper to accept ruleName and use getEventRuleArn.
- Created two SQS DLQs (search-upload and text-extractor) with 14-day retention and QueuePolicy resources allowing events.amazonaws.com to SendMessage, constrained by the specific EventBridge rule ARNs; wired the DLQs into the EventBridge targets via deadLetterConfig.
- Changed the text-extractor rule to trigger on all S3 Object Created events.
- Added Pulumi config keys for document_storage_service_auth_key, fetched the secret, and injected DOCUMENT_STORAGE_SERVICE_AUTH_KEY into the search-upload Lambda env.
- Bumped
Force-pushed from 53491b7 to cf1981d.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@infra/stacks/document-storage-bucket-integrations/index.ts`:
- Around line 43-49: The new SQS queue declaration (searchUploadDlq using
aws.sqs.Queue) is failing the infra formatter; run `biome format` on
infra/stacks/document-storage-bucket-integrations/index.ts (or at repo root) to
reformat the block and commit the formatted changes so the queue block conforms
to the project's Biome formatting rules.
- Around line 51-66: The SQS DLQ policy currently allows any EventBridge rule to
send messages because it lacks a Condition; update the QueuePolicy for
searchUploadDlq so the policy Statement includes a Condition that restricts
SendMessage to the specific EventBridge rule (search-upload-rule-${stack})
and/or the account. Modify the policy JSON returned in the
searchUploadDlq.arn.apply callback to add "Condition": { "ArnEquals": {
"aws:SourceArn": "<arn-of-search-upload-rule-${stack}>" } } (or include
"aws:SourceAccount": "<account-id>")—use the actual EventBridge rule ARN
(search-upload-rule-${stack}) value or build it from stack/account/region rather
than leaving it unrestricted.
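
The scoping that this comment asks for can be sketched as a plain policy-builder function. This is an illustrative sketch only (the helper name `buildDlqPolicy` and its parameters are hypothetical, not from the repo); in the actual stack the ARN and account ID would arrive as Pulumi Outputs rather than plain strings:

```typescript
// Hypothetical helper: builds an SQS queue policy that only lets one
// specific EventBridge rule (and account) send messages to the DLQ.
function buildDlqPolicy(
  queueArn: string,
  ruleArn: string,
  accountId: string,
): string {
  return JSON.stringify({
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Principal: { Service: "events.amazonaws.com" },
        Action: "sqs:SendMessage",
        Resource: queueArn,
        // The Condition is the fix: without it, ANY EventBridge rule
        // in ANY account could deliver to this queue.
        Condition: {
          ArnEquals: { "aws:SourceArn": ruleArn },
          StringEquals: { "aws:SourceAccount": accountId },
        },
      },
    ],
  });
}
```

The same shape drops into the `searchUploadDlq.arn.apply` callback once the rule ARN and account ID are resolved.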
In `@infra/stacks/search-upload/index.ts`:
- Around line 14-18: Replace the permissive config.get(...) ?? '' usage with a
required lookup so missing config fails fast: in the
DOCUMENT_STORAGE_SERVICE_AUTH_KEY assignment (the
aws.secretsmanager.getSecretVersionOutput(...).apply(...) expression), call
config.require('document_storage_service_auth_key') instead of
config.get('document_storage_service_auth_key') ?? '' so the deployment errors
clearly if the secret id is not provided; leave the rest of the secret lookup
and .apply(...) logic unchanged.
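
The fail-fast behavior this comment describes can be sketched with a small helper. This is a minimal illustration of the `config.require` vs `config.get(...) ?? ''` distinction, using a plain `Map` as a stand-in for Pulumi config (the helper name `requireConfig` is hypothetical):

```typescript
// Sketch of fail-fast config lookup: a missing key throws immediately,
// instead of silently propagating an empty string into the deployment.
function requireConfig(cfg: Map<string, string>, key: string): string {
  const value = cfg.get(key);
  if (value === undefined) {
    throw new Error(`Missing required config value: ${key}`);
  }
  return value;
}
```

With `config.get(...) ?? ''`, a forgotten config key only surfaces later as a cryptic Secrets Manager lookup failure; `config.require(...)` surfaces it at preview time with the key name in the error.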
ℹ️ Review info — Configuration used: Organization UI · Review profile: ASSERTIVE · Plan: Pro · Run ID: d72c15a3-6e23-4bc0-aecb-7595d54237ba
📒 Files selected for processing (5)
- infra/stacks/document-storage-bucket-integrations/index.ts
- infra/stacks/search-upload/Pulumi.dev.yaml
- infra/stacks/search-upload/Pulumi.prod.yaml
- infra/stacks/search-upload/index.ts
- infra/stacks/search-upload/search-upload-lambda.ts
♻️ Duplicate comments (1)
infra/stacks/document-storage-bucket-integrations/index.ts (1)
48-63: ⚠️ Potential issue | 🟠 Major
Scope the DLQ QueuePolicy to the intended EventBridge rule.
Line 56 currently grants events.amazonaws.com send access without source conditions. Add aws:SourceArn (and ideally aws:SourceAccount) so only search-upload-rule-${stack} can write to this DLQ.
🔐 Suggested hardening diff

```diff
 new aws.sqs.QueuePolicy(`search-upload-dlq-policy-${stack}`, {
   queueUrl: searchUploadDlq.url,
-  policy: searchUploadDlq.arn.apply((arn) =>
+  policy: pulumi
+    .all([searchUploadDlq.arn, aws.getCallerIdentityOutput({}).accountId])
+    .apply(([arn, accountId]) =>
       JSON.stringify({
         Version: '2012-10-17',
         Statement: [
           {
             Effect: 'Allow',
             Principal: { Service: 'events.amazonaws.com' },
             Action: 'sqs:SendMessage',
             Resource: arn,
+            Condition: {
+              ArnEquals: {
+                'aws:SourceArn': `arn:aws:events:${aws.config.region}:${accountId}:rule/search-upload-rule-${stack}`,
+              },
+              StringEquals: {
+                'aws:SourceAccount': accountId,
+              },
+            },
           },
         ],
       })
-  ),
+    ),
 });
```

```bash
#!/bin/bash
set -euo pipefail
FILE="infra/stacks/document-storage-bucket-integrations/index.ts"
# Inspect current DLQ policy block
sed -n '48,63p' "$FILE"
# Verify whether source-scoping conditions exist
rg -n '"Condition"|"aws:SourceArn"|"aws:SourceAccount"' "$FILE" -C2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/stacks/document-storage-bucket-integrations/index.ts` around lines 48 - 63, The DLQ QueuePolicy created by QueuePolicy for searchUploadDlq currently allows events.amazonaws.com to SendMessage broadly; update the policy JSON inside searchUploadDlq.arn.apply (the QueuePolicy instantiation) to add a Condition that scopes access to the intended EventBridge rule by adding "Condition": { "ArnEquals": { "aws:SourceArn": /* ARN of search-upload-rule-${stack} */ }, "StringEquals": { "aws:SourceAccount": /* account id */ } } so only the rule search-upload-rule-${stack} (and optionally the account) can send to the DLQ; keep the Effect, Principal and Action as-is but nest them under this Condition.
ℹ️ Review info — Configuration used: Organization UI · Review profile: ASSERTIVE · Plan: Pro · Run ID: d415bab4-9b5d-48ff-9f05-336c3ad1881f
⛔ Files ignored due to path filters (1)
- infra/bun.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
- infra/package.json
- infra/stacks/document-storage-bucket-integrations/index.ts
- infra/stacks/search-upload/index.ts
…load lambda event rule can write to the dlq
Actionable comments posted: 1
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
infra/stacks/document-storage-bucket-integrations/index.ts (1)
133-137: 🧹 Nitpick | 🔵 Trivial
Consider adding a DLQ for the text-extractor target for consistency.
The search-upload-target now has a DLQ configured, but text-extractor-target does not. For consistent error handling across both triggers, consider adding a DLQ here as well.
♻️ Optional: Add DLQ for text-extractor
If desired, create a similar DLQ and wire it:

```diff
+// Add DLQ for text extractor
+const textExtractorDlq = new aws.sqs.Queue(`text-extractor-dlq-${stack}`, {
+  name: `text-extractor-dlq-${stack}`,
+  messageRetentionSeconds: 14 * 24 * 60 * 60,
+});
 // Add the Lambda as a target
 new aws.cloudwatch.EventTarget('text-extractor-target', {
   rule: textExtractorRule.name,
   arn: extractorArn,
+  deadLetterConfig: {
+    arn: textExtractorDlq.arn,
+  },
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@infra/stacks/document-storage-bucket-integrations/index.ts` around lines 133 - 137, The text-extractor CloudWatch EventTarget ('text-extractor-target' using rule textExtractorRule and arn extractorArn) lacks a dead-letter queue while the 'search-upload-target' has one; create an SQS DLQ (e.g., textExtractorDeadLetterQueue) and pass its ARN into the EventTarget's deadLetterConfig when constructing 'text-extractor-target' so failed invocations are routed to that queue for consistency with the search-upload target.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@infra/stacks/document-storage-bucket-integrations/index.ts`:
- Around line 5-7: The function getEventRuleArn currently hardcodes account ID;
update it to use aws.getCallerIdentityOutput().accountId so the ARN is portable
across accounts. Replace the literal 569036502058 in the pulumi.interpolate call
with aws.getCallerIdentityOutput().accountId (using the Output value inside the
interpolation), keeping the function name getEventRuleArn and the
pulumi.interpolate pattern so callers remain unchanged.
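
The ARN derivation this comment asks for can be sketched as a plain function. In the real stack this would wrap `pulumi.interpolate` over `aws.getCallerIdentityOutput().accountId`; here the string-building logic is shown standalone with all inputs as plain strings, so the region and account parameters are illustrative assumptions:

```typescript
// Sketch: derive an EventBridge rule ARN from its parts instead of
// hardcoding an account ID, so the stack is portable across accounts.
function getEventRuleArn(
  region: string,
  accountId: string,
  ruleName: string,
): string {
  return `arn:aws:events:${region}:${accountId}:rule/${ruleName}`;
}
```
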
ℹ️ Review info — Configuration used: Organization UI · Review profile: ASSERTIVE · Plan: Pro · Run ID: 4983aac4-b7ef-459a-b75f-b6453f39bdb2
📒 Files selected for processing (1)
infra/stacks/document-storage-bucket-integrations/index.ts
Deployed this to the dev stack already.