bug: pre-signed POST to s3 does not trigger s3 object created event for lambda trigger #5554
Hi @paulrobello, thank you for your patience. After triaging your issue, it appears to be a usage error, most likely because you're using POST instead of PUT as AWS specifies here. It took me some time to validate this because Pulumi sometimes behaves erratically with LocalStack. Either way, I'm attaching my stack config file, stack file, and Lambda code so you can validate my results.

index.ts:

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";

const bucket = new aws.s3.Bucket("my-bucket");

const lambda = new aws.lambda.Function("lambda", {
    code: new pulumi.asset.FileArchive("function.zip"),
    runtime: "nodejs14.x",
    handler: "index.handler",
    role: "no-real-role",
});

const createdObjectEvent = bucket.onObjectCreated("uploadEvent", lambda, {
    event: "*",
    filterPrefix: "/",
});
```

Pulumi.pinzon.yaml config:
aws:accessKey: test
aws:endpoints:
- acm: http://localhost:4566
amplify: http://localhost:4566
apigateway: http://localhost:4566
apigatewayv2: http://localhost:4566
applicationautoscaling: http://localhost:4566
appsync: http://localhost:4566
athena: http://localhost:4566
autoscaling: http://localhost:4566
batch: http://localhost:4566
cloudformation: http://localhost:4566
cloudfront: http://localhost:4566
cloudsearch: http://localhost:4566
cloudtrail: http://localhost:4566
cloudwatch: http://localhost:4566
cloudwatchevents: http://localhost:4566
cloudwatchlogs: http://localhost:4566
codecommit: http://localhost:4566
cognitoidentity: http://localhost:4566
cognitoidp: http://localhost:4566
docdb: http://localhost:4566
dynamodb: http://localhost:4566
ec2: http://localhost:4566
ecr: http://localhost:4566
ecs: http://localhost:4566
eks: http://localhost:4566
elasticache: http://localhost:4566
elasticbeanstalk: http://localhost:4566
elb: http://localhost:4566
emr: http://localhost:4566
es: http://localhost:4566
firehose: http://localhost:4566
glacier: http://localhost:4566
glue: http://localhost:4566
iam: http://localhost:4566
iot: http://localhost:4566
kafka: http://localhost:4566
kinesis: http://localhost:4566
kinesisanalytics: http://localhost:4566
kms: http://localhost:4566
lambda: http://localhost:4566
mediastore: http://localhost:4566
neptune: http://localhost:4566
organizations: http://localhost:4566
qldb: http://localhost:4566
rds: http://localhost:4566
redshift: http://localhost:4566
route53: http://localhost:4566
s3: http://localhost:4566
sagemaker: http://localhost:4566
secretsmanager: http://localhost:4566
servicediscovery: http://localhost:4566
ses: http://localhost:4566
sns: http://localhost:4566
sqs: http://localhost:4566
ssm: http://localhost:4566
stepfunctions: http://localhost:4566
sts: http://localhost:4566
swf: http://localhost:4566
transfer: http://localhost:4566
xray: http://localhost:4566
aws:region: us-east-1
aws:s3ForcePathStyle: 'true'
aws:secretKey: test
aws:skipCredentialsValidation: 'true'
aws:skipRequestingAccountId: 'true'

Lambda code:

```javascript
const https = require('https')
const url = 'https://webhook.site/043b22e6-8656-455e-b7cc-3420ee4d1221'

exports.handler = function (event, context, callback) {
  // Call the webhook and report the status code back through the callback.
  https.get(url, (res) => {
    callback(null, res.statusCode)
  }).on('error', (e) => {
    callback(e)
  })
}
```

I'm closing this issue, but if you feel that this is not the case please comment with more info about the error logs and your exact setup/stack code. |
The URL I am generating is for POST, not PUT, and it does work in the AWS production environment. Sample Python boto code here:
|
Ah ok, thanks for the prompt response. So can you confirm that the issue in LocalStack is not with the notification, but with the POST request not being handled the way AWS would handle it? |
The POST request to S3 should trigger the S3 event, which triggers a Lambda. All I can really tell you is that it works in AWS, but the Lambda is never triggered in LocalStack. |
@paulrobello, could you provide logs of your LocalStack instance when encountering your issue? I just finished testing with the Pulumi setup I shared before, using the following code to upload with a generated presigned POST:

```python
import boto3
import requests

s3 = boto3.client("s3", endpoint_url="http://localhost:4566")
bucket_name = "my-bucket-982bc46"
object_name = "text.txt"

with open("text.txt", "r") as file:
    response = s3.generate_presigned_post(bucket_name, object_name)
    files = {"file": (object_name, file)}
    requests.post(response["url"], data=response["fields"], files=files)

response = s3.get_object(Bucket=bucket_name, Key=object_name)
``` |
I'm closing this issue as a usage error, but if you feel that this is not the case, please comment with more info about the error logs and your exact setup/stack code. |
@pinzon See logs attached. Everything @paulrobello states remains true: using localstack/localstack:1.0.0, the issue is still present, even with PUT usage as specified. As you can see from the logs, LocalStack receives the presigned URL upload but never fires the corresponding Lambda as it should to process the upload. |
I have also tried v1 with the create_presigned_post function and the POST request as @paulrobello described above; see logs for that below. |
@kingster307 thanks for the update. I'll take a look soon. |
@pinzon I just got clearance to release the full logs for v1, using PUT, not POST. Expected behavior: get a presigned URL, make a PUT request to it, S3 onObjectCreated fires up the Lambda, and the Lambda returns messages via WebSockets.
|
@kingster307 thanks for the logs, I'm working on this right now. |
Full logs for the POST alternative (presigned POST and a POST request to upload the file):
|
Hi again @kingster307, I tested this problem again, but everything seems fine for both PUT and POST requests, and both trigger the Lambda function. Could you show exactly how you are setting your Lambda to be triggered by the S3 upload? By the way, I'm attaching the scripts I use to test this feature. |
@pinzon Same as @paulrobello; we are working on this project together. All our IaC is built with Pulumi. I believe this could possibly be a LocalStack x Pulumi integration issue. Pulumi TypeScript IaC:
- defining S3 env vars
- creating the Lambda function
- setting up the event
- creating the presigned URL function used within the Lambda, PUT flavor
- POST flavor
|
Essentially, this is our workflow:
|
Hi @kingster307. The reason your PUT request is not activating the Lambda function is that you're setting a prefix filter ("uploads/") in your notification configuration, and when you send your request, the key is missing the prefix. URL from your PUT request:
it should be:
For the POST request, my best guess for that 404 response is that you're using the URL from Here is my reproducer; notice that both my keys have the "uploads/" prefix, the URL of each method is used differently, and both trigger my Lambda function:

```python
import boto3
import requests

with open('file.txt', 'r') as object_file:
    object_text = object_file.read()

s3 = boto3.client('s3', endpoint_url="http://localhost:4566")
bucket_name = "my-bucket-13d6bec"

put_url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": bucket_name, "Key": "uploads/file.txt"}
)
print(requests.put(put_url, data=object_text))

presigned_post = s3.generate_presigned_post(Bucket=bucket_name, Key='uploads/file2.txt')
print(requests.post(
    presigned_post['url'],
    data=presigned_post['fields'],
    files={'file': open('file.txt', 'r')}))
```

Here is my Pulumi stack using TS:

```typescript
import * as aws from "@pulumi/aws";
import { FileArchive } from "@pulumi/pulumi/asset";

// Create an AWS resource (S3 Bucket)
const bucket = new aws.s3.Bucket("my-bucket");

const fuelUploadLambda = new aws.lambda.Function(
    'fuelUploadLambda',
    {
        code: new FileArchive('../function.zip'),
        role: "arn:aws:iam::074255357339:role/lambda-ex",
        handler: 'index.handler',
        runtime: 'nodejs14.x',
        memorySize: 512,
        timeout: 15 * 60,
    }
);

const uploadObjectCreateEvent = bucket.onObjectCreated('uploadObjectCreateEvent',
    fuelUploadLambda,
    {
        event: '*',
        filterPrefix: 'uploads/'
    }
);
``` |
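The prefix matching described above is plain string matching on the raw object key; a small Python sketch (a hypothetical helper, not LocalStack's actual implementation) of how an S3 notification filter decides whether to fire:

```python
def matches_notification_filter(key: str, prefix: str = "", suffix: str = "") -> bool:
    """Mimic S3 event filter rules: the raw object key must start with the
    configured prefix and end with the configured suffix."""
    return key.startswith(prefix) and key.endswith(suffix)

# With filterPrefix "uploads/", only keys carrying the prefix match:
print(matches_notification_filter("uploads/file.txt", prefix="uploads/"))  # True
print(matches_notification_filter("file.txt", prefix="uploads/"))          # False
```

This is why a key of `file.txt` never fires a trigger configured with `filterPrefix: 'uploads/'`, both in AWS and in LocalStack.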
@pinzon apologies....the onObjectCreated code snippet that I took from previous comments was out of date with our project. I have updated it in the code snippet above. Our key when creating presigned url's is prefixed with Everything works in prod & sb account in AWS environments, its just localstack we are having issues with |
@kingster307 could you give me an example of the final prefix you're setting on the lambda trigger and the key name you're sending? By the way, how |
The final Lambda trigger would look like this:
Key name example -
Bucket name example -
normalizeS3Prefix - strips any leading slash in a string and ensures that the string has a trailing slash |
@kingster307 with your parameters I get the same results, even with multiple regions. |
Added a little demo project here to better present the issue. We are utilizing a pre-signed POST URL. It seems as though the onObjectCreated Lambda is never hit: the logs/prints from within it never show, and the logs never show the Lambda being spun up. |
Thank you very much @kingster307. I'm taking a look right now. |
@kingster307 your reproducer showed a clearer picture of what's happening with LocalStack. In your pytest you send your file like this:

```python
presigned_post['fields']['file'] = open("file.txt", 'r')
print(requests.post(
    presigned_post['url'],
    data=presigned_post['fields']))
```

and I do it like this:

```python
requests.post(
    presigned_post['url'],
    data=presigned_post['fields'],
    files={"file": open("file.txt", 'r')})
```

Note: in the future please consider writing a reproducer from the beginning. 😅 |
Hi @kingster307, upon further testing, it seems to me that the method you use to upload your file is not valid; one cannot simply add the file to the fields dictionary and send it. Using your method with AWS returns a 405, due to requests sending the data with an application/x-www-form-urlencoded Content-Type header. Here is the code I use to test it:

```python
import boto3
import requests

s3 = boto3.client('s3')
bucket_name = "test-crist-bucket"
key = "test-file"

s3.create_bucket(Bucket=bucket_name)

cors_configuration = {
    'CORSRules': [{
        'AllowedHeaders': ['*'],
        'AllowedMethods': ['PUT', 'POST'],
        'AllowedOrigins': ['*'],
        'ExposeHeaders': ['ETag', 'x-amz-request-id'],
        'MaxAgeSeconds': 3000
    }]
}
s3.put_bucket_cors(Bucket=bucket_name, CORSConfiguration=cors_configuration)
s3.put_bucket_acl(ACL='public-read-write', Bucket=bucket_name)

presigned_post = s3.generate_presigned_post(
    Bucket=bucket_name,
    Key=key
)

# Invalid: the file object is stuffed into the form fields instead of
# being sent as a multipart file part.
presigned_post['fields']['file'] = open("file.txt", 'r')
response = requests.post(
    presigned_post['url'],
    data=presigned_post['fields']
)
print(response)  # <Response [405]>
print(response.content)  # b'<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>MethodNotAllowed</Code><Message>The specified method is not allowed against this resource.</Message><Method>POST</Method><ResourceType>BUCKETPOLICY</ResourceType><RequestId>S3TFN88Y64H7ZHTE</RequestId><HostId>+BS4q74kic+Vlt0IRJe6/Sm62xijAEEanGXQzi9wpwdBKn/zRuPlQTejhYozOkPqaU82c4suavE=</HostId></Error>'
```

Please let me know if there is any other setting I should add to make the previous code work as desired; if not, I'll just finish the tests that validate that LocalStack is working as expected. |
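The difference between the two call styles comes down to the request body: passing only `data=` makes `requests` send an `application/x-www-form-urlencoded` body, while a presigned POST needs `multipart/form-data` with the file as a named part, which is what `files=` produces. A small sketch illustrating this (no network call is made; it only prepares the requests and inspects the headers, and the URL and field values are placeholders):

```python
import requests

fields = {"key": "uploads/file.txt", "policy": "..."}  # stand-ins for presigned fields

# File stuffed into the form fields: urlencoded body, which S3's POST endpoint rejects.
wrong = requests.Request(
    "POST", "http://example.com/my-bucket", data={**fields, "file": "contents"}
).prepare()

# File passed via files=: multipart/form-data body, which S3's POST endpoint expects.
right = requests.Request(
    "POST", "http://example.com/my-bucket",
    data=fields, files={"file": ("file.txt", "contents")},
).prepare()

print(wrong.headers["Content-Type"])  # application/x-www-form-urlencoded
print(right.headers["Content-Type"])  # multipart/form-data; boundary=...
```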
@pinzon Appreciate your work on this! I wrote an extra test case that adds headers for content-type restrictions. Neither adding headers nor utilizing the files param seems to work completely (see tests here). When adding the files param, I can see the Lambda fire, but no log output shows (see logs here). When adding content-type headers to the request, I see neither the Lambda fire nor the log output. |
AWS returns a 400 response when adding the line |
Hi @kingster307, I made a pull request to your reproducer repo with the changes needed so the tests work with LocalStack. As I mentioned in my last comments, I can corroborate that the way you POST your file (adding it to the fields dictionary) doesn't work with AWS. The pull request makes your reproducer trigger the Lambda function, and I added an integration test to LocalStack (#6498) showing that a Lambda is triggered by an S3 object created through a presigned PUT and a presigned POST request. The test is also AWS-validated, which means the exact same code will work against AWS. |
@pinzon Appreciate your work on this. Feel free to close at will. I can confirm that when utilizing the new POST URL structure and the files param, the POST and the integration succeed. |
Is there an existing issue for this?
Current Behavior
Using pre-signed POST to s3 does not trigger s3 object created event for lambda trigger.
Expected Behavior
Using pre-signed POST to s3 should trigger s3 object created event for lambda trigger.
How are you starting LocalStack?
With a docker-compose file
Steps To Reproduce
How are you starting localstack (e.g., bin/localstack command, arguments, or docker-compose.yml)
Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
Pulumi TypeScript:

```typescript
export const uploadObjectCreateEvent = bucket.onObjectCreated(
    'uploadObjectCreateEvent',
    fuelUploadLambda,
    {
        event: '*',
        filterPrefix: 'uploads/'
    },
    { provider: PulumiUtil.awsProvider }
);
```
Environment
Anything else?
The event does trigger if I use the CLI (s3 cp).
Maybe related to #4809?