Ability to add custom EventSource and lambda triggers via amplify add function, Kinesis support in analytics category #2463
Conversation
@ambientlight Does this also work with multiple environments? Specifically the @model triggers?
@kaustavghosh06: it does. The api category root stack re-exports the embedded DynamoDB stack output (streamArn), since streams are already enabled on these DynamoDB tables; we can then use dependsOn and pass those outputs to the lambda CFT as parameters.
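The wiring described here can be sketched with the standard `dependsOn` shape in `backend-config.json`. This is only an illustration; the resource name and the output attribute name below are hypothetical, not taken from this PR:

```json
{
  "function": {
    "todoStreamTrigger": {
      "service": "Lambda",
      "providerPlugin": "awscloudformation",
      "dependsOn": [
        {
          "category": "api",
          "resourceName": "myGraphQLApi",
          "attributes": ["GetAttTodoTableStreamArn"]
        }
      ]
    }
  }
}
```

The CLI passes each listed attribute into the function's CloudFormation template as a parameter, which the template can then reference as the event source ARN.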
@ambientlight Thanks so much for this PR. We reviewed it today and would really like to merge it; however, we have some requests to make it fit the current Amplify design philosophy. For this message:
Can you change this to just say Next? For this part of the flow:
We have two asks:
If the customer selects Pinpoint, the flow continues as-is today; however, if they select Kinesis, they can enter the name and number of shards.
You will also need to modify the corresponding flow; at this point there will be a Kinesis stream in the project. From looking at your PR it appears you have an understanding of the CLI infrastructure needed to perform these changes; however, if you need some assistance please let us know.
@undefobj: oh yeah, I totally forgot Amplify has Kinesis integration. I will try to have these changes sorted out in the next few days so we can keep the momentum here.
@ambientlight Love your work! However, it does not provide a way of giving lambda functions access to read, write, update, and delete. Just wondering if there are any plans on implementing CRUD permissions for this.
@Jkap7788: thanks for the feedback. If my memory is right, additional permissions can be added in the question following the trigger selection question: for additional permissions we are able to select the API category graphql resource, and that will bring up the selection for CRUD permissions. I will make sure to verify this as I finalize this PR.
@ambientlight correct, I can select CRUD operations against the API when adding a new function. This is what you see in the CLI:
But that applies to the whole api, not a specific `@model`.
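For context, scoping CRUD access down to a single model would amount to an IAM statement against that one table's ARN instead of the whole API. A hypothetical policy fragment (the table name and ARN are made up for illustration):

```json
{
  "Effect": "Allow",
  "Action": [
    "dynamodb:GetItem",
    "dynamodb:Query",
    "dynamodb:PutItem",
    "dynamodb:UpdateItem",
    "dynamodb:DeleteItem"
  ],
  "Resource": ["arn:aws:dynamodb:us-east-1:123456789012:table/Todo"]
}
```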
@ambientlight Please let us know if you need any help from our side on this PR. Eager to merge this!
```javascript
exports.handler = function (event, context) { //eslint-disable-line
  console.log(JSON.stringify(event, null, 2));
  event.Records.forEach((record) => {
    console.log(record.eventID);
    console.log(record.eventName);
    console.log('DynamoDB Record: %j', record.dynamodb);
  });
  context.done(null, 'Successfully processed DynamoDB record'); // SUCCESS with message
};
```
why not use the callback model since this isn't an async function? https://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-handler.html
Absolutely; actually, I originally just copied this template from:

Lines 1 to 10 in 08c0c70

```javascript
exports.handler = function(event, context) {
  //eslint-disable-line
  console.log(JSON.stringify(event, null, 2));
  event.Records.forEach(record => {
    console.log(record.eventID);
    console.log(record.eventName);
    console.log('DynamoDB Record: %j', record.dynamodb);
  });
  context.done(null, 'Successfully processed DynamoDB record'); // SUCCESS with message
};
```
Not suggesting you change it. Just curious. If that's the template used then follow that.
It is the default lambda code template for the @model trigger. Thanks for bringing this up; it makes sense to refine both templates. For now the storage category trigger does not utilize the function-category implementation, while actually having the very same trigger logic. If this gets accepted, it would be great to have another PR that updates the storage flow to utilize the function-category implementation.
@kaustavghosh06: I think I should be fine with completing the proposed changes. So far I have only one question: I noticed the pinpoint policy for the cognito identity pool is the same for the unauth / auth case. I have followed the same approach here; I guess it is intended so.
This pull request introduces 2 alerts when merging 1f79657 into ad3a137 - view on LGTM.com. New alerts:
@ambientlight Do you know why the CircleCI tests are failing? Is it just due to rebasing?
@ambientlight You'd have to rebase your PR. We fixed this issue in this PR - https://github.com/aws-amplify/amplify-cli/pull/3457/files - around a week back. That should hopefully unblock you and make the CircleCI tests pass.
Yes, if you look at the PR mentioned above, we added this to our config since there seems to be some issue with the docker image on CircleCI:
@kaustavghosh06: To summarize, the above-described bugs were covered, and two other discovered bugs were fixed as well.
Thanks @ambientlight. I'm going through the flow end to end - hopefully one last time :) - and reviewing your code as well. Once again, thanks for taking action on the feedback so promptly.
Awesome. Great to finally see this merged! Is there any chance we can see an official article/tutorial soon about how to integrate something like sending push notifications when a DynamoDB collection is changed? For things like chat messages, for example.
@andreialecu: thanks! How are you doing them as of now? Are you thinking about something like a lambda trigger that will push to an SNS topic wired up with your APNS / Firebase? Existing amplify push notification support is through Pinpoint; those are a bit different in purpose imho - for marketing campaigns rather than custom notifications.
Not doing them yet on this amplify app, it's still under development. I was waiting for this PR to be merged before moving on to implementing push notifications. 🚀 I'm really not sure what the best course of action would be. It's the first app on Amplify for us. In previous apps I've been sending them manually from the backend, completely separate from AWS. SNS Mobile Push seems like the right way to tackle this, but I'm completely open to suggestions. :)
@andreialecu: myself handling this in
Hi, thx for the support.
@ldariozero12: it's under canary for now, you can try it with:
Ok, thx. Is there an alternative way to create a DynamoDB stream and a related lambda trigger for a type @model without this functionality?
@ldariozero12: yes, you can write the CloudFormation template yourself (which would be pretty much the same as this PR's lambda trigger); please refer to #2463 (comment).
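For anyone writing that template by hand, a minimal CloudFormation sketch of a DynamoDB stream event source mapping might look like the following. The logical IDs and parameter names are hypothetical, `LambdaFunction` is assumed to be defined elsewhere in the same template, and the function's execution role additionally needs `dynamodb:GetRecords`, `dynamodb:GetShardIterator`, `dynamodb:DescribeStream`, and `dynamodb:ListStreams` on the stream:

```json
{
  "Parameters": {
    "todoTableStreamArn": { "Type": "String" }
  },
  "Resources": {
    "TriggerEventSourceMapping": {
      "Type": "AWS::Lambda::EventSourceMapping",
      "Properties": {
        "EventSourceArn": { "Ref": "todoTableStreamArn" },
        "FunctionName": { "Ref": "LambdaFunction" },
        "StartingPosition": "LATEST",
        "BatchSize": 100,
        "Enabled": true
      }
    }
  }
}
```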
This is cool. Great work.
Hello, I want to make sure I'm understanding the nature of this feature, since all of the bugs that seem to address my problem redirect here. I want to create a custom field pointing to a lambda that sources from my model, e.g.:

```graphql
type Cookie @model {
  flavor: String @function(name: "MyFunctionIMadeWithAmplifyAddFunction")
}
```

Is there an
Much appreciated for this timely PR. I was using
How to add a DynamoDB trigger to a Python lambda using the Amplify framework?
@jagadish432 Sadly, Python doesn't seem to have received much love or attention in Amplify yet. I think you either need to keep pushing for it, or implement it (yourself?) with the Python community and make the necessary PRs.
This pull request has been automatically locked since there hasn't been any recent activity after it was closed. Please open a new issue for related bugs. Looking for a help forum? We recommend joining the Amplify Community Discord server.
Issue #, if available:
#997
#2256
Description of changes:
- Exports `streamArnOutputId`, `datasourceOutputId`, `tableNameOutputId` from the nested api category stack backing each `@model` dynamoDB table, so they are available inside the root api category stack output and inside `amplify-meta`, and can be referenced in other categories' templates (`graphql-dynamodb-transformer`). This is needed in the context of this PR so we can reference an `@model` dynamoDB table `streamArn` in our custom eventSource.
- Extends `amplify-category-function` to configure a lambda with a custom event source. `DynamoDBStream` / `Kinesis` event source types are supported. For `DynamoDBStream`, the user is able to choose from api category `@model` dynamoDB tables, an existing storage category dynamoDB table, or, as a last resort, directly enter a `streamArn`. For `Kinesis`, the user is able to choose from an analytics category kinesis stream resource, or also directly enter a `streamArn`.
- The `lambda-cloudformation-template.json.ejs` template has been extended to optionally include `eventSourceMapping` and `LambdaTriggerPolicy` when available as `props.triggerEventSourceMapping`.

Motivation:

- `@model` backed dynamoDB tables: since `graphql-transformer-core` will update the CloudFormation templates on each graphql update, it is not possible to modify the templates directly (as in other category resources) to output nested stack outputs.
- `amplify-category-storage` walkthroughs explicitly load and manipulate resources and parameters in function CloudFormation templates (`amplify-cli/packages/amplify-category-storage/provider-utils/awscloudformation/service-walkthroughs/dynamoDb-walkthrough.js`, Lines 606 to 684 in cb8f6dd); this is relevant if we would go with replacing the storage category trigger creation with the function category built-in implementation presented here.

Caveats:

- To make the CLI discover `@model` backed dynamoDB table outputs, the user will need to push any update to the api category to have the CloudFormation templates regenerated with nested stack outputs.

Usage:
When using `amplify add function`, a new function template is now available. Custom event source mapping will then present the ability to choose an event source type.

Selecting DynamoDB stream will allow the user to select either an `@model` backed dynamoDB table or a storage category dynamoDB table as the event source. The api category graphql resource is REQUIRED to be already deployed.

If we go ahead with the api category `@model` option, resource selection and then `@model` selection will follow.

Alternatively, if the user selects a storage category dynamoDB table, they will just be prompted to select the resource. All other options will involve an event source ARN input that will be validated by the following regex:
"arn:(aws[a-zA-Z0-9-]*):([a-zA-Z0-9\\-])+:([a-z]{2}(-gov)?-[a-z]+-\\d{1})?:(\\d{12})?:(.*)"
The generated dynamoDB event source lambda function will have the same placeholder body as the current dynamoDB trigger function.

If selecting AWS Kinesis instead, Kinesis will have a generic empty trigger function:
Update (2019-10-23): Kinesis support for analytics.
This PR also extends the analytics category with native CloudFormation wrappers for Kinesis streams.
prompt:
This is useful, taking into account that Amplify already has Kinesis support on the client side in analytics:
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.