RFC - Pipeline Resolver Support #430
Comments
Hi @mikeparisstuff, My team is going to be implementing pipeline resolvers heavily on a current project, and I am looking forward to updates like these to allow us to control the whole pipeline from within our codebase. I realize that there is support for constructing custom resolvers locally, per this section of the Amplify docs. I might be overlooking a way to accomplish this that combines multiple techniques from the docs/issues, but if not, just wanted to put this out there, given the nature of this RFC. Thanks! |
+1! |
According to the Proposal 1 "Generated Functions" section, does it mean amplify-cli will auto-generate queries that look like:

// queries.js
const getPost = `query GetPost($id: ID!) {
  "stash": {
    "args": {
      "GetPost": {
        "id": $id
      }
    }
  }
}
`

and we can execute a GraphQL operation by:

API.graphql(graphqlOperation(queries.getPost, { id: postId }))
  .then(response => response.data.xxx)

How will the response differ from the current version? How can we access |
Looking at proposal 2, what do you think about making Then
becomes:
Then multiple pipeline resolver functions could be chained before and after @model's mutation resolvers, providing us greater flexibility. We can treat
One example use case where we might need multiple @functions after the 'primary' resolver: say we need some custom business logic to run inside a VPC, and those Lambdas have pretty long cold starts. We could factor out the logic from multiple @function directives into this single one, attach a keep-alive CloudWatch event that periodically calls the function to avoid cold starts on it, and probably add another @function past this one to transform the result into the desired form. |
I like the Audit use case. How would you search through that audit log model you made? Hook it up to Elasticsearch? How would you protect this search through @auth rules? Would the GraphQL transformer support this use case fully? |
If I have to query some data before doing a mutation in the same operation, is the pipeline the solution for this too? Any plans on implementing transactional pipelines? |
Also for the audit use case, from a performance perspective, it seems like recording to the audit table should happen in parallel with the primary mutation rather than before or after it. It might be better to implement it as a Lambda stream from a DynamoDB table, for example. I assume pipeline resolvers currently don't allow for async or parallel operations. |
I don't know enough about the implementation, but what are the challenges with making proposal 2 work for query/read operations? Ideally we'd want to support the |
Just tried to create and run a pipeline function that depends on a result of an autogenerated resolver and realized the |
Both proposals look good.
I'm not sure why the @before and @after approach wouldn't also be useful for In Proposal 1, it appears that authorization is only provided for DynamoDB resolvers (i.e. AuthorizeCreateX, AuthorizeUpdateX). However, now that the @function directive has been added to the API ( aws-amplify/amplify-cli#83 ) there should also be an AuthorizeInvokeX or AuthorizeFunctionX. Custom resolver Lambda functions can add security in code, but preventing invocation would provide an additional layer of security that conforms to the auth groups defined throughout the schema. It would also be easier to add group logic into AuthorizeInvokeX than in Lambda code. |
Any estimate for when any of this RFC would start getting implemented and relative priority to other RFCs? It's seriously needed for any multi-tenancy app. |
@mikeparisstuff any updates? |
@mikeparisstuff , can we have a tentative deadline so as to decide if we should wait for the feature or go for an alternate approach? :) |
@artista7 Could you take a look at the "Chaining functions" section out here - https://aws-amplify.github.io/docs/cli/graphql#function and see if it solves for your use-case? |
@kaustavghosh06 , the point is I want to put a filter in front of some mutations, e.g. createUser (but only on Mondays), thus I wanted to override the createUser resolver with a pipeline resolver instead of creating new mutation fields with different names. |
Creating a logger is a PITA right now until we have some form of this RFC. Current workaround we're thinking of is to create a custom mutation for each CRUD action with the help of @function directive. So basically overriding all of our models' gql mutation. |
I'm a big fan of prop 1. Prop 2 adds a lot to the schema if you have bigger pipelines, and it seems to offer less customization. If we the users are going to be messing with resolver pipelines, we have to dive into the pipelineFunctions folder (or whatever) anyway, so the config is fine to go there rather than built-in as directives like prop 2. To expand on this more, I'm making pipeline resolvers for queries, deletes, updates, and creates. I don't want to have 10+ lines in my schema per model just for pipeline functions. |
Can we have any updates? It has been 5 months and this is a much-needed feature for any app that is not a todo list. |
I have a Python script handling this I can stick on Github if ya want @idanlo |
@hisham thanks for referencing aws-amplify/amplify-cli#6217 ... will definitely have a play with useExperimentalPipelinedTransformer @kevcam4891 this is exactly what we need. Would you have more details on how you wrote your custom transformer plugin? Would you be kind enough to share some of the code for it? |
Hi @ronaldocpontes - correct we're tracking this as part of aws-amplify/amplify-cli#6217 and here's the documentation to author a custom transformer plugin: https://docs.amplify.aws/cli/plugins/authoring#authoring-custom-graphql-transformers--directives |
Thanks @renebrandel and @kevcam4891 I am planning to copy/expand the graphql-auth-transformer to implement @mikeparisstuff's solution here #449 But this would end up just like the PR below, which wasn't merged: Would you recommend a different approach? |
@ronaldocpontes I'd recommend NOT trying to introduce something into the Amplify codebase. Create a standalone plugin that sits in your project. Amplify CLI will look for npm packages that are installed local to your user, your project, or are installed globally on your machine. I'll try and follow up soon with an article of what I did in particular (it's not a trivial writeup), but for sure, you create the npm package for your own consumption and then Amplify CLI finds it and uses it during the transform process if it is listed in @renebrandel I will say, I used the docs link referenced above for inspiration, but the docs were far from a step-by-step guide, and in fact only about 1/3 of it is represented accurately. If you debug the project during an |
Agreed @kevcam4891. I would rather have an external plugin providing additional features than having to fork, replace and maintain parts of the framework codebase. Your solution seems like quite a good way to use pipeline resolvers when adding business logic is needed on top of the basic operations provided by @model directive and something we would use a lot. Can't wait to see your writeup. |
Any updates? |
this was never implemented right? |
Is there any update on this? |
Any update here? |
Bump. I requested this feature via a support ticket with AWS and they sent me to this ticket. This feature is pretty critical -- currently, pushing to Amplify overwrites the pipeline resolver configuration set in the AppSync console, which defeats the purpose of the custom pipeline features if their settings cannot be preserved between Amplify CLI deployments. Thanks! |
Hi folks - just wanted to let you know that we've released the Pipeline Resolver support now with GraphQL Transformer v2. With the new Transformer, all Amplify-generated resolvers are going to be pipeline resolvers. We also provide you mechanisms to "override" Amplify-generated resolvers OR "slot in"/"extend" the Amplify-generated resolvers. More details here:
@-mention me if there's a particular scenario that you believe isn't covered. Would love to chat more! |
@renebrandel is there a way to automatically create an AppSync data source as well? I can easily slot custom business logic into the pipeline by creating a VTL template (e.g. preDataLoad or postDataLoad), but these custom resolver templates will be executed against the Let's say I have schema with a It seems to me that this is not possible yet, except to create a custom mutation like I think there are a few options this could be made possible in a user-friendly way:
type User @model {
id: ID!
name: String!
password: String!
@function(name: "hash-password-function", onMutations: [{ field: "createUser", slot: pre}, { field: "updateUser", slot: pre }])
}
# extend the generated Mutations from @model with a @function directive
extend type Mutation {
createUser(input: CreateUserInput!, condition: ModelUserConditionInput): User
@function(name: "hash-password-function")
updateUser(input: UpdateUserInput!, condition: ModelUserConditionInput): User
@function(name: "hash-password-function")
}
@datasource(type: LAMBDA, name: "hash-password-function")
type User @model @auth(rules: [{ allow: public }])
{
id: ID!
name: String!
password: String!
}

## Mutation.createUser.preDataLoad.1.req.vtl
## [Start] Invoke AWS Lambda data source: HashPasswordLambdaDataSource. **
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "datasource": "HashPasswordLambdaDataSource",
  "payload": {
    "typeName": $util.toJson($ctx.stash.get("typeName")),
    "fieldName": $util.toJson($ctx.stash.get("fieldName")),
    "arguments": $util.toJson($ctx.arguments),
    "identity": $util.toJson($ctx.identity),
    "source": $util.toJson($ctx.source),
    "request": $util.toJson($ctx.request),
    "prev": $util.toJson($ctx.prev)
  }
}
## [End] Invoke AWS Lambda data source: HashPasswordLambdaDataSource. **

Would love to hear your opinion! :-) |
Hi @zirkelc - just catching up after all the holiday madness! Some comments on the approaches:
I had a couple of 1:1 conversations with AWS Community Builders. One of the DX suggestions, that I like and had good traction internally is: Option 4:
|
@renebrandel thank you for taking the time to respond to my remarks. Option 4 sounds like a good compromise as the data source name can be used for defining a new data source and invoking the correct data source by the resolver. Do you have any information about the timeline for this feature? |
Are there any further updates on this? |
+1 @renebrandel Any updates on this? |
Hi - I'm going to be closing this issue out as we've now launched additional guides on how to extend the GraphQL API via
Within the CDK approach, the experience is built-in to the construct, see the
Please @-mention me if this doesn't fully resolve the issues outlined here. |
Thanks @renebrandel - are we also planning on adding guides with best practices on how to support multi-tenancy in Amplify apps? This might be the key to resolving #449 (comment) |
Pipeline Resolvers Support
This RFC will document a process to transition the Amplify CLI to use AppSync pipeline resolvers. The driving use case for this feature is to allow users to compose their own logic with the logic that is generated by the GraphQL Transform. For example, a user might want to authorize a mutation that creates a message by first verifying that the user is enrolled in the message's chat room. Other examples include adding custom input validation or audit logging to @model mutations. This document is not necessarily final so please leave your comments so we can address any concerns.
Github Issues
Proposal 1: Use pipelines everywhere
Back in 2018, AppSync released a feature called pipeline resolvers. Pipeline resolvers allow you to serially execute multiple AppSync functions within the resolver for a single field (not to be confused with AWS Lambda functions). AppSync functions behave similarly to old-style AppSync resolvers and contain a request mapping template, a response mapping template, and a data source. A function may be referenced by multiple AppSync resolvers, allowing you to reuse the same function for multiple resolvers. The AppSync resolver context (`$ctx` in resolver templates) has also received a new `stash` map that lives throughout the execution of a pipeline resolver. You may use the `$ctx.stash` to store intermediate results and pass information between functions.
The first step towards supporting pipeline resolvers is to switch all existing generated resolvers to use pipeline resolvers. To help make the generated functions more reusable, each function defines a set of arguments that it expects to find in the stash. The arguments for a function are passed by setting a value in `$ctx.stash.args` under a key that matches the name of the function. Below you can read the full list of functions that will be generated by different directives.
Generated Functions
Function: CreateX
Generated by `@model` and issues a DynamoDB PutItem operation with a condition expression to create records if they do not already exist.
Arguments
The CreateX function expects
Function: UpdateX
Generated by `@model` and issues a DynamoDB UpdateItem operation with a condition expression to update if the item exists.
Arguments
The UpdateX function expects
Function: DeleteX
Generated by `@model` and issues a DynamoDB DeleteItem operation with a condition expression to delete if the item exists.
Arguments
The DeleteX function expects
Function: GetX
Generated by `@model` and issues a DynamoDB GetItem operation.
Arguments
The GetX function expects
Function: ListX
Generated by `@model` and issues a DynamoDB Scan operation.
Arguments
The ListX function expects
Function: QueryX
Generated by `@model` and issues a DynamoDB Query operation.
Arguments
The QueryX function expects
Function: AuthorizeCreateX
Generated by `@auth` when used on an OBJECT.
Arguments
The AuthorizeCreateX function expects no additional arguments. It will look at `$ctx.stash.CreateX.input` and validate it against `$ctx.identity`. The function will manipulate `$ctx.stash.CreateX.condition` such that the correct authorization conditions are added.
Function: AuthorizeUpdateX
Generated by `@auth` when used on an OBJECT.
Arguments
The AuthorizeUpdateX function expects no additional arguments. It will look at `$ctx.stash.UpdateX.input` and validate it against `$ctx.identity`. The function will manipulate `$ctx.stash.UpdateX.condition` such that the correct authorization conditions are added.
Function: AuthorizeDeleteX
Generated by `@auth` when used on an OBJECT.
Arguments
The AuthorizeDeleteX function expects no additional arguments. It will look at `$ctx.stash.DeleteX.input` and validate it against `$ctx.identity`. The function will manipulate `$ctx.stash.DeleteX.condition` such that the correct authorization conditions are added.
Function: AuthorizeGetX
Generated by `@auth` when used on an OBJECT.
Arguments
The AuthorizeGetX function expects no additional arguments. It will look at `$ctx.stash.GetX.result` and validate it against `$ctx.identity`. The function will return null and append an error if the user is unauthorized.
Function: AuthorizeXItems
Filters a list of items based on `@auth` rules placed on the OBJECT. This function can be used by top-level queries that return multiple values (list, query) as well as by @connection fields.
Arguments
The AuthorizeXItems function expects `$ctx.prev.result` to contain a list of "items" that should be filtered. This function returns the filtered results.
Function: HandleVersionedCreate
Created by the @versioned directive and sets the initial value of an object's version to 1.
Arguments
The HandleVersionedCreate function augments the `$ctx.stash.CreateX.input` such that it definitely contains an initial version.
Function: HandleVersionedUpdate
Created by the @versioned directive and updates the condition expression with version information.
Arguments
The HandleVersionedUpdate function uses the `$ctx.stash.UpdateX.input` to append a conditional update expression to `$ctx.stash.UpdateX.condition` such that the object is only updated if the versions match.
Function: HandleVersionedDelete
Created by the @versioned directive and updates the condition expression with version information.
Arguments
The HandleVersionedDelete function uses the `$ctx.stash.DeleteX.input` to append a conditional update expression to `$ctx.stash.DeleteX.condition` such that the object is only deleted if the versions match.
Function: SearchX
Created by the @searchable directive and issues an Elasticsearch query against your Elasticsearch domain.
Arguments
The SearchX function expects a single argument "params".
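To make the stash-argument convention these functions share more concrete, a resolver's before-mapping template could parameterize one of them roughly like this. This is a hypothetical sketch, not generated output; the function name (`CreatePost`) and the condition shape are illustrative assumptions:

```vtl
## Hypothetical before-mapping template for a pipeline resolver (sketch).
## Arguments for each function live under $ctx.stash.args.<FunctionName>,
## so the same function can be reused by many resolvers.
$util.qr($ctx.stash.put("args", {
  "CreatePost": {
    "input": $ctx.args.input,
    "condition": {
      "expression": "attribute_not_exists(#id)",
      "expressionNames": { "#id": "id" }
    }
  }
}))
{}
```

A CreatePost function could then read its input via `$ctx.stash.args.CreatePost.input` without knowing which resolver invoked it.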
Generated Resolvers
The @model, @connection, and @searchable directives all add resolvers to fields within your schema. The @versioned and @auth directives will only add functions to existing resolvers created by the other directives. This section will look at the resolvers generated by the @model, @connection, and @searchable directives.
@model resolvers
This schema will create the following resolvers:
Mutation.createPost
The Mutation.createPost resolver uses its own RequestMappingTemplate to set up the `$ctx.stash` such that its pipeline is parameterized to return the correct results.
Mutation.createPost.req.vtl
Function 1: CreatePost
The function will insert the value provided via `$ctx.stash.CreatePost.input` and return the results.
Mutation.createPost.res.vtl
Return the result of the last function in the pipeline.
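As a hypothetical sketch of what these two templates might contain (the actual generated templates may differ in stash layout and condition handling):

```vtl
## Mutation.createPost.req.vtl (sketch)
## Parameterize the pipeline: hand the mutation arguments to the
## CreatePost function via the shared stash.
$util.qr($ctx.stash.put("args", {
  "CreatePost": { "input": $ctx.args.input }
}))
{}

## Mutation.createPost.res.vtl (sketch)
## Return the result of the last function in the pipeline.
$util.toJson($ctx.prev.result)
```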
Mutation.updatePost
The Mutation.updatePost resolver uses its own RequestMappingTemplate to set up the `$ctx.stash` such that its pipeline is parameterized to return the correct results.
Mutation.updatePost.req.vtl
Function 1: UpdatePost
The function will update the value provided via `$ctx.stash.UpdatePost.input` and return the results.
Mutation.updatePost.res.vtl
Return the result of the last function in the pipeline.
Mutation.deletePost
The Mutation.deletePost resolver uses its own RequestMappingTemplate to set up the `$ctx.stash` such that its pipeline is parameterized to return the correct results.
Mutation.deletePost.req.vtl
Function 1: DeletePost
The function will delete the value designated via `$ctx.stash.DeletePost.input.id` and return the results.
Mutation.deletePost.res.vtl
Return the result of the last function in the pipeline.
Query.getPost
The Query.getPost resolver uses its own RequestMappingTemplate to set up the `$ctx.stash` such that its pipeline is parameterized to return the correct results.
Query.getPost.req.vtl
Function 1: GetPost
The function will get the value designated via `$ctx.stash.GetPost.id` and return the results.
Query.getPost.res.vtl
Return the result of the last function in the pipeline.
Query.listPosts
The Query.listPosts resolver uses its own RequestMappingTemplate to set up the `$ctx.stash` such that its pipeline is parameterized to return the correct results.
Query.listPosts.req.vtl
Function 1: ListPosts
The function will list the values designated via `$ctx.stash.ListPosts` and return the results.
Query.listPosts.res.vtl
Return the result of the last function in the pipeline.
@connection resolvers
The example above would create the following resolvers:
Post.comments
The Post.comments resolver uses its own RequestMappingTemplate to set up the `$ctx.stash` such that its pipeline is parameterized to return the correct results.
Post.comments.req.vtl
Function 1: QueryPosts
The function will get the values designated via `$ctx.stash.QueryPosts` and return the results.
Post.comments.res.vtl
Return the result of the last function in the pipeline.
Comment.post
The Comment.post resolver uses its own RequestMappingTemplate to set up the `$ctx.stash` such that its pipeline is parameterized to return the correct results.
Comment.post.req.vtl
Function 1: GetPost
The function will get the values designated via `$ctx.stash.GetPost` and return the results.
Comment.post.res.vtl
Return the result of the last function in the pipeline.
@searchable resolvers
Query.searchPosts
The Query.searchPosts resolver uses its own RequestMappingTemplate to set up the `$ctx.stash` such that its pipeline is parameterized to return the correct results.
Query.searchPosts.req.vtl
Function 1: SearchPosts
The function will get the values designated via `$ctx.stash.SearchPosts` and return the results.
Query.searchPosts.res.vtl
Return the result of the last function in the pipeline.
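For illustration, a SearchPosts function's request template might issue the Elasticsearch query along these lines. This is a hypothetical sketch; the index path and the assumption that the caller's search body is passed whole via the stash "params" argument are illustrative:

```vtl
## SearchPosts function request template (sketch).
## Forward the caller-supplied search parameters to the Elasticsearch domain.
#set($params = $ctx.stash.args.SearchPosts.params)
{
  "version": "2018-05-29",
  "operation": "GET",
  "path": "/post/_search",
  "params": {
    "body": $util.toJson($params)
  }
}
```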
@auth resolvers
The @auth directive does not add its own resolvers but will augment the behavior of existing resolvers by manipulating values in the `$ctx.stash`.
Mutation.createX - @auth will add logic to the request mapping template of the resolver that injects a condition into `$ctx.stash.CreateX.condition`
Mutation.updateX - @auth will add logic to the request mapping template of the resolver that injects a condition into `$ctx.stash.UpdateX.condition`
Mutation.deleteX - @auth will add logic to the request mapping template of the resolver that injects a condition into `$ctx.stash.DeleteX.condition`
Query.getX - @auth will add logic to the response mapping template of the resolver that will return the value if authorized.
Query.listX - @auth will add logic to the response mapping template of the resolver that will filter `$ctx.prev.result.items` based on the auth rules.
Query.searchX - @auth will add logic to the response mapping template of the resolver that will filter `$ctx.prev.result.items` based on the auth rules.
Query.queryX - @auth will add logic to the response mapping template of the resolver that will filter `$ctx.prev.result.items` based on the auth rules.
Model.connectionField - @auth will add logic to the response mapping template of the resolver that will filter `$ctx.prev.result.items` based on the auth rules.
@versioned resolvers
The @versioned directive does not add its own resolver but will augment the behavior of existing resolvers by manipulating values in the `$ctx.stash`.
Mutation.createX - @versioned will add logic to the request mapping template of the resolver that injects a condition into `$ctx.stash.CreateX.condition`
Mutation.updateX - @versioned will add logic to the request mapping template of the resolver that injects a condition into `$ctx.stash.UpdateX.condition`
Mutation.deleteX - @versioned will add logic to the request mapping template of the resolver that injects a condition into `$ctx.stash.DeleteX.condition`
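As a rough sketch of this condition-injection pattern, the logic @versioned adds for an update might look like the following. This is hypothetical VTL; it assumes the resolver has already initialized a condition object in the stash and that the client passes an expectedVersion input field:

```vtl
## Append a version check to the UpdatePost condition in the stash (sketch).
## Assumes $ctx.stash.args.UpdatePost.condition already exists with
## expression/expressionNames/expressionValues entries.
#set($cond = $ctx.stash.args.UpdatePost.condition)
$util.qr($cond.put("expression",
  "${cond.expression} AND #version = :expectedVersion"))
$util.qr($cond.expressionNames.put("#version", "version"))
$util.qr($cond.expressionValues.put(":expectedVersion",
  $util.dynamodb.toDynamoDB($ctx.args.input.expectedVersion)))
```

The UpdatePost function downstream would then fail the write if the stored version no longer matches.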
Proposal 2: The @before and @after directives
There are many possibilities for how to expose pipeline functions via the transform. Defining a function of your own requires a request mapping template, a response mapping template, and a data source. Using a function requires that you place that function, in order, within a pipeline resolver. Any directive(s) introduced would need to be able to accommodate both of these requirements. Here are a few options for discussion.
Before & After directives for adding logic to auto-generated model mutations
The main use case for this approach is to add custom authorization/audit/etc. logic to mutations that are generated by the Amplify CLI. For example, you might want to look up that a user is a member of a chat room before they can create a message. Currently this design only supports mutations, but if you have suggestions for how to generalize this for read operations, comment below.
Which would be used like so:
To implement your function logic, you would drop two files in resolvers/ called AuthorizeUserIsChatMember.req.vtl & AuthorizeUserIsChatMember.res.vtl:
The @before directive specifies which data source should be called, and the order of the functions could be determined by the order of the @before directives on the model. The @after directive would work similarly except the function would run after the generated mutation logic.
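A sketch of what those two files might contain, assuming a membership table keyed by roomId and userId; all table, key, and argument names here are hypothetical:

```vtl
## AuthorizeUserIsChatMember.req.vtl (sketch)
## Look up the caller's membership record for the target chat room.
{
  "version": "2018-05-29",
  "operation": "GetItem",
  "key": {
    "roomId": $util.dynamodb.toDynamoDBJson($ctx.args.input.chatRoomId),
    "userId": $util.dynamodb.toDynamoDBJson($ctx.identity.username)
  }
}

## AuthorizeUserIsChatMember.res.vtl (sketch)
## Fail the pipeline if no membership record was found,
## otherwise pass the previous result along unchanged.
#if($util.isNull($ctx.result))
  $util.unauthorized()
#end
$util.toJson($ctx.prev.result)
```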
Audit mutations with a single AppSync function
You could then use function templates like this:
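The function templates themselves were not captured here; purely as an illustration, an audit function writing to a hypothetical AuditLog table might look like the following (table name, attribute names, and the stash fieldName entry are all assumptions):

```vtl
## AuditX.req.vtl (sketch): record who ran which mutation, and with what input.
{
  "version": "2018-05-29",
  "operation": "PutItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($util.autoId())
  },
  "attributeValues": {
    "mutation": $util.dynamodb.toDynamoDBJson($ctx.stash.get("fieldName")),
    "input": $util.dynamodb.toDynamoDBJson($ctx.args.input),
    "performedBy": $util.dynamodb.toDynamoDBJson($ctx.identity.username),
    "performedAt": $util.dynamodb.toDynamoDBJson($util.time.nowISO8601())
  }
}

## AuditX.res.vtl (sketch): pass the previous function's result through.
$util.toJson($ctx.prev.result)
```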
Request for comments
The goal is to provide simple-to-use and effective abstractions. Please leave your comments with questions, concerns, and use cases that you would like to see covered.