
RFC - Pipeline Resolver Support #430

Closed · mikeparisstuff opened this issue Mar 18, 2019 · 70 comments

@mikeparisstuff
Contributor

mikeparisstuff commented Mar 18, 2019

Pipeline Resolvers Support

This RFC documents a process to transition the Amplify CLI to AppSync pipeline resolvers. The driving use case for this feature is to allow users to compose their own logic with the logic that is generated by the GraphQL Transform. For example, a user might want to authorize a mutation that creates a message by first verifying that the user is enrolled in the message's chat room. Other examples include adding custom input validation or audit logging to @model mutations. This document is not necessarily final, so please leave your comments so we can address any concerns.


Proposal 1: Use pipelines everywhere

Back in 2018, AppSync released a feature called pipeline resolvers. Pipeline resolvers allow you to serially execute multiple AppSync functions within the resolver for a single field (not to be confused with AWS Lambda functions). AppSync functions behave similarly to old-style AppSync resolvers and contain a request mapping template, a response mapping template, and a data source. A function may be referenced by multiple AppSync resolvers, allowing you to reuse the same function across resolvers. The AppSync resolver context ($ctx in resolver templates) has also received a new stash map that lives throughout the execution of a pipeline resolver. You may use the $ctx.stash to store intermediate results and pass information between functions.
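
For example, a minimal sketch (the function roles and the stash key are hypothetical) of passing a value between two functions in a pipeline via the stash:

## Response mapping template of an earlier function: stash an intermediate result **
$util.qr($ctx.stash.put("chatRoom", $ctx.result))
$util.toJson($ctx.result)

## Request mapping template of a later function: read the stashed value **
#set($room = $ctx.stash.chatRoom)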

The first step towards supporting pipeline resolvers is to switch all existing generated resolvers to use pipeline resolvers. To help make the generated functions more reusable, each function defines a set of arguments that it expects to find in the stash. The arguments for a function are passed by setting a value in $ctx.stash.args under a key that matches the name of the function. Below is the full list of functions that will be generated by the different directives.

Generated Functions

Function: CreateX

Generated by @model and issues a DynamoDB PutItem operation with a condition expression to create records if they do not already exist.

Arguments

The CreateX function expects

{
    "stash": {
        "args": {
            "CreateX": {
                "input": {
                    "title": "some title",
                },
                "condition": {
                    "expression": "attribute_not_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}

Function: UpdateX

Generated by @model and issues a DynamoDB UpdateItem operation with a condition expression to update if the item exists.

Arguments

The UpdateX function expects

{
    "stash": {
        "args": {
            "UpdateX": {
                "input": {
                    "title": "some other title",
                },
                "condition": {
                    "expression": "attribute_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}

Function: DeleteX

Generated by @model and issues a DynamoDB DeleteItem operation with a condition expression to delete if the item exists.

Arguments

The DeleteX function expects

{
    "stash": {
        "args": {
            "DeleteX": {
                "input": {
                    "id": "123",
                },
                "condition": {
                    "expression": "attribute_exists(#id)",
                    "expressionNames": {
                        "#id": "id"
                    },
                    "expressionValues": {}
                }
            }
        }
    }
}

Function: GetX

Generated by @model and issues a DynamoDB GetItem operation.

Arguments

The GetX function expects

{
    "stash": {
        "args": {
            "GetX": {
                "id": "123"
            }
        }
    }
}

Function: ListX

Generated by @model and issues a DynamoDB Scan operation.

Arguments

The ListX function expects

{
    "stash": {
        "args": {
            "ListX": {
                "filter": {
                    "expression": "",
                    "expressionNames": {},
                    "expressionValues": {}
                },
                "limit": 20,
                "nextToken": "some-next-token"
            }
        }
    }
}

Function: QueryX

Generated by @model and issues a DynamoDB Query operation.

Arguments

The QueryX function expects

{
    "stash": {
        "args": {
            "QueryX": {
                "query": {
                    "expression": "#hashKey = :hashKey",
                    "expressionNames": {
                        "#hashKey": "hashKeyAttribute",
                        "expressionValues": {
                            ":hashKey": {
                                "S": "some-hash-key-value"
                            }
                        }
                    }
                },
                "scanIndexForward": true,
                "filter": {
                    "expression": "",
                    "expressionNames": {},
                    "expressionValues": {}
                },
                "limit": 20,
                "nextToken": "some-next-token",
                "index": "some-index-name"
            }
        }
    }
}

Function: AuthorizeCreateX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeCreateX function expects no additional arguments. The AuthorizeCreateX function will look at $ctx.stash.args.CreateX.input and validate it against the $ctx.identity. The function will manipulate $ctx.stash.args.CreateX.condition such that the correct authorization conditions are added.
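
As an illustration, a hedged sketch (assuming a None data source and an "owner" field) of how such a function's request mapping template might append an owner check to the stashed condition:

## Append an owner check to the stashed CreateX condition **
#set($cond = $ctx.stash.args.CreateX.condition)
#set($cond.expression = "$cond.expression AND #owner = :identity")
$util.qr($cond.expressionNames.put("#owner", "owner"))
$util.qr($cond.expressionValues.put(":identity", $util.dynamodb.toDynamoDB($ctx.identity.username)))
{
    "version": "2018-05-29",
    "payload": {}
}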


Function: AuthorizeUpdateX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeUpdateX function expects no additional arguments. The AuthorizeUpdateX function will look at $ctx.stash.args.UpdateX.input and validate it against the $ctx.identity. The function will manipulate $ctx.stash.args.UpdateX.condition such that the correct authorization conditions are added.


Function: AuthorizeDeleteX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeDeleteX function expects no additional arguments. The AuthorizeDeleteX function will look at $ctx.stash.args.DeleteX.input and validate it against the $ctx.identity. The function will manipulate $ctx.stash.args.DeleteX.condition such that the correct authorization conditions are added.


Function: AuthorizeGetX

Generated by @auth when used on an OBJECT.

Arguments

The AuthorizeGetX function expects no additional arguments. The AuthorizeGetX function will look at $ctx.prev.result (the item returned by the GetX function) and validate it against the $ctx.identity. The function will return null and append an error if the user is unauthorized.


Function: AuthorizeXItems

Filters a list of items based on @auth rules placed on the OBJECT. This function can be used by top level queries that return multiple values (list, query) as well as by @connection fields.

Arguments

The AuthorizeXItems function expects $ctx.prev.result to contain a list of "items" that should be filtered. This function returns the filtered results.
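
A minimal sketch of such a filtering function's response mapping template, assuming owner-based rules and an "owner" field on each item:

## Keep only the items the caller is authorized to see **
#set($filteredItems = [])
#foreach($item in $ctx.prev.result.items)
    #if($item.owner == $ctx.identity.username)
        $util.qr($filteredItems.add($item))
    #end
#end
$util.qr($ctx.prev.result.put("items", $filteredItems))
$util.toJson($ctx.prev.result)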


Function: HandleVersionedCreate

Created by the @versioned directive and sets the initial value of an object's version to 1.

Arguments

The HandleVersionedCreate function augments $ctx.stash.args.CreateX.input such that it always contains an initial version.
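
A hedged sketch of that request-side logic, assuming the version field is named "version":

## Set the initial version if the caller did not provide one **
#if( !$ctx.stash.args.CreateX.input.containsKey("version") )
    $util.qr($ctx.stash.args.CreateX.input.put("version", 1))
#end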


Function: HandleVersionedUpdate

Created by the @versioned directive and updates the condition expression with version information.

Arguments

The HandleVersionedUpdate function uses $ctx.stash.args.UpdateX.input to append a condition to $ctx.stash.args.UpdateX.condition such that the object is only updated if the versions match.
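
A hedged sketch of appending the version check, with the field names assumed:

## Only update the item if the stored version matches the expected version **
#set($cond = $ctx.stash.args.UpdateX.condition)
#set($cond.expression = "$cond.expression AND #version = :expectedVersion")
$util.qr($cond.expressionNames.put("#version", "version"))
$util.qr($cond.expressionValues.put(":expectedVersion", $util.dynamodb.toDynamoDB($ctx.stash.args.UpdateX.input.expectedVersion)))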


Function: HandleVersionedDelete

Created by the @versioned directive and updates the condition expression with version information.

Arguments

The HandleVersionedDelete function uses $ctx.stash.args.DeleteX.input to append a condition to $ctx.stash.args.DeleteX.condition such that the object is only deleted if the versions match.


Function: SearchX

Created by the @searchable directive and issues an Elasticsearch query against your Elasticsearch domain.

Arguments

The SearchX function expects a single argument "params".

{
    "stash": {
        "args": {
            "SearchX": {
                "params": {
                    "body": {
                        "from": "",
                        "size": 10,
                        "sort": ["_doc"],
                        "query": {
                            "match_all": {}
                        }
                    }
                }
            }
        }
    }
}
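
For context, a hedged sketch of how the SearchX function's request mapping template might forward these params to the Elasticsearch domain (the index path is an assumption):

## SearchX.req.vtl (illustrative only) **
{
    "version": "2018-05-29",
    "operation": "GET",
    "path": "/x/doc/_search",
    "params": $util.toJson($ctx.stash.args.SearchX.params)
}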

Generated Resolvers

The @model, @connection, and @searchable directives all add resolvers to fields within your schema. The @versioned and @auth directives will only add functions to existing resolvers created by the other directives. This section will look at the resolvers generated by the @model, @connection, and @searchable directives.

@model resolvers

type Post @model {
    id: ID!
    title: String
}

This schema will create the following resolvers:


Mutation.createPost

The Mutation.createPost resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Mutation.createPost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.CreatePost = {
    "input": $ctx.args.input
})

Function 1: CreatePost

The function will insert the value provided via $ctx.stash.args.CreatePost.input and return the results.
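
To make the pipeline concrete, here is a hedged sketch of what the CreatePost function's request mapping template could look like (illustrative, not the final generated template):

## CreatePost function request mapping template (sketch) **
{
    "version": "2018-05-29",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($util.defaultIfNullOrBlank($ctx.stash.args.CreatePost.input.id, $util.autoId()))
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($ctx.stash.args.CreatePost.input),
    "condition": $util.toJson($ctx.stash.args.CreatePost.condition)
}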

Mutation.createPost.res.vtl

Return the result of the last function in the pipeline.

$util.toJson($ctx.prev.result)

Mutation.updatePost

The Mutation.updatePost resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Mutation.updatePost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.UpdatePost = {
    "input": $ctx.args.input
})

Function 1: UpdatePost

The function will update the value provided via $ctx.stash.args.UpdatePost.input and return the results.

Mutation.updatePost.res.vtl

Return the result of the last function in the pipeline.

$util.toJson($ctx.prev.result)

Mutation.deletePost

The Mutation.deletePost resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Mutation.deletePost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.DeletePost = {
    "input": $ctx.args.input
})

Function 1: DeletePost

The function will delete the item designated via $ctx.stash.args.DeletePost.input.id and return the results.

Mutation.deletePost.res.vtl

Return the result of the last function in the pipeline.

$util.toJson($ctx.prev.result)

Query.getPost

The Query.getPost resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Query.getPost.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.GetPost = {
    "id": $ctx.args.id
})

Function 1: GetPost

The function will get the item designated via $ctx.stash.args.GetPost.id and return the results.

Query.getPost.res.vtl

Return the result of the last function in the pipeline.

$util.toJson($ctx.prev.result)

Query.listPosts

The Query.listPosts resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Query.listPosts.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.ListPosts = {
    "filter": $util.transform.toDynamoDBFilterExpression($ctx.args.filter),
    "limit": $ctx.args.limit,
    "nextToken": $ctx.args.nextToken
})

Function 1: ListPosts

The function will scan the table using the parameters in $ctx.stash.args.ListPosts and return the results.

Query.listPosts.res.vtl

Return the result of the last function in the pipeline.

$util.toJson($ctx.prev.result)

@connection resolvers

type Post @model {
    id: ID!
    title: String
    comments: [Comment] @connection(name: "PostComments")
}
type Comment @model {
    id: ID!
    content: String
    post: Post @connection(name: "PostComments")
}

The example above would create the following resolvers


Post.comments

The Post.comments resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Post.comments.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.QueryComments = {
    "query": {
        "expression": "#connectionAttribute = :connectionAttribute",
        "expressionNames": {
            "#connectionAttribute": "commentPostId"
        },
        "expressionValues": {
            ":connectionAttribute": {
                "S": "$ctx.source.id"
            }
        }
    },
    "scanIndexForward": true,
    "filter": $util.transform.toDynamoDBFilterExpression($ctx.args.filter),
    "limit": $ctx.args.limit,
    "nextToken": $ctx.args.nextToken,
    "index": "gsi-PostComments"
})

Function 1: QueryComments

The function will query for comments using the parameters in $ctx.stash.args.QueryComments and return the results.

Post.comments.res.vtl

Return the result of the last function in the pipeline.

$util.toJson($ctx.prev.result)

Comment.post

The Comment.post resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Comment.post.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.GetPost = {
    "id": "$ctx.source.commentPostId"
})

Function 1: GetPost

The function will get the item designated via $ctx.stash.args.GetPost and return the results.

Comment.post.res.vtl

Return the result of the last function in the pipeline.

$util.toJson($ctx.prev.result)

@searchable resolvers

type Post @model @searchable {
    id: ID!
    title: String
}

Query.searchPosts

The Query.searchPosts resolver uses its own RequestMappingTemplate to set up the $ctx.stash such that its pipeline is parameterized to return the correct results.

Query.searchPosts.req.vtl

#set($ctx.stash.args = {})
#set($ctx.stash.args.SearchPosts = {
    "params": {
        "body": {
            "from": "$context.args.nextToken",
            "size": $context.args.limit,
            "sort": [],
            "query": $util.transform.toElasticsearchQueryDSL($ctx.args.filter)
        }
    }
})

Function 1: SearchPosts

The function will execute the search designated via $ctx.stash.args.SearchPosts and return the results.

Query.searchPosts.res.vtl

Return the result of the last function in the pipeline.

$util.toJson($ctx.prev.result)

@auth resolvers

The @auth directive does not add its own resolvers but will augment the behavior of existing resolvers by manipulating values in the $ctx.stash.

  • Mutation.createX - @auth will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.args.CreateX.condition
  • Mutation.updateX - @auth will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.args.UpdateX.condition
  • Mutation.deleteX - @auth will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.args.DeleteX.condition
  • Query.getX - @auth will add logic to the response mapping template of the resolver that will return the value if authorized.
  • Query.listX - @auth will add logic to the response mapping template of the resolver that will filter $ctx.prev.result.items based on the auth rules.
  • Query.searchX - @auth will add logic to the response mapping template of the resolver that will filter $ctx.prev.result.items based on the auth rules.
  • Query.queryX - @auth will add logic to the response mapping template of the resolver that will filter $ctx.prev.result.items based on the auth rules.
  • Model.connectionField - @auth will add logic to the response mapping template of the resolver that will filter $ctx.prev.result.items based on the auth rules.

@versioned resolvers

The @versioned directive does not add its own resolver but will augment the behavior of existing resolvers by manipulating values in the $ctx.stash.

  • Mutation.createX - @versioned will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.args.CreateX.condition
  • Mutation.updateX - @versioned will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.args.UpdateX.condition
  • Mutation.deleteX - @versioned will add logic to the request mapping template of the resolver that injects a condition into $ctx.stash.args.DeleteX.condition

Proposal 2: The @before and @after directives

There are many possibilities for how to expose pipeline functions via the transform. Defining a function of your own requires a request mapping template, a response mapping template, and a data source. Using a function requires placing it, in order, within a pipeline resolver. Any directive(s) introduced would need to accommodate both of these requirements. Here are a few options for discussion.

Before & After directives for adding logic to auto-generated model mutations

The main use case for this approach is to add custom authorization/audit/etc. logic to mutations that are generated by the Amplify CLI. For example, you might want to verify that a user is a member of a chat room before they can create a message. Currently this design only supports mutations, but if you have suggestions for how to generalize it for read operations, comment below.

directive @before(mutation: ModelMutation!, function: String!, datasource: String!) on OBJECT
directive @after(mutation: ModelMutation!, function: String!, datasource: String!) on OBJECT
enum ModelMutation {
    create
    update
    delete
}

Which would be used like so:

# Messages are only readable via @connection fields.
# Message mutations are pre-checked by a custom function.
type Message 
  @model(queries: null)
  @before(mutation: create, function: "AuthorizeUserIsChatMember", datasource: "ChatRoomTable")
{
    id: ID!
    content: String
    room: Room @connection(name: "ChatMessages")
}
type ChatRoom @model @auth(rules: [{ allow: owner, ownerField: "members" }]) {
    id: ID!
    messages: [Message] @connection(name: "ChatMessages")
    members: [String]
}

To implement your function logic, you would drop two files in resolvers/ called AuthorizeUserIsChatMember.req.vtl & AuthorizeUserIsChatMember.res.vtl:

## AuthorizeUserIsChatMember.req.vtl **
{
    "version": "2018-05-29",
    "operation": "GetItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($ctx.args.input.messageRoomId)
    }
}

## AuthorizeUserIsChatMember.res.vtl **
#if( ! $ctx.result.members.contains($ctx.identity.username) )
  ## If the user is not a member do not allow the CreatePost function to be called next. ** 
  $util.unauthorized()
#else
  ## Do nothing and allow the CreatePost function to be called next. **
  $util.toJson($ctx.result)
#end

The @before directive specifies which data source should be called, and the order of the functions would be determined by the order of the @before directives on the model, as sketched below. The @after directive would work similarly, except the function would run after the generated mutation logic.
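
Chaining multiple checks would then presumably look like stacking directives, as in this hedged sketch (function and data source names are hypothetical):

# Functions run in the order their @before directives appear.
type Message
  @model(queries: null)
  @before(mutation: create, function: "ValidateMessageInput", datasource: "NoneDataSource")
  @before(mutation: create, function: "AuthorizeUserIsChatMember", datasource: "ChatRoomTable")
{
    id: ID!
    content: String
}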

Audit mutations with a single AppSync function

type Message
  @model(queries: null)
  @after(mutation: create, function: "AuditMutation", datasource: "AuditTable")
{
    id: ID!
    content: String
}
# The Audit model is not exposed via the API but will create a table 
# that can be used by your functions.
type Audit @model(queries: null, mutations: null, subscriptions: null) {
    id: ID!
    ctx: AWSJSON
}

You could then use function templates like this:

## AuditMutation.req.vtl **
## Log the entire resolver ctx to a DynamoDB table **
#set($auditRecord = {
    "ctx": $ctx,
    "timestamp": $util.time.nowISO8601()
})
{
    "version": "2018-05-29",
    "operation": "PutItem",
    "key": {
        "id": $util.dynamodb.toDynamoDBJson($util.autoId())
    },
    "attributeValues": $util.dynamodb.toMapValuesJson($auditRecord)
}

## AuditMutation.res.vtl **
## Return the same value as the previous function **
$util.toJson($ctx.prev.result)

Request for comments

The goal is to provide simple to use and effective abstractions. Please leave your comments with questions, concerns, and use cases that you would like to see covered.

@DanielCender

Hi @mikeparisstuff,
I am presuming that these proposed features will allow more granular control of overwriting auto-generated AppSync resolvers, as currently outlined in the docs here. Curious if any of this is currently in the works.

My team is going to be implementing pipeline resolvers heavily on a current project, and I am looking forward to updates like these to allow us to control the whole pipeline from within our codebase.

I realize that there is support for constructing custom resolvers locally, per this section of the Amplify docs.
Since we have been implementing pipelines to access Delta tables for use with DeltaSync, it would be incredible if we could not just create our custom resolvers locally, but also easily specify those pipelines, their contents, function ordering, etc.

I might be overlooking a way to accomplish this that combines multiple techniques from the docs/issues, but if not, just wanted to put this out there, given the nature of this RFC.

Thanks!

@laura-sainz-mojix-com

+1!

@YikSanChan

According to the Proposal 1 "Generated Functions" section, does this mean amplify-cli will auto-generate queries that look like:

// queries.js
const getPost = `query GetPost($id: ID!) {
    "stash": {
        "args": {
            "GetPost": {
                "id": $id
            }
        }
    }
}
`

and we can execute the GraphQL operation by

API.graphql(graphqlOperation(queries.getPost, { id: postId }))
  .then(response => response.data.xxx)

How will the response differ from the current version? How can we access stashed data from a previous GraphQL operation?

@YikSanChan

YikSanChan commented Apr 14, 2019

  1. An easy-to-debug pipeline resolver is also very important. A very good first step would be to allow printing in VTL so that developers can tell what happens inside pipeline resolvers by looking at the CloudWatch logs. See How do you debug resolvers? amplify-cli#652.

  2. Can we add custom pipeline resolvers by adding files under the resolvers/ folder? See Custom pipeline resolver amplify-cli#1271. If not, this is a good feature to have.

@ambientlight

ambientlight commented Apr 17, 2019

@mikeparisstuff:

Looking at proposal 2, @before and @after don't allow chaining an arbitrary number of pipeline resolver functions.

What do you think about making @before and @after a generic pipeline-resolver @function directive that takes an extra argument (mutation) and derives its position from its placement relative to @model?

Then

type Message
  @before(mutation: create, function: "Authorize", datasource: "AuditTable")
  @model(queries: null)
  @after(mutation: create, function: "AuditMutation", datasource: "AuditTable")
{
    id: ID!
    content: String
}

becomes:

type Message
  @function(mutation: create, name: "Authorize", datasource: "AuditTable")
  @model(queries: null)
  @function(mutation: create, name: "AuditMutation", datasource: "AuditTable")
{
    id: ID!
    content: String
}

Then multiple pipeline resolver functions can be chained before and after @model's mutation resolvers, providing greater flexibility.

We can treat @function as a generic pipeline-resolver building block that allows us to compose any arbitrary pipeline resolver hierarchy. In this context we can express aws-amplify/amplify-cli#83 as @functions with a Lambda data source:

@function(name: "ComplexCompose", datasource: "complex_compose_some_magic_api_request")
@http(url: "https://somemagicapi/:dataType/:postId/:secondType")
@function(name: "ComplexResponseTransform", datasource: "complex_response_transform")
@function(name: "AddSomeSeasoning", datasource: "add_some_seasoning")

One example of a use case where we might need multiple @functions after the 'primary' resolver: say we need some custom business logic to run inside a VPC, and those Lambdas have pretty long cold starts. We could factor the logic from multiple @functions into a single one, attach a keep-alive CloudWatch event that periodically calls the function to avoid cold starts, and add another @function after it to transform the result into the desired form.

@hisham

hisham commented Apr 17, 2019

I like the Audit use case. How would you search through that audit log model you made? Hook it up to Elasticsearch? How would you protect this search through @auth rules? Would the graphql transformer support this use case fully?

@tafelito

If I have to query some data before doing a mutation in the same operation, is the pipeline the solution for this too? Any plans on implementing transactional pipelines?

@hisham

hisham commented Apr 18, 2019

Also, for the audit use case, from a performance perspective it seems like recording to the audit table should happen in parallel to the primary mutation rather than before or after it. It might be better to implement it as a Lambda stream from a DynamoDB table, for example. I assume pipeline resolvers currently don't allow async or parallel operations.

@timrchavez

@mikeparisstuff

Currently this design only supports mutations but if you have suggestions for how to generalize this for read operations

I don't know enough about the implementation, but what are the challenges with making proposal 2 work for query/read operations?

Ideally we'd want to support the isFriend scenario outlined here https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-pipeline-resolvers.html

@davekiss

Just tried to create and run a pipeline function that depends on the result of an auto-generated resolver and realized the @after is what I'm looking for. I think adding as many functions in the SDL as you might need to complete the pipeline would be ideal, as @ambientlight suggests.

@ajhool

ajhool commented Jun 4, 2019

Both proposals look good.

Currently this design only supports mutations but if you have suggestions for how to generalize this for read operations, comment below.

I'm not sure why the @before and @after approach wouldn't also be useful for get operations.


In Proposal 1, it appears that authorization is only provided for DynamoDB resolvers (i.e. AuthorizeCreateX, AuthorizeUpdateX). However, now that the @function directive has been added to the API (aws-amplify/amplify-cli#83), there should also be an AuthorizeInvokeX (or AuthorizeFunctionX) function. Custom resolver Lambda functions can add security in code, but preventing invocation would provide an additional layer of security that conforms to the auth groups defined throughout the schema. It would also be easier to add group logic into AuthorizeInvokeX than in Lambda code.

@hisham

hisham commented Jun 5, 2019

Any estimate for when any of this RFC would start getting implemented, and its priority relative to other RFCs? It's seriously needed for any multi-tenancy app.

@artista7

@mikeparisstuff any updates?

@artista7

@mikeparisstuff, can we have a tentative deadline so we can decide whether to wait for the feature or go with an alternate approach? :)

@kaustavghosh06
Contributor

@artista7 Could you take a look at the "Chaining functions" section out here - https://aws-amplify.github.io/docs/cli/graphql#function and see if it solves for your use-case?

@artista7

artista7 commented Jun 26, 2019

@kaustavghosh06, the point is that I want to put a filter on some mutations, e.g. createUser (but only on Mondays), so I wanted to override the createUser resolver with a pipeline resolver instead of creating new mutation fields with different names.

@nino-moreton

Creating a logger is a PITA right now until we have some form of this RFC. The current workaround we're thinking of is to create a custom mutation for each CRUD action with the help of the @function directive. So basically overriding all of our models' GraphQL mutations.

@andrewbtp

andrewbtp commented Aug 13, 2019

I'm a big fan of prop 1. Prop 2 adds a lot to the schema if you have bigger pipelines, and it seems to offer less customization. If we, the users, are going to be messing with resolver pipelines, we have to dive into the pipelineFunctions folder (or whatever) anyway, so the config is fine to go there rather than being built in as directives like prop 2.

To expand on this more, I'm making pipeline resolvers for queries, deletes, updates, and creates. I don't want to have 10+ lines in my schema per model just for pipeline functions.

@idanlo

idanlo commented Aug 28, 2019

Can we have any updates? It has been 5 months and this is a much-needed feature for any app that is not a todo list.

@andrewbtp

I have a Python script handling this that I can stick on GitHub if ya want, @idanlo.

@ronaldocpontes

@hisham thanks for referencing aws-amplify/amplify-cli#6217 ... will definitely have a play with useExperimentalPipelinedTransformer

@kevcam4891 this is exactly what we need. Would you have more details on how you wrote your custom transformer plugin? Would you be kind enough to share some of the code for it?

@renebrandel
Contributor

Hi @ronaldocpontes - correct we're tracking this as part of aws-amplify/amplify-cli#6217 and here's the documentation to author a custom transformer plugin: https://docs.amplify.aws/cli/plugins/authoring#authoring-custom-graphql-transformers--directives

@ronaldocpontes

Thanks @renebrandel and @kevcam4891

I am planning to copy and expand the graphql-auth-transformer to implement @mikeparisstuff's solution here: #449

But this would end up just like the PR below, which wasn't merged:
https://github.com/aws-amplify/amplify-cli/pull/3123/files

Would you recommend a different approach?

@kevcam4891

@ronaldocpontes I'd recommend NOT trying to introduce something into the Amplify codebase. Create a standalone plugin that sits in your project. Amplify CLI will look for npm packages that are installed locally to your user or project, or globally on your machine. I'll try to follow up soon with an article on what I did in particular (it's not a trivial writeup), but in short: you create the npm package for your own consumption, and Amplify CLI finds it and uses it during the transform process if it is listed in the transformers array in backend/api/[name]/transform.conf.json.
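
For reference, a minimal sketch of a backend/api/[name]/transform.conf.json with a custom transformer listed (the package name here is hypothetical):

{
    "transformers": [
        "my-custom-transformer-plugin"
    ]
}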

@renebrandel I will say, I used the docs link referenced above for inspiration, but the docs were far from a step-by-step guide, and in fact only about 1/3 of it is represented accurately. If you debug the project during an amplify push you can see a little more clearly where the proper hooks are. When time allows, I can offer suggested updates to the page to make it easier to follow.

@ronaldocpontes

Agreed @kevcam4891. I would rather have an external plugin providing additional features than having to fork, replace and maintain parts of the framework codebase.

Your solution seems like quite a good way to use pipeline resolvers when business logic needs to be added on top of the basic operations provided by the @model directive, and something we would use a lot. Can't wait to see your writeup.

@volkanunsal

Any updates?

@UXDart

UXDart commented Jul 20, 2021

this was never implemented, right?

@rahul-insight

Is there any update on this?

@maziarzamani

Any update here?

@brienpafford

Bump. I requested this feature via a support ticket with AWS and they sent me to this ticket. This feature is pretty critical -- currently, pushing to Amplify overrides the pipeline resolver configuration set in the AppSync console, which defeats the purpose of the custom pipeline features if their settings cannot be preserved between Amplify CLI deployments. Thanks!

@renebrandel
Contributor

Hi folks - just wanted to let you know that we've now released Pipeline Resolver support with GraphQL Transformer v2. With the new transformer, all Amplify-generated resolvers are pipeline resolvers. We also provide mechanisms to "override" Amplify-generated resolvers OR "slot in"/"extend" them.

More details here:

@-mention me if there's a particular scenario that you believe isn't covered. Would love to chat more!

@zirkelc

zirkelc commented Dec 11, 2021

@renebrandel is there a way to automatically create an AppSync data source as well? I can easily slot custom business logic into the pipeline by creating a VTL template (e.g. preDataLoad or postDataLoad), but these custom resolver templates are executed against the None data source and therefore can basically only set static values. It would be good if I could write a VTL template that invokes a Lambda function and Amplify created the corresponding AppSync data source as well.

Let's say I have a schema with a User type with the @model annotation. Amplify creates the DynamoDB table and the CRUD queries and mutations (and their DynamoDB resolvers) for me. Now the createUser and updateUser mutations should hash the password before saving it to DynamoDB. The new "slot-in" mechanism makes it possible to create two VTL templates, Mutation.createUser.preDataLoad.1.req.vtl and Mutation.updateUser.preDataLoad.1.req.vtl, which are called right before the request to DynamoDB to put the item. It would be ideal to invoke a Lambda function here, hash the password, and return the result. The subsequent template Mutation.createUser.req.vtl would then receive the modified input (hashed password) and execute the PutItem operation.

It seems to me that this is not possible yet, except by creating a custom mutation like signUpUser and creating all the resolvers on our own, as described in this post.

I think there are a few options for making this possible in a user-friendly way:

  1. extend the @function directive parameters to define when and where the function gets invoked, based on the parent field (e.g. the createUser mutation), which is available as source from the event object:
type User @model  {
  id: ID!
  name: String!
  password: String! 
    @function(name: "hash-password-function", onMutations: [{ field: "createUser", slot: pre}, { field: "updateUser", slot: pre }])
}
  2. allow overwriting/extending generated mutations like createUser without the need to create all the other types (input, condition, etc.) as well:
# extend the generated Mutations from @model with a @function directive
extend type Mutation {
  createUser(input: CreateUserInput!, condition: ModelUserConditionInput): User
    @function(name: "hash-password-function")
  updateUser(input: UpdateUserInput!, condition: ModelUserConditionInput): User
    @function(name: "hash-password-function")
}
  3. create a @datasource directive to easily define new data sources from schema.graphql, and use a datasource field in the VTL template to specify the data source:
@datasource(type: LAMBDA, name: "hash-password-function")
type User @model @auth(rules: [{ allow: public }])
{
  id: ID!
  name: String!
  password: String! 
}
## Mutation.createUser.preDataLoad.1.req.vtl
## [Start] Invoke AWS Lambda data source: HashPasswordLambdaDataSource. **
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "datasource": "HashPasswordLambdaDataSource"
  "payload": {
      "typeName": $util.toJson($ctx.stash.get("typeName")),
      "fieldName": $util.toJson($ctx.stash.get("fieldName")),
      "arguments": $util.toJson($ctx.arguments),
      "identity": $util.toJson($ctx.identity),
      "source": $util.toJson($ctx.source),
      "request": $util.toJson($ctx.request),
      "prev": $util.toJson($ctx.prev)
  }
}
## [End] Invoke AWS Lambda data source: HashPasswordLambdaDataSource. **

Would love to hear your opinion! :-)

@renebrandel
Contributor

renebrandel commented Jan 14, 2022

Hi @zirkelc - just catching up after all the holiday madness! Some comments on the approaches:

  1. I think this might create a tiny bit of confusion for customers because @function is currently heavily associated with "Lambda". This will be harder to extend into other use cases where pure VTL (NONE), HTTP, or DDB datasources would be used.
  2. I think this is a good approach for overriding completely but will be harder or too verbose in terms of "extending". Overall my favorite of the three!
  3. We tried this approach with other directives but unfortunately, the GraphQL spec does not allow "global" @directives. (I know..., right?)

I had a couple of 1:1 conversations with AWS Community Builders. One of the DX suggestions, that I like and had good traction internally is:

Option 4:
Given all resolvers are now pipeline resolvers and the new approach on overriding/extending resolvers, we could make the top line of the resolver file define the data source. For example:

  • Mutation.createUser.preDataLoad.1.req.vtl
    ## DataSource=DynamoDB:Todo-{env}
    ......... rest of resolver
  • Mutation.createUser.preDataLoad.1.req.vtl (for Lambda, maybe a short-hand where the payload is auto-populated)
    ## DataSource=Lambda:myfunction-{env}
  • Mutation.createUser.preDataLoad.1.req.vtl
    ## DataSource=HTTP:www.example.com-{env}
    ......... rest of resolver
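
Under that convention, a slotted-in Lambda resolver file might look like this hedged sketch (function name and payload shape are assumptions):

## DataSource=Lambda:hash-password-{env}
{
    "version": "2018-05-29",
    "operation": "Invoke",
    "payload": {
        "fieldName": $util.toJson($ctx.stash.get("fieldName")),
        "arguments": $util.toJson($ctx.arguments)
    }
}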

@zirkelc

zirkelc commented Jan 18, 2022

@renebrandel thank you for taking the time to respond to my remarks. Option 4 sounds like a good compromise, as the data source name can be used both for defining a new data source and for invoking the correct data source from the resolver.

Do you have any information about the timeline for this feature?

@rbby201

rbby201 commented Mar 17, 2022

Are there any further updates on this?

@alexboulay

alexboulay commented Apr 19, 2023

+1

@renebrandel Any updates on this?

@renebrandel
Contributor

Hi - I'm going to close this issue out, as we've now launched additional guides on how to extend the GraphQL API via amplify add custom (https://docs.amplify.aws/cli/graphql/custom-business-logic/#vtl-resolver), and we've launched the GraphQL API as a first-class CDK construct: https://aws.amazon.com/blogs/mobile/announcing-aws-amplifys-graphql-api-cdk-construct-deploy-real-time-graphql-api-and-data-stack-on-aws/

Within the CDK approach, the experience is built into the construct; see the .add*() methods we've created: https://aws.amazon.com/blogs/mobile/announcing-aws-amplifys-graphql-api-cdk-construct-deploy-real-time-graphql-api-and-data-stack-on-aws/

Please @-mention me if this doesn't fully resolve the issues outlined here.

@ronaldocpontes

Thanks @renebrandel. Are we also planning on adding guides with best practices on how to support multi-tenancy in Amplify apps? This might be the key to resolving #449 (comment)
