11 changes: 9 additions & 2 deletions .spelling
@@ -362,8 +362,7 @@ PAY_PER_REQUEST
DynamoDBModelTableWriteIOPS
DynamoDBModelTableReadIOPS
ssn

# cli/graphql.md
event.prev.result
DataSource
DataSources
onCreate
@@ -389,6 +388,14 @@ versionInput
versioning
PAY_PER_REQUEST
MySQL
UserPoolId
COGNITO_USERPOOL_ID
GraphiQL
ClientId
AppClientIDWeb
typeName
fieldName
prev

# js/vue.md
confirmSignInConfig
322 changes: 322 additions & 0 deletions cli/graphql.md
@@ -899,6 +899,328 @@ The generated resolvers would be protected like so:
- `@connection` resolver: In the response mapping template filter the result's **items** such that only items with a
**groups** attribute that contains at least one of the caller's claimed groups via `$ctx.identity.claims.get("cognito:groups")`. This is not enabled when using the `queries` argument.

### @function

The `@function` directive allows you to quickly and easily configure AWS Lambda resolvers within your AWS AppSync API.

#### Definition

```
directive @function(name: String!, region: String) on FIELD_DEFINITION
```

#### Usage

The `@function` directive allows you to quickly connect Lambda resolvers to an AppSync API. You may deploy the AWS Lambda functions via the Amplify CLI, the AWS Lambda console, or any other tool. To connect an AWS Lambda resolver, add the `@function` directive to a field in your `schema.graphql`.

Let's assume we have deployed an *echo* function with the following contents:

```javascript
exports.handler = function (event, context) {
  context.done(null, event.arguments.msg);
};
```

**If you deployed your function using the 'amplify function' category**

The Amplify CLI provides support for maintaining multiple environments out of the box. When you deploy a function via `amplify add function`, it will automatically add the environment suffix to your Lambda function name. For example, if you create a function named **echofunction** using `amplify add function` in the **dev** environment, the deployed function will be named **echofunction-dev**. The `@function` directive allows you to use `${env}` to reference the current Amplify CLI environment.

```
type Query {
  echo(msg: String): String @function(name: "echofunction-${env}")
}
```

**If you deployed your function without Amplify**

If you deployed your function without Amplify, you must provide the full Lambda function name. If we deployed the same function with the name **echofunction**, then you would have:
> **Review comment (Contributor):** Is it only name or also function ARN?
>
> **Author reply:** As of now it is name and region, and the ARN will be formatted for you. This does not support cross-account access yet, but it can be added later.

```
type Query {
  echo(msg: String): String @function(name: "echofunction")
}
```
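
Once either schema is deployed, you can invoke the field from a JavaScript client. The sketch below uses the Amplify JavaScript library's API category; it assumes your project was configured with a generated `aws-exports.js` and that the field is the `echo` query defined above (the message value is arbitrary).

```javascript
// A minimal sketch of calling the echo field from a client application.
// Assumes Amplify has been configured for this project (aws-exports.js is generated by the CLI).
import Amplify, { API, graphqlOperation } from 'aws-amplify';
import awsconfig from './aws-exports';

Amplify.configure(awsconfig);

const echoQuery = /* GraphQL */ `
  query Echo($msg: String) {
    echo(msg: $msg)
  }
`;

async function runEcho() {
  // The Lambda resolver receives the msg argument under event.arguments.msg.
  const result = await API.graphql(graphqlOperation(echoQuery, { msg: 'Hello from Lambda' }));
  console.log(result.data.echo); // "Hello from Lambda"
}

runEcho();
```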

**Example: Return custom data and run custom logic**

You can use the `@function` directive to write custom business logic in an AWS Lambda function. To get started, use
`amplify add function`, the AWS Lambda console, or another tool to deploy an AWS Lambda function with the following contents.

For example purposes assume the function is named `GraphQLResolverFunction`:

```javascript
const POSTS = [
  { id: 1, title: "AWS Lambda: How To Guide." },
  { id: 2, title: "AWS Amplify Launches @function and @key directives." },
  { id: 3, title: "Serverless 101" }
];
const COMMENTS = [
  { postId: 1, content: "Great guide!" },
  { postId: 1, content: "Thanks for sharing!" },
  { postId: 2, content: "Can't wait to try them out!" }
];

// Get all posts. Write your own logic that reads from any data source.
function getPosts() {
  return POSTS;
}

// Get the comments for a single post.
function getCommentsForPost(postId) {
  return COMMENTS.filter(comment => comment.postId === postId);
}

/**
 * Using this as the entry point, you can use a single function to handle many resolvers.
 */
const resolvers = {
  Query: {
    posts: ctx => {
      return getPosts();
    },
  },
  Post: {
    comments: ctx => {
      return getCommentsForPost(ctx.source.id);
    },
  },
}

// event
// {
//   "typeName": "Query", /* Filled dynamically based on @function usage location */
//   "fieldName": "posts", /* Filled dynamically based on @function usage location */
//   "arguments": { /* GraphQL field arguments via $ctx.arguments */ },
//   "identity": { /* AppSync identity object via $ctx.identity */ },
//   "source": { /* The object returned by the parent resolver. E.g. if resolving field 'Post.comments', the source is the Post object. */ },
//   "request": { /* AppSync request object. Contains things like headers. */ },
//   "prev": { /* If using the built-in pipeline resolver support, this contains the object returned by the previous function. */ },
// }
exports.handler = async (event) => {
  const typeHandler = resolvers[event.typeName];
  if (typeHandler) {
    const resolver = typeHandler[event.fieldName];
    if (resolver) {
      return await resolver(event);
    }
  }
  throw new Error("Resolver not found.");
};
```
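
To see how the single-handler dispatch works before wiring it to AppSync, you can invoke the handler locally with hand-crafted events. This is only a local smoke test, not part of the deployed code; it assumes the example above is saved as `index.js`, and the event shapes mirror what AppSync sends for the `Query.posts` and `Post.comments` fields.

```javascript
// Local smoke test for the resolver-map handler above (a sketch; run with `node test.js`).
const { handler } = require('./index');

async function main() {
  // Mimics the event AppSync sends when resolving Query.posts.
  const postsEvent = {
    typeName: 'Query',
    fieldName: 'posts',
    arguments: {},
    identity: null,
    source: null,
    request: {},
    prev: null,
  };
  console.log(await handler(postsEvent)); // -> the hard-coded POSTS array

  // Mimics resolving Post.comments for the post with id 1.
  const commentsEvent = { ...postsEvent, typeName: 'Post', fieldName: 'comments', source: { id: 1 } };
  console.log(await handler(commentsEvent)); // -> the comments with postId === 1
}

main();
```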

**Example: Get the logged in user from Amazon Cognito User Pools**

When building applications, it is often useful to fetch information for the current user. We can use the `@function` directive to quickly add a resolver that uses AppSync identity information to fetch a user from Amazon Cognito User Pools. First make sure you have added Amazon Cognito User Pools via `amplify add auth` and a GraphQL API via `amplify add api` to an Amplify project. Once you have created the user pool, get the **UserPoolId** from **amplify-meta.json** in the **backend/** directory of your Amplify project. You will provide this value as an environment variable in a moment. Next, using the Amplify function category, the AWS console, or another tool, deploy an AWS Lambda function with the following contents.

For example purposes assume the function is named `GraphQLResolverFunction`:

```javascript
const { CognitoIdentityServiceProvider } = require('aws-sdk');
const cognitoIdentityServiceProvider = new CognitoIdentityServiceProvider();

/**
 * Get user pool information from environment variables.
 */
const COGNITO_USERPOOL_ID = process.env.COGNITO_USERPOOL_ID;
if (!COGNITO_USERPOOL_ID) {
  throw new Error(`Function requires environment variable: 'COGNITO_USERPOOL_ID'`);
}
const COGNITO_USERNAME_CLAIM_KEY = 'cognito:username';

/**
 * Using this as the entry point, you can use a single function to handle many resolvers.
 */
const resolvers = {
  Query: {
    echo: ctx => {
      return ctx.arguments.msg;
    },
    me: async ctx => {
      var params = {
        UserPoolId: COGNITO_USERPOOL_ID, /* required */
        Username: ctx.identity.claims[COGNITO_USERNAME_CLAIM_KEY], /* required */
      };
      try {
        // Read more: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoIdentityServiceProvider.html#adminGetUser-property
        return await cognitoIdentityServiceProvider.adminGetUser(params).promise();
      } catch (e) {
        throw new Error(`NOT FOUND`);
      }
    }
  },
}

// event
// {
//   "typeName": "Query", /* Filled dynamically based on @function usage location */
//   "fieldName": "me", /* Filled dynamically based on @function usage location */
//   "arguments": { /* GraphQL field arguments via $ctx.arguments */ },
//   "identity": { /* AppSync identity object via $ctx.identity */ },
//   "source": { /* The object returned by the parent resolver. E.g. if resolving field 'Post.comments', the source is the Post object. */ },
//   "request": { /* AppSync request object. Contains things like headers. */ },
//   "prev": { /* If using the built-in pipeline resolver support, this contains the object returned by the previous function. */ },
// }
exports.handler = async (event) => {
  const typeHandler = resolvers[event.typeName];
  if (typeHandler) {
    const resolver = typeHandler[event.fieldName];
    if (resolver) {
      return await resolver(event);
    }
  }
  throw new Error("Resolver not found.");
};
```

> **Review comment (Contributor):** We should show a full Lambda example where we hardcode the returned data like we do in this Post example: https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-lambda-resolvers.html
>
> **Author reply:** OK, I will update the example.
>
> **Author reply:** I updated this with an even more worthwhile example that shows how to fetch the current user from a Cognito User Pool, which is a question that has come up multiple times in the GitHub issues.

You can connect the first example function (the `GraphQLResolverFunction` that returns posts and comments) to your AppSync API deployed via Amplify using this schema:

```
type Query {
  posts: [Post] @function(name: "GraphQLResolverFunction")
}
type Post {
  id: ID!
  title: String!
  comments: [Comment] @function(name: "GraphQLResolverFunction")
}
type Comment {
  postId: ID!
  content: String
}
```

That simple Lambda function shows how you can write your own custom logic using a language of your choosing. Try enhancing the example with your own data and logic.

> When deploying the Cognito example function, make sure you supply an environment variable named **COGNITO_USERPOOL_ID** with the value defined under the **UserPoolId** key in **amplify-meta.json**.

When deploying your function, make sure you have provided an execution role with permission to call the Amazon Cognito User Pools admin APIs. Attaching this policy to your function's execution role will give you the permissions you need.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "cognito-idp:*"
      ],
      "Resource": "*"
    }
  ]
}
```

After deploying our function, we can connect it to AppSync by defining some types and using the `@function` directive. Add this to your schema to connect the
`Query.echo` and `Query.me` resolvers to our new function.

```
type Query {
  me: User @function(name: "GraphQLResolverFunction")
  echo(msg: String): String @function(name: "GraphQLResolverFunction")
}
# These types are derived from https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CognitoIdentityServiceProvider.html#adminGetUser-property
type User {
  Username: String!
  UserAttributes: [Value]
  UserCreateDate: String
  UserLastModifiedDate: String
  Enabled: Boolean
  UserStatus: UserStatus
  MFAOptions: [MFAOption]
  PreferredMfaSetting: String
  UserMFASettingList: String
}
type Value {
  Name: String!
  Value: String
}
type MFAOption {
  DeliveryMedium: String
  AttributeName: String
}
enum UserStatus {
  UNCONFIRMED
  CONFIRMED
  ARCHIVED
  COMPROMISED
  UNKNOWN
  RESET_REQUIRED
  FORCE_CHANGE_PASSWORD
}
```

Next, run `amplify push` and wait for your project to finish deploying. To test that everything is working as expected, run `amplify api console` to open the GraphiQL editor for your API. If you do not yet have any users, open the Amazon Cognito User Pools console and create one. Once you have created a user, go back to the AppSync console's query page and click **Login with User Pools**. You can find the **ClientId** in **amplify-meta.json** under the key **AppClientIDWeb**. Paste that value into the modal and log in with your username and password. You can now run this query:

```
query {
  me {
    Username
    UserStatus
    UserCreateDate
    UserAttributes {
      Name
      Value
    }
    MFAOptions {
      AttributeName
      DeliveryMedium
    }
    Enabled
    PreferredMfaSetting
    UserMFASettingList
    UserLastModifiedDate
  }
}
```

This query returns information about the current user directly from your user pool.

**Structure of the AWS Lambda function event**

When writing Lambda functions that are connected via the `@function` directive, you can expect the following structure for the AWS Lambda event object.

| Key | Description |
|---|---|
| typeName | The name of the parent object type of the field being resolved. |
| fieldName | The name of the field being resolved. |
| arguments | A map containing the arguments passed to the field being resolved. |
| identity | A map containing identity information for the request. Contains a nested key 'claims' that will contain the JWT claims if they exist. |
| source | When resolving a nested field in a query, the source contains the parent value at runtime. For example, when resolving `Post.comments`, the source will be the Post object. |
| request | The AppSync request object. Contains header information. |
| prev | When using pipeline resolvers, this contains the object returned by the previous function. You can return the previous value for auditing use cases. |
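
As a quick illustration of that event shape, the hedged sketch below reads the documented keys defensively; the specific claim name is an assumption that matches the Cognito User Pools example earlier in this section.

```javascript
// A sketch of a handler that echoes back the documented event fields.
// identity.claims only exists for authenticated callers (e.g. Cognito User Pools / OIDC).
exports.handler = async (event) => {
  const { typeName, fieldName, arguments: args, identity, prev } = event;
  const username = identity && identity.claims ? identity.claims['cognito:username'] : null;

  return {
    resolvedField: `${typeName}.${fieldName}`, // e.g. "Query.me"
    args,
    username,
    previousResult: prev ? prev.result : null, // only set for pipeline (chained) resolvers
  };
};
```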

**Calling functions in different regions**

By default, we expect the function to be in the same region as the Amplify project. If you need to call a function in a different (or static) region, you can provide the **region** argument.

```
type Query {
  echo(msg: String): String @function(name: "echofunction", region: "us-east-1")
}
```

Calling functions in different AWS accounts is not supported via the `@function` directive but is supported by AWS AppSync.

**Chaining functions**

The `@function` directive supports AWS AppSync pipeline resolvers. That means you can chain together multiple functions such that they are invoked in series when your field's resolver is invoked. To create a pipeline resolver that calls out to multiple AWS Lambda functions in series, use multiple `@function` directives on the field.
> **Review comment (Contributor):** Can you put a note about what happens under the covers? E.g. it's an AppSync Function corresponding to a Lambda function, which is part of a Pipeline Resolver and called in left-to-right sequence based on how you use `@function` inline in your schema.
>
> **Author reply:** Provided a bit more detail.
>
> **Review comment (Contributor):** Maybe tangential, but would there be a way to create a pipeline resolver with the default implementation as one of the steps? Maybe I want to audit with a Lambda after the normal VTL template has finished, or maybe do some related work with a Lambda before allowing the default template to do its thing.
>
> **Reply:** I would also make use of a "do your default thing, then run this Lambda" or "run this Lambda, then do your default thing" behavior.
>
> **Review comment (Contributor):** @mikeparisstuff @undefobj Curious if there were any thoughts on this?
>
> **Author reply:** @mwarger @davekiss I think this is a good idea and is one we have discussed before. This issue discusses handling pipeline support more generally and it would be great to get your feedback on how this can work: https://github.com/aws-amplify/amplify-cli/issues/1055.

```
type Mutation {
  doSomeWork(msg: String): String @function(name: "worker-function") @function(name: "audit-function")
}
```

In the example above, when you run a mutation that calls the `Mutation.doSomeWork` field, the **worker-function** will be invoked first, and then the **audit-function** will be invoked with an event that contains the results of the **worker-function** under the **event.prev.result** key. The **audit-function** would need to return **event.prev.result** if you want the result of the **worker-function** to be returned for the field. Under the hood, Amplify creates an `AppSync::FunctionConfiguration` for each unique instance of `@function` in a document and a pipeline resolver containing a pointer to a function for each `@function` on a given field.

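For illustration, a minimal **audit-function** handler might look like the sketch below. The logging details are an assumption; the essential point from the paragraph above is that it returns `event.prev.result` so the **worker-function**'s result is what the field resolves to.

```javascript
// A sketch of the second function in the pipeline (audit-function).
// It records the mutation for auditing and passes the previous result through unchanged.
exports.handler = async (event) => {
  console.log(JSON.stringify({
    field: `${event.typeName}.${event.fieldName}`,
    arguments: event.arguments,
    workerResult: event.prev ? event.prev.result : null,
  }));

  // Return the worker-function's result so Mutation.doSomeWork resolves to it.
  return event.prev ? event.prev.result : null;
};
```
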
#### Generates

The `@function` directive generates these resources as necessary:

1. An AWS IAM role that has permission to invoke the function as well as a trust policy with AWS AppSync.
2. An AWS AppSync data source that registers the new role and existing function with your AppSync API.
3. An AWS AppSync pipeline function that prepares the Lambda event and invokes the new data source.
4. An AWS AppSync resolver that attaches to the GraphQL field and invokes the new pipeline functions.

> **Review comment (Contributor):** Note that if the customer wishes to interact with other AWS resources using this function, they will need to edit the IAM execution policy for that Lambda manually in the AWS Console. BTW, we should think about that in the future - it would be nice if we could let them augment it in a friendly way within the CLI so that subsequent `amplify push` events don't blow away the updated policy. cc @kaustavghosh06
>
> **Reply:** @undefobj Can you please explain what you mean by "if the customer wishes to interact with other AWS resources using this function they will need to edit the IAM Execution Policy for that Lambda manually in the AWS Console"? If I have created the function within Amplify, am I not able to edit {functionName}-cloudformation-template.json to attach a PolicyDocument to the function there? Or does the `@function` directive somehow override this policy?

### @connection
