RFC - @auth directive improvements #455
Comments
Loving it.
Fantastic!
I'm drooling a bit at this point...
Thank you! Fantastic proposals! I think the level of granular control this will enable will make it easier to maintain regulatory compliance (e.g. HIPAA, PCI) with the graphql-transformer and generally make schema design simpler. Looking forward to seeing it in production.
Great write up and love the proposals. One big area is the idea of multi-tenancy and not relying on Cognito Groups but "tenantId" or some other arbitrary field to determine tenant ownership. This might be covered by enabling custom Auth logic with pipeline functions. This is how I'm solving it currently, but it's a manual process and a directive to cover this case would be nice! |
The "Deny by default" mode would be perfect. For a secure multi-tenant app I would rather have all of my queries/mutations protected by default and need to be explicitly opened up for a user/group, so that a schema/resolver mistake on my end doesn't automatically mean compromised user data. |
These are all fantastic and I can't wait for them to be implemented. I think Proposal 1 needs to get implemented asap so we don't have to refactor too much. I also think Proposal 5 is a very important feature that's lacking from Amplify. Having to write pipeline resolvers + CFN for every operation because you need to verify the tenant sucks the joy out of my life!
Please consider the aws-amplify/amplify-cli#805 issue. To summarize: it would be nice if the GraphQL endpoint created by Amplify could have cognito-level auth (eg. owner, groups/admin, etc.), but also be accessible by other AWS resources like Lambda. Perhaps the auth directive could include a flag that would generate an IAM policy or IAM role that would allow other resources (eg. Lambda) to perform mutations and queries? The other role (eg. Lambda) would need to be responsible for setting the owner field correctly. On one hand, I recognize that Amplify prefers to keep everything within its own ecosystem, however, it would be nice if there were simple hooks to allow other resources to interact with the Amplify stack and leverage the graphQL endpoint. It seems like a common use-case that the user might create a DynamoDB object and then a backend system would update it or delete it. |
Proposal 3 to secure subscriptions is a must have for us (and I'm really curious to know if I could, as of today, use the console to attach a particular resolver to a subscription that would secure it). Finally, regarding @ajhool's remarks right above, I would like to add that making the auth directive compatible with a setup with AWS_IAM would be great! (for apps that have an auth/unauth setup) Thank you @mikeparisstuff and the team, please keep going like this, we love amplify :) !
Allow Owner not working On IAM ... |
…auth support via the @auth dire Adding support for field level authorization via the @auth directive. re #1043 Cleaning up tests
Proposals 1 & 5 implemented in aws-amplify/amplify-cli#1262 |
Proposal 3 would help companies working with Amplify/AppSync subscriptions maintain compliance (PCI, HIPAA, privacy, etc) more easily. Please consider prioritizing it for your next iteration! |
…rective (#1262) * feat(@auth directive transformer and e2e tests): Adding field level auth support via the @auth dire

Changes:
- Added the 'operations' argument to the AuthRule input used by the @auth directive.
- Made backwards-compatible changes such that @auth directives that use the 'operations' argument protect @connection resolvers.
- Added support for field level @auth directives. Protect query resolvers on any type and mutations on @models.
- Many new e2e tests for field level authorization checks.
- Refactor existing tests to make debugging and coverage easier to understand.

re #1043
@mikeparisstuff If it's not too late, you REALLY need to change auth owner validation to be based on the Cognito sub and not the username. Usernames can be reassigned, and if that happens, you could have a potential situation where a person can access a record owned by the person who used to own that record. The sub value never changes and should be the unique identifier for validation of ownership. The AWS response explains this pretty clearly:
@mikeparisstuff - I'm wondering if Proposal 5 could be expanded to support conditions more complex than just and/or? A particular use case I'd love to see: for example, I have a model (let's call it "node") which has various states. Rather than a static
This is pretty rough - but just sketching out a possible solution. I also can see the case for this creating too much complexity, but the only alternative right now (that I see) seems to be writing a custom resolver for every model, which is complex in its own way.
A common use case that is currently a little bit awkward is a DynamoDB object where each user should have only one (eg. UserSettings). I usually implement that by setting the
I believe the 2 current ways to achieve this query are:
b.
Neither of those are bad at all and this definitely isn't high priority, but it would be nice to have in the backend and it is common enough that it would make sense to support natively IMHO |
Two comments related to rule 6. Both comments advocate for an explicit First: The For instance:
Second: Another common use case that I don't think is currently possible with the @auth directive (please correct me if I'm wrong, I'd really like to use it!) is to explicitly provide public/unauthenticated/authenticated read permissions. I would consider this to be a better implementation of Rule 6 by simply making everything explicit. A persisted object that an Admin creates/updates, but that everybody should be able to get/list. It would be nice if there were default groups named
It seems unnecessarily complex that we would create a Cognito group called "everybody" or "customers" and automatically add all users to that group when they sign up, but maybe that is the better design pattern? |
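A sketch of the kind of rule being asked for here (the "everybody" group, the Announcement type, and the rule arguments are all illustrative assumptions, not existing features):

```graphql
type Announcement @model @auth(rules: [
  # Hypothetical built-in group (name assumed): every user would be treated
  # as a member of "everybody" for read access without manual group assignment.
  { allow: groups, groups: ["everybody"], operations: [read] },
  # Admins retain full write access.
  { allow: groups, groups: ["Admin"], operations: [create, update, delete, read] }
]) {
  id: ID!
  title: String!
  body: String!
}
```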
Hi folks, might be the wrong thread or place for this but somewhat related - can anyone explain to me the rationale for choosing Cognito:username as the implicit "owner" if none is provided, as implemented in the auto gen'ed resolvers, as opposed to the identity's "sub"? My guess is one's as good as the other and the former is more readable, but I do wonder about the impact of username changing/reassignment etc. Googling a little on this pulled up this thread for example, which suggests that "sub" would be a much more resilient identifier for auth? For example, if a user is deleted and another created with the same username, the current auth means they'll get access to all the first user's entities..? |
@jmorecroft it's purely a mistake on the Amplify team's part. The sub should always be used to uniquely identify a user. I already posted about this here in this same thread. |
@jmorecroft @et304383 I had posted a similar bug/bad design decision (in the amplify js repo), where the Storage object is connected to a transient id: aws-amplify/amplify-js#1787 , not sure if that is still the case. |
Protecting a @connection field for queries is done with this pattern:

```graphql
type User @model {
  id: ID!
  username: String
  posts: [Post]
    @connection(name: "UserPosts")
    @auth(rules: [{ allow: owner, ownerField: "username" }])
}

type Post @model(queries: null) { ... }
```

How can we prevent someone else from adding a new Post to another User? Say I'm logged in as 'Cersei' and run the following mutation:

```graphql
mutation addPostToOtherUser {
  createPost(input: { title: "My evil Post", userPostId: "Daenerys" }) { ... }
}
```

I would like to protect against this.

Edit: Never mind, it works quite well when I do something like this:

```graphql
type User @model @auth(rules: [{ allow: owner, ownerField: "id" }]) {
  id: ID!
  posts: [Post]
    @connection(name: "UserPosts")
    @auth(rules: [{ allow: owner, ownerField: "id" }])
}

type Post @model(queries: null) {
  title: String!
  user: User!
    @connection(name: "UserPosts")
    @auth(rules: [{ allow: owner, ownerField: "userPostId" }])
}
```

If I save the username as the id in the User type, it is used as the userPostId in the @connection of the user field in the Post type and blocks creation of the Post when the username does not match the identity.
I'm using the new @auth 'operations' argument to achieve:
my setup: Outcome:
Since `{allow: owner}` will impose the OwnerStrategy for ALL operations, I would expect my setup to exclude the read operation from the OwnerStrategy. Because it is excluded, I would expect it to function just as it would without the OwnerStrategy: regardless of whether there is a user or not, it should produce the read.
I am interested in Proposal 5 (and/or auth rules), as I would like to have a group membership override the owner privileges. For example, a banned user (still owner of a post, but not currently allowed to edit/delete due to being in the "Banned Users" group).
Can @auth support be extended to S3Objects (storage) category? Last time I checked the storage category (https://github.com/aws-amplify/amplify-cli/blob/master/packages/amplify-category-storage/Readme.md) creates 3 folders - private, protected, and public. Objects in private are only accessible to the user who uploaded them. Protected is accessible to any authenticated user, and public is public access. The authentication for this is also through identity management I believe, not cognito sub. It seems kind of divorced from the @auth rules set in the GraphQL schema, though access to the S3Object stored in dynamodb is controlled by the @auth rules. The actual objects stored in S3 are not. This is a point of confusion and perhaps can be considered as part of this RFC or aws-amplify/amplify-cli#766. From my high level point of view, data in S3 should be controlled under the same access rules set by this @auth directive. |
+1 for this. I have a use case that I need to control S3 access based on the same Cognito/Auth Groups that govern my API access. I need Admin level access, group level access, or individual level access based on logged in user. I'm trying to set up S3 Bucket policies based on Cognito User Pool groups but I'm having to do it manually at the moment. |
Wait a second, per item number 3, where it says "Currently subscriptions are not protected automatically," does this mean that protected data can leak to anybody who subscribes to a data type? (Issue also referenced here: aws-amplify/amplify-cli#1766.) Let's say the owner of the website defines their auth module as follows:
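(The original snippet isn't reproduced in this extraction; an owner-protected model of the kind described would look roughly like this sketch:)

```graphql
# Owner-only rule: the developer's intent is that each user sees only their own Todos.
type Todo @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  content: String!
  owner: String
}
```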
And then I go to their website and run this code from their site (the developer has to be running the code from the client because amplify-js doesn't support server-side execution).
Do I now have access to every single Todo object that gets created on their site? Even though the developer added the @auth rule for "owner" only with the clear goal of keeping that user's Todo (or, say, healthcare records) information private? Please tell me that I'm misunderstanding something here because this makes a mockery of the efforts I've taken to secure my website if the Amplify team is aware of such a terrible vulnerability in the default configuration of the framework and doesn't make any effort to document it (or demonstrate any urgency to make it the non-default configuration), aside from a proposal in an RFC. If amplify were actually used by any serious companies in production, this would effectively be a zero-day vulnerability -- this isn't the "absence of a feature", it's an undocumented vulnerability in the default configuration Why is there even an @auth directive provided if it doesn't currently prevent subscriptions from providing a firehose of data to anybody who wants it when used in the default configuration? And why is there no documentation suggesting that a non-default configuration is required? Proper documentation would have given me the opportunity to not use this framework before realizing these shortcomings so late in the game; I will be far more hesitant to depend on future AWS products if this is the amount of thought given to such a clear and ridiculous vulnerability. Would anybody who uses amplify in production and protects valuable information please send me their website url? |
I'm still waiting to see if we can get Proposal 5 and have "And" in @auth rules. This is very important for multi-tenancy where you always need to check for tenant id AND other auth rules. |
@amirmishani Same here. This is the last thing I am waiting for to release my app. |
Hi there, any update on Proposal 5 "And" in @auth rules? @mikeparisstuff ? |
Hi, is there anything new about granting read access for non-authenticated users? |
In proposal 3 - add a new sub-proposal or something -
Single and group owner should behave as expected, but sometimes we need an if-found condition. Add a new if-in-group owner mode. Allowing subscriptions if a user is present in a member array is a very frequently needed use case, especially when the members list is dynamic (users can be added or removed). Example: a WhatsApp-style application.
I'm trying to create a YouTube-like resources hub where some sources are public and others are private. If I use the current public For example:

```graphql
type Resource
  @model
  @auth(
    rules: [
      { allow: groups, groupField: "visibleTo", operations: [read] }
      { allow: groups, groupField: "editors", operations: [update, read] }
      { allow: groups, groups: ["admin"], operations: [create, update, read, delete] }
    ]
  ) {
  id: ID!
  title: String!
  description: String!
  videoUrl: String
  presentationUrl: String
  editors: [String]!
  visibleTo: [String]!
}
```

The above would look at the

With the above example, how does one enable public access without writing countless custom resolvers/lambda functions? There is no

Any suggestions on how we can enable public/private access that can change? It would be great if there was a way to add a

Maybe the @auth directive can be updated so that:

```graphql
@auth(rules: [{ allow: groups, groupField: "visibleTo", operations: [read], includePublicGroup: "true" }])
```

If
@kylekirkby I'm sorry I don't have time for more thorough research or a better answer. You should look at both the "custom claims" in the amplify cli graphql transformer documentation, and pre-token trigger in cognito. I used the latter to inject group claims for federated users that are linked to cognito identity pool users. I haven't tried this with public users, I suspect one of these features might be your best path. |
@ryanhollander - Thanks for your reply! I think the pre token trigger method would work as long as unauthenticated requests sent to app sync respect the custom identity claims. The pre token trigger function would have to inject a cognito:groups value of public for unauth users. I’ll have a play with it and see if I can get something working. |
@kylekirkby Good luck. FYI, You can use the Amplify CLI to create the trigger w/Lambda through the Auth Flow. |
@ryanhollander cheers! I used the cli to add a pre sign up trigger hoping to be able to link External Provider accounts with Cognito Users only to find that the AdminLinkUsers api function is broken :/ |
@kylekirkby briefly, I just implemented this using the pre-sign-up trigger and adminLinkProviderForUser. I did this using the Auth Flow and a few lambdas. There were a lot of gotchas! What I thought might take a morning took me almost 3 days. I did not experience this particular issue, though I read about it a lot; it's possible amplify-js eats the issue or Cognito was fixed, not sure. FWIW: I'm using Amplify's FederatedLogin. What I ended up doing was a tad convoluted. The issue is that you can't give the trigger lambda the permissions for the user pool, because it would create a circular dependency in the build. The workaround is to create a second lambda, give it the auth permissions via the CLI, and manually add the execute permissions to the second lambda for the trigger via the CloudFormation json (because there is no dependency for this, as IAM doesn't care if your resource exists). Let the trigger call the second lambda for any Cognito functions. What I found was I needed two of these: a pre-sign-up to link the federated user to the pool user, and a pre-token to inject the group assignments from the linked Cognito user into the token of the federated user. For this project I am using the user pool as the primary user store, but users can only log in with their domain email through a federated login (for now). Anyway, I might get around to posting more detail and code on this at some point...
@ryanhollander nice one for the write up! Sounds familiar! I've done the whole create-a-secondary-lambda-called-from-a-post-confirmation-trigger thing in another project. I managed to get the pre-signup trigger to work with wildcard permissions without needing to add another lambda function. I know the full Cognito pool ARN should be used, but that was a quick workaround, and I assumed that since the lambda is triggered directly from Cognito itself it should be somewhat safe to assume the correct pool is used. As for the @auth situation: I'm thinking it might be easier to separate out the PrivateResources into their own type and give full public read access to a Resource type (public). As for the storage of the videos in S3, I think I'm going to need to write a custom query to fetch a pre-signed URL from S3 if the user belongs to a valid group for that resource.
@kylekirkby with the amplify cli you can give the lambda the exact pool arn for each env with 0 overhead after you do the 1 custom config. Another type/model is always an option, both paths have advantages. I've done that signed URL to S3 scheme too. It's not hard to implement and works great. I think I pulled most of the code from a blog post somewhere. |
@ryanhollander how did you arrange the bucket for user uploads? Did you use the Amplify Storage feature and then set uploads to private? |
@kylekirkby I used the Amplify Storage and amplify.js on the front end, I store the reference in DynamoDB. I then use the aws sdk in a lambda to generate a signed URL with a 24 hour expiration and a custom URL shortener (lambda/s3) to generate a short, branded link pointing to the signed url. This is what we send to the user when they request a document. The user is interacting with an Alexa skill in this case, and the skill sends the link via SMS or email immediately upon request. |
@ryanhollander Thanks for the insight! |
But Proposal 5 is still not working. I still cannot implement multi-tenancy with user roles as proposed in aws-amplify/amplify-cli#317. I want only the admin to have access to the type, and only within the tenant. My query is
It still generates
Or am I missing something? What have I done wrong? Could someone please help me? :)
Can anyone help me with this?
I believe something like this might help address some of the concerns brought up in other issues. Some users - including myself - are looking at graphql interfaces as a channel for discussing granting permissions to various table items based on those implementations'

```graphql
enum PermissionType {
  PUBLIC @auth(rules: [
    { allow: public, operations: [read] },
    { allow: groups, groups: ["Admins", "Editors"] }
  ])
  PRIVATE @auth(rules: [
    { allow: groups, groups: ["Admins", "Editors"] }
  ])
}

type Resource @model {
  id: ID! @primaryKey
  status: PermissionType! @index(name: "Resources_by_Status", queryField: "resourcesByStatus")
  name: String!
}

type Node @model {
  id: ID! @primaryKey
  status: PermissionType! @index(name: "Nodes_by_Status", queryField: "nodesByStatus")
  resources: [Resource] @hasMany(indexName: "Resources_by_Node", fields: ["id"])
  edges: [Edge] @manyToMany(relationName: "EdgeNode")
}

type Edge @model {
  id: ID! @primaryKey
  nodes: [Node] @manyToMany(relationName: "EdgeNode")
}
```

This would allow us to toggle permissions on an item by simply updating a single item's field, rather than having to delete a private item, create a public one, and reconnect them to their various connections...
@loganpowell - this looks ideal. We've currently had to write custom resolvers to check if the user is using the unauth role, which, if they are, we don't return private resources. If the user is logged in then the permissions for resources are entirely based on their group membership. |
@kylekirkby do you have a GSI that has the resources indexed by auth role? Might you share your custom resolver? |
Hi @loganpowell, so currently our Resource type has a

What I'm looking at now is updating the client-side queries to filter based on the private value so the correct number of resources is returned to unauth users. It would be much cleaner to have the req resolver filter these based on the use of the unauth role but, as you probably know, documentation and guides are severely lacking for VTL + Amplify. So in
I've got this:

```vtl
#if( $ctx.identity.userArn == $ctx.stash.unauthRole )
  #set( $items = [] )
  #foreach( $item in $ctx.result.items )
    #set( $isPrivate = $util.defaultIfNull($item.private, false) )
    #if( $isPrivate == false )
      $util.qr($items.add($item))
    #end
  #end
  #set( $ctx.result.items = $items )
#end
#if( $ctx.error )
  $util.error($ctx.error.message, $ctx.error.type)
#end
$util.toJson($ctx.result)
```

If the user is using the unauth role, then private resources are not returned. You'll have to add this logic to all of your resource queries by overriding the Amplify-generated resolvers. There may be bits I'm missing, but I hope this helps! This has been one hell of an issue to solve for me!

Edit: this will only work on the v2 GraphQL transformer.
RFC - @auth directive improvements
This document will outline designs for a number of new features relating to authentication & authorization within the GraphQL Transform. The goal of these features is to fill holes and introduce new mechanisms that make protecting your valuable information easier.
Proposal 1: Replace 'queries' and 'mutations' arguments with 'operations'
Currently an @auth directive like this:
causes these changes to the following resolvers:
In other words, the @auth directive currently protects the root level query & mutation fields that are generated for an @model type.
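For reference, a directive of the kind described here, using the pre-'operations' arguments, might look roughly like this (a sketch reconstructed from the surrounding text, not the RFC's original snippet; type and field names are assumptions):

```graphql
type Post @model @auth(rules: [
  # Old-style rule: protection is expressed per top level query/mutation.
  { allow: owner, queries: [get, list], mutations: [create, update, delete] }
]) {
  id: ID!
  title: String!
  owner: String
}
```

With a rule like that, auth logic is attached only to the top level fields (Query.getPost, Query.listPosts, Mutation.createPost, Mutation.updatePost, Mutation.deletePost), which is exactly the limitation described below.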
Problem: The 'queries' and 'mutations' arguments imply top level protection
GraphQL APIs are a graph and we need to be able to define access rules on any field, not just the top level fields.
Solution
I suggest replacing the queries and mutations arguments on the @auth directive with a single operations argument. This would be the new @auth directive definition.
This change generalizes the config such that all read operations on that model will be protected, not just the top level 'get' & 'list' queries. Auth rules that use the 'read' operation will be applied to top level query fields, @connection resolvers, top level fields that query custom indexes, and subscription fields. Auth rules that use the 'create', 'update', and 'delete' operations will be applied to the createX, updateX, and deleteX mutations respectively. Directives using queries & mutations will keep the same behavior, and those using operations will get the new behavior. The queries & mutations arguments will eventually be removed in a future major release.
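A sketch of an equivalent rule expressed with the proposed 'operations' argument (type and field names assumed):

```graphql
type Post @model @auth(rules: [
  # 'read' now covers get/list queries, @connection resolvers, custom index
  # query fields, and subscriptions, per the description above.
  { allow: owner, operations: [create, update, delete, read] }
]) {
  id: ID!
  title: String!
  owner: String
}
```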
Protect @connections by default
Once the change from queries/mutations -> operations has been implemented, we will want to go back and implement any missing authorization logic in @connection fields by default.
For example, given this schema:
The new code would add authorization logic to the Blog.posts resolver such that only owners of the post would be able to see the posts for a given blog. It is important to note that the new logic will restrict access such that you cannot see records that you are not supposed to see, but it will not change any index structures under the hood. You will be able to use @connection with the new custom index features to optimize the access pattern and then use @auth to protect access within that table or index.
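A sketch of the kind of schema being described (type and field names are assumptions, not the RFC's original example):

```graphql
type Blog @model {
  id: ID!
  name: String!
  posts: [Post] @connection(name: "BlogPosts")
}

type Post @model @auth(rules: [{ allow: owner, operations: [read] }]) {
  id: ID!
  title: String!
  owner: String
  blog: Blog @connection(name: "BlogPosts")
}
```

With the proposed behavior, the Blog.posts connection resolver would apply the same owner check as Query.getPost and Query.listPosts.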
Proposal 2: Implement @auth on @searchable search fields
Github Issues
Problem
Currently Query.searchX resolvers generated by @searchable are not protected by @auth rules.
Solution
The Elasticsearch DSL is very powerful and will allow us to inject Elasticsearch query terms and implement authorization checks within Elasticsearch. This work will need to handle static & dynamic ownership and group based authorization rules. Any auth rule that includes the 'read' operation will protect the Query.searchX field.
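For example, a searchable model protected this way (a sketch with assumed names) would have its Query.searchPosts resolver filter results to records the caller is allowed to read:

```graphql
type Post @model @searchable @auth(rules: [
  { allow: owner, operations: [read] },
  { allow: groups, groups: ["Admin"] }
]) {
  id: ID!
  title: String!
  owner: String
}
```

Under this proposal, the transformer would inject terms into the Elasticsearch query (for instance, a filter on the owner field) so that search results respect the same rules as the DynamoDB resolvers.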
Proposal 3: Make @auth protect subscription fields
Problem: @auth does not protect subscription fields.
Currently subscriptions are not protected automatically.
Solution
AppSync subscription queries are authorized at connect time. That means that we need to parameterize the subscription queries in such a way that any relevant authorization logic is included in the subscription query itself. In the case of ownership @auth, this means that the client must pass an owner as a query argument and the subscription resolver should verify that the logged in user and owner are the same.
For example, given this schema:
The following subscription fields would be output:
and when running a subscription query, the client must provide a value for the owner:
The proposed change would create a new subscription resolver for each subscription field generated by the @model. Each subscription resolver would verify the provided owner matches the logged-in identity and would fail the subscription otherwise.
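A sketch of what this could look like for an owner-protected Post model (the generated field shapes are assumptions based on the description above, not verbatim output):

```graphql
type Post @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  title: String!
  owner: String
}

type Subscription {
  # The owner argument lets the subscription resolver compare the requested
  # owner to the caller's identity at connect time.
  onCreatePost(owner: String!): Post @aws_subscribe(mutations: ["createPost"])
}
```

and the client passes its own identity when subscribing:

```graphql
subscription OnCreateMyPosts {
  # The resolver rejects the subscription if "me" does not match the caller.
  onCreatePost(owner: "me") {
    id
    title
  }
}
```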
There are a few limitations to this approach:

```graphql
onCreatePost(owner: String, groups: String, otherOther: String, anotherOwner: String, anotherListOfGroups: String): Post
```

has too many fields and is invalid. To handle this, the CLI can emit a warning prompting you to customize your subscription field in the schema itself.

As an example of point (2) above, imagine this auth rule:
Let's say that we want to subscribe to all new posts where I am a member.
AppSync messages are published to subscriptions when the result of the mutation to which the subscription field is subscribed contains fields that equal the values provided by the subscription arguments. That means that if I were to publish a message via a mutation, the subscription started before would not be triggered, because `["my-user-id", "my-friends-user-id"]` is not the same as `["my-user-id"]`. I bring this up for clarity, but I still think this feature is useful. Single owner & group based authorization will behave as expected.

Proposal 4: Field level @auth
Currently an @auth directive like this:
causes these changes to the following resolvers:
In other words, the @auth directive currently protects the root level query & mutation fields.
Github Issues
Problem: You cannot protect @connection resolvers
For example, look at this schema.
Since only top level fields are protected and we do not have an @auth directive on the Task model, we are unintentionally opening access to notes via Task.notes.
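A sketch of the kind of schema the problem describes (type and field names assumed):

```graphql
# PrivateNote's top level queries and mutations are protected by @auth, but the
# Task.notes connection resolver is generated without any auth check, so notes
# leak through Task.notes.
type Task @model {
  id: ID!
  title: String!
  notes: [PrivateNote] @connection(name: "TaskNotes")
}

type PrivateNote @model @auth(rules: [{ allow: owner }]) {
  id: ID!
  content: String!
  owner: String
  task: Task @connection(name: "TaskNotes")
}
```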
Solution
We discussed having @auth rules on OBJECTs automatically protect connection fields in proposal 1, but I also suggest opening the @auth directive such that it can be placed on both FIELD_DEFINITION and OBJECT nodes. This will result in an updated definition for @auth:
You may then use the @auth directive on individual fields in addition to the object type definition. An @auth directive used on an @model OBJECT will augment top level queries & mutations, while an @auth directive used on a FIELD_DEFINITION will protect that field's resolver by comparing the identity to the source object designated via `$ctx.source`.

For example, you might have:
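(A sketch, since the original snippet isn't reproduced in this extraction; type and field names are assumptions:)

```graphql
type User @model {
  id: ID!
  username: String!
  # Field-level rule: the identity is compared against $ctx.source.username
  # when this field is resolved, so only the owner of this User record can read it.
  ssn: String @auth(rules: [{ allow: owner, ownerField: "username" }])
}
```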
An important thing to notice is that the @auth directive compares the logged-in identity to the object exposed by `$ctx.source` in the resolver of that field. A side effect of (1) is that an @auth directive on a field in the top level query type doesn't have much meaning, since $ctx.source will be an empty object. This is ok since @auth rules on OBJECT types handle protecting top level query/mutation fields.

Also note that the queries and mutations arguments are invalid on field-level @auth directives, but the operations argument is allowed. The transform will validate this and fail at compile time with an error message pointing you to the mistake. E.g. this is invalid:
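(Again a sketch, as the original snippet isn't shown in this extraction:)

```graphql
type User @model {
  id: ID!
  # Invalid: field-level @auth rules may not use the 'queries'/'mutations'
  # arguments; the transform would fail at compile time with an error.
  ssn: String @auth(rules: [{ allow: owner, queries: [get] }])
}
```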
The implementation for allowing operations in field level @auth directives is a little different.
Proposal 5: And/Or in @auth rules
Github Issues
Problem
Currently all @auth rules are joined via a top level OR operation. For example, the schema below results in rules where you can access Post objects if you are the owner OR if you are a member of the "Admin" group.
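(The original snippet is not reproduced here; a sketch with assumed names:)

```graphql
type Post @model @auth(rules: [
  # Rules in the list are OR'd together: access is granted when either matches.
  { allow: owner },
  { allow: groups, groups: ["Admin"] }
]) {
  id: ID!
  title: String!
  owner: String
}
```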
It would be useful if you could organize these auth rules using more complex rules combined with AND and OR.
Solution
We can accomplish this by extending the @auth definition.
This would allow users to define advanced auth configurations like:
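The exact syntax the RFC proposes is not reproduced in this extraction; purely as an illustration, a nested and/or configuration might look something like this (hypothetical syntax, not an implemented feature):

```graphql
# Hypothetical: grant access only when the caller belongs to the record's
# tenant group AND is either the record owner or an Admin.
type Document @model @auth(rules: {
  and: [
    { allow: groups, groupsField: "tenantId" },
    { or: [{ allow: owner }, { allow: groups, groups: ["Admin"] }] }
  ]
}) {
  id: ID!
  tenantId: String!
  owner: String
  content: String
}
```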
The generated resolver logic will need to be updated to evaluate the expression tree.
Proposal 6: Deny by default mode.
Github Issues
Problem: There is currently no way to deny access by default for Amplify APIs.
If you create an API using a schema:
then the generated create, update, delete, get, and list resolvers allow access to any request that includes a valid user pool token (for USER_POOL auth). This proposal will introduce a flag that specifies that all operations should be denied by default and thus all fields that do not contain an explicit auth rule will be denied. This will also change the behavior of create mutations such that the logged in user identity is never added automatically when creating objects with ownership auth.
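For instance, a schema as bare as the following (a minimal sketch) generates create/update/delete/get/list resolvers that any caller with a valid user pool token can invoke; under the proposed flag, all of those would be denied until an explicit @auth rule grants access:

```graphql
# No @auth directive anywhere: today this is readable and writable by any
# signed-in user; with deny-by-default enabled it would reject every request.
type Post @model {
  id: ID!
  title: String!
}
```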
Solution: Provide a flag that enables deny by default
Adding a DenyByDefault flag to parameters.json or transform.conf.json will allow users to specify whether fields without an @auth directive allow access. When deny by default is enabled, the following changes will be made.
For example, with deny by default enabled
This mutation would fail:
This mutation would succeed:
This top level query would succeed but Post.comments would fail.
More details coming soon
Request for comments
This document details a road map for authorization improvements in the Amplify CLI's API category. If there are use cases that are not covered or you have a suggestion for one of the proposals above please comment below.