
Automatic query complexity #612

Merged: 8 commits merged into thecodingmachine:master from the cost branch on Sep 12, 2023
Conversation

@oprypkhantc (Contributor) commented on Aug 4, 2023

Closes #610

This is based on Hot Chocolate's query complexity. It works by setting a complexityFn on each field, which webonyx/graphql-php then uses to determine that field's complexity.
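For context, here is a minimal sketch of that underlying webonyx/graphql-php mechanism. The User type, the limit argument, the default of 50, and the maximum of 100 are illustrative assumptions, not values from this PR:

```php
<?php

use GraphQL\Type\Definition\ObjectType;
use GraphQL\Type\Definition\Type;
use GraphQL\Validator\DocumentValidator;
use GraphQL\Validator\Rules\QueryComplexity;

$userType = new ObjectType([
    'name' => 'User',
    'fields' => ['name' => Type::string()],
]);

$queryType = new ObjectType([
    'name' => 'Query',
    'fields' => [
        'users' => [
            'type' => Type::listOf($userType),
            'args' => ['limit' => Type::int()],
            // graphql-php calls this with the summed complexity of the child
            // selections plus the field's arguments; children * limit models
            // "one child subtree per returned item".
            'complexity' => static fn (int $childrenComplexity, array $args): int
                => 1 + ($args['limit'] ?? 50) * $childrenComplexity,
        ],
    ],
]);

// Reject any operation whose total computed complexity exceeds 100.
DocumentValidator::addRule(new QueryComplexity(100));
```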

I felt that including cost information in the schema is important so clients can actually estimate the cost of a specific query before even writing it. Unfortunately, with the current state of GraphQL, the description is the only way to attach "meta" information to a field. This is why I'm appending cost information to the field descriptions, although I'd much prefer something like a @cost directive. Sadly, that's not possible.
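For illustration only, such field-level cost metadata might look like the following in a GraphQLite controller. The #[Cost] attribute shown here, along with its complexity/multipliers/defaultMultiplier parameters, is an assumed sketch of the idea rather than an API confirmed by this thread:

```php
<?php

use TheCodingMachine\GraphQLite\Annotations\Query;
// Assumed attribute name; the point is attaching cost metadata to a field.
use TheCodingMachine\GraphQLite\Annotations\Cost;

class UserController
{
    /** @return User[] */
    #[Query]
    // Assumed semantics: base complexity of 5, multiplied by the 'limit'
    // argument, or by defaultMultiplier (50) when the client omits it. The
    // same numbers can be appended to the field's description so clients
    // see them through introspection.
    #[Cost(complexity: 5, multipliers: ['limit'], defaultMultiplier: 50)]
    public function users(int $limit = 50): array
    {
        // ...fetch at most $limit users...
        return [];
    }
}
```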

@oojacoboo (Collaborator) left a comment

This is awesome @oprypkhantc - thanks! I'll be testing an implementation as well.

@oojacoboo merged commit d286d55 into thecodingmachine:master on Sep 12, 2023
6 checks passed
@oprypkhantc deleted the cost branch on September 12, 2023 at 08:27
@oojacoboo (Collaborator) commented

So, one thing I'm noticing: without the ability to base the multiplier on the actual number of objects retrieved in a collection, you're forced to set your default limits such that the calculation already assumes the upper bound of the overall complexity limit. This mostly matters when the client doesn't pass the limit (take) argument and it falls back to a default (int $limit = 50). It makes implementation a bit more deliberate if you want accurate complexity calculations. The benefit is that it encourages more thoughtful default limits, and you get better test coverage because everything is calculated against the assumed upper bounds (see the sketch below).
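A tiny sketch of that effect, reusing the hypothetical complexity callable from above (all numbers illustrative):

```php
<?php

// A static estimate can only see arguments, never actual result sizes, so the
// declared default (here 50, mirroring `int $limit = 50`) acts as the assumed
// upper bound whenever the client omits the argument.
$complexity = static fn (int $childrenComplexity, array $args): int
    => 1 + ($args['limit'] ?? 50) * $childrenComplexity;

// One child field selected, e.g. { users { name } }:
echo $complexity(1, []), "\n";              // 51: no limit passed, worst case assumed
echo $complexity(1, ['limit' => 10]), "\n"; // 11: explicit limit, known up front
```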

Regardless, there isn't a way around this if you still want static analysis; just an observation.

@oojacoboo (Collaborator) commented

@oprypkhantc I don't see anything around mutations. Will this work on mutations and on input type fields?

@oprypkhantc (Contributor, Author) commented

@oojacoboo It's designed to work only on selections; input fields are not counted towards query complexity. Is there a reason it should work for input fields? In our case mutations are generally small, so having too many input fields is definitely not a problem for us.

@oojacoboo (Collaborator) commented on Sep 14, 2023

The issue with not having cost complexity on mutations is that you're left to mitigate mutation abuse some other way, especially considering that write operations are expensive. By associating costs with mutations, you can address this concern directly. And by adding cost complexity to the fields of an input type, you gain more control over the accuracy of the cost calculation for a mutation.
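One possible way to approximate this with the existing mechanism (an assumption on my part, not something from this PR) is to weight a mutation field's complexity by its statically known input size:

```php
<?php

use GraphQL\Type\Definition\ObjectType;
use GraphQL\Type\Definition\Type;

// Hypothetical sketch: price a bulk write by how many rows it touches. The
// argument values are known statically from the query document, so a
// complexity callable on the mutation field can count them; whether the
// validator applies complexity rules to mutation operations depends on how
// the validation rules are configured.
$mutationType = new ObjectType([
    'name' => 'Mutation',
    'fields' => [
        'createUsers' => [
            'type' => Type::boolean(),
            'args' => ['names' => Type::listOf(Type::string())],
            // Assumed weighting: 10 points per written row.
            'complexity' => static fn (int $childrenComplexity, array $args): int
                => $childrenComplexity + 10 * count($args['names'] ?? []),
        ],
    ],
]);
```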
