
A new way to write ObjectType with python3's annotation #729

Closed
ocavue opened this issue May 20, 2018 · 29 comments
@ocavue

ocavue commented May 20, 2018

This issue is a feature discussion.

Currently, the way to write an ObjectType is as follows:

class Hero(ObjectType):
    name = Field(List(String), only_first_name=Boolean())
    age = Int()

    def resolve_name(self, info, only_first_name=False):
        if only_first_name:
            return [self._first_name]
        return [self._first_name, self._last_name]

    def resolve_age(self, info):
        return self._age

I have to define name twice (Hero.name and Hero.resolve_name) because the types of the arguments and the return value must be declared separately from the resolver. This makes the class harder to both read and write.

Python 3's annotation feature brings a native way to describe types. Using annotations, we can rewrite this class in a clearer way:

class Heroine(AnnotationObjectType):

    def name(self, info, only_first_name: Boolean() = False) -> List(String):
        if only_first_name:
            return [self._first_name]
        return [self._first_name, self._last_name]

    def age(self, info) -> Int():
        return self._age


print(Heroine.name.__annotations__)
# {'only_first_name': <graphene.types.scalars.Boolean object at 0x104cb8550>, 'return': <graphene.types.structures.List object at 0x105742f28>}

AnnotationObjectType shouldn't be difficult to write if we transform it into an ObjectType under the hood. But I think a lot of cases should be tested before release.

Of course, even if we add an AnnotationObjectType class, the ObjectType class would not be dropped, since Python 2.7 doesn't support annotations.

I would like to hear you guys' comments and suggestions before doing this.

@ekampf
Contributor

ekampf commented May 21, 2018

Very interesting!

@jkimbo
Member

jkimbo commented May 21, 2018

I really like this @ocavue ! I had no idea you could do this with Python annotations. I don't know much about annotations actually, but I have a couple of questions:

  • How would ObjectTypes work?
  • Could you use both ObjectTypes and AnnotatedObjectTypes in the same schema?
  • Is there anything that can be expressed in the python type system that can't be expressed in GraphQL (or vice versa)?

I think keeping ObjectType and AnnotatedObjectType separate is essential, not just for keeping Python 2.7 compatibility, but also for supporting both ways of writing ObjectTypes.

@ocavue
Author

ocavue commented May 24, 2018

@ekampf @jkimbo Thanks for the kind words!


@jkimbo Here are my answers to your questions. I'm not very familiar with Graphene or annotations, so any corrections are welcome.

How would ObjectTypes work?

When Hero inherits from ObjectType, the __init_subclass_with_meta__ methods of ObjectType and its base classes are called. Their main purpose is to create Hero._meta, which stores almost all information about Hero, like this:

In [5]: Hero._meta.fields
Out[5]: OrderedDict([
            ('name', <graphene.types.field.Field at 0x10e6e11d0>),
            ('age', <graphene.types.field.Field at 0x10e6e1278>)
        ])
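The mechanism can be sketched without Graphene installed: a stub Field plus an __init_subclass__ hook that collects class attributes into an ordered mapping. This is a simplified stand-in for graphene's __init_subclass_with_meta__, not its actual code:

```python
from collections import OrderedDict
from types import SimpleNamespace

class Field:
    """Stub standing in for graphene.types.field.Field (assumption)."""
    def __init__(self, of_type):
        self.of_type = of_type

class ObjectType:
    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Collect every Field attribute declared on the subclass into _meta.
        fields = OrderedDict(
            (name, value)
            for name, value in cls.__dict__.items()
            if isinstance(value, Field)
        )
        cls._meta = SimpleNamespace(fields=fields)

class Hero(ObjectType):
    name = Field(list)
    age = Field(int)

print(list(Hero._meta.fields))  # ['name', 'age']
```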

Could you use both ObjectTypes and AnnotatedObjectType's in the same schema?

If AnnotatedObjectType is a subclass of ObjectType, it should be easy to use both in the same schema.

Is there anything that can be expressed in the python type system that can't be expressed in GraphQL (or vice versa)?

The only thing I can find is that Python types can be checked statically by tools like mypy or pyre. For example:

➜  cat -n test_type.py
     1	from typing import List
     2
     3
     4	def process_user(user_ids: List[int]):
     5	    pass
     6
     7
     8	process_user([1, 2, 3])
     9	process_user([4, 5, "NOT_INT"])
➜  mypy .
test_type.py:9: error: List item 2 has incompatible type "str"; expected "int"

But I don't think it's a big deal, for two reasons:

  1. typing is only supported on Python 3.5+.
  2. Graphene's type checking is applied to requests and responses, which are not very "static".

I think keeping the ObjectType and AnnotatedObjectType separate is essential, not just for keeping python 2.7 compatibility, but also supporting both ways of writing ObjectTypes.

Agreed! A backward-compatible API is very important. We all know the story of Python 2.7 😂.

@jkimbo
Member

jkimbo commented May 25, 2018

Thanks for the answers.

How would ObjectTypes work?

I meant could you do something like this:

class Heroine(AnnotationObjectType):

    def best_friend(root, info) -> Person:
        return root._best_friend

where Person is a normal ObjectType or an Interface


Also I thought of another question: How would you define fields that don't need a resolver?

Currently if you have:

class User(ObjectType):
	name = graphene.String(required=True)

and it gets passed an object with a name attribute, it will just grab that data and use it without needing a resolver. How would you do that with AnnotatedObjectType?

@ocavue
Author

ocavue commented May 27, 2018

@jkimbo

I wrote a simple AnnotationObjectType implementation:

class AnnotationObjectType(ObjectType):

    def __init_subclass__(cls, *args, **kwargs):
        fields = []
        for name, func in cls.__dict__.items():
            if name != "Meta" and not name.startswith("__"):  # __init__ etc ...
                fields.append((name, func))
        for name, func in fields:
            setattr(cls, "resolve_{}".format(name), func)
            setattr(
                cls,
                name,
                Field(func.__annotations__.pop("return"), **func.__annotations__),
            )
        super().__init_subclass__(*args, **kwargs)
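To see the transform in isolation, the same __init_subclass__ idea can be exercised with stub Field and ObjectType classes. The stubs are assumptions so the sketch runs without graphene installed:

```python
class Field:
    """Stub for graphene.Field (assumption)."""
    def __init__(self, type_, **args):
        self.type = type_
        self.args = args

class ObjectType:
    pass

class AnnotationObjectType(ObjectType):
    def __init_subclass__(cls, *args, **kwargs):
        funcs = [
            (name, func)
            for name, func in cls.__dict__.items()
            if name != "Meta" and not name.startswith("__")
        ]
        for name, func in funcs:
            # Keep the function around as the resolver...
            setattr(cls, "resolve_{}".format(name), func)
            # ...and turn its annotations into a Field declaration.
            annotations = dict(func.__annotations__)
            setattr(cls, name, Field(annotations.pop("return"), **annotations))
        super().__init_subclass__(*args, **kwargs)

class Heroine(AnnotationObjectType):
    def age(self, info) -> int:
        return 42

print(type(Heroine.age).__name__)       # Field
print(Heroine.resolve_age(None, None))  # 42
```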

And it works fine for a mix of AnnotationObjectType, ObjectType and Interface:

query {
    hero {
        name
        wife {  # AnnotationObjectType
            name ( only_first_name: true )
        }
        best_friend {  # Interface
            name ( only_first_name: true )
        }
    }
    heroine {
        name
        husband {  # normal ObjectType
            name ( only_first_name: true )
        }
        best_friend {  # Interface
            name ( only_first_name: true )
        }
    }
}
{"hero": {"best_friend": {"name": ["Hope"]},
          "name": ["Scott", "Lang"],
          "wife": {"name": ["Hope"]}},
 "heroine": {"best_friend": {"name": ["Scott"]},
             "husband": {"name": ["Scott"]},
             "name": ["Hope", "Van", "Dyne"]}}

You can find the entire example here.


How would you define fields that don't need a resolver?

class User(AnnotationObjectType):
    def name(self, info) -> graphene.String(required=True):
        pass
class User(ObjectType):
    name = graphene.String(required=True)

    def resolve_name(self, info):
        pass

With my implementation of AnnotationObjectType, these two definitions are equivalent. In other words, we can define this kind of field with an "empty" function. Not beautiful, but it works.

Update: An easier way of writing it is as below:

class AnnotationObjectType(ObjectType):
    def __init_subclass__(cls, *args, **kwargs):
        ...
        for name, func_or_field in fields:
            if is_field(func_or_field):
                pass  # already a Field: leave it as-is
            else:
                setattr(cls, "resolve_{}".format(name), ...)
                setattr(cls, name, ...)
class User(AnnotationObjectType):
    name = graphene.String(required=True)

@jlowin
Contributor

jlowin commented May 31, 2018

This is great! I'd love to see first-class support for annotation-based schemas.

One thought: AnnotationObjectType should only take fields that (1) are functions and (2) have return annotations corresponding to Graphene classes. Otherwise, people will be unable to add helper objects/classes to their ObjectTypes. Perhaps functions starting with '_' would be ignored.

@syrusakbary
Member

syrusakbary commented May 31, 2018

@ocavue thanks for opening the discussion.

The new resolver syntax in Graphene 2 (root, info, **options) was chosen specifically with typing in mind, so it should be very easy to annotate resolver arguments in the future.

Personally, I really like the way NamedTuple in the typing package defines the attributes of a named tuple: https://docs.python.org/3/library/typing.html#typing.NamedTuple
From Python 3.7 onwards, the @dataclass decorator lets the user create a data class very easily: https://www.python.org/dev/peps/pep-0557/

Inspired by all this, I think the next version of Graphene (Graphene 3.0) could also allow developers to type their schemas like the following (while maintaining 100% compatibility with the previous syntax):

@ObjectType
class Person:
    id: str
    name: str
    last_name: str
    def full_name(self, info) -> str:
        return f'{self.name} {self.last_name}'

persons_by_id = {
  '1': Person(id='1', name='Alice'),
  '2': Person(id='2', name='Bob')
}

@ObjectType
class Query:
   def get_person(self, info, id: str) -> Person:
       '''Get a person by their id'''
       return persons_by_id.get(id)
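The field-collection half of such a decorator could be sketched with plain introspection. This is a hypothetical sketch: it only gathers fields into a table and does not make the class instantiable like a dataclass:

```python
import inspect

def ObjectType(cls):
    """Hypothetical sketch: map annotated attributes and resolver methods to a field table."""
    fields = dict(getattr(cls, "__annotations__", {}))  # plain attributes: name -> type
    for name, member in inspect.getmembers(cls, inspect.isfunction):
        if name.startswith("_"):
            continue  # skip helpers and dunders
        hints = dict(member.__annotations__)
        fields[name] = hints.pop("return", None)  # resolver fields use the return annotation
    cls._fields = fields
    return cls

@ObjectType
class Person:
    id: str
    name: str
    last_name: str

    def full_name(self, info) -> str:
        return f"{self.name} {self.last_name}"

print(sorted(Person._fields))  # ['full_name', 'id', 'last_name', 'name']
```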

There are some reasons on why I think this is a very powerful syntax:

  • The decorator syntax permits extending normal data types, and will make it easier to reason about self in resolvers (currently, in Graphene 2, self refers to the root object, automatically making the resolver a @staticmethod... and sometimes this is confusing)
  • It permits a fully typed and testable schema, end to end
  • The layering needed between native Python types and GraphQL types will be minimal

However, there are some cons of this approach:

  • Field types will be required (non-null) by default. Since an optional type is always a superset of the plain type (Optional[T] = Union[T, None]), optional types will need to be defined explicitly, and required will be the default.
  • If the ObjectType has other annotated attributes that we don't want to expose to GraphQL, they will be exposed automatically. We can solve this in various ways:
    • do not expose attributes starting with _ (private) to GraphQL
    • add an explicit way of skipping attributes, such as:
@ObjectType(skip=['password'])
class Person:
   # ...
   password: str
  • We will need to rethink connections and how they are resolved, so we can do things like:
@Connection(node=People)
class ConnectionPeople:
    pass

@ObjectType
class Query:
    def all_people(self, info, first: int = 10) -> ConnectionPeople:
        return ConnectionPeople.get_from_iterable(...)
  • It will be challenging to keep the previous syntax 100% compatible (still possible, just a bit harder, and it might complicate the logic a bit).

What are your thoughts?

@ocavue
Author

ocavue commented May 31, 2018

@syrusakbary Thanks for your comment. It's a beautiful syntax. I really like it.


  • do not include attributes starting _ (private) to GraphQL
  • Add a explicit way of skipping attributes, such as:

I'm OK with those two rules, since graphene_sqlalchemy already has something similar:

class User(Base):
    id = Column(Integer, primary_key=True)
    username = Column(String(30), nullable=False, unique=True)
    password = Column(String(128), nullable=False)

class User(SQLAlchemyObjectType):
    class Meta:
        model = User
        only_fields = ("id", "username")

We should also consider that someone may actually want to expose attributes starting with _ in GraphQL.


The layering needed between native Python types and GraphQL types will be minimal

If we use native Python types, mypy may raise an error when using DataLoader, because what DataLoader.load returns is a Promise object. How do we solve this?

def get_student_names(self, info) -> typing.List[str]:
    return student_names_dataloader.load(self.__class_id)
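One possible answer is to unwrap the Promise layer when translating annotations into GraphQL types. In this sketch, Promise is a stand-in generic class (an assumption, not the promise library's actual API):

```python
import typing

T = typing.TypeVar("T")

class Promise(typing.Generic[T]):
    """Stand-in for promise.Promise, just to carry a type parameter (assumption)."""

def unwrap_graphql_type(annotation):
    """If a resolver is annotated -> Promise[X], treat the GraphQL type as X."""
    if typing.get_origin(annotation) is Promise:
        return typing.get_args(annotation)[0]
    return annotation

print(unwrap_graphql_type(Promise[typing.List[str]]))  # typing.List[str]
print(unwrap_graphql_type(int))                        # <class 'int'>
```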

Python <= 3.5 doesn't support the variable annotation syntax, so maybe we can support both of the syntaxes below:

@ObjectType
class Person:
    id: str

@ObjectType
class Person:
    id = graphene.String()  # current syntax

@avivey

avivey commented Jan 28, 2019

(Coming from #886)

I like the idea in general, but I have a problem with trying to glue together the type systems of GraphQL and Python - or rather, with writing a single statement to describe both at the same time:

@ObjectType
class Collection:
    def items(self, min: int = 0, max: int = 10) -> List[Item]:
        return get_items(....)

Python's type system doesn't let me describe some important aspects of the GraphQL type system:

  • Description (For fields, arguments, and ObjectTypes)
  • Directives (@deprecated, etc)

More importantly, there's a certain level of hackiness here: it requires that the Python and GraphQL type systems play very nicely together. They probably mostly do, but type systems are always more complicated than they appear, and any little incompatibility will break the abstraction.

@DrPyser

DrPyser commented Jan 28, 2019

@avivey Yeah, those are also my concerns. Using Python annotations to define the schema is cute, but it conflicts with their intended usage: static type checking. Using typing types to define GraphQL fields means usurping that API for something it's not intended for. That API might evolve in a direction that is not compatible with GraphQL usage.

That being said, I think a middle ground would be to require explicit flagging of resolvers, while still relying on annotations as long as they remain valid type hints:

@ObjectType
class Collection:
    @Field.from_resolver
    def items(self, min: int = 0, max: int = 10) -> List[Item]:
        """Some description"""
        return get_items(....)

We still need to have a separate way to define any extra information for arguments, fields, types, etc.

  • The Field.from_resolver decorator could extract a description from the resolver docstring. We could expect the docstring to include argument descriptions using a well-defined format.
  • Additional field metadata, like deprecated, could either be specified through decorator arguments (e.g. @Field.from_resolver(deprecated=True)), or through separate decorators, i.e.
      @Field.deprecate
      @Field.from_resolver
      def items(self, min: int = 0, max: int = 10) -> List[Item]: ...
    
    Same for arguments, e.g. @Field.deprecate_args("name").
    This might become verbose though if many options are specified that way.
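The flagging idea above can be sketched in a few lines. Field, from_resolver, and deprecate here are hypothetical names following the comment, not an existing Graphene API:

```python
import inspect
from typing import List

class Field:
    """Hypothetical Field built from an annotated resolver."""
    def __init__(self, resolver):
        self.resolver = resolver
        annotations = dict(resolver.__annotations__)
        self.return_type = annotations.pop("return", None)
        self.arguments = annotations                  # argument name -> annotated type
        self.description = inspect.getdoc(resolver)   # docstring becomes the description
        self.deprecated = False

    @classmethod
    def from_resolver(cls, func):
        return cls(func)

    @staticmethod
    def deprecate(field):
        field.deprecated = True
        return field

class Item:
    pass

@Field.deprecate
@Field.from_resolver
def items(self, min: int = 0, max: int = 10) -> List[Item]:
    """Some description"""
    return []

print(items.description, items.deprecated)  # Some description True
```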

@stale

stale bot commented Jul 29, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix label Jul 29, 2019
@stale stale bot closed this as completed Aug 5, 2019
@DrPyser

DrPyser commented Aug 6, 2019

@ekampf @jlowin @ocavue Was there any progress somewhere on all these great ideas? Should this issue be reopened?

@thomascobb

Python's type system doesn't let me describe some important aspects of the GraphQL type system:

  • Description (For fields, arguments, and ObjectTypes)
  • Directives (@deprecated, etc)

I discovered this limitation in another project, and I got around it by defining the type within a context manager, which added the extra information at runtime but didn't interfere at type-checking time:
https://github.com/dls-controls/annotypes

I could imagine using a similar trick to write something like:

@ObjectType
class Person:
    with Anno("The ID of the person"):
        id: str
    with Anno("The first name of the person"):
        name: str
    with Anno("The last name of the person", deprecated=True):
        last_name: str

    def full_name(self, info) -> str:
        return f'{self.name} {self.last_name}'

id_annotation = Person.__annotations__["id"]
print(id_annotation.description)  # prints: The ID of the person
print(id_annotation.type)  # prints: <class 'str'>

I've written a gist to check this works:
https://gist.github.com/thomascobb/74a09d993172cf9151c576add8062c27
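The core of the trick can be sketched in a few lines. This is a simplified stand-in, not the annotypes implementation: it watches the class body's __annotations__ from inside the context manager (via sys._getframe, a CPython implementation detail) and swaps in the Anno object:

```python
import sys

class Anno:
    """Simplified sketch of the context-manager trick (not the real annotypes API)."""
    def __init__(self, description, **extra):
        self.description = description
        self.extra = extra
        self.type = None

    def __enter__(self):
        # Remember which names were already annotated in the class body.
        frame = sys._getframe(1)
        self._seen = set(frame.f_locals.get("__annotations__", {}))
        return self

    def __exit__(self, *exc):
        # Any annotation added inside the block gets wrapped with this Anno.
        anns = sys._getframe(1).f_locals.get("__annotations__", {})
        for name in anns:
            if name not in self._seen:
                self.type = anns[name]
                anns[name] = self
        return False

class Person:
    with Anno("The ID of the person"):
        id: str
    with Anno("The last name of the person", deprecated=True):
        last_name: str

id_annotation = Person.__annotations__["id"]
print(id_annotation.description)  # The ID of the person
print(id_annotation.type)         # <class 'str'>
```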

@DrPyser

DrPyser commented Aug 10, 2019

Pydantic is a library for defining data models, used in API frameworks (FastAPI, for example). It relies heavily on type annotations. Extra information that cannot be put in the type can be added through actual value assignments or through a nested Config class.

@thomascobb

@DrPyser this looks very interesting. I assume you would add the description and the deprecated items to a Schema() object? Do Schema() objects and dataclasses work together?

@DrPyser

DrPyser commented Aug 14, 2019

@thomascobb That's a good question: https://pydantic-docs.helpmanual.io/#id1 . Would have to try or look into the code.

Either using something like Pydantic's Schema, or equivalently using dataclasses.Field.metadata to store description and deprecation information, would accomplish the same thing.
I prefer that to using context managers: it's simpler and lighter visually.
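A stdlib-only sketch of that approach, using dataclasses.field(metadata=...) to carry description and deprecation info (the metadata keys here are assumptions, not a standard):

```python
from dataclasses import dataclass, field, fields

@dataclass
class Person:
    id: str = field(metadata={"description": "The ID of the person"})
    last_name: str = field(
        default="",
        metadata={"description": "The last name of the person", "deprecated": True},
    )

# A schema builder could read the metadata back off the dataclass fields.
meta = {f.name: dict(f.metadata) for f in fields(Person)}
print(meta["id"]["description"])        # The ID of the person
print(meta["last_name"]["deprecated"])  # True
```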

@syrusakbary syrusakbary reopened this Mar 14, 2020
@stale stale bot removed the wontfix label Mar 14, 2020
@syrusakbary
Member

Reopening the issue. This still seems useful for the future of Graphene.

@thomascobb

Looks like someone already thought of combining Pydantic and Graphene...
https://github.com/upsidetravel/graphene-pydantic

@Eraldo

Eraldo commented Jun 14, 2020

Does the @ObjectType decorator already exist in the current version?
Can I turn dataclasses into ObjectTypes with it?
Did I get that right?

@skewty

skewty commented Jun 18, 2020

I am totally hoping the approach going forward is very much in line with how pydantic works.

Some of the reasons for this include:

  • pydantic has already solved most of the issues that would likely be encountered
  • pydantic has done a fantastic (is anything better?) job of reducing the boilerplate required
  • pydantic is already compatible with dataclasses
  • pydantic already has a PyCharm plug-in; easy adaptation for graphene
  • pydantic is already heavily leveraged in projects like FastAPI (similar code structure)

Additionally the graphene-pydantic project could likely get absorbed into the future work.

@jkimbo
Member

jkimbo commented Jun 24, 2020

@Eraldo the @ObjectType decorator doesn't exist yet. Anyone is welcome to try and implement it though.

Also the graphene-pydantic project looks very cool! https://github.com/strawberry-graphql/strawberry also uses dataclasses to define a graphql server.

@devdoomari3

wow... I didn't know about https://github.com/strawberry-graphql/strawberry ...

(Shameless plug) I was thinking about using python-type-extractor to build a code generator that spits out Graphene-specialized code...

(it's a lib for creating "type-nodes" from python type-annotations... this provides a nice compat-layer for python 3.6 ~ 3.8)

To see what it does, you can look at how the test fixtures are converted in the tests (too lazy to write docs...).

@achimnol

achimnol commented Dec 27, 2020

Just dropping by, but I think the PEP-593 Annotated type (now available with Python 3.9) could be a huge gamechanger here, because it can combine Python's intrinsic type annotations with several library-specific annotations, as demonstrated with struct in the PEP document.
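A small sketch of what that could look like: the Description and DEPRECATED markers below are hypothetical, and get_type_hints(..., include_extras=True) reads them back:

```python
from typing import Annotated, get_args, get_type_hints

class Description:
    """Hypothetical marker carrying a field description."""
    def __init__(self, text):
        self.text = text

DEPRECATED = object()  # hypothetical deprecation marker

class Person:
    id: Annotated[str, Description("The ID of the person")]
    last_name: Annotated[str, Description("The last name of the person"), DEPRECATED]

hints = get_type_hints(Person, include_extras=True)
id_hint = hints["id"]
print(get_args(id_hint)[0])          # <class 'str'>
print(id_hint.__metadata__[0].text)  # The ID of the person
```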

@wyfo

wyfo commented Jan 23, 2021

As pydantic and strawberry have been mentioned, I would like to mention apischema too (I'm the author of this library).

It generates a GraphQL schema directly from your models using type annotations; models can be dataclasses, but not only that (for example, SQLAlchemy tables).
It also has Relay-compliant facilities; for example, connection and edge types are created just by annotating a resolver with Connection[MyNode].

However, what is generated is not a Graphene schema but a graphql-core graphql.GraphQLSchema. This is not an issue, as the former can be built from the latter, so the schema can still be used with libraries like Starlette that expect a Graphene schema.

@thomascobb

@wyfo thanks for the interesting link.

Is there any way to use apischema to validate function arguments like https://pydantic-docs.helpmanual.io/usage/validation_decorator/ ?

Also, I see that you support Unions for GraphQL output types. GraphQL seems to be heading towards a tagged union input type (graphql/graphql-spec#733). Are there any tools in apischema that would help create tagged inputs and corresponding outputs from a subclass Union such as https://wyfo.github.io/apischema/examples/subclasses_union/ ?

@wyfo

wyfo commented Jan 25, 2021

@thomascobb

Is there any way to use apischema to validate function arguments like https://pydantic-docs.helpmanual.io/usage/validation_decorator/ ?

No, for several reasons, the first being that apischema encourages using typing annotations, and thus static type checking, instead of runtime checks. This question is quite off-topic here, though, so I invite you to start a discussion in the apischema repository if you're interested; I'm always open to arguments.

Also, I see that you support Unions for GraphQL output types. GraphQL seem to be heading towards a tagged union input type (graphql/graphql-spec#733). Are there any tools in apischema that would help create tagged inputs and corresponding outputs from a subclass Union such as https://wyfo.github.io/apischema/examples/subclasses_union/ ?

There aren't yet, because this feature is still under discussion at the specification level, and because I need to see how it will be handled by graphql-core (depending on how graphql-js implements it). But as soon as it's released, apischema will support it.
In fact, your message got me thinking about it, and I've drafted an implementation. As it can also be integrated into the JSON part, I may release a provisional feature before the GraphQL spec is finalized.

@thomascobb

@wyfo thanks, I've moved the discussion to wyfo/apischema#56

@j3pic

j3pic commented Dec 14, 2021

I have an implementation that allows me to write functional definitions for query resolvers and mutations using Python type annotations. I do not need to wrap these functions in a class.

It isn't necessary to define an AnnotationObjectType. Instead, I have two functions that take globals() as an argument. They collect the functions in the module that have type annotations and construct either a Mutation or a Query class that has the required fields and inner classes.

With this function, you can write queries like this:

def my_helper_function(some_param):
    '''Won't become a query because there are no annotations.'''
    return 'No Param' if some_param is None else some_param

def my_query(root, info,
             some_param: graphene.String(required=False) = None,
             some_other_param: graphene.List(graphene.String, required=False) = []) -> graphene.String:
    return f'some_param = {my_helper_function(some_param)}, some_other_param = {some_other_param}'

Queries = make_query_class(globals())

...and mutations in basically the same way, except the class is built with make_mutations_class instead of make_query_class.

Queries and mutations defined this way can consume and return ObjectTypes without difficulty.
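The collection step described above could look roughly like this (a hypothetical sketch with stand-in logic; the author's actual make_query_class is not shown in the thread):

```python
import inspect

def make_query_class(namespace):
    """Hypothetical sketch: turn annotated module-level functions into a Query-like class."""
    attrs = {}
    for name, obj in list(namespace.items()):
        if inspect.isfunction(obj) and obj.__annotations__:
            attrs[name] = obj.__annotations__.get("return")  # the field's type
            attrs["resolve_" + name] = staticmethod(obj)     # the resolver
    return type("Queries", (), attrs)

def my_helper(some_param):
    '''No annotations, so it never becomes a query.'''
    return some_param

def my_query(root, info, some_param: str = "x") -> str:
    return "some_param = " + my_helper(some_param)

Queries = make_query_class(globals())
print(Queries.resolve_my_query(None, None))  # some_param = x
```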

One advantage of doing it this way is that it makes it obvious that query resolvers can be called from other query resolvers. The normal way of doing it gives the confusing impression that you need to instantiate a Query object to call methods on (but I suspect Graphene doesn't actually use methods this way, because I've seen the self argument have the value None).

There is one drawback: A source file can either contain queries or it can contain mutations. It cannot contain both.

Also, there is no support for doing this with ObjectType methods, because at my company all the ObjectTypes are "plain old data" objects.

I've been running mutations and queries written this way in production for some time.

If it interests you, I could ask my employer if I can contribute the make_query_class and make_mutations_class functions.

@erikwrede
Member

erikwrede commented Jul 26, 2023

This issue is more than four years old, and this is already well supported by other libraries. If you're looking for a library supporting this syntax, please check out strawberry-graphql. There are no plans to implement this in Graphene, as all Graphene users are using the current syntax. Migrating to a new syntax does not make sense at this stage, given both the limited person-hours available to maintain Graphene and the modern alternatives to it. The effort, again given the available contributors, is better invested in providing all users with an up-to-date GraphQL framework that supports their projects in the long term.

I'll be closing this issue for now. If you feel like this is the wrong choice, please feel free to start a discussion on our Discord server 😊

@erikwrede erikwrede closed this as not planned Jul 26, 2023