
Ready for production? #24

Open · DaniGuardiola opened this issue Jun 10, 2019 · 10 comments

@DaniGuardiola commented Jun 10, 2019

Hello there, thanks for your work on Nameko :)

We want to use nameko + grpc in our company and we're wondering about the state of this project. We are still far from a production version ourselves so we don't really need nameko-grpc to be production ready right now, but we would like to know about the roadmap so that we can decide whether to use it or not.

If we do decide to use it we could report any bugs and suggestions we find along the way, possibly even contribute with PRs at some point.

Thank you!

@mattbennett (Member) commented Jun 10, 2019

Hi @DaniGuardiola,

I put this project together as a prototype, but it quickly became more than that. The test coverage is extensive, and the "equivalence" tests demonstrate that nameko-grpc and the official Python client behave in the same way -- at least at the application level (i.e. gRPC messages sent/received, status codes, etc.; there may be differences in terms of bytes on the wire).

I have not had an opportunity to use it in production yet, but it is likely that my company will be doing so in the next couple of months, so if you do decide to use it you will have someone along on the journey with you.

I think gRPC is fantastic and conceptually pairs very well with Nameko. I have a (draft) blog post on the subject that you might like to read: https://gist.github.com/mattbennett/1fdc9df9ccde3cd4798af5e47a714fce

@DaniGuardiola (Author) commented Jun 10, 2019

Never mind, it was a typo in my code.

@DaniGuardiola (Author) commented Jun 10, 2019

I'm going to post the small issues I find here to avoid spamming the issues tab, especially since most of them will probably be small mistakes on my side, as I'm not very familiar with the project yet (if that's alright with you).

@DaniGuardiola (Author) commented Jun 10, 2019

Some questions:

  1. Is TLS supported?
  2. Can you add some examples of asynchrony to the README?
  3. AMQP is replaced by gRPC, so there's no need for it anymore, am I right?
  4. What's stopping the nameko-tracer PR from being merged?
  5. Do you think it is feasible to have a more abstract interface (closer to standard Nameko) as an optional decorator, say @grpc_simple? It could allow declaring the request fields as keyword arguments (making type annotations and defaults possible) and giving the response as a dictionary, or potentially as a single value if the response message contains a single field. In fact, this could become the default @grpc decorator, with a different one such as @grpc_context providing the current behavior, since the context argument is not going to be used most of the time. I didn't take streams into account; I'm just brainstorming, but maybe there's a nice way to fit them in.
  6. Could type annotations and RST docstrings be added to the nameko-grpc interface for easier development in IDEs and possibly also mypy static type-checking?
  7. What can we help with? I can personally help with the simpler decorator idea, maybe TLS integration, types/docstrings and overall testing.

Thanks a lot!!

@mattbennett (Member) commented Jun 10, 2019

  1. Is TLS supported?

Not yet. This would be a great PR :)

  2. Can you add some examples of asynchrony to the README?

Will do 👍

  3. AMQP is replaced by gRPC, so there's no need for it anymore, am I right?

AMQP-RPC isn't necessarily replaced by gRPC. You can use one, both, or neither; it's up to you. If your service isn't using any of Nameko's "built-in" AMQP extensions, you don't need a broker.

  4. What's stopping the nameko-tracer PR from being merged?

@iky and I went around the houses on this one. gRPC is different to all the other Nameko extensions in that it supports streaming requests and responses. The approach I chose to take in #20 was to emit a new record for every message in a stream. In contrast, the opentracing standard (which supports gRPC) emits one span for every request/response (i.e. all messages in a stream end up in the same span). Since opentracing is a standard, we decided that this was a better approach, and #20 kind of died on the vine.

I think the next step for tracing is to expand nameko-tracer to support the opentracing standard, and then nameko-grpc can do whatever it needs to do to emit opentracing-compatible gRPC traces.

  5. Do you think it is feasible to have a more abstract interface (closer to standard Nameko) as an optional decorator, say @grpc_simple? It could allow declaring the request fields as keyword arguments (making type annotations and defaults possible) and giving the response as a dictionary, or potentially as a single value if the response message contains a single field. In fact, this could become the default @grpc decorator, with a different one such as @grpc_context providing the current behavior, since the context argument is not going to be used most of the time. I didn't take streams into account; I'm just brainstorming, but maybe there's a nice way to fit them in.

You'll have to expand on this a bit for me.

Nameko's AMQP-RPC is completely dynamic, whereas gRPC uses a predefined protobuf specification. I don't think there's any scope for defining the interface dynamically. Perhaps I'm misunderstanding you?

  6. Could type annotations and RST docstrings be added to the nameko-grpc interface for easier development in IDEs and possibly also mypy static type-checking?

I've never actually done any work with type annotations, but we aren't supporting Python 2.x, so I don't see why we can't make use of this.

  7. What can we help with? I can personally help with the simpler decorator idea, maybe TLS integration, types/docstrings and overall testing.

Any help would be great. Supporting secure connections is something that would be helpful, and overall testing and use is the most significant thing that would move the project forwards.

@DaniGuardiola (Author) commented Jun 11, 2019

Hi @mattbennett, thanks for your answers.

  1. (TLS) I'm getting familiar with the code; I'll try to implement it.
  2. (asynchrony examples) Thanks! :)
  3. (AMQP) Oh ok, got it.
  4. (tracer) Got it.
  5. (decorator) I'll include an example of what I mean below.
  6. (type annotations) Alright cool. I'll see if I can help with this as well.
  7. (helping out) Alright. I'll start with the TLS PR and testing and see where else I can contribute.

About the decorator, here's what I mean. Currently, you would declare a (unary-unary) method in a service like this:

from nameko_grpc.entrypoint import Grpc
from proto.service_pb2 import MultiplyResponse
from proto.service_pb2_grpc import exampleStub

grpc = Grpc.implementing(exampleStub)


class ExampleService:
    name = "example"

    @grpc
    def multiply(self, request, context):
        result = request.number1 * (request.number2 or 2)
        return MultiplyResponse(result=result)

What I'm proposing is something like the following:

from typing import Dict

from nameko_grpc.entrypoint import Grpc
from proto.service_pb2_grpc import exampleStub

grpc = Grpc.implementing(exampleStub)


class ExampleService:
    name = "example"

    @grpc
    def multiply(self, number1: int, number2: int = 2) -> Dict[str, int]:
        """
        Multiplies two numbers.

        :param number1: The first number to multiply.
        :param number2: The second number to multiply.
        :return: The product of number1 times number2.
        """
        result = number1 * number2
        return {"result": result}

I believe it should be achievable. The parameters can be passed by unpacking the request object's contents (as a dict) into keyword args:

# wherever this is called... Note: protobuf messages don't expose their fields
# via __dict__, so something like google.protobuf.json_format.MessageToDict
# would be needed to build the kwargs:
from google.protobuf.json_format import MessageToDict

returned_dict = example_service.multiply(**MessageToDict(request, preserving_proto_field_name=True))

The response might be a little trickier, but it should be achievable as well by finding the response class from the corresponding _pb2.py file and passing the returned dictionary as keyword args to its constructor:

response_cls = get_response_class_somehow(method_name_or_something)
response_instance = response_cls(**returned_dict)  # call the class, not __init__
# then do whatever's done with the response instance
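A rough, runnable sketch of the round trip might look like the following. Plain Python classes stand in for the protoc-generated message types, and the hypothetical `simple` wrapper stands in for the real entrypoint machinery; this is not how nameko-grpc is implemented, just an illustration of the idea:

```python
# Sketch of the proposed "simple" decorator. MultiplyRequest/MultiplyResponse
# are plain-Python stand-ins for protoc-generated messages; `simple` is a
# hypothetical wrapper, not part of nameko-grpc.

class MultiplyRequest:
    def __init__(self, number1=0, number2=0):
        self.number1 = number1
        self.number2 = number2


class MultiplyResponse:
    def __init__(self, result=0):
        self.result = result


def simple(response_cls):
    """Unpack request fields into kwargs and wrap the returned dict in response_cls."""
    def decorator(method):
        def wrapper(self, request, context=None):
            # Real protobuf messages don't expose fields via vars()/__dict__;
            # MessageToDict would be needed there. Note also that protobuf
            # always populates fields with default values, so handler-level
            # kwarg defaults would only apply if unset fields were filtered out.
            fields = dict(vars(request))
            returned = method(self, **fields)
            return response_cls(**returned)
        return wrapper
    return decorator


class ExampleService:
    name = "example"

    @simple(MultiplyResponse)
    def multiply(self, number1: int, number2: int = 2) -> dict:
        return {"result": number1 * number2}


response = ExampleService().multiply(MultiplyRequest(number1=3, number2=4))
print(response.result)  # 12
```

The proto3 default-value behaviour noted in the comment is one wrinkle the real design would have to address: a keyword default like `number2: int = 2` never fires if the request always supplies `number2=0`.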

I see some big advantages to this method:

  • Greater abstraction. No proto-specific imports needed, except for the stub.
  • Closer to nameko's standard @rpc decorator.
  • Type hints and docstrings are feasible (great for documentation / IDE's).
  • Code is more portable, as the functions themselves can be copied, pasted, and used in an almost standalone way; even the whole service class.
  • Easier to understand.
  • Makes generating a service-specific client automatically (with all the types, docstrings, parameters and return values) a much easier task. This is something I plan on doing in our product.

What do you think?

@DaniGuardiola (Author) commented Jun 11, 2019

I should add that this is compatible with keeping the current decorator as well, either by giving it a different name (e.g. @grpc_raw) or by giving my suggested decorator a different name (e.g. @grpc_simple).

@DaniGuardiola (Author) commented Jun 12, 2019

Another thing that could be useful is being able to specify the port(s) as well. Ports can be secure or insecure, and there's apparently no limit to how many a gRPC server can bind.

@mattbennett (Member) commented Jun 13, 2019

I see what you mean about the decorator now. You're breaking up the request schema into its components and exposing each one as a separate argument.

It does look more similar to the Nameko AMQP-RPC entrypoint, but I don't think it'll bring all the advantages you list, and there are other ways to achieve some of those advantages:

  • Greater abstraction. No proto-specific imports needed, except for the stub.

Protobuf message definitions can use nested types, so you'd still need to import proto-defined classes. You can only really expose the first "layer" in the method signature.
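Roughly, the nesting problem looks like this. Plain classes stand in for protoc-generated message types, and the names are hypothetical, not from any real proto file:

```python
# Why flattening only reaches the first "layer": with a nested message type,
# the unpacked keyword argument is still a proto-defined object, so the
# handler needs the proto import anyway. (Hypothetical stand-in classes.)

class Address:                      # nested message type
    def __init__(self, street="", city=""):
        self.street = street
        self.city = city


class CreateUserRequest:            # top-level message using the nested type
    def __init__(self, username="", address=None):
        self.username = username
        self.address = address or Address()


request = CreateUserRequest(username="dani", address=Address(street="Main St", city="Madrid"))

# Unpacking the first layer gives plain values for scalar fields...
fields = dict(vars(request))
print(type(fields["username"]))     # <class 'str'>

# ...but nested fields are still message objects, so the handler would need
# to import Address to annotate or construct them, defeating the abstraction.
print(type(fields["address"]))      # <class '__main__.Address'> (or wherever it's defined)
```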

  • Type hints and docstrings are feasible (great for documentation / IDE's).

You could still use type hints with the proto-defined classes, and you'd need this anyway for any nested types. I haven't looked, but I would be surprised if there wasn't gRPC support in some IDEs.

  • Code is more portable

I think this is probably invalidated by the nested types.

  • Makes generating a service-specific client automatically an easier task

Perhaps I'm misunderstanding the goal here, but doesn't protobuf give you this out of the box? Given the (protobuf) definition for a service, you can generate a client in whatever language you choose, and you can inspect the definition for the types and (I assume) docstrings.

@mattbennett (Member) commented Jun 13, 2019

Another thing that could be useful is being able to specify the port(s) as well. Ports can be secure or insecure, and there's apparently no limit to how many a gRPC server can bind.

Yep, this would be great :)
