
Events in the Actor Model world #87

Closed
rogeralsing opened this issue Feb 18, 2018 · 6 comments


rogeralsing commented Feb 18, 2018

I'm trying to understand what scope this initiative is trying to cover.
I run the Proto.Actor project, a cross-platform actor framework (http://proto.actor).

For us, things like batching to increase throughput and reduce overall message size are important.
e.g. we buffer messages until either the buffer is full or a deadline timeout fires, then compress the payload by replacing message names/namespaces with identifiers unique to that message batch.
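The buffer-until-full-or-deadline scheme with per-batch name compression described above could be sketched roughly like this (a minimal illustration, not Proto.Actor's actual API or wire format; `Batcher`, `Batch`, and the type names are hypothetical, and the deadline timer is left out for brevity):

```go
package main

import "fmt"

// Batch carries messages plus a per-batch table of type names, so each
// long name/namespace string is sent once and messages carry a small ID.
type Batch struct {
	TypeNames []string // index in this slice = compact per-batch type ID
	Messages  []Message
}

type Message struct {
	TypeID  int
	Payload []byte
}

type Batcher struct {
	batch   Batch
	typeIDs map[string]int // full type name -> per-batch ID
	limit   int
}

func NewBatcher(limit int) *Batcher {
	return &Batcher{typeIDs: map[string]int{}, limit: limit}
}

// Add buffers one message, interning its type name into the per-batch
// table; it returns a completed batch when the buffer reaches the limit.
// (A real implementation would also flush when a deadline expires.)
func (b *Batcher) Add(typeName string, payload []byte) (Batch, bool) {
	id, ok := b.typeIDs[typeName]
	if !ok {
		id = len(b.batch.TypeNames)
		b.typeIDs[typeName] = id
		b.batch.TypeNames = append(b.batch.TypeNames, typeName)
	}
	b.batch.Messages = append(b.batch.Messages, Message{TypeID: id, Payload: payload})
	if len(b.batch.Messages) >= b.limit {
		full := b.batch
		b.batch = Batch{}
		b.typeIDs = map[string]int{}
		return full, true
	}
	return Batch{}, false
}

func main() {
	b := NewBatcher(3)
	b.Add("myapp.OrderPlaced", []byte("o1"))
	b.Add("myapp.OrderPlaced", []byte("o2"))
	full, ready := b.Add("myapp.OrderShipped", []byte("o3"))
	// Two distinct type names cover three messages in the batch.
	fmt.Println(ready, full.TypeNames, len(full.Messages))
}
```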

We also rely heavily on things like location transparency; we have the concept of a PID (process ID, copied from Erlang; the equivalent of ActorRef in Akka).

Are things like this taken into consideration when designing the spec?

At what level does the spec live? Is it down to the raw bits-and-bytes serialization level,
or is it more of a checklist that applies above that?
e.g. can you be CloudEvents compatible while using JSON, ProtoBuf, or another serializer?


duglin commented Feb 19, 2018

@rogeralsing see #84.
If adopted, it'll mean that, for now, the mechanisms by which we transport events over the wire are not yet covered by the spec. Does that help?


duglin commented Mar 31, 2018

@rogeralsing given the change we've made to the spec, is this still an issue or can we close this?


duglin commented May 10, 2018

@rogeralsing lots of serializations can be written for CloudEvents. I would like to close this, but I think the question of whether we should deal with batching might be worth discussing; going to tag this as v1.0.

@duglin duglin added the v1.0 label May 10, 2018

duglin commented Jun 5, 2018

Related to #72?

@duglin duglin mentioned this issue Jun 5, 2018
rogeralsing (Author) commented

Potentially, but I read #72 as a 1-to-1 layering/wrapping of messages.
In our case (and I assume this is true for other high-throughput messaging as well), we map multiple in-process messages targeting the same remote node into a single batch envelope, which is later unpacked by the receiving node, redistributing each individual message to the individual receivers on that node.

 Actor1       Actor2       Actor3
    |            |            |
    v            v            v
 Many messages targeting actors on Node2

--------------Node1------------------
                |
                v
         Batch Envelope
                |
                v
--------------Node2------------------

           Many messages
    |            |            |
    v            v            v
 Actor4       Actor5       Actor6
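The pack/unpack flow in the diagram could be sketched as follows (hypothetical types, not Proto.Actor's actual envelope; sender groups by target node, receiver fans back out to local actors):

```go
package main

import "fmt"

// Envelope groups messages bound for the same remote node, as in the
// diagram above.
type Envelope struct {
	TargetNode string
	Messages   []Msg
}

type Msg struct {
	TargetActor string // local actor name on the receiving node
	Payload     string
}

type Outgoing struct {
	Node, Actor, Payload string
}

// Pack runs on the sending node (Node1): group outgoing messages by
// destination node so each node receives one envelope per flush.
func Pack(outgoing []Outgoing) map[string]*Envelope {
	envelopes := map[string]*Envelope{}
	for _, m := range outgoing {
		env, ok := envelopes[m.Node]
		if !ok {
			env = &Envelope{TargetNode: m.Node}
			envelopes[m.Node] = env
		}
		env.Messages = append(env.Messages, Msg{TargetActor: m.Actor, Payload: m.Payload})
	}
	return envelopes
}

// Unpack runs on the receiving node (Node2): redistribute each message in
// the batch to the mailbox of its individual target actor.
func Unpack(env Envelope, mailboxes map[string][]string) {
	for _, m := range env.Messages {
		mailboxes[m.TargetActor] = append(mailboxes[m.TargetActor], m.Payload)
	}
}

func main() {
	out := []Outgoing{
		{"node2", "actor4", "hello"},
		{"node2", "actor5", "world"},
		{"node2", "actor4", "again"},
	}
	envelopes := Pack(out) // three messages, one envelope for node2
	mailboxes := map[string][]string{}
	Unpack(*envelopes["node2"], mailboxes)
	fmt.Println(len(envelopes["node2"].Messages), mailboxes["actor4"])
}
```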


duglin commented Jun 18, 2018

During the 6/15 f2f we agreed to close this for now and if it pops up as a requirement later on then we can reopen it.

@duglin duglin closed this as completed Jun 18, 2018