add state to the group consumer #54

Open · wants to merge 2 commits into master

Conversation

@dams (Contributor) commented Dec 12, 2017

I need this for work. I've added an init_handler() function that defines a state, which is then passed around whenever handle_messages() is called. Of course this isn't backward compatible, so it should be reworked into a new mode of consuming instead of replacing the existing one. But I thought I'd push it here now so I can get feedback.
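For illustration, a stateful handler under this change might look roughly like this (the exact callback arities and return shapes are assumptions, not necessarily what the patch implements):

```elixir
# Sketch only: what a stateful handler could look like with this kind of patch.
defmodule MyApp.StatefulHandler do
  # Called once per worker to build the initial state (assumed arity/return).
  def init_handler do
    {:ok, %{seen: 0}}
  end

  # Receives the batch plus the previous state; the returned state is passed
  # back in on the next call (assumed convention).
  def handle_messages(messages, state) do
    {:ok, %{state | seen: state.seen + length(messages)}}
  end
end
```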

Let me know if there is already a way to have state passed around that I missed.

Thanks

@objectuser (Contributor) commented

@dams This seems like something that's best handled in the consumer and outside of Kaffe.

While I can see that there is a measure of convenience in having this maintained by Kaffe (your example fits nicely with things like Enum.reduce/3), this is something easily done in the consumer. For example, you could put your handle_messages function on a GenServer and maintain the state there.

Adding behavior like this to Kaffe would make it more complex and less focused, overall. I think we want to keep Kaffe focused on just doing the Kafka stuff.

Please let me know if I'm not understanding; happy to discuss further!
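For illustration, the consumer-side approach described above might look something like this (the module names are made up, and the shape of the messages Kaffe passes is assumed to be a list of maps):

```elixir
# Sketch: keep state in your own GenServer and have the Kaffe handler delegate to it.
defmodule MyApp.Aggregator do
  use GenServer

  def start_link(_opts), do: GenServer.start_link(__MODULE__, %{}, name: __MODULE__)

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:messages, messages}, _from, state) do
    # Fold the batch into the state; the aggregation itself is app-specific.
    {:reply, :ok, Enum.reduce(messages, state, &aggregate/2)}
  end

  defp aggregate(message, state) do
    Map.update(state, message.partition, [message], &[message | &1])
  end
end

defmodule MyApp.MessageHandler do
  # Configured as Kaffe's message handler; stays stateless itself.
  def handle_messages(messages) do
    GenServer.call(MyApp.Aggregator, {:messages, messages})
    :ok
  end
end
```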

@dams (Author) commented Dec 12, 2017

Thank you @objectuser for your swift reply!

I totally see your point, but I guess I'm not good enough in Elixir to see how I can achieve what you describe ("put your handle_messages function on a GenServer and maintain the state there") in an easy way.

The way I use Kaffe with my patch is to do exactly as you say: at init time, init_handler is called; it starts a GenServer and returns its pid as the handler state. Then each time handle_messages is called, it receives that handler state (the associated GenServer pid) and makes a call to the GenServer. It's nice and simple.

What I need is one GenServer per partition, so one GenServer per group member worker. The only way I can see to implement this without my patch would be to store GenServer pids in an ETS table: each time handle_messages is called, check whether the current worker process (using self()) already has a corresponding GenServer in the ETS table, and if not, start_link a new GenServer and store it there. I find this a bit clunky, but maybe it's just me. Do you have a better way of associating a worker with a GenServer?

Thanks

@objectuser (Contributor) commented

@dams I think that's basically the mechanics of it, yes, although the process Registry would probably be the more modern way to do it. If you used a :via tuple, you might also be able to "lazy create" your processes in a nifty, behind-the-scenes way.

Also, depending on your throughput requirements, you could simply create an Agent to hold the data, or an ETS table for the same purpose. It would be a really fast lookup, but also a bottleneck if you had high throughput and a lot of partitions.
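For illustration, a lazy-create setup with Registry might look roughly like this (every name here is made up, and it assumes a `{Registry, keys: :unique, name: MyApp.Registry}` child in the supervision tree):

```elixir
# Sketch: one GenServer per worker (or per partition), registered via Registry
# and created lazily on first use from within handle_messages.
defmodule MyApp.PartitionState do
  use GenServer

  def start_link(key), do: GenServer.start_link(__MODULE__, %{}, name: via(key))

  # Returns the pid for this key, starting the process on first use.
  def get_or_start(key) do
    case start_link(key) do
      {:ok, pid} -> pid
      {:error, {:already_started, pid}} -> pid
    end
  end

  defp via(key), do: {:via, Registry, {MyApp.Registry, key}}

  @impl true
  def init(state), do: {:ok, state}

  @impl true
  def handle_call({:messages, messages}, _from, state) do
    {:reply, :ok, Enum.reduce(messages, state, fn m, acc -> [m | acc] end)}
  end
end

# In the Kaffe handler, key the state process on the current worker, e.g.:
# MyApp.PartitionState.get_or_start(self()) |> GenServer.call({:messages, messages})
```

Note that start_link from inside the handler links the state process to the worker, so it dies with the worker; a DynamicSupervisor would be the way to go if the state should outlive it.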

@dams (Author) commented Dec 13, 2017

Thanks for mentioning the Registry; it's something I might indeed want to use. I can't really afford the bottleneck of an Agent or an ETS table for storing the messages. The system I work with needs to aggregate messages with as little lag as possible, and the throughput is 1 TB per hour, so I need damn good parallelism.

@dams (Author) commented Jan 11, 2018

So, I've been using Kaffe with this modification, and it has a lot of advantages. Among them:

  • simplicity: no need to maintain the state yourself, no need to create a GenServer, register the process, etc.
  • more importantly, it doesn't require a separate process to which all the messages must be sent, so it greatly reduces memory copying between processes. When messages are large and numerous, memory copying is a problem; with this patch, one full copy of all the messages between processes is eliminated.

So I was wondering if you'd accept this patch if I rewrote it to provide it as an option to the worker, or as a different flavor, a bit like the 'async' option that makes the simple consumer behave slightly differently.

Feedback welcome.

@objectuser (Contributor) commented

@dams We're discussing this a bit internally.

@dams (Author) commented Jun 1, 2018

Any progress?

@objectuser (Contributor) commented

I've thought about this a bit more.

We would want to maintain backward compatibility by default. I wonder if we could introduce an abstraction somewhere around Kaffe.Worker. The default implementation would be to call the configured message handler as it does now, but we could make that configurable (maybe with a behavior for the implementations) for different kinds of workers.

If the implementation could be swapped in, it could either be part of Kaffe, or just configured by the app.
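For illustration, that abstraction might be a behavior along these lines (the callback names and shapes are hypothetical, not an actual Kaffe API):

```elixir
# Hypothetical sketch: the default implementation would ignore state and call
# the configured message handler exactly as the worker does today; a stateful
# implementation could thread state between message sets.
defmodule Kaffe.WorkerBehaviour do
  @callback init_worker(opts :: keyword()) :: {:ok, term()}
  @callback handle_message_set(messages :: [map()], state :: term()) :: {:ok, term()}
end
```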

What do you think?

@dams (Author) commented Sep 24, 2018

I agree with keeping backward compatibility by default. I'm not sure about using a behavior, though.
