Why can't the "topics" config of a SinkTask be overridden in SinkConnector.taskConfigs(maxTasks)? #106
Comments
@tony-lijinwen This is intentional, and that config is intended to be managed by the framework. The intent is that the framework leverages the existing consumer group functionality and manages balancing work amongst tasks itself. It can be a bit confusing to a connector developer since framework-level and connector-level configs are a bit conflated, but this keeps things a lot simpler and easier to understand for users. Do you have a reason why you need to split the subscriptions up so they are different among the tasks? At a bare minimum, the framework would have to make sure the original subscription was still fully covered, which could become quite difficult once we add support for regex subscriptions (which the underlying consumers support, but which has not been extended to Kafka Connect yet). I haven't seen this request before, so I'm curious about the use case.
@ewencp Thanks for your reply. The reason I want to split the subscriptions is this: I have three topics. Two of them carry huge messages that are not urgent; one carries small messages that are urgent. As far as I know, if one consumer subscribes to more than one topic, it consumes the messages roughly FIFO, so if the other topics contain huge messages, the urgent messages will be blocked behind them. I want to split the subscriptions to ensure the urgent messages can be handled as soon as possible.
I have the same requirement as the OP. I don't know how the OP resolved this issue with Kafka Connect. It would be nice to have a partition assignment scheme that assigns one topic to each sink task, given that the SinkConnector cannot change the assignment among the tasks.
@tony-lijinwen @sv2000 I'm curious why you would not just run separate connector instances if you need the consumption of the topic data to be handled differently. Thinking about this outside of the Connect-specific concepts: if I needed to consume data with different sizes and levels of urgency, I would logically expect to have different consumer configurations to accommodate those needs. This is really a general Connect API question rather than something specific to the HDFS connector, so I would propose moving this discussion to a Kafka JIRA if you would like to continue it. You can open a JIRA detailing the needs that can't currently be accommodated here: https://issues.apache.org/jira/browse/KAFKA
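To make the separate-instances suggestion concrete, here is a sketch of what two independent sink connector configs could look like when submitted to the Connect REST API. The connector class, names, and topic names are illustrative, not taken from this thread; each instance subscribes to a disjoint set of topics, so the urgent topic gets its own consumer group and is never queued behind the bulk topics.

```json
{
  "name": "mongodb-sink-urgent",
  "config": {
    "connector.class": "com.example.MongoDbSinkConnector",
    "tasks.max": "1",
    "topics": "urgent-events"
  }
}
```

```json
{
  "name": "mongodb-sink-bulk",
  "config": {
    "connector.class": "com.example.MongoDbSinkConnector",
    "tasks.max": "2",
    "topics": "bulk-events-a,bulk-events-b"
  }
}
```

Each JSON document would be POSTed separately to the Connect worker's `/connectors` endpoint; since each instance is an independent consumer group, the framework's group-based work balancing still applies within each one.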
I found that the "topics" config cannot be overridden by Connector.taskConfigs(int maxTasks); even if I return task configs in which I have already overridden "topics", Kafka Connect still uses the original configuration in Worker.connectorTaskConfigs. Is this a bug? I think it should check the config returned from Connector.taskConfigs first, and only fall back to the original configuration if it does not contain a "topics" setting.
Here is the detailed implementation:
1. I wrote a new MongoDB connector; its main config options are as below:
2. I want each task to subscribe to only one topic, so I generate one task config per topic (the number of configs equals the number of topics) and override the topics option in each.
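The splitting logic described in step 2 can be sketched as plain Java. This is an illustrative standalone version of what the taskConfigs(int) body would do inside a real SinkConnector subclass; the class and method names here are hypothetical, and as the thread explains, the framework ignores the per-task "topics" override anyway.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PerTopicTaskConfigs {
    // Produce one task config per topic, copying the connector-level
    // properties and overriding "topics" with a single topic each.
    // In Kafka Connect this override is ignored for sink tasks, which
    // is exactly the behavior the issue reports.
    public static List<Map<String, String>> taskConfigs(Map<String, String> connectorProps) {
        String[] topics = connectorProps.get("topics").split(",");
        List<Map<String, String>> configs = new ArrayList<>();
        for (String topic : topics) {
            Map<String, String> taskProps = new HashMap<>(connectorProps);
            taskProps.put("topics", topic.trim()); // attempted per-task override
            configs.add(taskProps);
        }
        return configs;
    }
}
```

Given `topics=urgent,bulk1,bulk2`, this returns three task configs, each with a single-topic "topics" value while all other connector properties are inherited unchanged.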