
Argument to Support Server-Side Calculated Replication Factor #419

Open · wants to merge 1 commit into main

Conversation

canhanhan

This PR addresses issue #409.

The problem centers on self-hosted Confluent Kafka requiring the replication_factor in the kafka_topic resource to be set to -1 when confluent.placement.constraints (used for multi-region clusters) is configured. The kafka_topic resource, however, currently requires replication_factor to be at least 1.

This causes a client-side error (This most likely occurs because of a request being malformed by the client library or the message was sent to an incompatible broker. See the broker logs for more details.) and a server-side InvalidRequestException: Both replicationFactor and confluent.placement.constraints are set. Both cannot be used at the same time.

In this PR, I'm introducing a new argument named managed_replication_factor to resolve this issue. It is an optional boolean; when set to true, it indicates that the replication factor is computed server-side, instructing the provider to send -1 when creating the topic and to ignore any subsequent changes to replication_factor.

This approach enables the use of the confluent.placement.constraints config on Confluent Kafka, as illustrated below:

resource "kafka_topic" "my_topic" {
  name                       = "my_topic"
  partitions                 = 3
  replication_factor         = 1 # ignored
  managed_replication_factor = true

  config = {
    "confluent.placement.constraints" = jsonencode({
      "version" : 1,
      "replicas" : [
        { "count" : 2, "constraints" : { "rack" : "east" } },
        { "count" : 2, "constraints" : { "rack" : "west" } }
      ],
      "observers" : []
    })
  }
}
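
Under the hood, the change is small. Here's a rough sketch of its shape (simplified from the actual diff; the function name and the sarama import path are illustrative), assuming the create path builds a sarama TopicDetail:

package kafka

import (
	"github.com/IBM/sarama"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// topicDetailFromResource picks the replication factor to send to the broker:
// -1 (server-side computed) when managed_replication_factor is true, otherwise
// the user-supplied replication_factor.
func topicDetailFromResource(d *schema.ResourceData) *sarama.TopicDetail {
	replicationFactor := int16(d.Get("replication_factor").(int))
	if d.Get("managed_replication_factor").(bool) {
		// -1 asks the broker to compute the replication factor, which is
		// required when confluent.placement.constraints is configured.
		replicationFactor = -1
	}
	return &sarama.TopicDetail{
		NumPartitions:     int32(d.Get("partitions").(int)),
		ReplicationFactor: replicationFactor,
	}
}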

If managed_replication_factor is set to true on an Apache Kafka cluster, the broker will use its default.replication.factor setting, which defaults to 1.

I've also added acceptance tests for a few scenarios, such as toggling managed_replication_factor on and off and modifying the partitions and replication_factor values.
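
A condensed sketch of one of those tests (simplified; the overrideProviderFactory helper is assumed from the provider's existing test setup):

import (
	"fmt"
	"testing"

	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/acctest"
	"github.com/hashicorp/terraform-plugin-sdk/v2/helper/resource"
)

func TestAcc_TopicManagedReplicationFactor(t *testing.T) {
	name := fmt.Sprintf("managed-rf-%s", acctest.RandString(5))
	// step builds a test step that creates/updates the topic with the
	// managed_replication_factor flag set to the given value.
	step := func(managed bool) resource.TestStep {
		return resource.TestStep{
			Config: fmt.Sprintf(`
resource "kafka_topic" "test" {
  name                        = %q
  partitions                  = 3
  replication_factor          = 1
  managed_replication_factor  = %t
}`, name, managed),
			Check: resource.TestCheckResourceAttr(
				"kafka_topic.test", "managed_replication_factor", fmt.Sprintf("%t", managed)),
		}
	}
	resource.Test(t, resource.TestCase{
		// overrideProviderFactory is assumed from the provider's test helpers.
		ProviderFactories: overrideProviderFactory(),
		Steps:             []resource.TestStep{step(true), step(false)},
	})
}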

@Mongey, is this something you would consider merging into the provider? Happy to make any changes based on your feedback.

@Mongey (Owner) commented May 22, 2024

@canhanhan could the same outcome not be achieved by ignoring changes to the replication_factor via the built-in Terraform lifecycle?

resource "kafka_topic" "my_topic" {
  name               = "my_topic"
  partitions         = 3
  replication_factor = 1 # ignored

  config = {
    "confluent.placement.constraints" = jsonencode({
      "version" : 1,
      "replicas" : [
        { "count" : 2, "constraints" : { "rack" : "east" } },
        { "count" : 2, "constraints" : { "rack" : "west" } }
      ],
      "observers" : []
    })
  }

  lifecycle {
    ignore_changes = [
      replication_factor,
    ]
  }
}

@canhanhan (Author)

@canhanhan could the same outcome not be achieved by ignoring changes to the replication_factor via the built-in Terraform lifecycle?

Hi @Mongey, not quite:

  1. For new topic creation: replication_factor must be -1. Any other value causes the Both replicationFactor and confluent.placement.constraints are set. Both cannot be used at the same time error. Currently, the replication_factor validation requires a positive integer (https://github.com/Mongey/terraform-provider-kafka/blob/main/kafka/resource_kafka_topic.go#L43).
  2. For existing topic updates: the built-in ignore_changes feature works.

If you prefer, I can submit a PR that allows -1 as a valid value for replication_factor instead of adding a new property.
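
For reference, that alternative would amount to relaxing the existing validator on replication_factor; a minimal sketch (function name assumed, not the actual code), matching the schema.SchemaValidateFunc signature:

// positiveOrBrokerManaged accepts any positive integer, or -1 to delegate
// the replication factor to the broker.
func positiveOrBrokerManaged(v interface{}, k string) (ws []string, errs []error) {
	value := v.(int)
	if value < 1 && value != -1 {
		errs = append(errs, fmt.Errorf("%q must be a positive integer or -1, got: %d", k, value))
	}
	return
}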
