Terraform provider for managing Apache Kafka Topics + ACLs




A Terraform plugin for managing Apache Kafka.



Installation

Download and extract the latest release to your Terraform plugin directory (typically `~/.terraform.d/plugins/`).
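For example (a sketch; the release archive name and version are placeholders — check the project's releases page for the real ones):

```shell
# Create the plugin directory if it does not exist yet.
PLUGIN_DIR="${HOME}/.terraform.d/plugins"
mkdir -p "$PLUGIN_DIR"

# Download and extract a release into it (URL and archive name are
# placeholders; substitute the latest release asset).
# curl -L -o terraform-provider-kafka.tgz https://github.com/Mongey/terraform-provider-kafka/releases/download/vX.Y.Z/terraform-provider-kafka.tgz
# tar -xzf terraform-provider-kafka.tgz -C "$PLUGIN_DIR"
```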


Developing

  1. Install Go.
  2. Clone the repository to `$GOPATH/src/github.com/Mongey/terraform-provider-kafka`:

     ```sh
     mkdir -p $GOPATH/src/github.com/Mongey/terraform-provider-kafka; cd $GOPATH/src/github.com/Mongey/
     git clone https://github.com/Mongey/terraform-provider-kafka.git
     ```

  3. Build the provider: `make build`
  4. Run the tests: `make test`
  5. Start a TLS-enabled Kafka cluster: `docker-compose up`
  6. Run the acceptance tests: `make testacc`

Provider Configuration


Example provider with SSL client authentication:

```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert_file      = "../secrets/snakeoil-ca-1.crt"
  client_cert_file  = "../secrets/kafkacat-ca1-signed.pem"
  client_key_file   = "../secrets/kafkacat-raw-private-key.pem"
  skip_tls_verify   = true
}
```
| Property | Description | Default |
|----------|-------------|---------|
| `bootstrap_servers` | A list of `host:port` addresses used to discover the full set of alive brokers | Required |
| `ca_cert_file` | Path to a CA certificate file used to validate the server's certificate | `""` |
| `client_cert_file` | Path to a file containing the client certificate, for client authentication to Kafka | `""` |
| `client_key_file` | Path to a file containing the private key that the client certificate was issued for | `""` |
| `skip_tls_verify` | Skip TLS verification | `false` |
| `tls_enabled` | Enable communication with the Kafka cluster over TLS | `false` |
| `sasl_username` | Username for SASL authentication | `""` |
| `sasl_password` | Password for SASL authentication | `""` |
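The SASL properties above can be combined with TLS. A sketch (the broker address and credentials are placeholders):

```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  tls_enabled       = true
  sasl_username     = "admin"        # placeholder credential
  sasl_password     = "admin-secret" # placeholder credential
}
```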



kafka_topic

A resource for managing Kafka topics. The provider can increase a topic's partition count without destroying the topic.


```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
}

resource "kafka_topic" "logs" {
  name               = "systemd_logs"
  replication_factor = 2
  partitions         = 100

  config = {
    "segment.ms"     = "20000"
    "cleanup.policy" = "compact"
  }
}
```

| Property | Description |
|----------|-------------|
| `name` | The name of the topic |
| `partitions` | The number of partitions the topic should have |
| `replication_factor` | The number of replicas the topic should have |
| `config` | A map of string key/value config attributes |

Importing Existing Topics

You can import an existing topic by name:

```sh
terraform import kafka_topic.logs systemd_logs
```


kafka_acl

A resource for managing Kafka ACLs.


```hcl
provider "kafka" {
  bootstrap_servers = ["localhost:9092"]
  ca_cert_file      = "../secrets/snakeoil-ca-1.crt"
  client_cert_file  = "../secrets/kafkacat-ca1-signed.pem"
  client_key_file   = "../secrets/kafkacat-raw-private-key.pem"
  skip_tls_verify   = true
}

resource "kafka_acl" "test" {
  resource_name       = "syslog"
  resource_type       = "Topic"
  acl_principal       = "User:Alice"
  acl_host            = "*"
  acl_operation       = "Write"
  acl_permission_type = "Deny"
}
```

| Property | Description | Valid values |
|----------|-------------|--------------|
| `acl_host` | Host from which the principal listed in `acl_principal` will have access | `*` |
| `acl_operation` | Operation that is being allowed or denied | `Unknown`, `Any`, `All`, `Read`, `Write`, `Create`, `Delete`, `Alter`, `Describe`, `ClusterAction`, `DescribeConfigs`, `AlterConfigs`, `IdempotentWrite` |
| `acl_permission_type` | Type of permission | `Unknown`, `Any`, `Allow`, `Deny` |
| `acl_principal` | Principal that is being allowed or denied | `*` |
| `resource_name` | The name of the resource | `*` |
| `resource_type` | The type of resource | `Unknown`, `Any`, `Topic`, `Group`, `Cluster`, `TransactionalID` |
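The valid values above also cover non-topic resources. A further sketch (the group and principal names are illustrative), allowing a consumer to read from a consumer group:

```hcl
# Illustrative only: allow User:Bob, from any host, to read from the
# consumer group "syslog-consumers".
resource "kafka_acl" "consumer_group" {
  resource_name       = "syslog-consumers"
  resource_type       = "Group"
  acl_principal       = "User:Bob"
  acl_host            = "*"
  acl_operation       = "Read"
  acl_permission_type = "Allow"
}
```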