dapperlabs-platform/terraform-confluent-kafka-cluster

NOTE: This module is deprecated and is no longer used for new projects.

Use this module instead:

https://github.com/dapperlabs-platform/terraform-confluent-official-kafka-cluster

Confluent Kafka cluster

https://www.confluent.io/confluent-cloud/

https://registry.terraform.io/providers/Mongey/confluentcloud/latest/docs

https://registry.terraform.io/providers/Mongey/kafka/latest/docs

What does this do?

Creates a Confluent Cloud Kafka cluster, topics, service accounts, and ACLs, and optionally metric exporter K8S deployments, as recommended by Confluent Cloud in this blog post (see parts 1 and 2).

How to use this module?

module "confluent-kafka-cluster" {
  source                            = "github.com/dapperlabs-platform/terraform-confluent-kafka-cluster?ref=tag"
  confluent_cloud_username          = "<username>"
  confluent_cloud_password          = "<password>"
  name                              = "cluster-name"
  environment                       = "staging"
  gcp_region                        = "us-west1"
  enable_metric_exporters           = true
  kafka_lag_exporter_image_version  = "<lookup>"
  metric_exporters_namespace        = "sre"
  create_grafana_dashboards         = true
  grafana_datasource                = "Default Datasource"
  topics = {
    "topic-1" = {
      replication_factor = 3
      partitions         = 1
      config = {
        "cleanup.policy" = "delete"
      }
      acl_readers = ["user1"]
      acl_writers = ["user2"]
    }
  }
}

Resources created

  • 1 Confluent Cloud environment
  • 1 Kafka cluster
  • 1 Service account for each distinct entry in the acl_readers and acl_writers variables
  • Topics

If enable_metric_exporters is set to true

Kafka-lag-exporter and ccloud-exporter resources:

  • 1 K8S Service account
  • 1 K8S Secret with credentials and configs
  • 1 K8S Deployment

Additional information

The module outputs a map of service account credentials, keyed by the names provided to the acl_ variables. Use this output as input to a separate module or resource that saves it for application use.
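
For example, a minimal sketch of persisting those credentials as Kubernetes secrets (the module label matches the usage example above; the secret name and namespace are placeholders, not part of this module):

resource "kubernetes_secret" "kafka_credentials" {
  # One secret per service account created from acl_readers / acl_writers
  for_each = module.confluent-kafka-cluster.service_account_credentials

  metadata {
    name      = "kafka-${each.key}"   # e.g. kafka-user1
    namespace = "default"             # placeholder namespace
  }

  data = {
    # Each map value is an object with key and secret attributes
    api_key    = each.value.key
    api_secret = each.value.secret
  }
}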

Reader service accounts are granted read access to all consumer groups. See the group_readers resource.
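
For reference, that grant corresponds roughly to a kafka_acl like the sketch below (using the Mongey kafka provider schema; the principal ID is a placeholder, and the module's actual group_readers resource may differ in details):

resource "kafka_acl" "group_readers" {
  resource_name       = "*"       # all consumer groups
  resource_type       = "Group"
  acl_principal       = "User:<reader-service-account-id>"  # placeholder principal
  acl_host            = "*"
  acl_operation       = "Read"
  acl_permission_type = "Allow"
}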

Requirements

Name Version
confluentcloud >= 0.0.12
grafana ~> 1.20
kafka >= 0.2.11
kubernetes >= 2

Providers

Name Version
confluentcloud >= 0.0.12
grafana ~> 1.20
kafka >= 0.2.11
kubernetes >= 2
random n/a

Modules

No modules.

Resources

Name Type
confluentcloud_api_key.admin_api_key resource
confluentcloud_api_key.ccloud_exporter_api_key resource
confluentcloud_api_key.kafka_lag_exporter_api_key resource
confluentcloud_api_key.service_account_api_keys resource
confluentcloud_environment.environment resource
confluentcloud_kafka_cluster.cluster resource
confluentcloud_service_account.kafka_lag_exporter resource
confluentcloud_service_account.service_accounts resource
grafana_dashboard.ccloud_exporter resource
grafana_dashboard.kafka_lag_exporter resource
grafana_folder.confluent_cloud resource
kafka_acl.group_readers resource
kafka_acl.kafka_lag_exporter_describe_consumer_group resource
kafka_acl.kafka_lag_exporter_describe_topic resource
kafka_acl.kafka_lag_exporter_read_topic resource
kafka_acl.readers resource
kafka_acl.writers resource
kafka_topic.topics resource
kubernetes_deployment.ccloud_exporter_deployment resource
kubernetes_deployment.lag_exporter_deployment resource
kubernetes_secret.ccloud_exporter_config resource
kubernetes_secret.ccloud_exporter_config_file resource
kubernetes_secret.lag_exporter_config resource
kubernetes_service_account.ccloud_exporter_service_account resource
kubernetes_service_account.lag_exporter_service_account resource
random_pet.pet resource

Inputs

Name Description Type Default Required
add_service_account_suffix Add pet name suffix to service account names to avoid collision bool false no
availability Cluster availability. LOW or HIGH string "LOW" no
ccloud_exporter_annotations CCloud exporter annotations map(string) {} no
ccloud_exporter_container_resources Container resource limit configuration map(map(string)) {"limits":{"cpu":"500m","memory":"2Gi"},"requests":{"cpu":"250m","memory":"1Gi"}} no
ccloud_exporter_image_version Exporter Image Version string "latest" no
cku Number of CKUs number null no
cluster_tier Cluster tier string "BASIC" no
create_grafana_dashboards Whether to create Grafana dashboards with default metric exporter panels bool false no
enable_metric_exporters Whether to deploy kafka-lag-exporter and ccloud-exporter in a kubernetes cluster bool false no
environment Application environment that uses the cluster string n/a yes
exporters_node_selector K8S Deployment node selector for metric exporters map(string) null no
gcp_region GCP region in which to deploy the cluster. See https://docs.confluent.io/cloud/current/clusters/regions.html string n/a yes
grafana_datasource Name of Grafana data source where Kafka metrics are stored string null no
kafka_lag_exporter_annotations Lag exporter annotations map(string) {} no
kafka_lag_exporter_container_resources Container resource limit configuration map(map(string)) {"limits":{"cpu":"500m","memory":"2Gi"},"requests":{"cpu":"250m","memory":"1Gi"}} no
kafka_lag_exporter_image_version See https://github.com/seglo/kafka-lag-exporter/releases string n/a yes
kafka_lag_exporter_log_level Lag exporter log level string "INFO" no
metric_exporters_namespace Namespace to deploy exporters to string "sre" no
name Kafka cluster identifier. Will be prefixed with the environment value in Confluent Cloud string n/a yes
network_egress Network egress limit (MBps) number 100 no
network_ingress Network ingress limit (MBps) number 100 no
service_provider Confluent Cloud service provider. AWS, GCP, or Azure string "gcp" no
storage Storage limit (GB) number 5000 no
topics Kafka topic definitions. Object map keyed by topic name with topic configuration values as well as reader and writer ACL lists. Values provided to the ACL lists will become service accounts with { key, secret } objects output by service_account_credentials map(object({ replication_factor = number, partitions = number, config = object({}), acl_readers = list(string), acl_writers = list(string) })) n/a yes

Outputs

Name Description
admin_api_key Admin user API key and secret
cluster_id Cluster ID
kafka_url URL to connect your Kafka clients to
rest_api_endpoint REST API endpoint to manage the cluster
service_account_credentials Map containing service account credentials. Keys are service account names provided to topics as readers and writers. Values are objects with key and secret values.
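
As an illustration, these outputs can be referenced like any other module outputs (the user1 key matches the reader from the usage example above):

# Bootstrap URL for Kafka client configuration
output "bootstrap_servers" {
  value = module.confluent-kafka-cluster.kafka_url
}

# Credentials for the user1 reader service account
locals {
  user1_api_key    = module.confluent-kafka-cluster.service_account_credentials["user1"].key
  user1_api_secret = module.confluent-kafka-cluster.service_account_credentials["user1"].secret
}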