
Add custom terraform and bash script to the setup #1020

Merged (3 commits) on Oct 22, 2020

Conversation

@sumo-drosiek (Contributor) commented Oct 19, 2020

Description
  • Add custom terraform and bash script to the setup
  • Add an option to skip source creation via terraform
Testing performed
  • ci/build.sh
  • Redeploy fluentd and fluentd-events pods
  • Confirm events, logs, and metrics are coming in

@sumo-drosiek (Contributor, Author) commented:

@vsinghal13 Does this meet the given requirements for additional terraform support?

  • you can disable terraform for every source (but it's going to be added as a fluentd ENV)
  • you can provide your own terraform and bash scripts, so it should be possible to extend the sumologic secret, but this has to be tested
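A rough sketch of how the per-source opt-out described in the first point might look in values.yaml. All key names below (`sources`, `create`, the ENV name) are hypothetical illustrations of the idea, not confirmed chart options:

```yaml
sumologic:
  setup:
    sources:                          # hypothetical key
      default_metrics_source:
        create: false                 # hypothetical: skip terraform creation for this source
fluentd:
  metrics:
    extraEnvVars:                     # mirrors the extraEnvVars pattern used for logs below
      - name: SUMO_ENDPOINT_METRICS   # endpoint supplied directly as a fluentd ENV
        value: "https://collectors.sumologic.com/receiver/v1/http/XXXX"
```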

@sumo-drosiek (Contributor, Author) commented Oct 20, 2020

I updated the behavior; here is an example configuration for the feature:


sumologic:
  setup:
    additionalFiles:
      custom: # Defines the directory in which the files will be stored; used for grouping and separating files (e.g. the same tf definitions for multiple organisations)
        locals.tf: |
          locals {
            default_events_source                       = "events"
            default_logs_source                         = "logs"
            apiserver_metrics_source                    = "apiserver-metrics"
            control_plane_metrics_source                = "control-plane-metrics"
            controller_metrics_source                   = "kube-controller-manager-metrics"
            default_metrics_source                      = "(default-metrics)"
            kubelet_metrics_source                      = "kubelet-metrics"
            node_metrics_source                         = "node-exporter-metrics"
            scheduler_metrics_source                    = "kube-scheduler-metrics"
            state_metrics_source                        = "kube-state-metrics"
          }
        main.tf: |
          terraform {
            required_providers {
              sumologic  = "= 2.3.0"
              kubernetes = "~> 1.11.3"
            }
          }
        providers.tf: |-

          provider "sumologic" {
            access_id   = "dummy"
            access_key  = "dummy"
            base_url = "http://receiver-mock.receiver-mock:3000/terraform/api/"
          }
        
          provider "kubernetes" {
            cluster_ca_certificate = file("/var/run/secrets/kubernetes.io/serviceaccount/ca.crt")
            host                   = "https://kubernetes.default.svc"
            load_config_file       = "false"
            token                  = file("/var/run/secrets/kubernetes.io/serviceaccount/token")
          }
        resources.tf: |
          resource "sumologic_collector" "collector" {
              name  = var.collector_name
              fields  = {
                cluster = var.cluster_name
              }
          }
          
          resource "sumologic_http_source" "default_events_source" {
              name         = local.default_events_source
              collector_id = sumologic_collector.collector.id
              category     = "${var.cluster_name}/${local.default_events_source}"
          }
          
          resource "sumologic_http_source" "default_logs_source" {
              name         = local.default_logs_source
              collector_id = sumologic_collector.collector.id
          }
          
          resource "sumologic_http_source" "apiserver_metrics_source" {
              name         = local.apiserver_metrics_source
              collector_id = sumologic_collector.collector.id
          }
          
          resource "sumologic_http_source" "control_plane_metrics_source" {
              name         = local.control_plane_metrics_source
              collector_id = sumologic_collector.collector.id
          }
          
          resource "sumologic_http_source" "controller_metrics_source" {
              name         = local.controller_metrics_source
              collector_id = sumologic_collector.collector.id
          }
          
          resource "sumologic_http_source" "default_metrics_source" {
              name         = local.default_metrics_source
              collector_id = sumologic_collector.collector.id
          }
          
          resource "sumologic_http_source" "kubelet_metrics_source" {
              name         = local.kubelet_metrics_source
              collector_id = sumologic_collector.collector.id
          }
          
          resource "sumologic_http_source" "node_metrics_source" {
              name         = local.node_metrics_source
              collector_id = sumologic_collector.collector.id
          }
          
          resource "sumologic_http_source" "scheduler_metrics_source" {
              name         = local.scheduler_metrics_source
              collector_id = sumologic_collector.collector.id
          }
          
          resource "sumologic_http_source" "state_metrics_source" {
              name         = local.state_metrics_source
              collector_id = sumologic_collector.collector.id
          }
        
          resource "kubernetes_secret" "sumologic_collection_secret" {
            metadata {
              name = "sumologic-second-org"
              namespace = var.namespace_name
            }
        
            data = {
              endpoint-events                           = sumologic_http_source.default_events_source.url
              endpoint-logs                             = sumologic_http_source.default_logs_source.url
              endpoint-metrics-apiserver                = sumologic_http_source.apiserver_metrics_source.url
              endpoint-control_plane_metrics_source     = sumologic_http_source.control_plane_metrics_source.url
              endpoint-metrics-kube-controller-manager  = sumologic_http_source.controller_metrics_source.url
              endpoint-metrics                          = sumologic_http_source.default_metrics_source.url
              endpoint-metrics-kubelet                  = sumologic_http_source.kubelet_metrics_source.url
              endpoint-metrics-node-exporter            = sumologic_http_source.node_metrics_source.url
              endpoint-metrics-kube-scheduler           = sumologic_http_source.scheduler_metrics_source.url
              endpoint-metrics-kube-state               = sumologic_http_source.state_metrics_source.url
            }
        
            type = "Opaque"
          }
        setup.sh: |-
          #!/bin/sh
          terraform init
        
          # Sumo Collector and HTTP sources
          terraform import sumologic_collector.collector "sumologic"
          terraform import sumologic_http_source.default_events_source "sumologic/events"
          terraform import sumologic_http_source.default_logs_source "sumologic/logs"
          terraform import sumologic_http_source.apiserver_metrics_source "sumologic/apiserver-metrics"
          terraform import sumologic_http_source.control_plane_metrics_source "sumologic/control-plane-metrics"
          terraform import sumologic_http_source.controller_metrics_source "sumologic/kube-controller-manager-metrics"
          terraform import sumologic_http_source.default_metrics_source "sumologic/(default-metrics)"
          terraform import sumologic_http_source.kubelet_metrics_source "sumologic/kubelet-metrics"
          terraform import sumologic_http_source.node_metrics_source "sumologic/node-exporter-metrics"
          terraform import sumologic_http_source.scheduler_metrics_source "sumologic/kube-scheduler-metrics"
          terraform import sumologic_http_source.state_metrics_source "sumologic/kube-state-metrics"
        
        
          # Kubernetes Secret
          terraform import kubernetes_secret.sumologic_collection_secret sumologic/sumologic-second-org
        
          terraform apply -auto-approve
        variables.tf: |-
          variable "cluster_name" {
            type  = string
            default = "$CLUSTER_NAME"
          }
        
          variable "collector_name" {
            type  = string
            default = "sumologic"
          }
        
          variable "namespace_name" {
            type  = string
            default = "sumologic"
          }
fluentd:
  logs:
    extraEnvVars:
    - name: VALUE_FROM_SECRET
      valueFrom:
        secretKeyRef:
          name: sumologic-second-org
          key: endpoint-metrics-kubelet

@sumo-drosiek sumo-drosiek force-pushed the drosiek-custom-tf branch 2 times, most recently from 984a05e to 70e3bd6 Compare October 20, 2020 15:56
@sumo-drosiek sumo-drosiek changed the title [WIP] Add custom terraform and bash script to the setup Add custom terraform and bash script to the setup Oct 20, 2020
@pmalek-sumo (Contributor) left a comment


Overall looks good, although my head is too small for this PR 🤯

deploy/helm/sumologic/conf/setup/custom.sh (outdated, resolved)
target="/scripts/${dir}"
mkdir "${target}"
# shellcheck disable=SC2010
for file in $(ls "/customer-scripts/${dir}_"* | grep -oE '_.*' | sed 's/_//g'); do
Contributor:

Ditto here

Suggested change
for file in $(ls "/customer-scripts/${dir}_"* | grep -oE '_.*' | sed 's/_//g'); do
for file in $(ls -1 "/customer-scripts/${dir}_"* | grep -oE '_.*' | sed 's/_//g'); do

for file in $(ls "/customer-scripts/${dir}_"* | grep -oE '_.*' | sed 's/_//g'); do
cp "/customer-scripts/${dir}_${file}" "${target}/${file}"
done
cd "${target}" && ls -al && bash setup.sh && cd ..
Contributor:

pushd and popd instead of using cds?

Contributor Author:

Changed to cd into the directory and run the script; there is no need to revert the CWD.

for file in $(ls -1 "/customer-scripts/${dir}_"* | grep -oE '_.*' | sed 's/_//g'); do
cp "/customer-scripts/${dir}_${file}" "${target}/${file}"
done
cd "${target}" && bash setup.sh
Contributor:

OK, so this will work, but it will keep the changed CWD from the last dir in the loop, so the caller's working directory ends up changed.

To fix that we can e.g. save the dir before the loop and then popd or cd back to it at the end.

Contributor Author:

I'm operating on absolute paths, so the CWD inside custom.sh doesn't affect anything.
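The CWD concern discussed above can also be sidestepped entirely with a subshell. A minimal sketch of the file-grouping copy loop, using a glob instead of parsing ls output; temp directories stand in for the chart's /customer-scripts and /scripts mounts:

```shell
#!/bin/sh
# Stand-ins for the mounted volumes (/customer-scripts and /scripts).
src=$(mktemp -d)
dst=$(mktemp -d)
dir="custom"
# Files are flattened into "<dir>_<file>" names, as in the PR's secret layout.
touch "${src}/${dir}_main.tf" "${src}/${dir}_setup.sh"

mkdir -p "${dst}/${dir}"
for path in "${src}/${dir}_"*; do
  file=$(basename "${path}")    # e.g. custom_main.tf
  file="${file#${dir}_}"        # strip the "<dir>_" prefix -> main.tf
  cp "${path}" "${dst}/${dir}/${file}"
done

# Running setup.sh in a subshell leaves the caller's CWD untouched,
# addressing the pushd/popd concern without any explicit cd back.
( cd "${dst}/${dir}" && sh ./setup.sh )
```

The glob also avoids the SC2010 shellcheck warning that the original ls | grep | sed pipeline had to suppress.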

@perk-sumo (Contributor) left a comment:

👍

@sumo-drosiek sumo-drosiek force-pushed the drosiek-custom-tf branch 2 times, most recently from 131cf77 to 5dcd7be Compare October 21, 2020 12:55
@sumo-drosiek sumo-drosiek added this to the v2.0 milestone Oct 21, 2020
@sumo-drosiek sumo-drosiek self-assigned this Oct 21, 2020
@sumo-drosiek sumo-drosiek merged commit 010786e into master Oct 22, 2020
@sumo-drosiek sumo-drosiek deleted the drosiek-custom-tf branch October 22, 2020 10:18