feat: Add generic helm_releases variable for provisioning any number of Helm charts #169
Conversation
Need to deploy and validate things are working as advertised - will switch from draft once validated.

Lots-o-pods:

```
k get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
amazon-cloudwatch aws-cloudwatch-metrics-5smth 1/1 Running 0 33m
amazon-cloudwatch aws-cloudwatch-metrics-6dmnr 1/1 Running 0 21m
amazon-cloudwatch aws-cloudwatch-metrics-mxglj 1/1 Running 0 33m
amazon-cloudwatch aws-cloudwatch-metrics-v6tfg 1/1 Running 0 32m
amazon-guardduty aws-guardduty-agent-58wjx 1/1 Running 0 22m
amazon-guardduty aws-guardduty-agent-cz2xm 1/1 Running 0 32m
amazon-guardduty aws-guardduty-agent-kcbpd 1/1 Running 0 32m
amazon-guardduty aws-guardduty-agent-ssg8s 1/1 Running 0 32m
argocd argo-cd-argocd-application-controller-0 1/1 Running 0 21m
argocd argo-cd-argocd-applicationset-controller-678d85f77b-sbzfg 1/1 Running 0 21m
argocd argo-cd-argocd-dex-server-7b6c9b5969-cpggn 1/1 Running 1 (21m ago) 21m
argocd argo-cd-argocd-notifications-controller-6d489b99c9-dlw8h 1/1 Running 0 21m
argocd argo-cd-argocd-redis-59dd95f5b5-lfqf9 1/1 Running 0 21m
argocd argo-cd-argocd-repo-server-7b9bd88c95-bbc69 1/1 Running 0 21m
argocd argo-cd-argocd-server-6f9cfdd4d5-5tjwj 1/1 Running 0 21m
aws-node-termination-handler aws-node-termination-handler-6d9656f4-cx2v6 1/1 Running 0 36m
cert-manager cert-manager-5989bcc87-p72pn 1/1 Running 0 37m
cert-manager cert-manager-cainjector-9b44ddb68-jlsn4 1/1 Running 0 37m
cert-manager cert-manager-webhook-776b65456-ck25v 1/1 Running 0 37m
external-dns external-dns-849b89c675-vsdkl 1/1 Running 0 22m
external-secrets external-secrets-67bfd5b47c-wbzjc 1/1 Running 0 21m
external-secrets external-secrets-cert-controller-8f75c6f79-jw9gg 1/1 Running 0 21m
external-secrets external-secrets-webhook-78f6bd456-l74st 1/1 Running 0 21m
gatekeeper-system gatekeeper-update-crds-hook-7s8zd 0/1 Completed 0 42m
gpu-operator gpu-operator-7cfc9fb796-rplqn 1/1 Running 0 6m55s
gpu-operator gpu-operator-node-feature-discovery-master-7bc679897-lkgwh 1/1 Running 0 6m55s
gpu-operator gpu-operator-node-feature-discovery-worker-6vlxv 1/1 Running 0 6m25s
gpu-operator gpu-operator-node-feature-discovery-worker-7tfs8 1/1 Running 0 6m25s
gpu-operator gpu-operator-node-feature-discovery-worker-c9vc8 1/1 Running 0 6m55s
gpu-operator gpu-operator-node-feature-discovery-worker-pzc6r 1/1 Running 0 5m55s
ingress-nginx ingress-nginx-controller-f6c55fdc8-6td4z 1/1 Running 0 21m
karpenter karpenter-7b4fdd77df-5jk48 1/1 Running 0 21m
karpenter karpenter-7b4fdd77df-r66zg 1/1 Running 0 21m
kube-prometheus-stack alertmanager-kube-prometheus-stack-alertmanager-0 2/2 Running 1 (18m ago) 18m
kube-prometheus-stack kube-prometheus-stack-grafana-5c6cf88fd9-cc5kj 3/3 Running 0 18m
kube-prometheus-stack kube-prometheus-stack-kube-state-metrics-584d8b5d5f-6dkbg 1/1 Running 0 18m
kube-prometheus-stack kube-prometheus-stack-operator-c74ddccb5-6db55 1/1 Running 0 18m
kube-prometheus-stack kube-prometheus-stack-prometheus-node-exporter-68b82 1/1 Running 0 19m
kube-prometheus-stack kube-prometheus-stack-prometheus-node-exporter-6qs5j 1/1 Running 0 18m
kube-prometheus-stack kube-prometheus-stack-prometheus-node-exporter-dvj9p 1/1 Running 0 17m
kube-prometheus-stack kube-prometheus-stack-prometheus-node-exporter-lgqjp 1/1 Running 0 18m
kube-prometheus-stack prometheus-kube-prometheus-stack-prometheus-0 2/2 Running 0 18m
kube-system aws-for-fluent-bit-6256k 1/1 Running 0 33m
kube-system aws-for-fluent-bit-999mz 1/1 Running 0 21m
kube-system aws-for-fluent-bit-ng2xd 1/1 Running 0 33m
kube-system aws-for-fluent-bit-tm8hb 1/1 Running 0 32m
kube-system aws-load-balancer-controller-7585d98bf8-chbn9 1/1 Running 0 37m
kube-system aws-load-balancer-controller-7585d98bf8-s87jc 1/1 Running 0 37m
kube-system aws-node-fdnhx 1/1 Running 0 22m
kube-system aws-node-jsm8x 1/1 Running 0 32m
kube-system aws-node-rlvs5 1/1 Running 0 31m
kube-system aws-node-zfj4q 1/1 Running 0 32m
kube-system aws-privateca-issuer-7f94fd59c4-6dlmp 1/1 Running 0 22m
kube-system cluster-autoscaler-aws-cluster-autoscaler-7ff79bc484-pljdn 1/1 Running 0 22m
kube-system coredns-558bbc98f8-lt2bs 1/1 Running 0 32m
kube-system coredns-558bbc98f8-thzbc 1/1 Running 0 32m
kube-system ebs-csi-controller-5cbf889bc7-xqrdb 6/6 Running 0 32m
kube-system ebs-csi-controller-5cbf889bc7-zzm7q 6/6 Running 0 32m
kube-system ebs-csi-node-2s582 3/3 Running 0 32m
kube-system ebs-csi-node-6rfgg 3/3 Running 0 32m
kube-system ebs-csi-node-bj6tg 3/3 Running 0 32m
kube-system ebs-csi-node-n7t94 3/3 Running 0 22m
kube-system efs-csi-controller-5c5dbd74c-549w6 3/3 Running 0 22m
kube-system efs-csi-controller-5c5dbd74c-vck9r 3/3 Running 0 22m
kube-system efs-csi-node-hj2zk 3/3 Running 0 22m
kube-system efs-csi-node-ndkhn 3/3 Running 0 22m
kube-system efs-csi-node-vkcrm 3/3 Running 0 22m
kube-system efs-csi-node-wjlhc 3/3 Running 0 22m
kube-system fsx-csi-controller-85c7dbb7db-b7x8x 4/4 Running 0 22m
kube-system fsx-csi-controller-85c7dbb7db-dmnq7 4/4 Running 0 22m
kube-system fsx-csi-node-92csm 3/3 Running 0 22m
kube-system fsx-csi-node-glwzq 3/3 Running 0 22m
kube-system fsx-csi-node-jr5jb 3/3 Running 0 22m
kube-system fsx-csi-node-q2pt2 3/3 Running 0 21m
kube-system kube-proxy-4gn25 1/1 Running 0 32m
kube-system kube-proxy-5crkq 1/1 Running 0 33m
kube-system kube-proxy-gfsnt 1/1 Running 0 22m
kube-system kube-proxy-n6wtc 1/1 Running 0 33m
kube-system metrics-server-6f9cdd486c-2pmsx 1/1 Running 0 22m
kube-system secrets-store-csi-driver-5wsc7 3/3 Running 0 21m
kube-system secrets-store-csi-driver-gt6pp 3/3 Running 0 21m
kube-system secrets-store-csi-driver-provider-aws-cxhmc 1/1 Running 0 33m
kube-system secrets-store-csi-driver-provider-aws-gvwgq 1/1 Running 0 21m
kube-system secrets-store-csi-driver-provider-aws-hbk5p 1/1 Running 0 32m
kube-system secrets-store-csi-driver-provider-aws-xm8tm 1/1 Running 0 33m
kube-system secrets-store-csi-driver-r4kp6 3/3 Running 0 21m
kube-system secrets-store-csi-driver-whntb 3/3 Running 0 21m
prometheus-adapter prometheus-adapter-7cc7fd5644-76c9v 1/1 Running 0 22m
prometheus-adapter prometheus-adapter-7cc7fd5644-hjdh6 1/1 Running 0 22m
velero velero-7b8994d56-r5gjr 1/1 Running 0 21m
vpa vpa-admission-controller-55f649f57f-2fjsk 1/1 Running 0 20m
vpa vpa-recommender-8489b6dddc-nz6sz 1/1 Running 0 20m
vpa vpa-updater-9dd675fbb-f7z79 1/1 Running 0 20m
```
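For reference, a minimal sketch of the kind of helm_releases map that could drive a deployment like the one above. The surrounding module inputs (the module.eks references) and chart versions are illustrative assumptions, not taken from this PR:

```hcl
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.12"

  cluster_name      = module.eks.cluster_name # assumes an upstream EKS module
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  # Each map entry describes one Helm release; any number of charts
  # can be listed here. Chart versions below are illustrative only.
  helm_releases = {
    prometheus-adapter = {
      description      = "Prometheus adapter for the Kubernetes metrics APIs"
      namespace        = "prometheus-adapter"
      create_namespace = true
      chart            = "prometheus-adapter"
      chart_version    = "4.2.0"
      repository       = "https://prometheus-community.github.io/helm-charts"
    }
    gpu-operator = {
      description      = "NVIDIA GPU Operator"
      namespace        = "gpu-operator"
      create_namespace = true
      chart            = "gpu-operator"
      chart_version    = "v23.6.0"
      repository       = "https://helm.ngc.nvidia.com/nvidia"
    }
  }
}
```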
lgtm
❤️
How does this resolve #154 (external-snapshotter)? I don't see it in the tests posted above or in any examples. Also, according to the addon docs, this appears to be a supported addon? Please advise.
CSI Snapshotter is an EKS addon and can be deployed today similar to how the other EKS addons are deployed via the eks_addons variable.
So, it looks like this snippet will net the below config:

```hcl
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.12.0" # ensure to update this to the latest/desired version

  eks_addons = {
    ...
    vpc-cni = {
      most_recent = true
    }
    # This block is all that's required to get the below TF config
    snapshot-controller = {
      most_recent = true
    }
  }
  ...
}
```

```
# tf state show 'module.eks_blueprints_addons.aws_eks_addon.this["snapshot-controller"]'
resource "aws_eks_addon" "this" {
    addon_name                  = "snapshot-controller"
    addon_version               = "v6.3.2-eksbuild.1"
    arn                         = "arn:aws:eks:us-east-1:010101010101:addon/my-project/snapshot-controller/1ec82d78-73de-a6a9-3278-b4ada3x40eb1"
    cluster_name                = "my-project"
    created_at                  = "2023-12-11T22:13:06Z"
    id                          = "my-project:snapshot-controller"
    modified_at                 = "2023-12-11T22:13:48Z"
    preserve                    = true
    resolve_conflicts_on_create = "OVERWRITE"
    resolve_conflicts_on_update = "OVERWRITE"
    tags                        = {}
    tags_all                    = {
        "env"     = "stage"
        "project" = "my-project"
    }
    timeouts {}
}
```

I now see the controller running on the cluster. Thanks for the assist 🙏

Follow-up: There are 3 components to this (the snapshot controller, the CSI snapshotter sidecar, and the validation webhook). Is the remaining configuration (for the snapshotter/webhook) then the user's responsibility to deploy separately?
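If the remaining pieces do fall to the user, the helm_releases variable from this PR could in principle carry them. A purely hypothetical sketch; the chart name and repository below are placeholders, not a real published chart:

```hcl
helm_releases = {
  snapshot-validation-webhook = {
    namespace        = "kube-system"
    chart            = "snapshot-validation-webhook"     # hypothetical chart name
    repository       = "https://example.com/helm-charts" # placeholder repository
    create_namespace = false
  }
}
```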
Knative doesn't have an official Helm chart. Is it possible to integrate it as an add-on?
What does this PR do?

Adds a generic helm_releases variable for provisioning any number of Helm charts.

Motivation

More

- Ran pre-commit run -a with this PR

For Moderators

Additional Notes
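As a note for readers, here is a minimal sketch of what a generic variable like this and its wiring could look like. This is an illustrative shape using the Terraform helm provider's helm_release resource, not necessarily the PR's exact implementation:

```hcl
variable "helm_releases" {
  description = "A map of Helm release definitions, allowing any number of charts to be provisioned"
  type        = any
  default     = {}
}

# Sketch: fan the map out into one helm_release resource per entry.
resource "helm_release" "this" {
  for_each = var.helm_releases

  name             = try(each.value.name, each.key)
  chart            = each.value.chart
  repository       = try(each.value.repository, null)
  version          = try(each.value.chart_version, null)
  namespace        = try(each.value.namespace, null)
  create_namespace = try(each.value.create_namespace, false)
  values           = try(each.value.values, [])
}
```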