Suggestion for new Kubernetes resource(s) #120
Comments
@bkircher This sounds like a good idea to me. However, I would like the gscloud command to work like this:
The user copies the corresponding UUID of the k8s template and puts it in the tf file. That way we do not need to call the API to fetch all templates and search for the UUID of the requested one.
@bkircher I like it. UX will benefit from this. Some notes:
Thanks for the feedback @twiebe!
Will do. Does this also imply that …
👍 But let's first determine what the schema would look like before writing code. With @twiebe's and @itakouna's suggestions we came up with this:

```hcl
resource "gridscale_kubernetes_cluster" "mycluster" {
  name    = "mycluster"
  release = "1.19"
  labels  = ["foo", "staging"]

  timeouts {
    create = "15m"
  }

  # Node pool attached to this k8s resource; possibly multiple with different properties
  node_pool {
    name       = "my_node_pool"
    node_count = 3
    cores      = 2  # cores per node
    memory     = 4  # mem per node
    storage    = 80 # … uses a default storage type
  }

  node_pool {
    name         = "my_io_intensive_node_pool"
    node_count   = 6
    cores        = 2
    memory       = 4
    storage      = 30
    storage_type = "insane"
  }
}
```

Note:
@nvthongswansea what are your thoughts on the above schema proposal? Do you think we can start implementing this?
Yes, I'm on it. The thing is, the current API only allows a single node_pool. Correct me if I'm wrong @bkircher @twiebe @itakouna
Add a new k8s resource to TF provider. See #120.

What changes:
- Add new tf resource "gridscale_k8s".
- Add "gridscale_k8s" resource's docs.

What it does:
- Manage k8s cluster resources in gridscale.
- Validate gsk (gridscale k8s cluster) parameters. Much better than what we had before.
- Allow the user to input k8s_release (e.g. 1.19) instead of service_template_uuid. Optionally, the values of `release` can be retrieved via gscloud (since the 0.10 release).
The kube-scheduler of a k8s cluster assigns workloads to a specific node pool based on labels and taints that the k8s administrator/user adds. However, those are out of scope for this issue.
Done in #140.
Current example on how to spin up a k8s cluster using the `gridscale_paas` resource: terraform-examples/managed-k8s/cluster.tf. I find this hard to use. (Why is `k8s_worker_node_storage` exposed when I am not allowed to change it? See #118.) In particular, errors happen very late, after apply, at the API layer. (IMO, getting a 400 back from the API after doing `terraform plan && terraform apply` should be treated as a bug in this provider.)

Proposal: add new resources dedicated only to k8s (and possibly later more for other PaaS offerings).
To get available versions we could point the user to the gscloud tool (given that gscloud has implemented issue #113):
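The command shown here was lost in extraction; a purely hypothetical invocation (the actual subcommand depends on how #113 lands in gscloud) might look like:

```
$ gscloud kubernetes releases   # hypothetical subcommand, not a real gscloud flag
1.18
1.19
```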
This version slug could be subsequently used in the
version
parameter (see below). Behind the scenes we could use this to find the currentservice_template_uuid
before planning.That might prevent binding a
service_template_uuid
directly to a resource even, freeing us from the problems when clusters are updated on gridscale's side without being reflected in the local TF state.Example:
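The example snippet here was lost in extraction; a minimal sketch of what such a resource might look like, assuming a hypothetical `gridscale_k8s` resource with the `version` parameter discussed above (attribute names are illustrative, not the provider's actual schema):

```hcl
# Hypothetical sketch -- attribute names are assumptions, not a final schema.
resource "gridscale_k8s" "mycluster" {
  name    = "mycluster"
  version = "1.19" # version slug from gscloud, resolved to a service_template_uuid behind the scenes

  worker_pool {
    node_count = 3
    cores      = 2
    memory     = 4
  }
}
```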
Additionally (possibly), we could allow for a "provider" here that helps retrieve the kubeconfig from the API afterwards and makes variables directly available in TF.
Resource limits could be transparently encoded in the `worker_pool`. Not sure yet how to bring in the security zone here.
If we someday implement auto-scaling boundaries, these could also be handled in the `worker_pool` as max, min, or so.

Thoughts?
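The auto-scaling idea above could be sketched like this (purely illustrative; `min_node_count`/`max_node_count` are assumed names, not an implemented schema):

```hcl
# Assumed attribute names for auto-scaling boundaries -- not part of any real schema yet.
worker_pool {
  name           = "my_node_pool"
  cores          = 2
  memory         = 4
  min_node_count = 3  # never scale below this
  max_node_count = 10 # never scale above this
}
```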