Support for container resources for each component #99

Merged: 5 commits, Jul 18, 2022

Conversation

prometherion (Member)

These changes extend the CRD definition with new fields that allow specifying container limits and requests for each Control Plane component, as follows.

$: kubectl explain tcp.spec.controlPlane.deployment.resources
KIND:     TenantControlPlane
VERSION:  kamaji.clastix.io/v1alpha1

RESOURCE: resources <Object>

DESCRIPTION:
     Resources defines the amount of memory and CPU to allocate to each
     component of the Control Plane (kube-apiserver, controller-manager, and
     scheduler).

FIELDS:
   apiServer    <Object>
     ResourceRequirements describes the compute resource requirements.

   controllerManager    <Object>
     ResourceRequirements describes the compute resource requirements.

   scheduler    <Object>
     ResourceRequirements describes the compute resource requirements.

A nil value means no resources at all. Values are specified in the same way as for Pods.
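For instance, under spec.controlPlane.deployment it is possible to set resources only for the API server, leaving the controller manager and scheduler containers without any requests or limits (an illustrative snippet, not taken from this PR):

resources:
  apiServer:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi

A complete TenantControlPlane manifest looks like the following: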

apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: test
spec:
  controlPlane:
    deployment:
      replicas: 2
      additionalMetadata:
        annotations:
          environment.clastix.io: test
          tier.clastix.io: "0"
        labels:
          tenant.clastix.io: test
          kind.clastix.io: deployment
      resources:
        apiServer:
          limits:
            cpu: 500m
            memory: 512Mi
          requests:
            cpu: 500m
            memory: 512Mi
        controllerManager:
          limits:
            cpu: 250m
            memory: 256Mi
          requests:
            cpu: 250m
            memory: 256Mi
        scheduler:
          limits:
            cpu: 250m
            memory: 256Mi
          requests:
            cpu: 250m
            memory: 256Mi
    service:
      additionalMetadata:
        annotations:
          environment.clastix.io: test
          tier.clastix.io: "0"
        labels:
          tenant.clastix.io: test
          kind.clastix.io: service
      serviceType: LoadBalancer
    ingress:
      enabled: true
      hostname: kamaji.local
      ingressClassName: nginx
      additionalMetadata:
        annotations:
          kubernetes.io/ingress.allow-http: "false"
          nginx.ingress.kubernetes.io/secure-backends: "true"
          nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  kubernetes:
    version: "v1.23.1"
    kubelet:
      cgroupfs: systemd
    admissionControllers:
      - ResourceQuota
      - LimitRanger
  networkProfile:
    address: "127.0.0.1"
    port: 6443
    certSANs:
      - "test.clastix.labs"
    serviceCidr: "10.96.0.0/16"
    podCidr: "10.244.0.0/16"
    dnsServiceIPs:
      - "10.96.0.10"
  addons:
    coreDNS: {}
    kubeProxy: {}
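
Once the manifest is applied, the per-component values should land on the containers of the control plane Deployment generated by Kamaji. A quick way to double-check them, sketched here under the assumption that the Deployment shares the name and namespace of the TenantControlPlane (the manifest filename is hypothetical):

$: kubectl apply -f tenant-control-plane.yaml
$: kubectl get deployment test -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'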

Since konnectivity is an addon, the same key is available in its spec definition:

apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: test
spec:
  ...
  addons:
    konnectivity:
      agentImage: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-agent
      proxyPort: 8134
      resources:
        limits:
          cpu: 100m
          memory: 128Mi
        requests:
          cpu: 100m
          memory: 128Mi
      serverImage: us.gcr.io/k8s-artifacts-prod/kas-network-proxy/proxy-server
      serviceType: LoadBalancer
      version: v0.0.31
    coreDNS: {}
    kubeProxy: {}
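
For an already running TenantControlPlane, the konnectivity resources can also be tuned in place with a merge patch instead of re-applying the whole manifest; a minimal sketch with purely illustrative values (tcp is the short name used above):

$: kubectl patch tcp test --type merge -p \
  '{"spec":{"addons":{"konnectivity":{"resources":{"requests":{"cpu":"100m","memory":"128Mi"},"limits":{"cpu":"200m","memory":"256Mi"}}}}}}'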

@bsctl (Member) left a comment:

lgtm


Successfully merging this pull request may close these issues.

Make container resources allocation configurable in Tenant Control Plane pods