scale down arangodb deployment for dev environments #17809

Open
giladsh1 opened this issue Dec 14, 2022 · 1 comment
@giladsh1

My Environment

  • ArangoDB Version: 3.9.2
  • Deployment Mode: Cluster
  • Deployment Strategy: Kubernetes
  • Infrastructure: GKE

Deactivating (scaling down) an ArangoDB cluster spawned with the operator via a CRD

Hello,
I am spawning a lot of dev environments with the latest ArangoDB operator (CRD YAML below).
I was wondering if there is an easy way to scale the cluster down at night and over the weekend to save cost.
Since the operator creates pods directly, without a managing object such as a Deployment or StatefulSet, it seems my only option is to update the CRD with a count of 0 for all components (agents, coordinators, and DB servers).
However, when I do so the operator seems stuck and unable to perform the action.
Please advise.
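
For concreteness, this is roughly the scale-down I am attempting (just a sketch; the resource name and namespace match the CRD below, and I am not sure the operator accepts a count of 0 for these components at all):

# Attempted scale-down: patch all component counts to 0 on the ArangoDeployment.
# Whether count: 0 is a value the operator will actually reconcile is exactly
# what I am unsure about.
kubectl patch arangodeployments.database.arangodb.com arangodb-cluster \
  -n infip-test --type merge \
  -p '{"spec":{"agents":{"count":0},"coordinators":{"count":0},"dbservers":{"count":0}}}'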

CRD for reference:

apiVersion: database.arangodb.com/v1
kind: ArangoDeployment
metadata:
  name: arangodb-cluster
  namespace: infip-test
spec:
  agents:
    count: 3
    nodeSelector:
      cloud.google.com/gke-nodepool: default-pool
    overrideDetectedTotalMemory: true
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 200Mi
  auth:
    jwtSecretName: None
  coordinators:
    count: 1
    nodeSelector:
      cloud.google.com/gke-nodepool: default-pool
    overrideDetectedTotalMemory: true
    resources:
      limits:
        cpu: 200m
        memory: 200Mi
      requests:
        cpu: 200m
        memory: 200Mi
  dbservers:
    count: 2
    nodeSelector:
      cloud.google.com/gke-nodepool: default-pool
    overrideDetectedTotalMemory: true
    resources:
      limits:
        cpu: 1
        memory: 500Mi
      requests:
        cpu: 1
        memory: 500Mi
    volumeClaimTemplate:
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: ssd
  disableIPv6: true
  downtimeAllowed: false
  environment: Development
  externalAccess:
    type: None
  image: arangodb/arangodb:3.8.6
  memberPropagationMode: always
  metrics:
    enabled: false
  mode: Cluster
  networkAttachedVolumes: true
  tls:
    caSecretName: None
@DmitryZakharov

Hi, I would like to know the solution for this as well. We also scale our environments down for the night and don't seem to have a good strategy for it.
