
[Bug] Operator fails to update corresponding MaxScale object when modifying maxscale module parameters in MariaDB #586

Closed
luohaha3123 opened this issue Apr 25, 2024 · 6 comments
Labels
bug, good first issue, help wanted, maxscale, stale

Comments

luohaha3123 (Contributor) commented Apr 25, 2024

Describe the bug
I encountered an issue where the operator fails to update the corresponding MaxScale object when I modify the MaxScale monitor module parameters (spec.maxScale.monitor.params) in the MariaDB resource. Specifically, a change to the switchover_on_low_disk_space field is not propagated.
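
The mismatch can be observed by comparing the two specs; a quick check, assuming the mariadb/maxscale resource names registered by the operator's CRDs and the manifests below:

# value set on the MariaDB resource
kubectl -n kube-system get mariadb mariadb-test \
  -o jsonpath='{.spec.maxScale.monitor.params.switchover_on_low_disk_space}'   # prints "false"
# value still present on the MaxScale resource
kubectl -n kube-system get maxscale mariadb-test-maxscale \
  -o jsonpath='{.spec.monitor.params.switchover_on_low_disk_space}'            # still prints "true"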

The MariaDB object is:

apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-test
  namespace: kube-system
spec:
  affinity:
    enableAntiAffinity: true
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app.kubernetes.io/instance
            operator: In
            values:
            - mariadb-test
        topologyKey: kubernetes.io/hostname
  database: test
  image: mariadb:10.8.2
  maxScale:
    auth:
      adminPasswordSecretKeyRef:
        key: password
        name: my-test
      adminUsername: mariadb-operator
      clientPasswordSecretKeyRef:
        key: password
        name: my-test
      clientUsername: mariadb-test-maxscale-client
      generate: true
      monitorPasswordSecretKeyRef:
        key: password
        name: my-test
      monitorUsername: mariadb-test-maxscale-monitor
      serverPasswordSecretKeyRef:
        key: password
        name: my-test
      serverUsername: mariadb-test-maxscale-server
      syncPasswordSecretKeyRef:
        key: password
        name: my-test
      syncUsername: mariadb-test-maxscale-sync
    config:
      sync:
        database: mysql
        interval: 5s
        timeout: 15s
      volumeClaimTemplate:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 100Mi
        storageClassName: yj-silver24
    connection:
      port: 3306
      secretName: mxs-test-conn
    enabled: true
    kubernetesService:
      type: NodePort
    monitor:
      cooperativeMonitoring: majority_of_all
      interval: 2s
      module: ""
      name: ""
      params:
        auto_failover: "true"
        auto_rejoin: "true"
        switchover_on_low_disk_space: "false"
    replicas: 2
    services:
    - listener:
        name: rw-listener
        params:
          connection_metadata: tx_isolation=auto
        port: 3306
        protocol: MariaDBProtocol
      name: rw-router
      params:
        master_accept_reads: "true"
        max_replication_lag: 3s
        max_slave_connections: "255"
        transaction_replay: "true"
        transaction_replay_attempts: "10"
        transaction_replay_timeout: 5s
      router: readwritesplit
    - listener:
        name: ""
        port: 3307
      name: rconn-master-router
      params:
        master_accept_reads: "true"
        max_replication_lag: 3s
        router_options: master
      router: readconnroute
    - listener:
        name: ""
        port: 3308
      name: rconn-slave-router
      params:
        max_replication_lag: 3s
        router_options: slave
      router: readconnroute
  maxScaleRef:
    name: mariadb-test-maxscale
    namespace: kube-system
  passwordSecretKeyRef:
    key: root-password
    name: mariadb
  port: 3306
  primaryService:
    type: ClusterIP
  replicas: 3
  replication:
    enabled: true
    primary:
      automaticFailover: true
      podIndex: 0
    probesEnabled: false
    replica:
      connectionRetries: 10
      connectionTimeout: 10s
      gtid: CurrentPos
      replPasswordSecretKeyRef:
        key: password
        name: mariadb
      syncTimeout: 10s
      waitPoint: AfterSync
    syncBinlog: true
  rootEmptyPassword: false
  rootPasswordSecretKeyRef:
    key: root-password
    name: mariadb
  secondaryService:
    type: ClusterIP
  service:
    type: ClusterIP
  serviceAccountName: mariadb-test
  storage:
    ephemeral: false
    resizeInUseVolumes: true
    size: 3Gi
    storageClassName: yj-silver24
    volumeClaimTemplate:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
      storageClassName: yj-silver24
    waitForVolumeResize: true
  tolerations:
  - effect: NoSchedule
    key: k8s.mariadb.com/ha
    operator: Exists
  username: test

The MaxScale object is:

apiVersion: k8s.mariadb.com/v1alpha1
kind: MaxScale
metadata:
  name: mariadb-test-maxscale
spec:
  admin:
    guiEnabled: true
    port: 8989
  auth:
    adminPasswordSecretKeyRef:
      key: password
      name: my-test
    adminUsername: mariadb-operator
    clientMaxConnections: 60
    clientPasswordSecretKeyRef:
      key: password
      name: my-test
    clientUsername: mariadb-test-maxscale-client
    deleteDefaultAdmin: true
    generate: true
    monitorMaxConnections: 60
    monitorPasswordSecretKeyRef:
      key: password
      name: my-test
    monitorUsername: mariadb-test-maxscale-monitor
    serverMaxConnections: 60
    serverPasswordSecretKeyRef:
      key: password
      name: my-test
    serverUsername: mariadb-test-maxscale-server
    syncMaxConnections: 60
    syncPasswordSecretKeyRef:
      key: password
      name: my-test
    syncUsername: mariadb-test-maxscale-sync
  config:
    sync:
      database: mysql
      interval: 5s
      timeout: 10s
    volumeClaimTemplate:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
      storageClassName: yj-silver24
  connection:
    port: 3306
    secretName: mxs-test-conn
  image: mariadb/maxscale:23.08
  kubernetesService:
    type: NodePort
  mariaDbRef:
    name: mariadb-test
    namespace: kube-system
    waitForIt: false
  monitor:
    cooperativeMonitoring: majority_of_all
    interval: 2s
    module: mariadbmon
    name: mariadbmon-monitor
    params:
      auto_failover: "true"
      auto_rejoin: "true"
      switchover_on_low_disk_space: "true"
  podSecurityContext:
    fsGroup: 996
  replicas: 2
  requeueInterval: 10s
  servers:
  - address: mariadb-test-0.mariadb-test-internal.kube-system.svc.cluster.local
    name: mariadb-test-0
    port: 3306
    protocol: MariaDBBackend
  - address: mariadb-test-1.mariadb-test-internal.kube-system.svc.cluster.local
    name: mariadb-test-1
    port: 3306
    protocol: MariaDBBackend
  - address: mariadb-test-2.mariadb-test-internal.kube-system.svc.cluster.local
    name: mariadb-test-2
    port: 3306
    protocol: MariaDBBackend
  serviceAccountName: mariadb-test-maxscale
  services:
  - listener:
      name: rw-listener
      params:
        connection_metadata: tx_isolation=auto
      port: 3306
      protocol: MariaDBProtocol
    name: rw-router
    params:
      master_accept_reads: "true"
      max_replication_lag: 3s
      max_slave_connections: "255"
      transaction_replay: "true"
      transaction_replay_attempts: "10"
      transaction_replay_timeout: 5s
    router: readwritesplit
  - listener:
      name: rconn-master-router-listener
      port: 3307
      protocol: MariaDBProtocol
    name: rconn-master-router
    params:
      master_accept_reads: "true"
      max_replication_lag: 3s
      router_options: master
    router: readconnroute
  - listener:
      name: rconn-slave-router-listener
      port: 3308
      protocol: MariaDBProtocol
    name: rconn-slave-router
    params:
      max_replication_lag: 3s
      router_options: slave
    router: readconnroute

Expected behaviour
See #586 (comment).

Steps to reproduce the bug

  1. Create a MariaDB object with MaxScale enabled.
  2. Modify the switchover_on_low_disk_space parameter in the MaxScale monitor configuration of the MariaDB object (for example with the patch sketched below).
  3. Observe that the operator does not propagate the change to the corresponding MaxScale object.
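
A minimal sketch of step 2, assuming the manifests above (kubectl patch also accepts a YAML payload for -p):

# flip the monitor parameter on the MariaDB resource
kubectl -n kube-system patch mariadb mariadb-test --type merge -p '
spec:
  maxScale:
    monitor:
      params:
        switchover_on_low_disk_space: "false"
'

After the operator reconciles, spec.monitor.params on the MaxScale object would be expected to reflect the new value, but it keeps the old one.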

Environment details:

  • Kubernetes version: v1.17.2
  • mariadb-operator version: 0.0.27
  • Install method: helm
  • Install flavor: custom


luohaha3123 added the bug label Apr 25, 2024
mmontes11 (Member) commented Apr 25, 2024

Hey there @luohaha3123! Thanks for reporting.

The embedded MaxScale inside MariaDB is limited and does not currently support updates:
https://github.com/mariadb-operator/mariadb-operator/blob/main/docs/MAXSCALE.md#maxscale-embedded-in-mariadb

This is intended to quickly provision an immutable MaxScale instance without having to create an extra resource. I encourage you to create a dedicated MaxScale resource as described here:
https://github.com/mariadb-operator/mariadb-operator/blob/main/docs/MAXSCALE.md#maxscale-cr
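
With a dedicated MaxScale resource, the monitor parameters can be updated on that object directly; a sketch, reusing the resource names from this issue:

kubectl -n kube-system patch maxscale mariadb-test-maxscale --type merge -p '
spec:
  monitor:
    params:
      switchover_on_low_disk_space: "false"
'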

On our side, we will make the spec.maxScale field of MariaDB immutable to improve the feedback to the user.
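
A hypothetical sketch of one way to express that, as a CEL transition rule on the CRD schema (the actual change may instead land in the operator's validation webhook):

# hypothetical excerpt of the MariaDB CRD schema
properties:
  maxScale:
    type: object
    x-kubernetes-validations:
    - rule: "self == oldSelf"
      message: "spec.maxScale is immutable; use a dedicated MaxScale resource for updates"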

Thanks!

mmontes11 added the good first issue, help wanted and maxscale labels Apr 25, 2024
luohaha3123 (Contributor, Author) commented

@mmontes11 Thanks for your response. I have a question. If I have an external MaxScale for a replication-based MariaDB setup, will both of them handle failover and other functionalities simultaneously?

mmontes11 (Member) commented

This would be a good question for the MaxScale team. I encourage you to join our Slack channel and ask in #help:
https://r.mariadb.com/join-community-slack


This issue is stale because it has been open 30 days with no activity.

github-actions bot added the stale label May 26, 2024
mmontes11 removed the stale label May 28, 2024

This issue is stale because it has been open 60 days with no activity.

github-actions bot added the stale label Jul 28, 2024

This issue was closed because it has been stalled for 30 days with no activity.

github-actions bot closed this as not planned Aug 27, 2024