Changing nextcloud.host vars results in crashloopbackoff #617

Open
Syntax3rror404 opened this issue Aug 4, 2024 · 1 comment
Labels
bug Something isn't working

Comments

@Syntax3rror404

Syntax3rror404 commented Aug 4, 2024

Describe your Issue

Changing the Helm value responsible for the ingress hostname and the Nextcloud internal host (nextcloud.host) results in a CrashLoopBackOff.

After changing nextcloud.host back to the old value, the deployment comes back online. But that doesn't help in this case, because I, and I'm sure many others, sometimes need to change the hostname, e.g. because of a migration to another network.

Changing

nextcloud:
  host: mycoolserver.example.com

to

nextcloud:
  host: mycoolserver.newnetwork.com

I need to change this because I want to use my other ingress controller, which can be accessed from the WAN.

Logs and Errors

CrashLoopBackOff in the nextcloud container inside the nextcloud pod. Nothing helpful in the log; it shows no indicator of this issue. It simply looks like an exit code 1.

Describe your Environment

  • Kubernetes distribution: v1.30.2+rke2r1

  • Helm Version: ArgoCD version 2.11.7

  • Helm Chart Version: 5.5.2

  • values.yaml:

---
{{- if .Values.spec.nextcloud.enabled }}
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: nextcloud
  namespace: {{ .Values.spec.argocdNamespace }}
  finalizers:
  - resources-finalizer.argocd.argoproj.io
spec:
  destination:
    namespace: nextcloud
    server: 'https://kubernetes.default.svc'
  project: default
  source:
    chart: nextcloud
    path: '.'
    repoURL: {{ .Values.spec.nextcloud.repoURL }}
    targetRevision: {{ .Values.spec.nextcloud.targetRevision }}
    helm:
      values: |
        nextcloud:
          host: {{ .Values.spec.nextcloud.host }}
          username: admin
          password: {{ .Values.spec.nextcloud.nextcloudAdminPW }}
          containerPort: 80
          datadir: /var/www/html/data
          configs:
            custom-overwrite.config.php: |-
              <?php
              $CONFIG = array (
                'overwrite.cli.url' => 'https://nextcloud.nextcloud.svc.cluster.local',
                'overwriteprotocol' => 'https',
              );
            proxy.config.php: |-
              <?php
              $CONFIG = array (
                'trusted_proxies' => array(
                  0 => '127.0.0.1',
                  1 => '10.0.0.0/8',
                ),
                'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR'),
              );


        cronjob:
          enabled: true

        persistence:
          enabled: true
          size: 150Gi
          storageClass: "{{ .Values.spec.nextcloud.storageClass }}"

        image:
          flavor: fpm

        nginx:
          enabled: true

        externalDatabase:
          enabled: true
          type: mysql
          host: nextcloud-mariadb.svc
          user: nextcloud
          password: "{{ .Values.spec.nextcloud.mariadbPW }}"
          database: nextcloud

        internalDatabase:
          enabled: false
          
        mariadb:
          enabled: true
          primary:
            persistence:
              enabled: true
              storageClass: "{{ .Values.spec.nextcloud.storageClass }}"
          auth:
            database: nextcloud
            username: nextcloud
            password: "{{ .Values.spec.nextcloud.mariadbPW }}"
            existingSecret: ""

        ingress:
          enabled: true
          labels: {}
          path: /
          pathType: Prefix
          className: nginx
          annotations:
            # cert-manager.io/cluster-issuer: letsencrypt-prod
            cert-manager.io/cluster-issuer: selfsigned-issuer
            nginx.ingress.kubernetes.io/enable-cors: "true"
            nginx.ingress.kubernetes.io/cors-allow-headers: "X-Forwarded-For"
            nginx.ingress.kubernetes.io/server-snippet: |-
              server_tokens off;
              proxy_hide_header X-Powered-By;
              rewrite ^/.well-known/webfinger /index.php/.well-known/webfinger last;
              rewrite ^/.well-known/nodeinfo /index.php/.well-known/nodeinfo last;
              rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
              rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json;
              location = /.well-known/carddav {
                return 301 $scheme://$host/remote.php/dav;
              }
              location = /.well-known/caldav {
                return 301 $scheme://$host/remote.php/dav;
              }
              location = /robots.txt {
                allow all;
                log_not_found off;
                access_log off;
              }
              location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
                deny all;
              }
              location ~ ^/(?:autotest|occ|issue|indie|db_|console) {
                deny all;
              }

          tls:
            - secretName: nextcloud-tls
              hosts:
                - {{ .Values.spec.nextcloud.host }}

  syncPolicy:
    automated:
      selfHeal: true
      allowEmpty: true
    syncOptions:
    - CreateNamespace=true
{{- end }}
@provokateurin
Member

I'm not entirely sure what is going on, but after looking at the usage of nextcloud.host, it seems we only really use it for NEXTCLOUD_TRUSTED_DOMAINS. Now the probes use the host too, so if the server is not picking up the new host, the probes will fail.
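For context, this is roughly what the chart renders into the Deployment (a sketch; the path, port, and structure are assumed from typical chart defaults and may differ in your version):

livenessProbe:
  httpGet:
    # status.php typically answers 400 instead of 200 when the request's
    # Host header is not in trusted_domains, so the probe keeps failing.
    path: /status.php
    port: 80
    httpHeaders:
      - name: Host
        value: mycoolserver.newnetwork.com  # rendered from nextcloud.host

Repeated liveness failures make the kubelet restart the container, which surfaces as CrashLoopBackOff even though the container log itself shows nothing beyond the exit code.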
Can you check your config.php and see if trusted_domains is set there? If so, it probably takes precedence over the environment variable and is not picking up the new value. If you remove it, it should work again ™️
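For example, a leftover block like this in config.php (hostname illustrative) would keep the old host pinned:

<?php
$CONFIG = array (
  // A hard-coded list here likely takes precedence over the
  // NEXTCLOUD_TRUSTED_DOMAINS environment variable derived from
  // nextcloud.host, so the new hostname never becomes trusted.
  'trusted_domains' =>
  array (
    0 => 'mycoolserver.example.com',
  ),
);

If it is set, occ config:system:delete trusted_domains should clear it so the value from the environment applies again.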

@provokateurin provokateurin added the bug Something isn't working label Aug 5, 2024