
Trying to deploy newrelic-bundle and says it deployed, but is not #109

Closed
contd opened this issue May 19, 2021 · 3 comments · Fixed by #108

Comments


contd commented May 19, 2021

SUMMARY

I set up a Helm install task to install the New Relic client bundle, following the documentation for the helm command. Running helm install from the CLI with the same values/options works, but the Ansible helm task does not: the Ansible output reports the release as deployed with no errors, yet when I check the Kubernetes cluster none of the Pods or DaemonSets exist, as they do when helm is run from the CLI. I have checked every option and setting I can think of, but cannot figure out where else to look for errors or messages that might help track down the problem.

ISSUE TYPE
  • Bug Report
COMPONENT NAME

community.kubernetes.helm

ANSIBLE VERSION
ansible 2.10.5
  config file = None
  configured module search path = ['/Users/jason/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /Users/jason/.local/share/virtualenvs/ansible-hs-sfJ6i/lib/python3.9/site-packages/ansible
  executable location = /Users/jason/.local/share/virtualenvs/ansible-hs-sfJ6i/bin/ansible
  python version = 3.9.5 (default, May  4 2021, 03:36:27) [Clang 12.0.0 (clang-1200.0.32.29)]
CONFIGURATION
OS / ENVIRONMENT

Mac OS 11.3

STEPS TO REPRODUCE

Install the Ansible community.kubernetes collection, then run ansible-playbook with the YAML file below, passing the variables shown as {{xxx}} via -e.

---
- hosts: tag_service_k3s_server
  become: yes
  tasks:

    - name: Deploy new-relic client chart inside new-relic namespace
      community.kubernetes.helm:
        kubeconfig:  "{{artifacts_path}}/{{site_id}}.yaml"
        name: newrelic-bundle
        chart_ref: newrelic/nri-bundle
        release_namespace: default
        force: True
        wait: True
        replace: True
        update_repo_cache: True
        disable_hook: True
        values:
          global.licenseKey: "{{nr_license_key}}"
          global.cluster: "{{site_name}}"
          newrelic-infrastructure.privileged: True
          ksm.enabled: True
          prometheus.enabled: True
          kubeEvents.enabled: True
          logging.enabled: True
      delegate_to: 127.0.0.1
EXPECTED RESULTS

There should be 2 DaemonSets and 7 Pods running, like so (from the New Relic docs):

NAME                                                     DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/newrelic-bundle-newrelic-infrastructure   1         1         1       1            1           <none>          2m53s
daemonset.apps/newrelic-bundle-newrelic-logging          1         1         1       1            1           <none>          2m53s
NAME                                                          READY   STATUS      RESTARTS   AGE
pod/newrelic-bundle-kube-state-metrics-69ff8cfb74-rgjc5       1/1     Running     0          2m53s
pod/newrelic-bundle-newrelic-infrastructure-z8ddb             1/1     Running     0          2m53s
pod/newrelic-bundle-newrelic-logging-wp22p                    1/1     Running     0          2m53s
pod/newrelic-bundle-nri-kube-events-f9d5bb944-kcxxf           2/2     Running     0          2m53s
pod/newrelic-bundle-nri-metadata-injection-66d76c868b-xrcq8   1/1     Running     0          2m53s
pod/newrelic-bundle-nri-metadata-injection-job-rszw5          0/1     Completed   0          2m53s
pod/newrelic-bundle-nri-prometheus-569689b7cb-pnddg           1/1     Running     0          2m53s
ACTUAL RESULTS

The playbook runs the single task and reports the release as deployed. The verbose output for the task is below:

TASK [Deploy new-relic client chart inside new-relic namespace] ******************************************************************************************
task path: /Users/jason/rivendel/ansible/ansible-playbooks/newrelic.yaml:6
redirecting (type: action) community.kubernetes.helm to community.kubernetes.k8s_info
redirecting (type: action) community.kubernetes.helm to community.kubernetes.k8s_info
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: root
<127.0.0.1> EXEC /bin/sh -c 'echo ~root && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /var/root/.ansible/tmp `"&& mkdir "` echo /var/root/.ansible/tmp/ansible-tmp-1621433692.332244-67531-42120530889237 `" && echo ansible-tmp-1621433692.332244-67531-42120530889237="` echo /var/root/.ansible/tmp/ansible-tmp-1621433692.332244-67531-42120530889237 `" ) && sleep 0'
Using module file /Users/jason/.ansible/collections/ansible_collections/community/kubernetes/plugins/modules/helm.py
<127.0.0.1> PUT /Users/jason/.ansible/tmp/ansible-local-67498aqeslqvk/tmp0qi6fqib TO /private/var/root/.ansible/tmp/ansible-tmp-1621433692.332244-67531-42120530889237/AnsiballZ_helm.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /var/root/.ansible/tmp/ansible-tmp-1621433692.332244-67531-42120530889237/ /var/root/.ansible/tmp/ansible-tmp-1621433692.332244-67531-42120530889237/AnsiballZ_helm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/Users/jason/.local/share/virtualenvs/ansible-hs-sfJ6i/bin/python /var/root/.ansible/tmp/ansible-tmp-1621433692.332244-67531-42120530889237/AnsiballZ_helm.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /var/root/.ansible/tmp/ansible-tmp-1621433692.332244-67531-42120530889237/ > /dev/null 2>&1 && sleep 0'
changed: [ip-10-101-1-22.us-east-2.compute.internal] => {
    "changed": true,
    "command": "/usr/local/bin/helm install --wait --replace --no-hooks -f=/tmp/tmpfgpl0yid.yml newrelic-bundle newrelic/nri-bundle",
    "invocation": {
        "module_args": {
            "api_key": null,
            "atomic": false,
            "binary_path": null,
            "ca_cert": null,
            "chart_ref": "newrelic/nri-bundle",
            "chart_repo_url": null,
            "chart_version": null,
            "context": null,
            "create_namespace": false,
            "disable_hook": true,
            "force": true,
            "host": null,
            "kubeconfig": "/Users/jason/rivendel/terraform/artifacts/9a55bef6-e966-448e-93ff-5c49c389719c.yaml",
            "name": "newrelic-bundle",
            "purge": true,
            "release_name": "newrelic-bundle",
            "release_namespace": "default",
            "release_state": "present",
            "release_values": {
                "global.cluster": "test-new-relic",
                "global.licenseKey": "----",
                "ksm.enabled": true,
                "kubeEvents.enabled": true,
                "logging.enabled": true,
                "newrelic-infrastructure.privileged": true,
                "prometheus.enabled": true
            },
            "replace": true,
            "skip_crds": false,
            "update_repo_cache": true,
            "validate_certs": true,
            "values": {
                "global.cluster": "test-new-relic",
                "global.licenseKey": "----",
                "ksm.enabled": true,
                "kubeEvents.enabled": true,
                "logging.enabled": true,
                "newrelic-infrastructure.privileged": true,
                "prometheus.enabled": true
            },
            "values_files": [],
            "wait": true,
            "wait_timeout": null
        }
    },
    "status": {
        "app_version": "1.0",
        "chart": "nri-bundle-2.10.7",
        "name": "newrelic-bundle",
        "namespace": "default",
        "revision": "1",
        "status": "deployed",
        "updated": "2021-05-19 10:15:02.472801 -0400 EDT",
        "values": {
            "global.cluster": "test-new-relic",
            "global.licenseKey": "----",
            "ksm.enabled": true,
            "kubeEvents.enabled": true,
            "logging.enabled": true,
            "newrelic-infrastructure.privileged": true,
            "prometheus.enabled": true
        }
    },
    "stderr": "WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/jason/rivendel/terraform/artifacts/9a55bef6-e966-448e-93ff-5c49c389719c.yaml\nWARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /Users/jason/rivendel/terraform/artifacts/9a55bef6-e966-448e-93ff-5c49c389719c.yaml\nW0519 10:15:00.557882   67558 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition\nW0519 10:15:02.916553   67558 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration\nW0519 10:15:03.331322   67558 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration\n",
    "stderr_lines": [
        "WARNING: Kubernetes configuration file is group-readable. This is insecure. Location: /Users/jason/rivendel/terraform/artifacts/9a55bef6-e966-448e-93ff-5c49c389719c.yaml",
        "WARNING: Kubernetes configuration file is world-readable. This is insecure. Location: /Users/jason/rivendel/terraform/artifacts/9a55bef6-e966-448e-93ff-5c49c389719c.yaml",
        "W0519 10:15:00.557882   67558 warnings.go:70] apiextensions.k8s.io/v1beta1 CustomResourceDefinition is deprecated in v1.16+, unavailable in v1.22+; use apiextensions.k8s.io/v1 CustomResourceDefinition",
        "W0519 10:15:02.916553   67558 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration",
        "W0519 10:15:03.331322   67558 warnings.go:70] admissionregistration.k8s.io/v1beta1 MutatingWebhookConfiguration is deprecated in v1.16+, unavailable in v1.22+; use admissionregistration.k8s.io/v1 MutatingWebhookConfiguration"
    ],
    "stdout": "NAME: newrelic-bundle\nLAST DEPLOYED: Wed May 19 10:15:02 2021\nNAMESPACE: default\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n",
    "stdout_lines": [
        "NAME: newrelic-bundle",
        "LAST DEPLOYED: Wed May 19 10:15:02 2021",
        "NAMESPACE: default",
        "STATUS: deployed",
        "REVISION: 1",
        "TEST SUITE: None"
    ]
}
META: ran handlers
META: ran handlers

PLAY RECAP ***********************************************************************************************************************************************
ip-10-101-1-22.us-east-2.compute.internal : ok=2    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Akasurde commented May 19, 2021

@contd Thanks for reporting this issue. I think you need to pass values as nested dictionaries, like this:

- name: Deploy new-relic client chart inside new-relic namespace
  community.kubernetes.helm:
    kubeconfig: "{{artifacts_path}}/{{site_id}}.yaml"
    name: newrelic-bundle
    chart_ref: newrelic/nri-bundle
    release_namespace: default
    force: True
    wait: True
    replace: True
    update_repo_cache: True
    disable_hook: True
    values:
      global:
        licenseKey: "{{nr_license_key}}"
        cluster: "{{site_name}}"
      newrelic-infrastructure:
        privileged: True
      ksm:
        enabled: True
      prometheus:
        enabled: True
      kubeEvents:
        enabled: True
      logging:
        enabled: True
  delegate_to: 127.0.0.1

Let me know if this does not work for you. Thanks.
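[Editor's note] The reason the flat form fails: the module serializes the values dict to a temporary values file passed to helm with -f (the -f=/tmp/tmpfgpl0yid.yml in the debug output above), so a dotted key like "global.licenseKey" stays one literal top-level key instead of being expanded into nested settings, unlike --set on the CLI. A minimal Python sketch of the difference, using placeholder values:

```python
# Sketch: why dotted keys in the module's values dict are ignored by the chart.
# Written to a values file, they remain literal top-level keys.
flat = {"global.licenseKey": "XXX", "ksm.enabled": True}              # what the task sent
nested = {"global": {"licenseKey": "XXX"}, "ksm": {"enabled": True}}  # what charts expect

# A chart looking up .Values.global.licenseKey only finds it in the nested form.
print(flat.get("global"))                          # -> None (no such key)
print(nested.get("global", {}).get("licenseKey"))  # -> XXX
```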


contd commented May 19, 2021

Holy cow! YES!! It worked. Thank you so much!!!!

@Akasurde

@contd Cool. I will add this example to the helm docs so that the values format is clear. Thanks.
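[Editor's note] For larger value sets, the module's values_files parameter (visible in the module-argument dump above) is another option: keep the nested values in a standalone file and reference it from the task. A sketch, assuming a hypothetical newrelic-values.yaml next to the playbook:

```yaml
# newrelic-values.yaml (hypothetical) -- nested, as the chart expects:
#   global:
#     licenseKey: "XXX"
#     cluster: "my-cluster"

- name: Deploy new-relic client chart using a values file
  community.kubernetes.helm:
    kubeconfig: "{{artifacts_path}}/{{site_id}}.yaml"
    name: newrelic-bundle
    chart_ref: newrelic/nri-bundle
    release_namespace: default
    values_files:
      - newrelic-values.yaml
  delegate_to: 127.0.0.1
```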

@Akasurde Akasurde transferred this issue from ansible-collections/community.kubernetes May 19, 2021
Akasurde added a commit to Akasurde/kubernetes.core that referenced this issue May 19, 2021
Specifying complex values using helm module is documented.

Fixes: ansible-collections#109

Signed-off-by: Abhijeet Kasurde <akasurde@redhat.com>