---
title: Get started with Kubernetes
tags:
  - Integrations
  - Kubernetes integration
  - Installation
translate:
  - jp
  - kr
metaDescription: "New Relic's Kubernetes integration: How to install and activate the integration, and what data is reported."
redirects:
  - /docs/integrations/kubernetes-integration/installation/kubernetes-integration-install-configure
  - /docs/kubernetes-monitoring-integration-beta
  - /docs/kubernetes-integration-beta
  - /docs/kubernetes-integration
  - /docs/kubernetes-integration-new-relic-infrastructure
  - /docs/kubernetes-monitoring-integration
  - /docs/integrations/host-integrations/host-integrations-list/kubernetes-monitoring-integration
  - /docs/integrations/kubernetes-integration/installation/kubernetes-monitoring-integration
  - /docs/integrations/kubernetes-integration/installation/kubernetes-monitoring-installation
  - /docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration
  - /docs/integrations/kubernetes-integration/installation
signupBanner:
  text: Monitor and improve your entire stack. 100GB free. Forever.
---

import kubernetesPivotal from 'images/kubernetes_logo_pivotal.webp'

import kubernetesAks from 'images/kubernetes_logo_aks.webp'

import kubernetesOpenshift from 'images/kubernetes_logo_openshift.webp'

import kubernetesCke from 'images/kubernetes_logo_cke.webp'

import kubernetesEks from 'images/kubernetes_logo_eks.webp'

import pixieLiveDebugging from 'images/pixie_screenshot-full_live-debugging.webp'

import pixieServiceOtelMap from 'images/pixie_screenshot-full_service-otel-map.webp'

The New Relic Kubernetes integration gives you full observability into the health and performance of your environment by leveraging the New Relic infrastructure agent. The agent collects metrics from your cluster and runs alongside various other New Relic integrations, such as the Kubernetes events integration, the Prometheus agent, and the New Relic log plugin.

## Why use the guided install? [#why-guided-install]

To install our Kubernetes integration, we recommend following the guided install instructions here. This interactive installation tool works for servers, VMs, and unprivileged environments. Here are some advantages of using the guided install:

* It can provide either a Helm command with the required values filled in, or a plain manifest if you don't want to use Helm.
* It gives you control over which features are enabled and which data is collected.
* It also offers a quickstart option that includes some optional, pre-built resources, such as dashboards and alerts, alongside the Kubernetes integration so that you can gain instant visibility into your Kubernetes clusters.

For some environments, you may need (or prefer) to do a manual install instead of the guided install. We have docs for the following: [Manual install with Helm](/docs/kubernetes-pixie/kubernetes-integration/installation/install-kubernetes-integration-using-helm/), [Windows](/docs/kubernetes-pixie/kubernetes-integration/installation/install-version2-kubernetes-integration-windows/), and [EKS Fargate](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/install-fargate-integration/).
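
When you choose the Helm option, the guided install outputs a `helm upgrade --install` command against the `nri-bundle` chart with your values pre-filled. Here's a minimal sketch of what that command can look like; the license key and cluster name are placeholders, and the real command the installer generates includes additional `--set` flags for the features you select:

```shell
# Add the New Relic Helm repository (once per machine)
helm repo add newrelic https://helm-charts.newrelic.com
helm repo update

# Install the bundle into the recommended namespace.
# The license key and cluster name below are placeholders; copy the exact
# command the guided install generates for you.
helm upgrade --install newrelic-bundle newrelic/nri-bundle \
  --namespace newrelic --create-namespace \
  --set global.licenseKey=YOUR_NEW_RELIC_LICENSE_KEY \
  --set global.cluster=YOUR_CLUSTER_NAME
```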

## Before you start the guided install [#before-start]

Take a look at the following to make sure you're ready:

* If you're installing our integration on a managed cloud, take a look at the [preliminary notes for managed services and platforms](#cloud-platforms) before proceeding.
* If custom manifests have been used instead of Helm, you first need to remove the old installation using `kubectl delete -f previous-manifest-file.yml`, and then proceed through the guided installer again. This generates an updated set of manifests that you can deploy using `kubectl apply -f manifest-file.yml`.
* Make sure you're using a supported Kubernetes version, and check the preliminary notes for your managed services or platforms on our compatibility and requirements page.
* Make sure you have a New Relic account. You can set up an account for free. No credit card required.
* Make sure you allow the domains the installation pulls container images from: New Relic's Docker Hub repositories and Google's registry, `registry.k8s.io`. You may need to run a couple of commands to identify the additional Google registry domain to allow, because `registry.k8s.io` typically redirects to a regional registry domain (for example, `asia-northeast1-docker.pkg.dev`); see the sketch after this list.
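
One hedged way to see which registries your installation will pull from is to render the chart templates locally and list the image references. This sketch assumes you have Helm available and use the `newrelic/nri-bundle` chart; if you use the plain manifest instead, you can grep that file the same way:

```shell
# Render the chart locally and list the container images it references,
# so you know which registry domains to allow
helm repo add newrelic https://helm-charts.newrelic.com && helm repo update
helm template newrelic-bundle newrelic/nri-bundle --namespace newrelic | grep "image:" | sort -u
```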

## Start the guided install [#how-to-start]

We have some links below that will take you to the guided install that is right for you. After you start the installation process, you can use the tips in the remainder of this guide to help you make decisions about the various setup options.

| Guided install option | Description |
| --- | --- |
| [Guided install](https://one.newrelic.com/nr1-core?account=2498654&state=d1aae74b-0ad6-b0f3-093d-cc89ecf89234) | Use this if your New Relic organization does **not** use the [EU](/docs/using-new-relic/welcome-new-relic/get-started/our-eu-us-region-data-centers) data center, and you don't need the bonus dashboards and alerts from the quickstart. |
| [Guided install (EU)](https://one.eu.newrelic.com/nr1-core?account=2498654&state=d1aae74b-0ad6-b0f3-093d-cc89ecf89234) | Use this if your New Relic organization uses the [EU](/docs/using-new-relic/welcome-new-relic/get-started/our-eu-us-region-data-centers) data center, and you don't need the bonus dashboards and alerts from the quickstart. |
| [Guided install with quickstart](https://one.newrelic.com/launcher/catalog-pack-details.launcher/?pane=eyJuZXJkbGV0SWQiOiJjYXRhbG9nLXBhY2stZGV0YWlscy5jYXRhbG9nLXBhY2stY29udGVudHMiLCJxdWlja3N0YXJ0SWQiOiI4OGE3OWY1Mi04MWMxLTRmYTItOTlmOC0zY2I1YjAxMmYxNjAifQ==) | Use this if your New Relic organization does **not** use the [EU](/docs/using-new-relic/welcome-new-relic/get-started/our-eu-us-region-data-centers) data center, and you also want to install some bonus dashboards and alerts from the quickstart. |

## Navigating the Kubernetes integration guided install [#kubernetes-install-navigation]

Once you start the guided install, use the following information to help you make decisions about the configurations.

The steps that follow skip the preliminary steps for the quickstart. If you chose the guided install with the quickstart, just click through the **Confirm your Kubernetes quickstart installation** and **Installation plan** pages to reach the main guided install pages described below. On the **Configure the Kubernetes Integration** page, complete the following fields:
<table>
  <thead>
    <tr>
      <th style={{ width: "200px" }}>
        Field
      </th>
      <th>
        Description
      </th>
    </tr>
  </thead>
    <tbody>
    <tr>
      <td>
        We'll send your data to this account
      </td>
      <td>
        Choose the New Relic account that you want your Kubernetes data written to.
      </td>
    </tr>
    <tr>
      <td>
        Cluster name
      </td>
      <td>
        Cluster name is the name we use to tag your Kubernetes data so that you can filter for the data specific to the cluster where you're installing this integration. This is important if you connect multiple clusters to your New Relic account, so choose a name you'll recognize!
      </td>
    </tr>
    <tr>
      <td>
        Namespace for the integration
      </td>
      <td>
        Namespace for the integration is the namespace we will use to house the Kubernetes integration in your cluster. We recommend using the default namespace of `newrelic`.
      </td>
    </tr>
    </tbody>
</table>
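
The guided install (and the Helm command it generates with `--create-namespace`) can create this namespace for you. If you'd rather create it ahead of time, a minimal sketch, assuming the recommended default:

```shell
# Create the recommended namespace before installing
kubectl create namespace newrelic
```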
On the page **Select the additional data you want to gather**, choose the options that are right for you:
### Scrape Prometheus endpoints [#scrape-endpoints]

Select this option to install Prometheus in agent mode and collect metrics from the Prometheus endpoints exposed in your cluster. Expand the collapsers to see details about each option:

<CollapserGroup>
  <Collapser
    className="freq-link"
    id="scrape-all-except-ksm"
    title="Scrape all Prometheus endpoints except core Kubernetes system metrics (recommended)"
  >
    We recommend this configuration because other components of the Kubernetes integration, such as [`kube-state-metrics`, `newrelic-infrastructure`, and `nri-prometheus`](https://github.com/newrelic/helm-charts/tree/master/charts/nri-bundle#bundled-charts), already collect these metrics, so configuring Prometheus to exclude them reduces your data ingest costs by removing metric redundancies.

    This configuration will filter out any metrics prefixed with [`kube_`, `container_`, `machine_`, and `cadvisor_`](https://github.com/newrelic/newrelic-prometheus-configurator/blob/64af9453f4b20d4aab88a4d1afda55cf9a6e63c4/charts/newrelic-prometheus-agent/static/lowdatamodedefaults.yaml).

    Here's an example from `newrelic-prometheus-configurator/charts/newrelic-prometheus-agent/static/lowdatamodedefaults.yaml`:

    ```yaml
    low_data_mode:
    - action: drop
      source_labels: [__name__]
      regex: "kube_.+|container_.+|machine_.+|cadvisor_.+"
    ```
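
    If you install with Helm instead of the guided installer, this same filtering is usually toggled through the chart's low data mode setting. Here's a sketch of the values, assuming the `newrelic-prometheus-agent` subchart key and a `lowDataMode` flag; these names may differ by chart version, so verify them against the chart's `values.yaml`:

    ```yaml
    # values.yaml sketch (assumed key names; verify against your chart version)
    newrelic-prometheus-agent:
      enabled: true
      lowDataMode: true  # drops metrics prefixed with kube_, container_, machine_, cadvisor_
    ```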

  </Collapser>
  <Collapser
    className="freq-link"
    id="scrape-all-endpoints"
    title="Scrape all Prometheus endpoints"
  >
    Select **Scrape all Prometheus endpoints** if you prefer to preserve Prometheus’ metric naming conventions across all Prometheus metrics regardless of any metric redundancies.
  </Collapser>
  <Collapser
    className="freq-link"
    id="scrape-with-quickstarts"
    title="Scrape only Prometheus endpoints with quickstarts"
  >
    New Relic provides [quickstarts](https://newrelic.com/instant-observability/?category=prometheus&search=), which are pre-made dashboards, alerts, and entities for various services. Select this option to have Prometheus only scrape for [services which have a pre-made quickstart](https://github.com/newrelic/newrelic-prometheus-configurator/blob/main/charts/newrelic-prometheus-agent/values.yaml#L214-L228) and are ready to go for instant observability.

    Here's an example from `newrelic-prometheus-configurator/charts/newrelic-prometheus-agent/values.yaml` showing in the `app_values` field which services will be scraped for the Prometheus quickstart option:
    ```yaml
    kubernetes:
        # NewRelic provides a list of Dashboards, alerts and entities for several Services. The integrations_filter configuration
        # allows to scrape only the targets having this experience out of the box.
        # If integrations_filter is enabled, then the jobs scrape merely the targets having one of the specified labels matching
        # one of the values of app_values.
        # Under the hood, a relabel_configs with 'action=keep' are generated, consider it in case any custom extra_relabel_config is needed.
        integrations_filter:
          # -- enabling the integration filters, merely the targets having one of the specified labels matching
          #    one of the values of app_values are scraped. Each job configuration can override this default.
          enabled: true
          # -- source_labels used to fetch label values in the relabel config added by the integration filters configuration
          source_labels: ["app.kubernetes.io/name", "app.newrelic.io/name", "k8s-app"]
          # -- app_values used to create the regex used in the relabel config added by the integration filters configuration.
          # Note that a single regex will be created from this list, example: '.*(?i)(app1|app2|app3).*'
          app_values: ["redis", "traefik", "calico", "nginx", "coredns", "kube-dns", "etcd", "cockroachdb", "velero", "harbor", "argocd"]
    ```
  </Collapser>
  <Collapser
    className="freq-link"
    id="custom-app-labels"
    title="Scrape only the certain labels"
  >
    You'll find this option useful if you're an advanced user who has a good idea of which services you want Prometheus metrics from. Enter a comma-separated list of services you want Prometheus to scrape, and Prometheus will perform a wildcard match on the service name to find metrics from your desired endpoints.

    This option will *only* provide metrics from the services that match the submitted list, so be careful to validate the entry for correctness. To learn more about custom app labels, see [Advanced configuration for the Prometheus agent](/docs/infrastructure/prometheus-integrations/install-configure-prometheus-agent/advanced-configuration/#enable-disable-integrations).

    The services you add to the submitted list will overwrite the data in `app_values` below, and Prometheus will **only** scrape metrics from those services.

    Here is an example from `newrelic-prometheus-configurator/charts/newrelic-prometheus-agent/values.yaml`:

    ```yaml
      kubernetes:
        # NewRelic provides a list of Dashboards, alerts and entities for several Services. The integrations_filter configuration
        # allows to scrape only the targets having this experience out of the box.
        # If integrations_filter is enabled, then the jobs scrape merely the targets having one of the specified labels matching
        # one of the values of app_values.
        # Under the hood, a relabel_configs with 'action=keep' are generated, consider it in case any custom extra_relabel_config is needed.
        integrations_filter:
          # -- enabling the integration filters, merely the targets having one of the specified labels matching
          #    one of the values of app_values are scraped. Each job configuration can override this default.
          enabled: true
          # -- source_labels used to fetch label values in the relabel config added by the integration filters configuration
          source_labels: ["app.kubernetes.io/name", "app.newrelic.io/name", "k8s-app"]
          # -- app_values used to create the regex used in the relabel config added by the integration filters configuration.
          # Note that a single regex will be created from this list, example: '.*(?i)(app1|app2|app3).*'
          app_values: ["redis", "traefik", "calico", "nginx", "coredns", "kube-dns", "etcd", "cockroachdb", "velero", "harbor", "argocd"]
    ```
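
    Following the structure shown above, an override that scrapes only your own services might look like the sketch below. The service names are examples, and how you nest this block depends on how you pass values to the chart:

    ```yaml
    # Sketch: scrape only targets whose labels match these values
    kubernetes:
      integrations_filter:
        enabled: true
        source_labels: ["app.kubernetes.io/name", "app.newrelic.io/name", "k8s-app"]
        app_values: ["my-api", "my-worker"]
    ```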

  </Collapser>
</CollapserGroup>

### Gather log data [#gather-logs]

<CollapserGroup>
  <Collapser
  className="freq-link"
  id="full-enrichment"
  title="Forward all logs with full enrichment"
  >
    If you prefer more robust data, select this option to fully enrich your logs by adding label and annotation data to your log data.

    Here's an example of enriched log data:

    ```json
    [
      {
        "cluster_name": "api-test",
        "kubernetes": {
          "annotations": {
            "kubernetes.io/psp": "eks.privileged"
          },
          "container_hash": "fryckbos/test@sha256:5b098eaf3c7d5b3585eb10cebee63665b6208bea31ef31a3f0856c5ffdda644b",
          "container_image": "fryckbos/test:latest",
          "container_name": "newrelic-logging",
          "docker_id": "134e1daf63761baa15e035b08b7aea04518a0f0e50af4215131a50c6a379a072",
          "host": "ip-192-168-17-123.ec2.internal",
          "labels": {
            "app": "newrelic-logging",
            "app.kubernetes.io/name": "newrelic-logging",
            "controller-revision-hash": "84db95db86",
            "pod-template-generation": "1",
            "release": "nri-bundle"
          },
          "namespace_name": "nrlogs",
          "pod_id": "54556e3e-719c-46b5-af69-020b75d69bf1",
          "pod_name": "nri-bundle-newrelic-logging-jxnbj"
        },
        "message": "[2021/09/14 12:30:49] [ info] [engine] started (pid=1)\n",
        "plugin": {
          "source": "kubernetes",
          "type": "fluent-bit",
          "version": "1.8.1"
        },
        "stream": "stderr",
        "time": "2021-09-14T12:30:49.138824971Z",
        "timestamp": 1631622649138
      }
    ]
    ```
  </Collapser>
  <Collapser
  className="freq-link"
  id="min-enrichment"
  title="Forward all logs with minimal enrichment (low data mode)"
  >
    If you want to prioritize data ingest costs, you can choose to gather log data with minimal enrichment. This option drops labels and annotations from your logs and only keeps standard Kubernetes log data, such as the name of the cluster, container, namespace, and pod, along with the message and timestamp.

    In minimal enrichment, only the following fields are retained:

    ```yaml
    [FILTER]
        Name           record_modifier
        Match          *
        Record         cluster_name ${CLUSTER_NAME}
        Allowlist_key  container_name
        Allowlist_key  namespace_name
        Allowlist_key  pod_name
        Allowlist_key  stream
        Allowlist_key  message
        Allowlist_key  log
    ```

    Here's an example of minimal log enrichment:
    ```json
    [
      {
        "cluster_name": "api-test",
        "container_name": "newrelic-logging",
        "namespace_name": "nrlogs",
        "pod_name": "nri-bundle-newrelic-logging-jxnbj",
        "message": "[2021/09/14 12:30:49] [ info] [engine] started (pid=1)\n",
        "stream": "stderr",
        "timestamp": 1631622649138
      }
    ]
    ```
</Collapser>

</CollapserGroup>

### Enable service-level insights, full-body requests, and application profiles through Pixie [#enable-pixie]

[Pixie](https://docs.px.dev/about-pixie/what-is-pixie/) is an open source observability tool for Kubernetes applications that uses eBPF to automatically collect telemetry data. If you don't have Pixie installed on your cluster, but want to leverage Pixie's powerful telemetry data collection and visualization on the [New Relic platform](https://docs.newrelic.com/docs/kubernetes-pixie/auto-telemetry-pixie/get-started-auto-telemetry-pixie/), check **Enable service-level insights, full-body requests, and application profiles through Pixie**.

If you're already using Community Cloud, select **Community Cloud hosted Pixie is already running on this cluster**. Keep the following in mind about the different ways [Pixie can be hosted](https://docs.px.dev/installing-pixie/install-guides/#title). New Relic provides a different level of integration support for each Pixie hosting option.

<CollapserGroup>
  <Collapser
    className="freq-link"
    id="community-cloud-pixie"
    title="Community Cloud Pixie"
  >
    If you're already leveraging Pixie's Community Cloud, you can provide an API key to connect Pixie to New Relic. This approach will embed Pixie's live UI into your New Relic account for easy access (via Pixie's Live Debugging tool), as well as write Pixie data into New Relic through the New Relic OpenTelemetry endpoint.

    <img
      title="service graph in live debugger"
      alt="service-graph"
      src={pixieLiveDebugging}
    />
  </Collapser>
  <Collapser
    className="freq-link"
    id="self-hosted-pixie"
    title="Self-hosted Pixie"
  >
    If you're using Pixie with a self-hosted Pixie Cloud, you can also connect Pixie to New Relic. This approach will enable the export of Pixie telemetry data into New Relic via the OpenTelemetry endpoint for long-term data retention and visibility. Unfortunately, if you're self-hosting your Pixie Cloud, New Relic does not support embedding Pixie's Live UI.

    If you are self-hosting Pixie Cloud and would like to enable the export of Pixie telemetry data into New Relic, simply enable Pixie in the Kubernetes integration without checking the **Community Cloud hosted Pixie** option. The Kubernetes integration will detect that Pixie is running in your cluster and enable the data export for instant data visibility and insight.

    <img
      title="The OpenTelemetry **Service map** view shows helps visualize your application's dependencies."
      alt="The OpenTelemetry **Service map** view shows helps visualize your application's dependencies."
      src={pixieServiceOtelMap}
    />
  </Collapser>
</CollapserGroup>
Finalize the Kubernetes installation setup by choosing one of the following:
* Run the CLI command
* Use the Helm chart
* Use the manifest
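
Whichever option you pick, once the installation finishes you can confirm the integration's pods came up. A minimal check, assuming the default `newrelic` namespace:

```shell
# All integration pods should reach Running status
kubectl get pods -n newrelic
```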

## Preliminary notes for your managed services or platforms [#cloud-platforms]

Before starting our guided install, check out these notes for your managed services or platforms:

<CollapserGroup>

<Collapser className="freq-link" id="install-amazon-eks" title={<><img src={kubernetesEks} alt="EKS" style={{ verticalAlign: 'middle' }}/> Amazon EKS / EKS Anywhere / EKS Anywhere on bare metal</>}>

The Kubernetes integration only monitors worker nodes in Amazon EKS, as Amazon abstracts the management of master nodes away from the Kubernetes platform.

Before using our [guided install](https://one.newrelic.com/nr1-core?state=51fbbd48-c8ca-ead9-bb90-af96e18d82a7) to deploy the Kubernetes integration in Amazon EKS, make sure to install `eksctl`, the [command line tool](https://docs.aws.amazon.com/eks/latest/userguide/eksctl.html) for managing Kubernetes clusters on Amazon EKS.
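
As an optional check that `eksctl` is installed and can see your clusters, you might run something like the following; the region flag is an example, so use your own:

```shell
# Confirm eksctl is available and list reachable EKS clusters
eksctl version
eksctl get cluster --region us-east-1
```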

</Collapser>

<Collapser className="freq-link" id="install-amazon-eks-fargate" title={<><img src={kubernetesEks} alt="EKS" style={{ verticalAlign: 'middle' }}/> Amazon EKS Fargate</>}>

Installation on EKS Fargate clusters requires dedicated steps, which are detailed in our [EKS Fargate installation docs](/docs/integrations/kubernetes-integration/installation/install-fargate-integration).

</Collapser>

<Collapser className="freq-link" id="install-google-kubernetes-engine" title={<><img src={kubernetesCke} alt="CKE" style={{ verticalAlign: 'middle' }}/> Google Kubernetes Engine (GKE)</>}>

The Kubernetes integration only monitors worker nodes in GKE as Google abstracts the management of master nodes away from the Kubernetes platform.

Before starting our [guided install](https://one.newrelic.com/nr1-core?state=51fbbd48-c8ca-ead9-bb90-af96e18d82a7) to deploy the Kubernetes integration on GKE, ensure you have sufficient permissions:

1. Go to [console.cloud.google.com/iam-admin/iam](https://console.cloud.google.com/iam-admin/iam) and find your username.
2. Click **edit**.
3. Ensure you have permissions to create `Roles` and `ClusterRoles`: If you are not sure, add the **Kubernetes Engine Cluster Admin** role. If you cannot edit your user role, ask the owner of the GCP project to give you the necessary permissions.
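
As an optional sanity check, you can ask Kubernetes directly whether your current credentials can create these resources:

```shell
# Both commands should print "yes"
kubectl auth can-i create roles
kubectl auth can-i create clusterroles
```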

</Collapser>

<Collapser className="freq-link" id="install-openshift-container-platform" title={<><img src={kubernetesOpenshift} alt="OpenShift" style={{ verticalAlign: 'middle' }}/> OpenShift container platform</>}>

To deploy the Kubernetes integration with [OpenShift](https://learn.openshift.com):

1. Add the service accounts used by the integration to your privileged [Security Context Constraints](https://docs.openshift.com/enterprise/3.0/admin_guide/manage_scc.html):

   ```
   oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:<release_name>-newrelic-infrastructure
   oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:<release_name>-newrelic-infrastructure-controlplane
   oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:<release_name>-kube-state-metrics
   oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:<release_name>-newrelic-logging
   oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:<release_name>-nri-kube-events
   oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:<release_name>-nri-metadata-injection-admission
   oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:<release_name>-nrk8s-controlplane
   oc adm policy add-scc-to-user privileged system:serviceaccount:<namespace>:default
   ```
   
   <Callout variant="tip">
    The default `release_name` provided by the installer is `newrelic-bundle`. The default `namespace` is `newrelic`.
   </Callout>
   
2. Complete the steps in our [guided install](https://one.newrelic.com/nr1-core?state=51fbbd48-c8ca-ead9-bb90-af96e18d82a7).
3. If you're using signed certificates, make sure they are properly configured by using the following variables in the `DaemonSet` portion of your manifest, to set the `.pem` file:
  ```yaml
  env:
    - name: NRIA_CA_BUNDLE_DIR
      value: YOUR_CA_BUNDLE_DIR
    - name: NRIA_CA_BUNDLE_FILE
      value: YOUR_CA_BUNDLE_NAME
  ```
4. Set your YAML key path to `spec.template.spec.containers.name.env`. 
5. Save your changes.
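
To confirm step 1 took effect, you can check that the service accounts now appear in the privileged Security Context Constraint. This is a sketch, assuming the default `newrelic` namespace and `newrelic-bundle` release name:

```shell
# List the users granted the privileged SCC and filter for the integration's service accounts
oc get scc privileged -o yaml | grep newrelic
```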

</Collapser>

<Collapser className="freq-link" id="install-azure-aks" title={<><img src={kubernetesAks} alt="AKS" style={{ verticalAlign: 'middle' }}/> Azure Kubernetes Service (AKS)</>}>

The Kubernetes integration only monitors worker nodes in Azure Kubernetes Service, as Azure abstracts the management of master nodes away from the Kubernetes platform.
</Collapser>
</CollapserGroup>

## Use your Kubernetes data

Learn more about: