k8s/changes: minor typos and corrections
roobre committed Jan 31, 2022
1 parent dd4145d commit a7b082f
Showing 1 changed file with 5 additions and 5 deletions.
@@ -23,9 +23,9 @@ From version 3 onwards, New Relic's Kubernetes solution features a new architect

In this new version, the main component of the integration, the `newrelic-infrastructure` DaemonSet, is divided into three different components: `nrk8s-ksm`, `nrk8s-kubelet`, and `nrk8s-controlplane`, with the first being a Deployment and the next two being DaemonSets. This makes it easier to make decisions at scheduling and deployment time, rather than at runtime.

Moreover, we also changed the lifecycle of the scraping process. We went from a one-shot, short-lived process to a long-lived one, allowing it to leverage higher-level Kubernetes APIs such as Kubernetes informers, which provide built-in caching and watching of cluster objects. For this reason, each of the components has two containers, as sketched after the list below:

- 1. A container for the integration, responsible of collecting metrics.
+ 1. A container for the integration, responsible for collecting metrics.
2. A container with the New Relic Infrastructure Agent, which is used to send the metrics to the New Relic Platform.
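To make the two-container pattern concrete, here is a minimal sketch of a trimmed pod spec. The container names and image tags are illustrative assumptions, not values taken from the chart:

```yaml
# Minimal sketch of the two-container pattern described above.
# Names and images are illustrative assumptions, not the chart's actual values.
apiVersion: v1
kind: Pod
metadata:
  name: nrk8s-kubelet-example
spec:
  containers:
    - name: kubelet            # integration container: collects the metrics
      image: newrelic/nri-kubernetes:3
    - name: agent              # infrastructure agent: sends the metrics to New Relic
      image: newrelic/infrastructure:latest
```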

### Kube-state-metrics component [#nrk8s-ksm]
@@ -61,7 +61,7 @@ We built the current approach with the following scenarios in mind:

1. CP monitoring should work out of the box for those environments in which the CP is reachable, e.g. Kubeadm or even Minikube.

- 2. For setups where the CP can't be autodiscovered. For example, if it lives out of the cluster, we should provide a way for the user to specify their own endpoints.
+ 2. For setups where the CP cannot be autodiscovered. For example, if it lives out of the cluster, we should provide a way for the user to specify their own endpoints.

3. Failure to autodiscover shouldn't cause the deployment to fail, but failure to hit a manually defined endpoint should.

@@ -95,9 +95,9 @@ config:
auth: {}
```

- If `staticEndpoint` is set, the component will try to scrape it. If it fails, the integration will fail so there are no silent errors when manual endpoints are configured.
+ If `staticEndpoint` is set, the component will try to scrape it. If it cannot hit the endpoint, the integration will fail so there are no silent errors when manual endpoints are configured.
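For reference, a manually defined endpoint might look roughly like the following values excerpt. This is a sketch that assumes a `config.etcd` layout matching the excerpt above; the exact keys in the chart may differ:

```yaml
# Hedged sketch: a manually defined endpoint for the etcd component.
# Assumes a config.etcd section like the excerpt above; exact keys may differ.
config:
  etcd:
    staticEndpoint:
      url: https://my-etcd.example.com:2379  # hypothetical out-of-cluster endpoint
      insecureSkipVerify: true
      auth: {}                               # auth options elided
```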

- If `staticEndpoint` is not set, the component will iterate over the autodiscover entries looking for the first pod that matches the `selector` in the specified `namespace`, and optionally is running in the same node of the DaemonSet in `matchNode`. After a pod is discovered, the component probes, issuing an http `HEAD` request, the listed endpoints in order and scrapes the first successful probed one using the authorization type selected.
+ If `staticEndpoint` is not set, the component will iterate over the autodiscover entries looking for the first pod that matches the `selector` in the specified `namespace`, and optionally is running in the same node of the DaemonSet (if `matchNode` is set to `true`). After a pod is discovered, the component probes, issuing an http `HEAD` request, the listed endpoints in order and scrapes the first successful probed one using the authorization type selected.
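And a sketch of what an autodiscover entry could look like, using the fields named above (`selector`, `namespace`, `matchNode`); the selector value and overall schema are assumptions for illustration:

```yaml
# Hedged sketch of an autodiscover entry; schema and values are illustrative.
config:
  etcd:
    autodiscover:
      - selector: "k8s-app=etcd"        # label selector for the control plane pod (assumed)
        namespace: kube-system          # namespace to look for matching pods in
        matchNode: true                 # only match pods running on the same node
        endpoints:                      # probed in order with an HTTP HEAD request
          - url: https://localhost:4001
            insecureSkipVerify: true
            auth:
              type: bearer              # authorization type used when scraping
```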

While the config excerpt above is shown for the `etcd` component, the scraping logic is the same for the other components.

