
Cluster-level Kubernetes Logging with Honeycomb


Honeycomb's Kubernetes agent aggregates logs across a Kubernetes cluster. Stop managing log storage in all your clusters and start tracking down real problems.

To get started with Honeycomb, check out the Honeycomb general quickstart.

This README includes some basic information about getting started with the Kubernetes agent. Please see Honeycomb's Kubernetes documentation for more comprehensive documentation.

How it Works

honeycomb-agent runs as a DaemonSet on each node in a cluster. It reads container log files from the node's filesystem, augments them with metadata from the Kubernetes API, and ships them to Honeycomb so that you can see what's going on.
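The metadata-augmentation step can be sketched as follows. This is a schematic illustration in Python, not the agent's actual Go code, and the field names are illustrative rather than the agent's exact schema:

```python
import json

def enrich(log_line, pod_metadata):
    """Parse a JSON container log line and attach pod metadata.

    Sketch only: the real agent supports multiple parsers and fetches
    pod metadata from the Kubernetes API.
    """
    event = json.loads(log_line)
    # Attach cluster context so events can be filtered by pod and namespace.
    event["kubernetes.pod.name"] = pod_metadata["name"]
    event["kubernetes.pod.namespace"] = pod_metadata["namespace"]
    event["kubernetes.pod.labels"] = pod_metadata["labels"]
    return event

line = '{"status": 200, "path": "/healthz"}'
meta = {"name": "web-abc123", "namespace": "default", "labels": {"app": "web"}}
event = enrich(line, meta)
print(event["kubernetes.pod.name"])  # web-abc123
```

The enriched event keeps the original log fields alongside the Kubernetes context, which is what makes cluster-wide queries like "errors by namespace" possible.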

architecture diagram


The following steps will deploy the Honeycomb agent to each node in your cluster, and configure it to process logs from all pods.

  1. Grab your Honeycomb writekey from your account page, and create a Kubernetes secret from it:

    kubectl create secret generic honeycomb-writekey --from-literal=key=$WRITEKEY --namespace=kube-system
  2. Run the agent

    kubectl apply -f examples/quickstart.yaml

    This will do three things:

    • create a service account for the agent so that it can list pods from the API
    • create a minimal ConfigMap containing the agent's configuration
    • create a DaemonSet that runs the agent on each node

Production-Ready Use

Service-specific parsing

It's best if all of your containers output structured JSON logs. But that's not always realistic. In particular, you're likely to operate third-party services, such as proxies or databases, that don't log JSON.

You may also want to aggregate logs from specific services, rather than from everything that might be running in a cluster.

In order to get usefully structured data from services, you can use Kubernetes label selectors to describe how to parse logs for specific services.

For example, to parse logs from pods with the label app: nginx as NGINX logs, you'd specify the following configuration:

- labelSelector: "app=nginx"
  dataset: kubernetes-nginx
  parser: nginx
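If you run several third-party services, you can list one entry per service. The sketch below assumes the selector entries live under the agent's top-level `watchers` key, as in the quickstart config; the selectors and datasets here are illustrative, and you should check the agent's docs for the parsers it actually ships:

```yaml
watchers:
  - labelSelector: "app=nginx"
    dataset: kubernetes-nginx
    parser: nginx
  - labelSelector: "app=frontend"
    dataset: kubernetes-frontend
    parser: json        # for containers that already emit structured JSON logs
```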

Post-Processing Events

You might want to do additional munging of events before sending them to Honeycomb. For each label selector, you can specify a list of processors, which will be applied in order. For example:

- labelSelector: "app=nginx"
  parser: nginx
  dataset: kubernetes-nginx
  processors:
    - request_shape:          # Unpack the field "request": "GET /path HTTP/1.x"
        field: request        # into its constituent components

    - drop_field:             # Remove the "user_email" field from all events
        field: user_email

    - sample:                 # Sample events: only send one in 20
        type: static
        rate: 20
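A static sampler at rate 20 keeps, on average, one event in twenty and drops the rest; Honeycomb can then multiply counts back up by the sample rate at query time. A minimal sketch of the idea (not the agent's implementation):

```python
import random

def static_sampler(rate):
    """Return a predicate that keeps roughly one event in `rate`.

    Sketch only: illustrates static sampling, not the agent's code.
    """
    def should_keep(_event):
        return random.randint(1, rate) == 1
    return should_keep

keep = static_sampler(20)
kept = sum(1 for _ in range(10000) if keep(None))
# kept is roughly 10000 / 20 = 500
```

Because sampling happens before transmission, it reduces both network traffic and event volume, at the cost of some precision for rare events.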

See the docs for more examples.