
Centralized Logging with Stackdriver

Stackdriver logs are not only for Google Cloud Platform. You can ship your logs to Stackdriver from anywhere, and this docker-compose setup will help you achieve that.

What is Centralized Logging?

The main idea of Centralized Logging is to collect all log data across a system into one storage. By parsing, filtering and searching logs, you can monitor the health of the system from one place. With all the logs in one place, you gain a comprehensive source of truth about your system. Based on it you can search for patterns, dig for information and prepare system reports that can later be used in decision making. If you are not convinced, please check this reddit thread for some inspiration.

Why Stackdriver?

Centralized Logging can be provisioned on premises, mostly by utilizing the ELK stack or the emerging Kubernetes Logging. Maintaining your own logging stack comes with performance and cost issues. The most popular mitigation is reducing log data retention.

On the other hand, Centralized Logging as a Service, such as Sentry or Papertrail, can also be costly. How much? That depends on how much data you push in and what data retention you need.

With that in mind, Stackdriver Logging comes into the game. Its pricing plan guarantees a free allotment of the first 50 GiB per month per project and a 30-day retention period (last checked on August 12th, 2019).

Is 50 GiB per month a lot? Again, it depends. We are currently working on a project which generates at least 5 GiB of logs per day, but let's face it, your project really needs to explode to reach such amounts.

Pricing in Stackdriver kicks in when you want to automate monitoring based on the collected logs. The logs themselves come almost for free.

Requirements

Disclaimer: this package is for educational purposes; it is not a production-ready solution. To run it locally you will need to install:

  • Docker 18.06.0+

Installation

  1. Clone this repo
  2. Register account in GCP and create a new project
  3. Generate Service Account Key and place it into ./config/service-key.json
  4. In the docker-compose.yml file, set the environment variables STACKDRIVER_VM_ID and STACKDRIVER_ZONE_ID according to your needs
  5. Start Docker Compose: `docker-compose up`
  6. Open http://0.0.0.0:8080 to see the default Nginx page; logs from access_log should be sent to Stackdriver in the background
  7. Open your project in GCP and navigate to Logging tab
  8. From resources dropdown choose GCE VM Instance
  9. Your Nginx's logs should appear
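The wiring described in steps 4–6 can be sketched in a docker-compose.yml along these lines. This is a minimal illustration, not the repository's actual file: the `fluentd` build context, port, credentials path and the placeholder values for STACKDRIVER_VM_ID and STACKDRIVER_ZONE_ID are all assumptions.

```yaml
version: "3"
services:
  fluentd:
    build: ./fluentd  # assumption: an image with fluent-plugin-google-cloud installed
    volumes:
      # mount the Service Account Key generated in step 3
      - ./config/service-key.json:/etc/google/service-key.json:ro
    environment:
      - GOOGLE_APPLICATION_CREDENTIALS=/etc/google/service-key.json
      - STACKDRIVER_VM_ID=my-logging-host      # placeholder: any identifier you pick
      - STACKDRIVER_ZONE_ID=europe-west1-b     # placeholder: any zone label you pick
    ports:
      - "24224:24224"  # default Fluentd forward port

  nginx:
    image: nginx:alpine
    ports:
      - "8080:80"
    depends_on:
      - fluentd
    logging:
      driver: fluentd  # route the container's STDOUT/STDERR to Fluentd
      options:
        fluentd-address: localhost:24224
        tag: nginx
```

With this layout, `docker-compose up` starts Fluentd first, and every line Nginx writes to STDOUT/STDERR is handed to the Fluentd container via Docker's fluentd logging driver.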

Used technologies

It is always cool to read about the parts used to build a solution; here is the reference list:

How does it work

  1. Nginx Docker is configured to push logs to STDOUT/STDERR by default
  2. Docker Compose is configured to use Fluentd Logging Driver
  3. The Fluentd Docker container grabs all the logs exposed by the Logging Driver
  4. And forwards them via Fluentd's Google Cloud plugin
  5. In the plugin, Stackdriver Logging API is used to push logs to Stackdriver
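Steps 3–5 above can be sketched as a Fluentd configuration. This is an illustrative fragment, not the repository's actual config: the tag, project id and the exact plugin parameters used here are assumptions based on fluent-plugin-google-cloud's documented options.

```
<source>
  @type forward        # receive records from Docker's fluentd logging driver
  port 24224
</source>

<match nginx.**>
  @type google_cloud   # fluent-plugin-google-cloud output
  # we are not running on a GCE instance, so do not query the metadata server
  use_metadata_service false
  project_id my-gcp-project                 # assumption: your GCP project id
  vm_id "#{ENV['STACKDRIVER_VM_ID']}"       # identity reported to Stackdriver
  zone "#{ENV['STACKDRIVER_ZONE_ID']}"
</match>
```

Disabling the metadata service and supplying `vm_id` and `zone` by hand is what lets the plugin run outside GCP: the logs then show up in Stackdriver under the GCE VM Instance resource with the identity you chose.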

Please note that the official Stackdriver Logging Agent supports only GCP or AWS instances, and Fluentd's Google Cloud plugin doesn't document that pushing logs from outside those platforms is possible.

But the plugin has configuration options that make it possible. The Stackdriver Logging API is open, and there is a lot of movement in Knative, which states that Kubernetes clusters can push logs to Stackdriver no matter which cloud provider they are provisioned on.

Contributors

Adam Łukaszczyk

