
Datadog Flask POC

Features

  • APM & Profiling enabled
  • Trace enabled
  • Metrics:
    • Custom StatsDMetrics
    • Prometheus (openmetrics) metrics
  • Logging: application and/or container logs

Known Issues

  • StatsD Metrics:
    • type "set" set value to 1 (instead of metric value)
    • type "timer" replacement
  • Logging
    • Container logs
      • 💔 all logs are reported as info
      • 💔 stacktraces are not grouped

Run application

$ DD_API_KEY=xxx docker-compose up

A locust service is automatically started on launch to generate traffic on the application.

View/Monitor locust run at http://localhost:8089/

To disable this service, comment out the locust service in the docker-compose file.

How does it work?

Use the Datadog agent

Run a datadog agent instance with the following env variables:

# datadog configuration
DD_SERVICE=datadog-agent
DD_SITE=datadoghq.eu
DD_API_KEY=<YOUR_KEY>

# application configuration
DD_VERSION=0.1
DD_ENV=dev

Run your Flask application with the following env variables:

DD_VERSION=0.1
DD_ENV=dev
DD_SERVICE=testapi
DD_AGENT_HOST=<datadog agent url>

Use ddtrace to run the application

$ ddtrace-run gunicorn --name="testapi" --bind=0.0.0.0:5000 testapi:app

💡 See docker-compose for implementation details.

APM, Trace, Profiling and Error Tracking

APM, traces, profiling, and error tracking are enabled with environment variables set on the datadog-agent.

DD_APM_ENABLED=true
DD_APM_NON_LOCAL_TRAFFIC=true
DD_TRACE_ENABLED=true
DD_TRACE_CLI_ENABLED=true
DD_PROFILING_ENABLED=true
DD_PROCESS_AGENT_ENABLED=true

  • APM and profiling are reported at https://<datadog_website>/apm/services?env=dev
  • Traces are reported at https://<datadog_website>/apm/traces?query=env
  • Application errors are reported at https://<datadog_website>/apm/error-tracking (e.g. errors raised by the /server_error/ endpoint)
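
For reference, a minimal sketch of what custom spans and the error endpoint could look like in the Flask application. This is illustrative code only: ddtrace-run already auto-instruments Flask, and the route and span names (other than /server_error/) are assumptions, not taken from the repository.

# Sketch only: ddtrace-run auto-instruments Flask; this shows how extra
# spans and errors could be added explicitly. Names other than
# /server_error/ are illustrative.
from ddtrace import tracer
from flask import Flask

app = Flask(__name__)

@app.route("/traced_work/")
def traced_work():
    # Custom span, reported under the service defined by DD_SERVICE.
    with tracer.trace("testapi.heavy_computation"):
        result = sum(i * i for i in range(10_000))
    return {"result": result}

@app.route("/server_error/")
def server_error():
    # Unhandled exceptions surface in APM traces and in Error Tracking.
    raise RuntimeError("intentional failure for the error-tracking demo")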

Statsd metrics

Prerequisite: open a port on the datadog-agent to receive statsd metrics (see the datadog-agent ports in docker-compose.yaml).

Add the following environment variables to the datadog-agent:

DD_DOGSTATSD_NON_LOCAL_TRAFFIC=true
DD_DOGSTATSD_ORIGIN_DETECTION=true

Gunicorn Metrics with statsd

Run the Flask application with the gunicorn argument --statsd-host=datadog-agent:<agent port> to send gunicorn metrics to the datadog-agent.
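
If you prefer to keep these settings out of the command line, gunicorn also accepts them from a Python config file. The sketch below is an assumption: the DogStatsD default port 8125 and the datadog-agent host name should be adjusted to match the ports exposed in docker-compose.

# gunicorn.conf.py -- sketch only; host name and port 8125 (DogStatsD
# default) are assumptions, adjust to match docker-compose.
bind = "0.0.0.0:5000"
workers = 2
proc_name = "testapi"                # equivalent of --name
statsd_host = "datadog-agent:8125"   # equivalent of --statsd-host
statsd_prefix = "testapi"            # optional prefix for gunicorn metrics

Start it with gunicorn -c gunicorn.conf.py testapi:app (optionally still wrapped in ddtrace-run, as above).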

Custom statsd Metrics

In Flask application:

  • Configure the application to send statsd metrics to the datadog-agent (see testapi/datadog_utils/metrics_statsd.py)
  • Use datadog.statsd to send metrics (see the /sdmetrics... endpoints in the application)

In Datadog, these metrics are reported with service and env tags matching the DD_SERVICE and DD_ENV environment variables.
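
A minimal sketch of what the statsd setup and usage might look like (the actual implementation lives in testapi/datadog_utils/metrics_statsd.py; the host, port, and metric names below are assumptions):

import os
from datadog import initialize, statsd

# Point the DogStatsD client at the agent (host/port values are assumptions).
initialize(
    statsd_host=os.getenv("DD_AGENT_HOST", "datadog-agent"),
    statsd_port=int(os.getenv("DD_DOGSTATSD_PORT", "8125")),
)

# Example usage inside a request handler:
statsd.increment("testapi.sdmetrics.hits", tags=["endpoint:/sdmetrics"])
statsd.gauge("testapi.sdmetrics.queue_size", 42)
statsd.histogram("testapi.sdmetrics.payload_bytes", 1024)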

Prometheus (openmetrics) metrics

  1. Configure the application to build and expose a /metrics endpoint (see testapi/datadog_utils/prometheus_metrics); a sketch of this step follows the list.
     • 💡 Example of a manual metric increment on the /prometheus_counter_inc/<count_type>/<value> endpoint
  2. Configure metrics in the datadog-agent to specify the exhaustive list of metrics to send to Datadog. See agent_conf/prometheus.d/conf.yaml
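
As an illustration of step 1, a /metrics endpoint built with prometheus_client could look like the sketch below. The repository's real implementation is in testapi/datadog_utils/prometheus_metrics; only the metric names mirror the ones listed further down, the rest is assumed.

# Sketch of a /metrics endpoint with prometheus_client (illustrative,
# not copied from the repository).
from flask import Flask, Response
from prometheus_client import Counter, Histogram, generate_latest, CONTENT_TYPE_LATEST

app = Flask(__name__)

REQUEST_COUNT = Counter("request_count", "Total HTTP requests", ["endpoint"])
REQUEST_LATENCY = Histogram("request_latency_seconds", "Request latency", ["endpoint"])
CUSTOM_METRIC = Counter("custom_metric", "Manually incremented counter", ["count_type"])

@app.route("/prometheus_counter_inc/<count_type>/<int:value>")
def prometheus_counter_inc(count_type, value):
    # Manual increment, matching the example endpoint mentioned above.
    with REQUEST_LATENCY.labels(endpoint="/prometheus_counter_inc").time():
        CUSTOM_METRIC.labels(count_type=count_type).inc(value)
        REQUEST_COUNT.labels(endpoint="/prometheus_counter_inc").inc()
    return {"incremented": value}

@app.route("/metrics")
def metrics():
    # Scraped by the datadog-agent openmetrics check.
    return Response(generate_latest(), mimetype=CONTENT_TYPE_LATEST)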

Metrics can be found in the Datadog metrics explorer at https://<datadog_website>/metric/explorer. In the current application, the following metrics are generated:

  • TESTAPI.custom_metric
  • TESTAPI.request_count
  • TESTAPI.request_latency_seconds

Notes: the TESTAPI. prefix comes from the datadog-agent openmetrics conf.yaml file. A service and env tag are automatically added to every metric sent by the agent (the env tag is added automatically by default; the service tag comes from the openmetrics conf.yaml file).

Logging

View logs at https://<datadog_website>/logs/livetail

Explicit Python logs

The current version of the application sends application logs to Datadog. ⚠️ This option only sends logs generated explicitly in application code.

  • The Flask application's explicit logs are written to a JSON file (see application logging with a FileHandler in /testapi/datadog_utils/logger)
  • The JSON log file is mounted on the datadog-agent (see the datadog-agent volumes in docker-compose)
  • The datadog-agent reads the file and sends the logs to the Datadog website (see ./datadog-agent-python-logs-conf.yml)

Note: see environment/datadog-agent.env for the environment variables required to send logs.

To test logging, see the /logging/<level> endpoint of the provided application.
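
A rough sketch of what the JSON FileHandler setup could look like (the actual implementation is in /testapi/datadog_utils/logger; the formatter and the log file path below are assumptions):

import json
import logging

class JsonFormatter(logging.Formatter):
    """Serialize each log record as one JSON object per line."""
    def format(self, record):
        payload = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            payload["stack_trace"] = self.formatException(record.exc_info)
        return json.dumps(payload)

def configure_logger(path="/var/log/testapi/testapi.json"):
    # File is tailed by the datadog-agent through the mounted volume.
    handler = logging.FileHandler(path)
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger("testapi")
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)
    return logger

Writing one JSON object per line keeps the file straightforward for the agent to tail and parse.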

Container logs

Another option is to send all application container logs to Datadog. This option reports all logs generated by the application container (including traces, etc.). Gathering such logs is normally done through whatever orchestrator system is in use, but you can do it by mounting the Docker socket for testing purposes. This represents a security risk and should not be done in production.

💡 To report application traces, the tracing module (see above) seems better suited than this solution.

In datadog-agent.env

DD_LOGS_CONFIG_CONTAINER_COLLECT_ALL=true

In docker-compose

      # disable OPTION1 mounts.
      # LOGGING[OPTION2]: enable for logging based on container autodiscovery
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /proc/:/host/proc/:ro
      - /sys/fs/cgroup/:/host/sys/fs/cgroup:ro
      - /etc/passwd:/etc/passwd:ro

⚠️ The following issues are encountered with this option:

  • All logs are reported with level=INFO
  • Multi-line logs are not grouped (stacktraces are reported line by line)

