Conversation

@jaipaljadeja
Member

No description provided.


coderabbitai bot commented Mar 19, 2025

📝 Walkthrough

The pull request introduces several new configuration files, updates package metadata, and enhances telemetry integration across multiple projects. A new change file for the @apibara/indexer package marks it as a prerelease, reflecting the switch to creating the tracer and metrics at runtime. Other updates include renames and dependency removals in example package files, the addition of telemetry and instrumentation configurations (including Docker Compose, OpenTelemetry Collector, Grafana dashboards, and Prometheus changes), and modifications to the indexer implementation so that it records metrics during operation. CI workflows are also updated to exclude additional directories.

Changes

  • change/@apibara-indexer-*.json — New JSON file for @apibara/indexer defining prerelease status, including comment, package name, contact email, and dependent change type.
  • examples/cli-drizzle/package.json — Package name updated from "example-cli" to "example-cli-drizzle"; dependencies @rollup/plugin-typescript and rolldown removed.
  • examples/cli-instrumentation/apibara.config.ts — New configuration file exporting a config object with a rolldownConfig specifying external dependencies.
  • examples/cli-instrumentation/indexers/starknet.indexer.ts — New indexer implementation for USDC transfers on Starknet, featuring runtime configuration and logging of block transformations.
  • examples/cli-instrumentation/instrumentation.ts — New file implementing OpenTelemetry instrumentation; exports functions for registering telemetry (register) and creating a logger.
  • examples/cli-instrumentation/otel/docker-compose.yml — New Docker Compose configuration for monitoring with services for Prometheus, Grafana, and the OpenTelemetry Collector, including custom network and volume settings.
  • examples/cli-instrumentation/otel/grafana/provisioning/dashboards/dashboards.yml, examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json, examples/cli-instrumentation/otel/grafana/provisioning/datasources/prometheus.yml — New Grafana provisioning files: dashboard provider config, indexer dashboard configuration, and Prometheus datasource definition.
  • examples/cli-instrumentation/otel/prometheus/prometheus.yml — Scrape interval changed from 15s to 10s in the Prometheus settings.
  • examples/cli-instrumentation/package.json — New package.json for the example-cli-instrumentation project, including project metadata, scripts, and dependencies for Apibara and OpenTelemetry.
  • examples/cli-instrumentation/tsconfig.json — New TypeScript configuration file with compiler options, module resolution settings, and include/exclude patterns.
  • packages/indexer/src/indexer.ts, packages/indexer/src/otel.ts — Added createTracer() and createIndexerMetrics() functions; the indexer now records current block numbers, processed blocks, and reorganization events.
  • .github/workflows/build.yml — CI workflow updated so build and test commands exclude additional directories (./examples/cli-drizzle, ./examples/cli-instrumentation, and ./examples/cli-js).
  • examples/cli-instrumentation/otel/opentelemetry-collector/config.yml — New OpenTelemetry Collector configuration file defining receivers, batch processors, exporters (debug and Prometheus remote write), and pipelines for metrics and traces.

Sequence Diagram(s)

sequenceDiagram
    participant Indexer
    participant Metrics as "Indexer Metrics"
    participant Tracer as "Tracer"

    Indexer->>Metrics: createIndexerMetrics()
    Indexer->>Tracer: createTracer()
    Indexer->>Metrics: Record current block gauge (block number)
    Indexer->>Metrics: Increment processed block counter
    Note right of Metrics: On receiving an "invalidate" message,
    Metrics->>Metrics: Increment reorg counter
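The flow in the diagram above can be sketched in plain TypeScript. This is a hypothetical, simplified model of the message loop; plain counters stand in for the OpenTelemetry gauge and counter instruments, and all names are illustrative rather than taken from the actual apibara source.

```typescript
// Hypothetical, simplified model of the run loop in the diagram above.
// Plain fields stand in for the OpenTelemetry gauge/counter instruments.
type Message =
  | { kind: "data"; blockNumber: number }
  | { kind: "invalidate" };

interface IndexerMetrics {
  currentBlock: number; // gauge: last observed value wins
  processedBlocks: number; // counter: monotonically increasing
  reorgs: number; // counter: monotonically increasing
}

function createIndexerMetrics(): IndexerMetrics {
  return { currentBlock: 0, processedBlocks: 0, reorgs: 0 };
}

function handleMessage(metrics: IndexerMetrics, message: Message): void {
  switch (message.kind) {
    case "data":
      // Record the block currently being processed.
      metrics.currentBlock = message.blockNumber;
      // One more block fully processed.
      metrics.processedBlocks += 1;
      break;
    case "invalidate":
      // A chain reorganization was observed.
      metrics.reorgs += 1;
      break;
  }
}

const metrics = createIndexerMetrics();
handleMessage(metrics, { kind: "data", blockNumber: 1_030_000 });
handleMessage(metrics, { kind: "data", blockNumber: 1_030_001 });
handleMessage(metrics, { kind: "invalidate" });
console.log(metrics.currentBlock, metrics.processedBlocks, metrics.reorgs);
// 1030001 2 1
```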

Suggested reviewers

  • fracek

Poem

Hoppin’ through lines of code so bright,
I nibble JSON files in the moonlit night.
Metrics and tracers now dance in the stream,
Like carrots in a dream, a coder’s sweetest gleam.
With config files sprouting like blooms in spring,
This rabbit hops happy—code makes my heart sing!
🐇✨



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (5)
examples/cli-instrumentation/otel/grafana/provisioning/datasources/prometheus.yml (1)

1-8: LGTM - Proper Grafana datasource configuration

The configuration properly sets up Prometheus as the default datasource for Grafana with appropriate access settings. This is well-structured for a monitoring setup.

I recommend adding a newline at the end of the file to comply with YAML standards:

    url: http://prometheus:9090
    isDefault: true
+
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 8-8: no new line character at the end of file

(new-line-at-end-of-file)

examples/cli-instrumentation/otel/prometheus/prometheus.yml (1)

10-12: LGTM - Appropriate Node application metrics scraping configuration

The configuration correctly targets the Node application metrics endpoint using host.docker.internal to access the host from within Docker.

I recommend adding a newline at the end of the file to comply with YAML standards:

      - targets: ['host.docker.internal:9464']  # This points to your host where the app is running
+
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 12-12: no new line character at the end of file

(new-line-at-end-of-file)

examples/cli-instrumentation/otel/grafana/provisioning/dashboards/dashboards.yml (1)

1-13: LGTM! Dashboard provisioning configuration looks good.

The Grafana dashboard provisioning configuration is well-structured and correctly set up to load dashboards from the specified path. The configuration enables automatic updates and allows UI modifications, which is appropriate for a demo environment.

Consider adding a newline at the end of the file to follow YAML best practices:

  options:
    path: /etc/grafana/provisioning/dashboards
    foldersFromFilesStructure: true
+
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 13-13: no new line character at the end of file

(new-line-at-end-of-file)

examples/cli-instrumentation/indexers/starknet.indexer.ts (1)

11-11: Unusual block number formatting.

The starting block number uses an unconventional format with underscores in unexpected positions.

Consider using a more standard format for readability:

-  startingBlock: 10_30_000n,
+  startingBlock: 1_030_000n,
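For reference, numeric separators are purely cosmetic in TypeScript/JavaScript: the parser ignores the underscores, so both groupings denote the same bigint and the suggested change is safe.

```typescript
// Underscore separators are stripped by the parser; both bigint literals
// evaluate to the same value, so regrouping the digits only aids readability.
const asWritten = 10_30_000n;
const regrouped = 1_030_000n;
console.log(asWritten === regrouped); // true
```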
examples/cli-instrumentation/otel/otel.ts (1)

26-32: Consider making the trace endpoint configurable

The hardcoded trace collector endpoint might limit flexibility across different environments.

Consider using environment variables for the trace endpoint:

const traceExporter = new OTLPTraceExporter({
-  url: "http://localhost:4318/v1/traces",
+  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4318/v1/traces",
});
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 049fb00 and e14d096.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (15)
  • change/@apibara-indexer-c2878337-1d57-45db-b28e-0902e9725097.json (1 hunks)
  • examples/cli-drizzle/package.json (1 hunks)
  • examples/cli-instrumentation/apibara.config.ts (1 hunks)
  • examples/cli-instrumentation/indexers/starknet.indexer.ts (1 hunks)
  • examples/cli-instrumentation/instrumentation.ts (1 hunks)
  • examples/cli-instrumentation/otel/docker-compose.yml (1 hunks)
  • examples/cli-instrumentation/otel/grafana/provisioning/dashboards/dashboards.yml (1 hunks)
  • examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json (1 hunks)
  • examples/cli-instrumentation/otel/grafana/provisioning/datasources/prometheus.yml (1 hunks)
  • examples/cli-instrumentation/otel/otel.ts (1 hunks)
  • examples/cli-instrumentation/otel/prometheus/prometheus.yml (1 hunks)
  • examples/cli-instrumentation/package.json (1 hunks)
  • examples/cli-instrumentation/tsconfig.json (1 hunks)
  • packages/indexer/src/indexer.ts (6 hunks)
  • packages/indexer/src/otel.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (3)
examples/cli-instrumentation/instrumentation.ts (1)
examples/cli-instrumentation/otel/otel.ts (1)
  • sdk (35-40)
packages/indexer/src/indexer.ts (1)
packages/indexer/src/otel.ts (2)
  • createIndexerMetrics (7-30)
  • createTracer (3-5)
examples/cli-instrumentation/indexers/starknet.indexer.ts (1)
examples/cli-instrumentation/instrumentation.ts (1)
  • logger (10-17)
🪛 YAMLlint (1.35.1)
examples/cli-instrumentation/otel/grafana/provisioning/datasources/prometheus.yml

[error] 8-8: no new line character at the end of file

(new-line-at-end-of-file)

examples/cli-instrumentation/otel/grafana/provisioning/dashboards/dashboards.yml

[error] 13-13: no new line character at the end of file

(new-line-at-end-of-file)

examples/cli-instrumentation/otel/prometheus/prometheus.yml

[error] 12-12: no new line character at the end of file

(new-line-at-end-of-file)

🔇 Additional comments (25)
examples/cli-instrumentation/instrumentation.ts (2)

3-3: Good import for OpenTelemetry integration

Properly importing the SDK from the local OpenTelemetry configuration file.


6-6: Correctly initializing OpenTelemetry SDK

The sdk.start() call is appropriately placed in the register function to initialize telemetry when the application starts.

examples/cli-instrumentation/otel/prometheus/prometheus.yml (2)

1-4: LGTM - Good Prometheus global configuration

The global scrape and evaluation intervals are set to reasonable default values (15s).


5-9: LGTM - Proper self-monitoring configuration

The job for monitoring Prometheus itself is correctly configured.

examples/cli-instrumentation/apibara.config.ts (1)

1-10: LGTM - Proper OpenTelemetry external dependency configuration

The configuration correctly specifies OpenTelemetry dependencies as external, which is appropriate for instrumentation libraries. This prevents these dependencies from being bundled, which can cause issues with native modules and dynamic loading patterns often used by monitoring tools.
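A minimal sketch of what such a config might look like. This is illustrative only: the rolldownConfig/external shape follows the review's description, and the specific package names in the list are an assumption, not the actual file contents.

```typescript
// Illustrative sketch only: a config object marking OpenTelemetry packages
// as external so the bundler leaves their imports unresolved at build time.
// The package list is an assumption; the real file may differ.
const config = {
  rolldownConfig: {
    external: [
      "@opentelemetry/api",
      "@opentelemetry/sdk-node",
      "@opentelemetry/exporter-prometheus",
    ],
  },
};

export default config;
```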

change/@apibara-indexer-c2878337-1d57-45db-b28e-0902e9725097.json (1)

1-7: Change file correctly documents indexer improvements.

The change file appropriately captures the purpose of the modifications to the @apibara/indexer package, addressing load issues with tracers and metrics by creating them at runtime.

examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json (1)

1-235:

❓ Verification inconclusive

Well-designed indexer monitoring dashboard.

The Grafana dashboard effectively visualizes three key indexer metrics (Current Block, Reorg Count, Processed Blocks) with appropriate units and thresholds. The Reorg Count panel usefully highlights when reorgs occur by changing to red.

Note that the dashboard is configured to use a specific Prometheus datasource with UID "PBFA97CFB590B2093". Verify that this datasource will be available in your environment or be prepared to update this value after deployment.


🏁 Script executed:

#!/bin/bash
# Check if there are any datasource configuration files that might define this datasource
find . -type f -name "*.yml" -o -name "*.yaml" | xargs grep -l "PBFA97CFB590B2093" || echo "No matching datasource configuration found"

Length of output: 177


Datasource Verification Needed for Indexer Dashboard

The Grafana dashboard is well-structured and effectively visualizes key indexer metrics (Current Block, Reorg Count, Processed Blocks) with clear units and thresholds. However, our check did not locate any configuration files defining the Prometheus datasource with UID "PBFA97CFB590B2093". Please verify that this datasource is set up in your deployment environment or update the UID as needed.

examples/cli-drizzle/package.json (1)

2-2: Package name update looks good!

The rename from "example-cli" to "example-cli-drizzle" makes sense in the context of adding a new instrumentation example, helping to clearly differentiate between the different example types.

packages/indexer/src/otel.ts (3)

1-1: Good addition of metrics import

Adding the metrics API alongside trace is necessary for the new instrumentation features.


3-5: Good refactoring to use function factory pattern

Converting from a static exported constant to a function factory pattern is a good approach to address load issues by creating the tracer at runtime rather than initialization time.
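The eager-vs-lazy difference can be illustrated with a tiny sketch, where a hand-rolled registry stands in for OpenTelemetry's global provider registration (all names here are hypothetical):

```typescript
// Hypothetical illustration of the load-order problem the factory fixes.
// `registry` stands in for OpenTelemetry's global provider registration.
const registry = { provider: "noop" };

// Eager: captured at module load, before instrumentation has registered
// a real provider.
const eagerTracerName = registry.provider;

// Lazy: looked up only when the indexer actually runs.
function createTracerName(): string {
  return registry.provider;
}

// instrumentation code registers the real SDK provider later...
registry.provider = "sdk";

console.log(eagerTracerName); // "noop" - stale
console.log(createTracerName()); // "sdk" - correct
```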


7-30: Well-structured metrics implementation

The metrics implementation is clean and well-organized with:

  1. A gauge for tracking current block progress
  2. A counter for processed blocks
  3. A counter for chain reorganizations

These are all meaningful metrics for monitoring indexer performance and health.

examples/cli-instrumentation/otel/otel.ts (4)

1-14: Good OpenTelemetry imports and setup

The imports include all necessary components for comprehensive instrumentation with both metrics and tracing capabilities.


16-21: Well-defined service resource

The service resource is properly configured with the essential attributes for service name and version.


23-24: Good Prometheus exporter setup

Using the default port (9464) for Prometheus metrics is a standard approach.


34-40: Well-configured SDK export

The NodeSDK is properly configured with all the necessary components and exported for use in the application.

examples/cli-instrumentation/tsconfig.json (1)

1-21: Good TypeScript configuration

This is a solid TypeScript configuration for a modern Node.js application:

  • Using modern ECMAScript targets (ES2022/ES2023)
  • Enabling strict type checking
  • Proper module resolution settings
  • Including necessary type definitions

The path alias setup and inclusion of .apibara/types will help with type definitions and imports.

examples/cli-instrumentation/package.json (1)

1-36: Well-structured package.json for OpenTelemetry instrumentation

This package.json correctly sets up an instrumentation example with all necessary dependencies:

  • OpenTelemetry packages for metrics, tracing, and Node.js auto-instrumentation
  • Prometheus exporter for metrics and OTLP HTTP for traces
  • Apibara dependencies for indexer functionality

The workspace references ensure proper integration with the monorepo structure, and the scripts provide all necessary commands for development and execution.

packages/indexer/src/indexer.ts (6)

31-31: Proper imports for telemetry integration

Good replacement of direct tracer import with more structured imports for metrics and tracing creation functions, along with the internal context hook.

Also applies to: 33-33


220-221: Correctly initialized metrics and indexer context

The metrics initialization and retrieval of indexer ID from the internal context is well-placed at the beginning of the run function to ensure proper tracking throughout the execution.

Also applies to: 224-225


286-287: Well-timed tracer creation

Creating the tracer just before the message handling switch statement ensures it's available for all message types.


298-305: Effective block tracking with metrics

Good implementation of current block tracking with the gauge metric, including proper labeling with the indexer ID. This will be valuable for monitoring processing progress.


382-386: Proper block processing metric

The counter for processed blocks is correctly implemented after block processing completes, with appropriate labeling. This will help track throughput and performance.


395-399: Useful reorg tracking metric

The counter for reorgs (invalidate messages) is well-placed and properly labeled. This metric will be valuable for monitoring blockchain reorganizations that affect indexing.

examples/cli-instrumentation/otel/docker-compose.yml (2)

1-39: Comprehensive monitoring stack configuration

The Docker Compose setup provides a complete monitoring solution:

  • Prometheus for metrics collection
  • Grafana for visualization with pre-configured dashboards
  • Jaeger for distributed tracing

All services are properly networked and configured with appropriate ports and volumes.


40-46: Proper network and volume configuration

The dedicated monitoring network and persistent volume for Grafana storage follow Docker Compose best practices. This ensures isolation of the monitoring stack and persistence of Grafana configurations.

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
.github/workflows/build.yml (2)

47-48: Update CI Comment to Reflect Current Directory Exclusions.
The comment on line 47 mentions excluding examples/cli since it requires the CLI to be built, but the actual build command on line 48 excludes ./examples/cli-drizzle, ./examples/cli-instrumentation, and ./examples/cli-js. This discrepancy may confuse future maintainers. Please update the comment to accurately describe the directories being excluded or adjust the filtering if the original intent was different.


52-52: Consistent Test Command Exclusions.
The test step on line 52 properly mirrors the exclusions applied in the build step by filtering out ./examples/cli-drizzle, ./examples/cli-instrumentation, and ./examples/cli-js. Ensure that tests for these examples are covered in separate workflows or integration tests if required.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between e14d096 and a992faa.

📒 Files selected for processing (1)
  • .github/workflows/build.yml (1 hunks)

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (3)
examples/cli-instrumentation/otel/prometheus/prometheus.yml (1)

1-2: Add a newline at the end of the file.

Some linters (e.g., YAMLlint) advise ending files with a blank line to improve compatibility across different environments.

You can fix this like so:

 global:
   scrape_interval: 10s
+
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 2-2: no new line character at the end of file

(new-line-at-end-of-file)

examples/cli-instrumentation/otel/opentelemetry-collector/config.yml (1)

1-24: LGTM - OpenTelemetry Collector configuration is properly structured!

The configuration correctly defines:

  • OTLP receiver on port 4317 for gRPC protocol
  • Batch processor for efficient telemetry aggregation
  • Debug exporter for logging and Prometheus remote write exporter for metrics storage
  • Separate pipelines for metrics and traces with appropriate components

Add a newline at the end of the file to satisfy the linter:

      receivers: [otlp]
      processors: [batch]
      exporters: [debug]
+
🧰 Tools
🪛 YAMLlint (1.35.1)

[error] 24-24: no new line character at the end of file

(new-line-at-end-of-file)

examples/cli-instrumentation/package.json (1)

1-38: LGTM - Package configuration is well-structured

The package.json contains all necessary dependencies for an OpenTelemetry instrumentation example:

  • Appropriate workspace references for Apibara packages
  • Complete set of OpenTelemetry dependencies for metrics and tracing
  • Scripts for building, development, and linting

Consider adding a brief description to the empty description field to clarify the purpose of this example:

  "name": "example-cli-instrumentation",
  "private": true,
  "version": "0.0.0",
- "description": "",
+ "description": "Example demonstrating OpenTelemetry instrumentation with Apibara indexers",
  "type": "module",
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a992faa and f422222.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (9)
  • examples/cli-instrumentation/indexers/starknet.indexer.ts (1 hunks)
  • examples/cli-instrumentation/instrumentation.ts (1 hunks)
  • examples/cli-instrumentation/otel/docker-compose.yml (1 hunks)
  • examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json (1 hunks)
  • examples/cli-instrumentation/otel/opentelemetry-collector/config.yml (1 hunks)
  • examples/cli-instrumentation/otel/prometheus/prometheus.yml (1 hunks)
  • examples/cli-instrumentation/package.json (1 hunks)
  • packages/indexer/src/indexer.ts (5 hunks)
  • packages/indexer/src/otel.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • examples/cli-instrumentation/indexers/starknet.indexer.ts
🧰 Additional context used
🧬 Code Definitions (1)
packages/indexer/src/indexer.ts (1)
packages/indexer/src/otel.ts (2)
  • createIndexerMetrics (7-30)
  • createTracer (3-5)
🪛 YAMLlint (1.35.1)
examples/cli-instrumentation/otel/opentelemetry-collector/config.yml

[error] 24-24: no new line character at the end of file

(new-line-at-end-of-file)

examples/cli-instrumentation/otel/prometheus/prometheus.yml

[error] 2-2: no new line character at the end of file

(new-line-at-end-of-file)

🔇 Additional comments (9)
examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json (1)

25-86: Good use of “current_block” metric.

The panel logic and query expression “current_block” look consistent with the gauge name defined in createIndexerMetrics(). Nice job!

packages/indexer/src/otel.ts (1)

3-5: Tracer setup looks good.

The new createTracer function is a straightforward way to provide a named tracer for the indexer. This approach helps keep instrumentation concerns well-organized.

examples/cli-instrumentation/instrumentation.ts (1)

54-62: Logger configuration looks good.

The logger creation logic is straightforward and aligns with the typical usage in apibara. This helps ensure consistent, readable logging output.

packages/indexer/src/indexer.ts (5)

31-33: LGTM - Good addition of telemetry imports

The imported modules will support the metrics and tracing functionality being added to the indexer.


220-226: Organize telemetry initialization

Good job initializing the metrics and tracer at the beginning of the function. This approach makes the telemetry setup clear and centralizes it before usage.

Note: This addresses the earlier review comment asking to move this code next to the createIndexerMetrics call.


297-304: LGTM - Appropriate current block gauge implementation

This effectively records the current block number being processed with proper indexer_id labeling.


381-385: LGTM - Effective processed block counter

This counter implementation will provide valuable metrics about indexer throughput.


394-398: LGTM - Good reorg counter implementation

Tracking reorg events is valuable for monitoring chain stability and understanding indexer challenges.

examples/cli-instrumentation/otel/docker-compose.yml (1)

1-50:

✅ Verification successful

Docker Compose configuration follows best practices

The configuration correctly sets up a monitoring stack with Prometheus, Grafana, and OpenTelemetry Collector, following the recommended approach mentioned in the previous review.

Key points:

  • Services are properly networked and dependencies are established
  • Volumes are mounted for configuration and persistence
  • Appropriate ports are exposed for each service

Add a newline at the end of the file:

volumes:
  grafana-storage:
+

For an additional security best practice in real-world deployments (though acceptable for this example), consider using environment variables for credentials:


🌐 Web query:

What are the best practices for handling Grafana credentials in Docker Compose?

💡 Result:

When handling Grafana credentials in Docker Compose, security best practices focus on avoiding exposed credentials, using secure secret management, and minimizing attack vectors. Below are key recommendations:

Use Docker Secrets with __FILE Suffix

Avoid storing credentials in environment variables. Instead:

  • Mount secrets via /run/secrets/ using GF_<Section>_<Key>__FILE syntax
    environment:
      GF_SECURITY_ADMIN_PASSWORD_FILE: /run/secrets/grafana_admin_password
  • Reference secrets in your Compose file:
    services:
      grafana:
        image: grafana/grafana
        secrets:
          - grafana_admin_password
    secrets:
      grafana_admin_password:
        file: ./secrets/admin_password.txt

This prevents credentials from appearing in process listings or logs[1][5][15].

Credential Security Best Practices

  1. Rotate default admin credentials:

    • Change Grafana's default admin/admin credentials immediately after setup[6][17].
    • Use GF_SECURITY_ADMIN_PASSWORD_FILE for initial password configuration[5][9].
  2. Limit credential permissions:

    • Use read-only credentials for data sources (e.g., SQL users without INSERT/UPDATE)[4].
    • Avoid credentials with administrative privileges unless necessary[10].
  3. Secure secret storage:

    • Set file permissions: chmod 400 for secret files[2][11]
    • Exclude secret files from version control via .gitignore[3][11]
    • Use tools like phase CLI for encrypted secret management[3][11]

Docker Compose Configuration Tips

  • Use HTTPS:

    [server]
    protocol = https
    cert_file = /etc/grafana/grafana.crt
    cert_key = /etc/grafana/grafana.key

    Generate self-signed certs with openssl for encrypted traffic[2].

  • Persist configurations safely:

    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana.ini:/etc/grafana/grafana.ini

    Mount configuration files instead of storing secrets inline[9][12].

  • Health checks:

    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]

    Ensures credential checks during container startup[8][12].

Example Secure Setup

version: "3.8"

services:
  grafana:
    image: grafana/grafana
    environment:
      GF_SECURITY_ADMIN_USER_FILE: /run/secrets/admin_user
      GF_SECURITY_ADMIN_PASSWORD_FILE: /run/secrets/admin_password
      GF_AWS_default_ACCESS_KEY_ID_FILE: /run/secrets/aws_key
      GF_AWS_default_SECRET_ACCESS_KEY_FILE: /run/secrets/aws_secret
    ports:
      - "3000:3000"
    secrets:
      - admin_user
      - admin_password
      - aws_key
      - aws_secret

secrets:
  admin_user:
    file: ./secrets/admin_user.txt
  admin_password:
    file: ./secrets/admin_password.txt
  aws_key:
    file: ./secrets/aws_access_key
  aws_secret:
    file: ./secrets/aws_secret_key

For production environments, consider integrating with a secrets manager like HashiCorp Vault or AWS Secrets Manager for enhanced security[3][7][11].


Review Approved – Minor Security Enhancement Recommendation

The monitoring stack (Prometheus, Grafana, and OpenTelemetry Collector) is correctly configured with proper networking, volumes, and dependency management. In this example, hardcoding the Grafana credentials is acceptable; however, for production deployments it is recommended to secure admin credentials via Docker secrets. For instance, instead of:

environment:
  - GF_SECURITY_ADMIN_PASSWORD=admin
  - GF_SECURITY_ADMIN_USER=admin

you could use the file-based approach:

environment:
  GF_SECURITY_ADMIN_USER_FILE: /run/secrets/grafana_admin_user
  GF_SECURITY_ADMIN_PASSWORD_FILE: /run/secrets/grafana_admin_password
secrets:
  - grafana_admin_user
  - grafana_admin_password

This change prevents credentials from appearing in process listings or logs.

Additionally, please add a newline at the end of the file:

volumes:
  grafana-storage:
+

Contributor

@fracek fracek left a comment


Looks good now!

@fracek fracek merged commit 3a19dbb into apibara:main Mar 19, 2025
2 checks passed