examples: add instrumentation example with prometheus integration #159
Conversation
📝 Walkthrough
The pull request introduces several new configuration files, updates package metadata, and enhances telemetry integration across multiple projects. A new JSON file for the …
Changes
Sequence Diagram(s)
```mermaid
sequenceDiagram
    participant Indexer
    participant Metrics as "Indexer Metrics"
    participant Tracer as "Tracer"
    Indexer->>Metrics: createIndexerMetrics()
    Indexer->>Tracer: createTracer()
    Indexer->>Metrics: Record current block gauge (block number)
    Indexer->>Metrics: Increment processed block counter
    Note right of Metrics: On receiving an "invalidate" message,
    Metrics->>Metrics: Increment reorg counter
```
Actionable comments posted: 3
🧹 Nitpick comments (5)
examples/cli-instrumentation/otel/grafana/provisioning/datasources/prometheus.yml (1)
1-8: LGTM - Proper Grafana datasource configuration
The configuration properly sets up Prometheus as the default datasource for Grafana with appropriate access settings. This is well-structured for a monitoring setup.
I recommend adding a newline at the end of the file to comply with YAML standards:
```diff
   url: http://prometheus:9090
   isDefault: true
+
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 8-8: no new line character at the end of file
(new-line-at-end-of-file)
examples/cli-instrumentation/otel/prometheus/prometheus.yml (1)
10-12: LGTM - Appropriate Node application metrics scraping configuration
The configuration correctly targets the Node application metrics endpoint using host.docker.internal to access the host from within Docker.
I recommend adding a newline at the end of the file to comply with YAML standards:
```diff
       - targets: ['host.docker.internal:9464'] # This points to your host where the app is running
+
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 12-12: no new line character at the end of file
(new-line-at-end-of-file)
examples/cli-instrumentation/otel/grafana/provisioning/dashboards/dashboards.yml (1)
1-13: LGTM! Dashboard provisioning configuration looks good.
The Grafana dashboard provisioning configuration is well-structured and correctly set up to load dashboards from the specified path. The configuration enables automatic updates and allows UI modifications, which is appropriate for a demo environment.
Consider adding a newline at the end of the file to follow YAML best practices:
```diff
   options:
     path: /etc/grafana/provisioning/dashboards
     foldersFromFilesStructure: true
+
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 13-13: no new line character at the end of file
(new-line-at-end-of-file)
examples/cli-instrumentation/indexers/starknet.indexer.ts (1)
11-11: Unusual block number formatting.
The starting block number uses an unconventional format with underscores in unexpected positions.
Consider using a more standard format for readability:
```diff
-  startingBlock: 10_30_000n,
+  startingBlock: 1_030_000n,
```
examples/cli-instrumentation/otel/otel.ts (1)
26-32: Consider making the trace endpoint configurable
The hardcoded trace collector endpoint might limit flexibility across different environments.
Consider using environment variables for the trace endpoint:
```diff
 const traceExporter = new OTLPTraceExporter({
-  url: "http://localhost:4318/v1/traces",
+  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT || "http://localhost:4318/v1/traces",
 });
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (15)
- change/@apibara-indexer-c2878337-1d57-45db-b28e-0902e9725097.json (1 hunks)
- examples/cli-drizzle/package.json (1 hunks)
- examples/cli-instrumentation/apibara.config.ts (1 hunks)
- examples/cli-instrumentation/indexers/starknet.indexer.ts (1 hunks)
- examples/cli-instrumentation/instrumentation.ts (1 hunks)
- examples/cli-instrumentation/otel/docker-compose.yml (1 hunks)
- examples/cli-instrumentation/otel/grafana/provisioning/dashboards/dashboards.yml (1 hunks)
- examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json (1 hunks)
- examples/cli-instrumentation/otel/grafana/provisioning/datasources/prometheus.yml (1 hunks)
- examples/cli-instrumentation/otel/otel.ts (1 hunks)
- examples/cli-instrumentation/otel/prometheus/prometheus.yml (1 hunks)
- examples/cli-instrumentation/package.json (1 hunks)
- examples/cli-instrumentation/tsconfig.json (1 hunks)
- packages/indexer/src/indexer.ts (6 hunks)
- packages/indexer/src/otel.ts (1 hunks)
🧰 Additional context used
🧬 Code Definitions (3)
examples/cli-instrumentation/instrumentation.ts (1)
examples/cli-instrumentation/otel/otel.ts (1)
sdk (35-40)
packages/indexer/src/indexer.ts (1)
packages/indexer/src/otel.ts (2)
createIndexerMetrics (7-30), createTracer (3-5)
examples/cli-instrumentation/indexers/starknet.indexer.ts (1)
examples/cli-instrumentation/instrumentation.ts (1)
logger (10-17)
🪛 YAMLlint (1.35.1)
examples/cli-instrumentation/otel/grafana/provisioning/datasources/prometheus.yml
[error] 8-8: no new line character at the end of file
(new-line-at-end-of-file)
examples/cli-instrumentation/otel/grafana/provisioning/dashboards/dashboards.yml
[error] 13-13: no new line character at the end of file
(new-line-at-end-of-file)
examples/cli-instrumentation/otel/prometheus/prometheus.yml
[error] 12-12: no new line character at the end of file
(new-line-at-end-of-file)
🔇 Additional comments (25)
examples/cli-instrumentation/instrumentation.ts (2)
3-3: Good import for OpenTelemetry integration
Properly importing the SDK from the local OpenTelemetry configuration file.
6-6: Correctly initializing OpenTelemetry SDK
The sdk.start() call is appropriately placed in the register function to initialize telemetry when the application starts.
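For context, a minimal sketch of the pattern these two comments describe; the import path and exact file contents are assumptions, not the actual instrumentation.ts source:
```typescript
// Hypothetical sketch of examples/cli-instrumentation/instrumentation.ts:
// the register hook starts the NodeSDK exported by the local otel module,
// so exporters are installed before the indexer begins processing blocks.
import { sdk } from "./otel/otel";

export function register() {
  sdk.start();
}
```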
examples/cli-instrumentation/otel/prometheus/prometheus.yml (2)
1-4: LGTM - Good Prometheus global configuration
The global scrape and evaluation intervals are set to reasonable default values (15s).
5-9: LGTM - Proper self-monitoring configuration
The job for monitoring Prometheus itself is correctly configured.
examples/cli-instrumentation/apibara.config.ts (1)
1-10: LGTM - Proper OpenTelemetry external dependency configuration
The configuration correctly specifies OpenTelemetry dependencies as external, which is appropriate for instrumentation libraries. This prevents these dependencies from being bundled, which can cause issues with native modules and dynamic loading patterns often used by monitoring tools.
change/@apibara-indexer-c2878337-1d57-45db-b28e-0902e9725097.json (1)
1-7: Change file correctly documents indexer improvements.
The change file appropriately captures the purpose of the modifications to the @apibara/indexer package, addressing load issues with tracers and metrics by creating them at runtime.
examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json (1)
1-235: ❓ Verification inconclusive
Well-designed indexer monitoring dashboard.
The Grafana dashboard effectively visualizes three key indexer metrics (Current Block, Reorg Count, Processed Blocks) with appropriate units and thresholds. The Reorg Count panel usefully highlights when reorgs occur by changing to red.
Note that the dashboard is configured to use a specific Prometheus datasource with UID "PBFA97CFB590B2093". Verify that this datasource will be available in your environment or be prepared to update this value after deployment.
🏁 Script executed:
```bash
#!/bin/bash
# Check if there are any datasource configuration files that might define this datasource
find . -type f -name "*.yml" -o -name "*.yaml" | xargs grep -l "PBFA97CFB590B2093" || echo "No matching datasource configuration found"
```
Length of output: 177
Datasource Verification Needed for Indexer Dashboard
The Grafana dashboard is well-structured and effectively visualizes key indexer metrics (Current Block, Reorg Count, Processed Blocks) with clear units and thresholds. However, our check did not locate any configuration files defining the Prometheus datasource with UID "PBFA97CFB590B2093". Please verify that this datasource is set up in your deployment environment or update the UID as needed.
examples/cli-drizzle/package.json (1)
2-2: Package name update looks good!
The rename from "example-cli" to "example-cli-drizzle" makes sense in the context of adding a new instrumentation example, helping to clearly differentiate between the different example types.
packages/indexer/src/otel.ts (3)
1-1: Good addition of metrics import
Adding the metrics API alongside trace is necessary for the new instrumentation features.
3-5: Good refactoring to use function factory pattern
Converting from a static exported constant to a function factory pattern is a good approach to address load issues by creating the tracer at runtime rather than initialization time.
7-30: Well-structured metrics implementation
The metrics implementation is clean and well-organized with:
- A gauge for tracking current block progress
- A counter for processed blocks
- A counter for chain reorganizations
These are all meaningful metrics for monitoring indexer performance and health.
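A minimal sketch of what such a factory could look like, using only @opentelemetry/api (assuming a recent version with synchronous gauge support). The meter and tracer names and the two counter names are assumptions for illustration; only the current_block gauge name is confirmed by the Grafana dashboard query in this PR:
```typescript
import { metrics, trace, type Tracer } from "@opentelemetry/api";

export function createTracer(): Tracer {
  // Created at call time rather than at module load, which is the
  // load-order fix described in the change file. Tracer name is an assumption.
  return trace.getTracer("indexer");
}

export function createIndexerMetrics() {
  const meter = metrics.getMeter("indexer"); // meter name is an assumption
  return {
    // Gauge tracking the latest block seen by the indexer.
    currentBlock: meter.createGauge("current_block"),
    // Counter of blocks processed (name assumed for illustration).
    processedBlocks: meter.createCounter("processed_blocks"),
    // Counter of chain reorganizations / invalidate messages (name assumed).
    reorgs: meter.createCounter("reorgs"),
  };
}
```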
examples/cli-instrumentation/otel/otel.ts (4)
1-14: Good OpenTelemetry imports and setup
The imports include all necessary components for comprehensive instrumentation with both metrics and tracing capabilities.
16-21: Well-defined service resource
The service resource is properly configured with the essential attributes for service name and version.
23-24: Good Prometheus exporter setup
Using the default port (9464) for Prometheus metrics is a standard approach.
34-40: Well-configured SDK export
The NodeSDK is properly configured with all the necessary components and exported for use in the application.
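Putting the four comments above together, here is a sketch of how such an otel.ts might be wired. The package choices mirror the dependencies discussed in this PR; the service name/version values and exact import names (which vary between OpenTelemetry JS versions) are assumptions, and the OTLP URL follows the environment-variable suggestion from the nitpick above:
```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { PrometheusExporter } from "@opentelemetry/exporter-prometheus";
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";
import { Resource } from "@opentelemetry/resources";
import { SemanticResourceAttributes } from "@opentelemetry/semantic-conventions";

// Service identity shown in Grafana/Jaeger; values assumed from package.json.
const resource = new Resource({
  [SemanticResourceAttributes.SERVICE_NAME]: "example-cli-instrumentation",
  [SemanticResourceAttributes.SERVICE_VERSION]: "0.0.0",
});

// Serves metrics on the default port 9464, matching the prometheus.yml job
// that scrapes host.docker.internal:9464.
const prometheusExporter = new PrometheusExporter({});

// Sends traces to the local OTLP HTTP endpoint (Jaeger or the collector).
const traceExporter = new OTLPTraceExporter({
  url: process.env.OTEL_EXPORTER_OTLP_ENDPOINT ?? "http://localhost:4318/v1/traces",
});

export const sdk = new NodeSDK({
  resource,
  metricReader: prometheusExporter,
  traceExporter,
});
```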
examples/cli-instrumentation/tsconfig.json (1)
1-21: Good TypeScript configuration
This is a solid TypeScript configuration for a modern Node.js application:
- Using modern ECMAScript targets (ES2022/ES2023)
- Enabling strict type checking
- Proper module resolution settings
- Including necessary type definitions
The path alias setup and inclusion of .apibara/types will help with type definitions and imports.
examples/cli-instrumentation/package.json (1)
1-36: Well-structured package.json for OpenTelemetry instrumentation
This package.json correctly sets up an instrumentation example with all necessary dependencies:
- OpenTelemetry packages for metrics, tracing, and Node.js auto-instrumentation
- Prometheus exporter for metrics and OTLP HTTP for traces
- Apibara dependencies for indexer functionality
The workspace references ensure proper integration with the monorepo structure, and the scripts provide all necessary commands for development and execution.
packages/indexer/src/indexer.ts (6)
31-31: Proper imports for telemetry integration
Good replacement of direct tracer import with more structured imports for metrics and tracing creation functions, along with the internal context hook.
Also applies to: 33-33
220-221: Correctly initialized metrics and indexer context
The metrics initialization and retrieval of indexer ID from the internal context is well-placed at the beginning of the run function to ensure proper tracking throughout the execution.
Also applies to: 224-225
286-287: Well-timed tracer creation
Creating the tracer just before the message handling switch statement ensures it's available for all message types.
298-305: Effective block tracking with metrics
Good implementation of current block tracking with the gauge metric, including proper labeling with the indexer ID. This will be valuable for monitoring processing progress.
382-386: Proper block processing metric
The counter for processed blocks is correctly implemented after block processing completes, with appropriate labeling. This will help track throughput and performance.
395-399: Useful reorg tracking metric
The counter for reorgs (invalidate messages) is well-placed and properly labeled. This metric will be valuable for monitoring blockchain reorganizations that affect indexing.
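Taken together, the metric calls described in these comments follow a simple record/add pattern. The helpers below are a hypothetical illustration of that pattern (the real code lives inline in indexer.ts); they reuse the createIndexerMetrics() return shape sketched earlier, and the indexer_id attribute key reflects the labeling mentioned in this review:
```typescript
import type { createIndexerMetrics } from "./otel";

type IndexerMetrics = ReturnType<typeof createIndexerMetrics>;

// Called once a data message has been fully processed.
export function recordBlockProcessed(
  metrics: IndexerMetrics,
  indexerId: string,
  blockNumber: bigint,
) {
  const attributes = { indexer_id: indexerId };
  // Gauge: where the indexer currently is in the chain.
  metrics.currentBlock.record(Number(blockNumber), attributes);
  // Counter: total throughput of processed blocks.
  metrics.processedBlocks.add(1, attributes);
}

// Called when an "invalidate" message signals a chain reorganization.
export function recordReorg(metrics: IndexerMetrics, indexerId: string) {
  metrics.reorgs.add(1, { indexer_id: indexerId });
}
```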
examples/cli-instrumentation/otel/docker-compose.yml (2)
1-39: Comprehensive monitoring stack configuration
The Docker Compose setup provides a complete monitoring solution:
- Prometheus for metrics collection
- Grafana for visualization with pre-configured dashboards
- Jaeger for distributed tracing
All services are properly networked and configured with appropriate ports and volumes.
40-46: Proper network and volume configuration
The dedicated monitoring network and persistent volume for Grafana storage follow Docker Compose best practices. This ensures isolation of the monitoring stack and persistence of Grafana configurations.
Actionable comments posted: 0
🧹 Nitpick comments (2)
.github/workflows/build.yml (2)
47-48: Update CI Comment to Reflect Current Directory Exclusions.
The comment on line 47 mentions excluding examples/cli since it requires the CLI to be built, but the actual build command on line 48 excludes ./examples/cli-drizzle, ./examples/cli-instrumentation, and ./examples/cli-js. This discrepancy may confuse future maintainers. Please update the comment to accurately describe the directories being excluded or adjust the filtering if the original intent was different.
52-52: Consistent Test Command Exclusions.
The test step on line 52 properly mirrors the exclusions applied in the build step by filtering out ./examples/cli-drizzle, ./examples/cli-instrumentation, and ./examples/cli-js. Ensure that tests for these examples are covered in separate workflows or integration tests if required.
Actionable comments posted: 4
🧹 Nitpick comments (3)
examples/cli-instrumentation/otel/prometheus/prometheus.yml (1)
1-2: Add a newline at the end of the file.
Some linters (e.g., YAMLlint) advise ending files with a blank line to improve compatibility across different environments.
You can fix this like so:
```diff
 global:
   scrape_interval: 10s
+
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 2-2: no new line character at the end of file
(new-line-at-end-of-file)
examples/cli-instrumentation/otel/opentelemetry-collector/config.yml (1)
1-24: LGTM - OpenTelemetry Collector configuration is properly structured!
The configuration correctly defines:
- OTLP receiver on port 4317 for gRPC protocol
- Batch processor for efficient telemetry aggregation
- Debug exporter for logging and Prometheus remote write exporter for metrics storage
- Separate pipelines for metrics and traces with appropriate components
Add a newline at the end of the file to satisfy the linter:
```diff
       receivers: [otlp]
       processors: [batch]
       exporters: [debug]
+
```
🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 24-24: no new line character at the end of file
(new-line-at-end-of-file)
examples/cli-instrumentation/package.json (1)
1-38: LGTM - Package configuration is well-structured
The package.json contains all necessary dependencies for an OpenTelemetry instrumentation example:
- Appropriate workspace references for Apibara packages
- Complete set of OpenTelemetry dependencies for metrics and tracing
- Scripts for building, development, and linting
Consider adding a brief description to the empty description field to clarify the purpose of this example:
"name": "example-cli-instrumentation", "private": true, "version": "0.0.0", - "description": "", + "description": "Example demonstrating OpenTelemetry instrumentation with Apibara indexers", "type": "module",
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (9)
- examples/cli-instrumentation/indexers/starknet.indexer.ts (1 hunks)
- examples/cli-instrumentation/instrumentation.ts (1 hunks)
- examples/cli-instrumentation/otel/docker-compose.yml (1 hunks)
- examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json (1 hunks)
- examples/cli-instrumentation/otel/opentelemetry-collector/config.yml (1 hunks)
- examples/cli-instrumentation/otel/prometheus/prometheus.yml (1 hunks)
- examples/cli-instrumentation/package.json (1 hunks)
- packages/indexer/src/indexer.ts (5 hunks)
- packages/indexer/src/otel.ts (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- examples/cli-instrumentation/indexers/starknet.indexer.ts
🧰 Additional context used
🧬 Code Definitions (1)
packages/indexer/src/indexer.ts (1)
packages/indexer/src/otel.ts (2)
createIndexerMetrics (7-30), createTracer (3-5)
🪛 YAMLlint (1.35.1)
examples/cli-instrumentation/otel/opentelemetry-collector/config.yml
[error] 24-24: no new line character at the end of file
(new-line-at-end-of-file)
examples/cli-instrumentation/otel/prometheus/prometheus.yml
[error] 2-2: no new line character at the end of file
(new-line-at-end-of-file)
🔇 Additional comments (9)
examples/cli-instrumentation/otel/grafana/provisioning/dashboards/indexer-dashboard.json (1)
25-86: Good use of “current_block” metric.
The panel logic and query expression “current_block” look consistent with the gauge name defined in createIndexerMetrics(). Nice job!
packages/indexer/src/otel.ts (1)
3-5: Tracer setup looks good.
The new createTracer function is a straightforward way to provide a named tracer for the indexer. This approach helps keep instrumentation concerns well-organized.
examples/cli-instrumentation/instrumentation.ts (1)
54-62: Logger configuration looks good.
The logger creation logic is straightforward and aligns with the typical usage in apibara. This helps ensure consistent, readable logging output.
packages/indexer/src/indexer.ts (5)
31-33: LGTM - Good addition of telemetry imports
The imported modules will support the metrics and tracing functionality being added to the indexer.
220-226: Organize telemetry initialization
Good job initializing the metrics and tracer at the beginning of the function. This approach makes the telemetry setup clear and centralizes it before usage.
Note: This addresses the previous review comment to move this code close to createIndexerMetrics.
297-304: LGTM - Appropriate current block gauge implementation
This effectively records the current block number being processed with proper indexer_id labeling.
381-385: LGTM - Effective processed block counter
This counter implementation will provide valuable metrics about indexer throughput.
394-398: LGTM - Good reorg counter implementation
Tracking reorg events is valuable for monitoring chain stability and understanding indexer challenges.
examples/cli-instrumentation/otel/docker-compose.yml (1)
1-50: ✅ Verification successful
Docker Compose configuration follows best practices
The configuration correctly sets up a monitoring stack with Prometheus, Grafana, and OpenTelemetry Collector, following the recommended approach mentioned in the previous review.
Key points:
- Services are properly networked and dependencies are established
- Volumes are mounted for configuration and persistence
- Appropriate ports are exposed for each service
Add a newline at the end of the file:
```diff
 volumes:
   grafana-storage:
+
```
For an additional security best practice in real-world deployments (though acceptable for this example), consider using environment variables for credentials:
🌐 Web query:
What are the best practices for handling Grafana credentials in Docker Compose?
💡 Result:
When handling Grafana credentials in Docker Compose, security best practices focus on avoiding exposed credentials, using secure secret management, and minimizing attack vectors. Below are key recommendations:
Use Docker Secrets with __FILE Suffix
Avoid storing credentials in environment variables. Instead:
- Mount secrets via /run/secrets/ using the GF_<Section>_<Key>__FILE syntax:
```yaml
environment:
  GF_SECURITY_ADMIN_PASSWORD_FILE: /run/secrets/grafana_admin_password
```
- Reference secrets in your Compose file:
```yaml
services:
  grafana:
    image: grafana/grafana
    secrets:
      - grafana_admin_password
secrets:
  grafana_admin_password:
    file: ./secrets/admin_password.txt
```
This prevents credentials from appearing in process listings or logs[1][5][15].
Credential Security Best Practices
Rotate default admin credentials:
- Change Grafana's default admin/admin credentials immediately after setup[6][17].
- Use GF_SECURITY_ADMIN_PASSWORD_FILE for initial password configuration[5][9].
Limit credential permissions:
- Use read-only credentials for data sources (e.g., SQL users without INSERT/UPDATE)[4].
- Avoid credentials with administrative privileges unless necessary[10].
Secure secret storage:
- Set file permissions: chmod 400 for secret files[2][11]
- Exclude secret files from version control via .gitignore[3][11]
- Use tools like phase CLI for encrypted secret management[3][11]
Docker Compose Configuration Tips
Use HTTPS:
```ini
[server]
protocol = https
cert_file = /etc/grafana/grafana.crt
cert_key = /etc/grafana/grafana.key
```
Generate self-signed certs with openssl for encrypted traffic[2].
Persist configurations safely:
```yaml
volumes:
  - grafana_data:/var/lib/grafana
  - ./grafana.ini:/etc/grafana/grafana.ini
```
Mount configuration files instead of storing secrets inline[9][12].
Health checks:
```yaml
healthcheck:
  test: ["CMD", "curl", "-f", "http://localhost:3000/api/health"]
```
Ensures credential checks during container startup[8][12].
Example Secure Setup
version: "3.8" services: grafana: image: grafana/grafana environment: GF_SECURITY_ADMIN_USER_FILE: /run/secrets/admin_user GF_SECURITY_ADMIN_PASSWORD_FILE: /run/secrets/admin_password GF_AWS_default_ACCESS_KEY_ID_FILE: /run/secrets/aws_key GF_AWS_default_SECRET_ACCESS_KEY_FILE: /run/secrets/aws_secret ports: - "3000:3000" secrets: - admin_user - admin_password - aws_key - aws_secret secrets: admin_user: file: ./secrets/admin_user.txt admin_password: file: ./secrets/admin_password.txt aws_key: file: ./secrets/aws_access_key aws_secret: file: ./secrets/aws_secret_keyFor production environments, consider integrating with a secrets manager like HashiCorp Vault or AWS Secrets Manager for enhanced security[3][7][11].
Citations:
- 1: https://grafana.com/docs/grafana/latest/setup-grafana/configure-docker/
- 2: https://grafana.com/docs/grafana/latest/setup-grafana/set-up-https/
- 3: https://phase.dev/blog/docker-compose-secrets
- 4: https://grafana.com/blog/2024/05/06/data-source-security-in-grafana-best-practices-and-what-to-avoid/
- 5: https://community.grafana.com/t/configure-grafana-using-docker-secrets/109648
- 6: https://signoz.io/guides/what-is-the-default-username-and-password-for-grafana-login-page/
- 7: https://docs.docker.com/compose/how-tos/use-secrets/
- 8: https://last9.io/blog/grafana-and-docker/
- 9: https://community.grafana.com/t/cannot-override-default-admin-user-and-passord/6704
- 10: https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/
- 11: https://spacelift.io/blog/docker-secrets
- 12: https://community.grafana.com/t/grafana-docker-and-data-persistence/33702
- 13: https://github.com/Einsteinish/Docker-Compose-Prometheus-and-Grafana/blob/master/README.md
- 14: https://grafana.com/docs/grafana/latest/setup-grafana/configure-security/configure-authentication/grafana/
- 15: https://docs.docker.com/engine/swarm/secrets/
- 16: https://www.youtube.com/watch?v=FlDfcMbSLXs
- 17: https://community.grafana.com/t/changing-grafana-password-in-docker-environment/17794
- 18: grafana/grafana-docker#149
- 19: https://forums.docker.com/t/compose-passwords-and-security/137419
- 20: https://grafana.com/docs/grafana/latest/setup-grafana/installation/docker/
Review Approved – Minor Security Enhancement Recommendation
The monitoring stack (Prometheus, Grafana, and OpenTelemetry Collector) is correctly configured with proper networking, volumes, and dependency management. In this example, hardcoding the Grafana credentials is acceptable; however, for production deployments it is recommended to secure admin credentials via Docker secrets. For instance, instead of:
```yaml
environment:
  - GF_SECURITY_ADMIN_PASSWORD=admin
  - GF_SECURITY_ADMIN_USER=admin
```
you could use the file-based approach:
```yaml
environment:
  GF_SECURITY_ADMIN_USER_FILE: /run/secrets/grafana_admin_user
  GF_SECURITY_ADMIN_PASSWORD_FILE: /run/secrets/grafana_admin_password
secrets:
  - grafana_admin_user
  - grafana_admin_password
```
This change prevents credentials from appearing in process listings or logs.
Additionally, please add a newline at the end of the file:
```diff
 volumes:
   grafana-storage:
+
```
fracek
left a comment
Looks good now!
No description provided.