New Relic Databricks Integration

This integration collects telemetry from Databricks (including Spark on Databricks) and/or Spark telemetry from any Spark deployment. See the Features section for supported telemetry types.

Apache Spark Dashboard Screenshot


Getting Started

To get started with the New Relic Databricks integration, deploy the integration using a supported deployment type, configure the integration using supported configuration mechanisms, and then import the sample dashboard.

On-host

The New Relic Databricks integration can be run on any supported host platform. The integration will collect Databricks telemetry (including Spark on Databricks) via the Databricks ReST API using the Databricks SDK for Go and/or Spark telemetry from a non-Databricks Spark deployment via the Spark ReST API.

The New Relic Databricks integration can also be deployed on the driver node of a Databricks cluster using the provided init script to install and configure the integration at cluster startup time.

Deploy the integration on a host

The New Relic Databricks integration provides binaries for the following host platforms.

  • Linux amd64
  • Windows amd64

To run the Databricks integration on a host, perform the following steps.

  1. Download the appropriate archive for your platform from the latest release.
  2. Extract the archive to a new or existing directory.
  3. Create a directory named configs in the same directory.
  4. Create a file named config.yml in the configs directory and copy the contents of the file configs/config.template.yml in this repository into it.
  5. Edit the config.yml file to configure the integration appropriately for your environment.
  6. From the directory where the archive was extracted, execute the integration binary using the command ./newrelic-databricks-integration (or .\newrelic-databricks-integration.exe on Windows) with the appropriate Command Line Options.
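
For example, on Linux, the last step might look like the following. The --config_path option is only needed if the config.yml file is not at its default location of configs/config.yml relative to the current working directory.

./newrelic-databricks-integration --config_path ./configs/config.yml --verbose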

Deploy the integration on the driver node of a Databricks cluster

The New Relic Databricks integration can be deployed on the driver node of a Databricks cluster using a cluster-scoped init script. The init script uses custom environment variables to specify configuration parameters necessary for the integration configuration.

To install the init script, perform the following steps.

  1. Login to your Databricks account and navigate to the desired workspace.
  2. Follow the recommendations for init scripts to store the cluster_init_integration.sh script within your workspace in the recommended manner. For example, if your workspace is enabled for Unity Catalog, you should store the init script in a Unity Catalog volume.
  3. Navigate to the Compute tab and select the desired all-purpose or job compute to open the compute details UI.
  4. Click the button labeled Edit to edit the compute's configuration.
  5. Follow the steps to use the UI to configure a cluster-scoped init script and point to the location where you stored the init script in step 2 above.
  6. If your cluster is not running, click on the button labeled Confirm to save your changes. Then, restart the cluster. If your cluster is already running, click on the button labeled Confirm and restart to save your changes and restart the cluster.

Additionally, follow the steps to set environment variables to add the environment variables used to configure the integration.

Note that the NEW_RELIC_API_KEY and NEW_RELIC_ACCOUNT_ID environment variables are currently unused but are required by the new-relic-client-go module used by the integration. Additionally, note that only the personal access token or the OAuth credentials need to be specified, not both. If both are specified, the OAuth credentials take precedence. Finally, make sure to restart the cluster after configuring the environment variables.
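
For illustration only, the Environment variables field might contain entries along the lines of the sketch below. The exact set of variable names read by the init script is defined by the integration's documentation; the entries shown here are limited to the variables mentioned in this section plus the New Relic license key, which references a Databricks secret as described in the Appendix.

NEW_RELIC_LICENSE_KEY={{secrets/newrelic/licenseKey}}
NEW_RELIC_API_KEY=[YOUR_API_KEY]
NEW_RELIC_ACCOUNT_ID=[YOUR_ACCOUNT_ID]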

Features

The New Relic Databricks integration supports the following capabilities.

  • Collect Spark telemetry

    The New Relic Databricks integration can collect telemetry from Spark running on Databricks. By default, the integration will automatically connect to and collect telemetry from the Spark deployments in all clusters created via the UI or API in the specified workspace.

    The New Relic Databricks integration can also collect Spark telemetry from any non-Databricks Spark deployment.

Usage

Command Line Options

Option         Description                                      Default
--config_path  Path to the config.yml file to use               configs/config.yml
--dry_run      Flag to enable "dry run" mode                    false
--env_prefix   Prefix to use for environment variable lookup    '' (empty string)
--verbose      Flag to enable "verbose" mode                    false
--version      Display version information only                 N/a

Configuration

The Databricks integration is configured using the config.yml file and/or environment variables. For Databricks, authentication-related configuration parameters may also be set in a Databricks configuration profile. In all cases, where applicable, environment variables always take precedence.

config.yml

All configuration parameters for the Databricks integration can be set using a YAML file named config.yml. The default location for this file is configs/config.yml relative to the current working directory when the integration binary is executed. The supported configuration parameters are listed below. See config.template.yml for a full configuration example.

General configuration

The parameters in this section are configured at the top level of the config.yml.

licenseKey
Description Valid Values Required Default
New Relic license key string Y N/a

This parameter specifies the New Relic License Key (INGEST) that should be used to send generated metrics.

The license key can also be specified using the NEW_RELIC_LICENSE_KEY environment variable.

region
Description Valid Values Required Default
New Relic region identifier US / EU N US

This parameter specifies the New Relic region to which generated metrics should be sent.

interval
Description Valid Values Required Default
Polling interval (in seconds) numeric N 60

This parameter specifies the interval (in seconds) at which the integration should poll for data.

This parameter is only used when runAsService is set to true.

runAsService
Description Valid Values Required Default
Flag to enable running the integration as a "service" true / false N false

The integration can run either as a "service" or as a simple command line utility which runs once and exits when it is complete.

When set to true, the integration process will run continuously and poll for data at the recurring interval specified by the interval parameter. The process will only exit if it is explicitly stopped or a fatal error or panic occurs.

When set to false, the integration will run once and exit. This is intended for use with an external scheduling mechanism like cron.
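
For example, when runAsService is false, an external scheduler such as cron can run the integration periodically. A minimal crontab sketch (the installation path is an assumption) might look like the following.

# Run the integration every 15 minutes from an assumed installation directory
*/15 * * * * cd /opt/newrelic-databricks-integration && ./newrelic-databricks-integration --config_path ./configs/config.yml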

pipeline
Description Valid Values Required Default
The root node for the set of pipeline configuration parameters YAML Mapping N N/a

The integration retrieves, processes, and exports data to New Relic using a data pipeline consisting of one or more receivers, a processing chain, and a New Relic exporter. Various aspects of the pipeline are configurable. This element groups together the configuration parameters related to pipeline configuration.

log
Description Valid Values Required Default
The root node for the set of log configuration parameters YAML Mapping N N/a

The integration uses the logrus package for application logging. This element groups together the configuration parameters related to log configuration.

mode
Description Valid Values Required Default
The integration execution mode databricks N databricks

The integration execution mode. Currently, the only supported execution mode is databricks.

Deprecated: As of v2.3.0, this configuration parameter is no longer used. The presence (or not) of the databricks top-level node will be used to enable (or disable) the Databricks collector. Likewise, the presence (or not) of the spark top-level node will be used to enable (or disable) the Spark collector separate from Databricks.

databricks
Description Valid Values Required Default
The root node for the set of Databricks configuration parameters YAML Mapping N N/a

This element groups together the configuration parameters to configure the Databricks collector. If this element is not specified, the Databricks collector will not be run.

Note that this node is not required. It can be used with or without the spark top-level node.

spark
Description Valid Values Required Default
The root node for the set of Spark configuration parameters YAML Mapping N N/a

This element groups together the configuration parameters to configure the Spark collector. If this element is not specified, the Spark collector will not be run.

Note that this node is not required. It can be used with or without the databricks top-level node.

tags
Description Valid Values Required Default
The root node for a set of custom tags to add to all telemetry sent to New Relic YAML Mapping N N/a

This element specifies a group of custom tags that will be added to all telemetry sent to New Relic. The tags are specified as a set of key-value pairs.
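
Putting the general parameters together, the top level of a config.yml might be sketched as follows. Values are placeholders, and the databricks and spark nodes are described in their own sections below.

licenseKey: [YOUR_LICENSE_KEY]
region: US
interval: 60
runAsService: true
tags:
  environment: production   # custom tags are arbitrary key-value pairs
# databricks: ...           # see the Databricks configuration section
# spark: ...                # see the Spark configuration section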

Pipeline configuration
receiveBufferSize
Description Valid Values Required Default
Size of the buffer that holds items before processing number N 500

This parameter specifies the size of the buffer that holds received items before being flushed through the processing chain and on to the exporters. When this size is reached, the items in the buffer will be flushed automatically.

harvestInterval
Description Valid Values Required Default
Harvest interval (in seconds) number N 60

This parameter specifies the interval (in seconds) at which the pipeline should automatically flush received items through the processing chain and on to the exporters. Each time this interval is reached, the pipeline will flush items even if the item buffer has not reached the size specified by the receiveBufferSize parameter.

instances
Description Valid Values Required Default
Number of concurrent pipeline instances to run number N 3

The integration retrieves, processes, and exports metrics to New Relic using a data pipeline consisting of one or more receivers, a processing chain, and a New Relic exporter. When runAsService is true, the integration can launch one or more "instances" of this pipeline to receive, process, and export data concurrently. Each "instance" will be configured with the same processing chain and exporters and the receivers will be spread across the available instances in a round-robin fashion.

This parameter specifies the number of pipeline instances to launch.

NOTE: When runAsService is false, only a single pipeline instance is used.
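
For example, a pipeline node that simply restates the documented defaults would look like the following.

pipeline:
  receiveBufferSize: 500
  harvestInterval: 60
  instances: 3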

Log configuration
level
Description Valid Values Required Default
Log level panic / fatal / error / warn / info / debug / trace N warn

This parameter specifies the lowest severity of log messages to output, with trace being the least severe and panic being the most severe. For example, at the default log level (warn), all log messages with severities warn, error, fatal, and panic will be output, but info, debug, and trace will not.

fileName
Description Valid Values Required Default
Path to a file where log output will be written string N stderr

This parameter designates a file path where log output should be written. When no path is specified, log output will be written to the standard error stream (stderr).
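
For example, a log node that raises the log level to debug and writes to a file (the path shown is an assumption) might look like the following.

log:
  level: debug
  fileName: /var/log/newrelic-databricks-integration.log   # assumed path; omit to log to stderr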

Databricks configuration

The Databricks configuration parameters are used to configure the Databricks collector.

workspaceHost
Description Valid Values Required Default
Databricks workspace instance name string Y N/a

This parameter specifies the instance name of the target Databricks instance for which data should be collected. This is used by the integration when constructing the URLs for API calls. Note that the value of this parameter must not include the https:// prefix; for example, use my-databricks-instance-name.cloud.databricks.com rather than https://my-databricks-instance-name.cloud.databricks.com.

The workspace host can also be specified using the DATABRICKS_HOST environment variable.

accessToken
Description Valid Values Required Default
Databricks personal access token string N N/a

When set, the integration will use Databricks personal access token authentication to authenticate Databricks API calls with the value of this parameter as the Databricks personal access token.

The personal access token can also be specified using the DATABRICKS_TOKEN environment variable or any other SDK-supported mechanism (e.g. the token field in a Databricks configuration profile).

See the authentication section for more details.

oauthClientId
Description Valid Values Required Default
Databricks OAuth M2M client ID string N N/a

When set, the integration will use a service principal to authenticate with Databricks (OAuth M2M) when making Databricks API calls. The value of this parameter will be used as the OAuth client ID.

The OAuth client ID can also be specified using the DATABRICKS_CLIENT_ID environment variable or any other SDK-supported mechanism (e.g. the client_id field in a Databricks configuration profile).

See the authentication section for more details.

oauthClientSecret
Description Valid Values Required Default
Databricks OAuth M2M client secret string N N/a

When the oauthClientId is set, this parameter can be set to specify the OAuth secret associated with the service principal.

The OAuth client secret can also be specified using the DATABRICKS_CLIENT_SECRET environment variable or any other SDK-supported mechanism (e.g. the client_secret field in a Databricks configuration profile).

See the authentication section for more details.

sparkMetrics
Description Valid Values Required Default
Flag to enable automatic collection of Spark metrics. true / false N true

By default, when the Databricks collector is enabled, it will automatically collect Spark telemetry from Spark running on Databricks.

This flag can be used to disable the collection of Spark telemetry by the Databricks collector. This may be useful to control data ingest when business requirements call for the collection of non-Spark related Databricks telemetry and Spark telemetry is not needed. This flag is also used by the integration when it is deployed directly on the driver node of a Databricks cluster using the provided init script, since Spark telemetry is collected by the Spark collector in this scenario.

sparkMetricPrefix
Description Valid Values Required Default
A prefix to prepend to Spark metric names string N N/a

This parameter serves the same purpose as the metricPrefix parameter of the Spark configuration except that it applies to Spark telemetry collected by the Databricks collector. See the metricPrefix parameter of the Spark configuration for more details.

Note that this parameter has no effect on Spark telemetry collected by the Spark collector. This includes the case when the integration is deployed directly on the driver node of a Databricks cluster using the provided init script, since Spark telemetry is collected by the Spark collector in this scenario.

sparkClusterSources
Description Valid Values Required Default
The root node for the Databricks cluster source configuration YAML Mapping N N/a

The mechanism used to create a cluster is referred to as a cluster "source". The Databricks collector supports collecting Spark telemetry from all-purpose clusters created via the UI or API and from job clusters created via the Databricks Jobs Scheduler. This element groups together the flags used to individually enable or disable the cluster sources from which the Databricks collector will collect Spark telemetry.

Databricks cluster source configuration
ui
Description Valid Values Required Default
Flag to enable automatic collection of Spark telemetry from all-purpose clusters created via the UI true / false N true

By default, when the Databricks collector is enabled, it will automatically collect Spark telemetry from all all-purpose clusters created via the UI.

This flag can be used to disable the collection of Spark telemetry from all-purpose clusters created via the UI.

job
Description Valid Values Required Default
Flag to enable automatic collection of Spark telemetry from job clusters created via the Databricks Jobs Scheduler true / false N true

By default, when the Databricks collector is enabled, it will automatically collect Spark telemetry from job clusters created by the Databricks Jobs Scheduler.

This flag can be used to disable the collection of Spark telemetry from job clusters created via the Databricks Jobs Scheduler.

api
Description Valid Values Required Default
Flag to enable automatic collection of Spark telemetry from all-purpose clusters created via the Databricks ReST API true / false N true

By default, when the Databricks collector is enabled, it will automatically collect Spark telemetry from all-purpose clusters created via the Databricks ReST API.

This flag can be used to disable the collection of Spark telemetry from all-purpose clusters created via the Databricks ReST API.
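
Putting the Databricks collector parameters together, a databricks node might be sketched as follows. Values are placeholders, and every parameter other than workspaceHost is optional.

databricks:
  workspaceHost: my-databricks-instance-name.cloud.databricks.com
  accessToken: [YOUR_PERSONAL_ACCESS_TOKEN]   # or set oauthClientId / oauthClientSecret instead
  sparkMetrics: true
  sparkMetricPrefix: spark.
  sparkClusterSources:
    ui: true
    job: false   # for example, skip Spark telemetry from job clusters
    api: true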

Spark configuration

The Spark configuration parameters are used to configure the Spark collector.

webUiUrl
Description Valid Values Required Default
The Web UI URL of an application on the Spark deployment to monitor string N N/a

This parameter can be used to monitor a non-Databricks Spark deployment. It specifies the URL of the Web UI of an application running on the Spark deployment to monitor. The value should be of the form http[s]://<hostname>:<port>, where <hostname> is the hostname of the Spark deployment to monitor and <port> is the port number of the Spark application's Web UI (typically 4040, or 4041, 4042, and so on if more than one application is running on the same host).

Note that the value must not contain a path. The path of the Spark ReST API endpoints (mounted at /api/v1) is automatically appended to this URL.

metricPrefix
Description Valid Values Required Default
A prefix to prepend to Spark metric names string N N/a

This parameter specifies a prefix that will be prepended to each Spark metric name when the metric is exported to New Relic.

For example, if this parameter is set to spark., then the full name of the metric representing the value of the memory used on application executors (app.executor.memoryUsed) will be spark.app.executor.memoryUsed.

Note that it is not recommended to leave this value empty as the metric names without a prefix may be ambiguous. Additionally, note that this parameter has no effect on Spark telemetry collected by the Databricks collector. In that case, use the sparkMetricPrefix instead.
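
Putting the Spark collector parameters together, a spark node for a non-Databricks deployment might be sketched as follows.

spark:
  webUiUrl: http://my-spark-host:4040   # Web UI of the Spark application to monitor
  metricPrefix: spark.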

Authentication

The Databricks integration uses the Databricks SDK for Go to access the Databricks and Spark ReST APIs. The SDK performs authentication on behalf of the integration and provides many options for configuring the authentication type and credentials to be used. See the SDK documentation and the Databricks client unified authentication documentation for details.

For convenience, the accessToken, oauthClientId, and oauthClientSecret parameters described above can be set in the databricks section of the config.yml file to configure authentication.
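
For reference, a Databricks configuration profile lives by default in a file named .databrickscfg in your home directory and uses the field names mentioned above. A minimal sketch for personal access token authentication might look like the following.

[DEFAULT]
host  = https://my-databricks-instance-name.cloud.databricks.com
token = [YOUR_PERSONAL_ACCESS_TOKEN]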

Building

Coding Conventions

Style Guidelines

While not strictly enforced, the basic preferred editor settings are set in the .editorconfig. Other than this, no style guidelines are currently imposed.

Static Analysis

This project uses both go vet and staticcheck to perform static code analysis. These checks are run via precommit on all commits. Though this can be bypassed on local commit, both tasks are also run during the validate workflow and must complete with no errors before changes can be merged.

Commit Messages

Commit messages must follow the conventional commit format. Again, while this can be bypassed on local commit, it is strictly enforced in the validate workflow.

The basic commit message structure is as follows.

<type>[optional scope][!]: <description>

[optional body]

[optional footer(s)]

In addition to providing consistency, the commit message is used by svu during the release workflow. The presence and values of certain elements within the commit message affect auto-versioning. For example, the feat type will bump the minor version. Therefore, it is important to use the guidelines below and carefully consider the content of the commit message.

Please use one of the types below.

  • feat (bumps minor version)
  • fix (bumps patch version)
  • chore
  • build
  • docs
  • test

Any type can be followed by the ! character to indicate a breaking change. Additionally, any commit that has the text BREAKING CHANGE: in the footer will indicate a breaking change.
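
For example, a hypothetical commit that adds a feature and introduces a breaking change might be written as follows.

feat(spark)!: add metricPrefix configuration parameter

BREAKING CHANGE: Spark metric names are now prefixed with the configured metricPrefix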

Local Development

For local development, simply use go build and go run. For example,

go build cmd/databricks/databricks.go

Or

go run cmd/databricks/databricks.go

If you prefer, you can also use goreleaser with the --single-target option to build the binary for the local GOOS and GOARCH only.

goreleaser build --single-target

Releases

Releases are built and packaged using goreleaser. By default, a new release will be built automatically on any push to the main branch. For more details, review the .goreleaser.yaml and the goreleaser documentation.

The svu utility is used to generate the next tag value based on commit messages.

GitHub Workflows

This project utilizes GitHub workflows to perform actions in response to certain GitHub events.

Workflow     Events                               Description
validate     push, pull_request to main branch    Runs precommit to perform static analysis and runs commitlint to validate the last commit message
build        push, pull_request                   Builds and tests code
release      push to main branch                  Generates a new tag using svu and runs goreleaser
repolinter   pull_request                         Enforces repository content guidelines

Appendix

The sections below cover topics that are related to Databricks telemetry but that are not specifically part of this integration. In particular, any assets referenced in these sections must be installed and/or managed separately from the integration. For example, the init scripts provided to monitor cluster health are not automatically installed or used by the integration.

Monitoring Cluster Health

New Relic Infrastructure can be used to collect system metrics like CPU and memory usage from the nodes in a Databricks cluster. Additionally, New Relic APM can be used to collect application metrics like JVM heap size and GC cycle count from the Apache Spark driver and executor JVMs. Both are achieved using cluster-scoped init scripts. The sections below cover the installation of these init scripts.

NOTE: Use of one or both init scripts will have a slight impact on cluster startup time. Therefore, consideration should be given when using the init scripts with a job cluster, particularly when using a job cluster scoped to a single task.

Configure the New Relic license key

Both the New Relic Infrastructure Agent init script and the New Relic APM Java Agent init script require a New Relic license key to be specified in a custom environment variable named NEW_RELIC_LICENSE_KEY. While the license key can be specified by hard-coding it in plain text in the compute configuration, this is not recommended. Instead, it is recommended to create a secret using the Databricks CLI and reference the secret in the environment variable.

To create the secret and set the environment variable, perform the following steps.

  1. Follow the steps to install or update the Databricks CLI.

  2. Use the Databricks CLI to create a Databricks-backed secret scope with the name newrelic. For example,

    databricks secrets create-scope newrelic

    NOTE: Be sure to take note of the information in the referenced URL about the MANAGE scope permission and use the correct version of the command.

  3. Use the Databricks CLI to create a secret for the license key in the new scope with the name licenseKey. For example,

    databricks secrets put-secret --json '{
       "scope": "newrelic",
       "key": "licenseKey",
       "string_value": "[YOUR_LICENSE_KEY]"
    }'

To set the custom environment variable named NEW_RELIC_LICENSE_KEY and reference the value from the secret, follow the steps to configure custom environment variables and add the following line after the last entry in the Environment variables field.

NEW_RELIC_LICENSE_KEY={{secrets/newrelic/licenseKey}}

Install the New Relic Infrastructure Agent init script

The cluster_init_infra.sh script automatically installs the latest version of the New Relic Infrastructure Agent on each node of the cluster.

To install the init script, perform the following steps.

  1. Login to your Databricks account and navigate to the desired workspace.
  2. Follow the recommendations for init scripts to store the cluster_init_infra.sh script within your workspace in the recommended manner. For example, if your workspace is enabled for Unity Catalog, you should store the init script in a Unity Catalog volume.
  3. Navigate to the Compute tab and select the desired all-purpose or job compute to open the compute details UI.
  4. Click the button labeled Edit to edit the compute's configuration.
  5. Follow the steps to use the UI to configure a cluster to run an init script and point to the location where you stored the init script in step 2.
  6. If your cluster is not running, click on the button labeled Confirm to save your changes. Then, restart the cluster. If your cluster is already running, click on the button labeled Confirm and restart to save your changes and restart the cluster.

Install the New Relic APM Java Agent init script

The cluster_init_apm.sh script automatically installs the latest version of the New Relic APM Java Agent on each node of the cluster.

To install the init script, perform the same steps as outlined in the Install the New Relic Infrastructure Agent init script section using the cluster_init_apm.sh script instead of the cluster_init_infra.sh script.

Additionally, perform the following steps.

  1. Login to your Databricks account and navigate to the desired workspace.

  2. Navigate to the Compute tab and select the desired all-purpose or job compute to open the compute details UI.

  3. Click the button labeled Edit to edit the compute's configuration.

  4. Follow the steps to configure custom Spark configuration properties and add the following lines after the last entry in the Spark Config field.

    spark.driver.extraJavaOptions -javaagent:/databricks/jars/newrelic-agent.jar
    spark.executor.extraJavaOptions -javaagent:/databricks/jars/newrelic-agent.jar -Dnewrelic.tempdir=/tmp
    
  5. If your cluster is not running, click on the button labeled Confirm to save your changes. Then, restart the cluster. If your cluster is already running, click on the button labeled Confirm and restart to save your changes and restart the cluster.

Viewing your cluster data

With the New Relic Infrastructure Agent init script installed, a host entity will show up for each node in the cluster.

With the New Relic APM Java Agent init script installed, an APM application entity named Databricks Driver will show up for the Spark driver JVM and an APM application entity named Databricks Executor will show up for the executor JVMs. Note that all executor JVMs will report to a single APM application entity. Metrics for a specific executor can be viewed on many pages of the APM UI by selecting the instance from the Instances menu located below the time range selector. On the JVM Metrics page, the JVM metrics for a specific executor can be viewed by selecting an instance from the JVM instances table.

Additionally, both the host entities and the APM entities are tagged with the tags listed below to make it easy to filter down to the entities that make up your cluster using the entity filter bar that is available in many places in the UI.

  • databricksClusterId - The ID of the Databricks cluster
  • databricksClusterName - The name of the Databricks cluster
  • databricksIsDriverNode - true if the entity is on the driver node, otherwise false
  • databricksIsJobCluster - true if the entity is part of a job cluster, otherwise false

Below is an example of using the databricksClusterName to filter down to the host and APM entities for a single cluster using the entity filter bar on the All entities view.

infra and apm cluster filter example

Support

New Relic has open-sourced this project. This project is provided AS-IS WITHOUT WARRANTY OR DEDICATED SUPPORT. Issues and contributions should be reported to the project here on GitHub.

We encourage you to bring your experiences and questions to the Explorers Hub where our community members collaborate on solutions and new ideas.

Privacy

At New Relic we take your privacy and the security of your information seriously, and are committed to protecting your information. We must emphasize the importance of not sharing personal data in public forums, and ask all users to scrub logs and diagnostic information for sensitive information, whether personal, proprietary, or otherwise.

We define “Personal Data” as any information relating to an identified or identifiable individual, including, for example, your name, phone number, post code or zip code, Device ID, IP address, and email address.

For more information, review New Relic’s General Data Privacy Notice.

Contribute

We encourage your contributions to improve this project! Keep in mind that when you submit your pull request, you'll need to sign the CLA via the click-through using CLA-Assistant. You only have to sign the CLA one time per project.

If you have any questions, or to execute our corporate CLA (which is required if your contribution is on behalf of a company), drop us an email at opensource@newrelic.com.

A note about vulnerabilities

As noted in our security policy, New Relic is committed to the privacy and security of our customers and their data. We believe that providing coordinated disclosure by security researchers and engaging with the security community are important means to achieve our security goals.

If you believe you have found a security vulnerability in this project or any of New Relic's products or websites, we welcome and greatly appreciate you reporting it to New Relic through HackerOne.

If you would like to contribute to this project, review these guidelines.

To all contributors, we thank you! Without your contribution, this project would not be what it is today.

License

The New Relic Databricks Integration project is licensed under the Apache 2.0 License.
