From 6185c5645c74f8807c74aa42ad1d1d19b94d6811 Mon Sep 17 00:00:00 2001 From: Marc LeBlanc Date: Tue, 17 Dec 2024 03:23:09 -0700 Subject: [PATCH 1/4] Cleaning up docs for HTTP traces --- docs/admin/config/private-network.mdx | 10 +- docs/admin/config/site_config.mdx | 2 +- .../deploy/docker-compose/configuration.mdx | 229 +++++++++--------- .../deploy/docker-compose/operations.mdx | 36 +-- docs/admin/deploy/kubernetes/configure.mdx | 60 ++--- docs/admin/deploy/kubernetes/index.mdx | 48 ++-- docs/admin/deploy/scale.mdx | 2 +- docs/admin/observability/alerts.mdx | 1 - docs/admin/observability/opentelemetry.mdx | 124 +++++----- docs/admin/observability/tracing.mdx | 147 ++++++----- docs/admin/observability/troubleshooting.mdx | 31 +-- 11 files changed, 312 insertions(+), 378 deletions(-) diff --git a/docs/admin/config/private-network.mdx b/docs/admin/config/private-network.mdx index e744cd132..1ae642665 100644 --- a/docs/admin/config/private-network.mdx +++ b/docs/admin/config/private-network.mdx @@ -1,9 +1,11 @@ # Private network configuration + A **private network** refers to a secure network environment segregated from the public internet, designed to facilitate internal communications and operations within an organization. This network setup restricts external access, enhancing security and control over data flow by limiting exposure to external threats and unauthorized access. -When deploying self-hosted Sourcegraph instances in private networks with specific compliance and policy requirements, additional configuration may be required to ensure all networking features function correctly. The reasons for applying the following configuration options depend on the specific functionality of the Sourcegraph service and the unique network and infrastructure requirements of the organization. 
+When deploying self-hosted Sourcegraph instances in private networks with specific compliance and policy requirements, additional configuration may be required to ensure all networking features function correctly. The reasons for applying the following configuration options depend on the specific functionality of the Sourcegraph service and the unique network and infrastructure requirements of the organization. The following is a list of Sourcegraph services and how and when each initiates outbound connections to external services: + - **executor**: Sourcegraph [Executor](../executors) batch change or precise indexing jobs may need to connect to services hosted within an organization's private network - **frontend**: The frontend service communicates externally when connecting to external [auth providers](../auth), sending [telemetry data](../pings), testing code host connections, and connecting to [externally hosted](../external_services) Sourcegraph services - **gitserver**: Executes git commands against externally hosted [code hosts](../external_services) @@ -12,15 +14,17 @@ The following is a list of Sourcegraph services and how and when each initiates - **worker**: Sourcegraph [Worker](../workers) runs various background jobs that may require establishing connections to services hosted within an organization's private network ## HTTP proxy configuration + All Sourcegraph services respect the conventional `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables for routing Sourcegraph client application HTTP traffic through a proxy server. The steps for configuring proxy environment variables will depend on your Sourcegraph deployment method.
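As a quick illustration of the convention itself (the proxy host, port, and bypass list below are placeholders), these are plain environment variables set on each service's process:

```shell
# Route outbound HTTP(S) traffic through a proxy (placeholder host/port)
export HTTP_PROXY="http://proxy.example.com:8080"
export HTTPS_PROXY="http://proxy.example.com:8080"
# Comma-separated list of hosts and domains that should bypass the proxy
export NO_PROXY="localhost,127.0.0.1,.internal.example.com"
```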
### Kubernetes Helm + Add the proxy environment variables to your Sourcegraph Helm chart [override file](https://github.com/sourcegraph/deploy-sourcegraph-helm/blob/main/charts/sourcegraph/values.yaml): ```yaml executor|frontend|gitserver|migrator|repo-updater|worker: env: - - name: HTTP_PROXY + - name: HTTP_PROXY value: http://proxy.example.com:8080 - name: HTTPS_PROXY value: http://proxy.example.com:8080 @@ -33,7 +37,7 @@ executor|frontend|gitserver|migrator|repo-updater|worker: ## Using private CA root certificates Some organizations maintain a private Certificate Authority (CA) for issuing certificates within their private network. When Sourcegraph connects to a TLS-encrypted service using a self-signed certificate that it does not trust, you will observe an `x509: certificate signed by unknown authority` error message in logs. -In order for Sourcegraph to respect an organization's self-signed certificates, the private CA root certificate(s) will need to be appended to Sourcegraph's trusted CA root certificate list in `/etc/ssl/certs/ca-certificates.crt`. +In order for Sourcegraph to respect an organization's self-signed certificates, the private CA root certificate(s) will need to be appended to Sourcegraph's trusted CA root certificate list in `/etc/ssl/certs/ca-certificates.crt`. ### Configuring sourcegraph-frontend to recognize private CA root certificates The following details the process for setting up the sourcegraph-frontend to acknowledge and trust a private CA root certificate for Sourcegraph instances deployed using [Helm](../deploy/kubernetes/helm). For any other Sourcegraph service that needs to trust an organization's private CA root certificate (including gitserver, repo-updater, or migrator), similar steps will need to be followed.
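One common pattern for getting the certificate into the pod is to ship it in a Kubernetes ConfigMap and then mount it into the frontend container via the chart's volume options. The manifest below is only a sketch — the `private-ca` name and certificate body are placeholders, and the exact Helm values used to mount it vary by chart version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: private-ca
data:
  private-ca.crt: |
    -----BEGIN CERTIFICATE-----
    ...your organization's CA root certificate...
    -----END CERTIFICATE-----
```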
diff --git a/docs/admin/config/site_config.mdx b/docs/admin/config/site_config.mdx index 55bc5e252..b760c010d 100644 --- a/docs/admin/config/site_config.mdx +++ b/docs/admin/config/site_config.mdx @@ -268,7 +268,7 @@ All site configuration options and their default values are shown below. // - { // "debug": true, // "sampling": "all", - // "type": "jaeger", + // "type": "opentelemetry", // Jaeger now uses the OpenTelemetry format, the old jaeger format is deprecated // "urlTemplate": "{{ .ExternalURL }}/-/debug/jaeger/trace/{{ .TraceID }}" // } diff --git a/docs/admin/deploy/docker-compose/configuration.mdx b/docs/admin/deploy/docker-compose/configuration.mdx index 97e224c56..ceaef1cf0 100644 --- a/docs/admin/deploy/docker-compose/configuration.mdx +++ b/docs/admin/deploy/docker-compose/configuration.mdx @@ -1,105 +1,169 @@ # Configuration -> ⚠️ We recommend new users use our [machine image](/admin/) or [script-install](/admin/deploy/single-node/script) instructions, which are easier and offer more flexibility when configuring Sourcegraph. Existing customers can reach out to our Customer Engineering team support@sourcegraph.com if they wish to migrate to these deployment models. +> ⚠️ We recommend using our [machine image](/admin/deploy/machine-images/), which is much easier and offers more flexibility when configuring Sourcegraph. Existing customers can reach out to our Customer Support Engineering team for assistance with migrating. -You can find the default [docker-compose.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml) file inside the deployment repository. +You can find the default base [docker-compose.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml) file inside the [deploy-sourcegraph-docker](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose) repository. 
We strongly recommend using an override file instead of modifying the base docker-compose.yaml file. -If you would like to make changes to the default configurations, we highly recommend you to create a new file called `docker-compose.override.yaml` in the same directory where the base file ([docker-compose.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml)) is located, and make your customizations inside the `docker-compose.override.yaml` file. - ->WARNING: For configuration of Sourcegraph, see Sourcegraph's [configuration](/admin/config/) docs. +To configure your Sourcegraph instance, see Sourcegraph's [configuration](/admin/config/) docs. ## What is an override file? -Docker Compose allows you to customize configuration settings using an override file called `docker-compose.override.yaml`, which allows customizations to persist through upgrades without needing to manage merge conflicts as changes are not made directly to the base `docker-compose.yaml` file. - -When you run the `docker-compose up` command, the override file will be automatically merged over the base [docker-compose.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml) file. +Docker Compose allows you to customize configurations using an override file, e.g. `docker-compose.override.yaml`. Because your changes are made in the override file rather than in the base docker-compose.yaml file, they persist through upgrades without merge conflicts when we update the base file. -The [official Docker Compose docs](https://docs.docker.com/compose/extends/) provide details about override files. +When you run docker compose commands, we recommend listing the compose files in order of precedence: values in later files override conflicting values in earlier ones. In the following example, values in the override file override any conflicting values in the base file.
You can also provide multiple override files in a single command, which can help you manage multiple instances, environments, or test configurations. -## Examples +```bash +docker compose -f docker-compose.yaml -f docker-compose.override.yaml up -d --remove-orphans +``` -In order to make changes to the configuration settings defined in the base file [docker-compose.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml), create an empty `docker-compose.override.yaml` file in the same directory as the [docker-compose.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml) file, using the same version number, and then add the customizations under the `services` field. +See the [Docker Compose](https://docs.docker.com/compose/extends/) docs for details. -### Adjust resources +## Adjust resources -Note that you will only need to list the fragments that you would like to change from the base file. +You only need to specify the services and configurations that you need to override from the base file. ```yaml # docker-compose.override.yaml -version: '2.4' services: gitserver-0: cpus: 8 - mem_limit: '26g' + mem_limit: '32g' ``` -### Add replica endpoints +## Use external databases -When adding a new replica for `gitserver`, `searcher`, `symbols`, and `indexed-search`, you must list the endpoints for each replica individually in order for frontend to communicate with them. +The Docker Compose configuration has its own internal PostgreSQL and Redis databases. -To do that, add or modify the environment variables to all of the sourcegraph-frontend-* services and the sourcegraph-frontend-internal service in the [Docker Compose YAML file](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml). +You can alternatively configure Sourcegraph to [use external services](/admin/external_services/).
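For example, pointing the frontend at an external PostgreSQL server could look roughly like the following sketch. The hostname and credentials are placeholders; see the external services docs for the authoritative list of variables, including those for the codeintel and codeinsights databases:

```yaml
# docker-compose.override.yaml
services:
  sourcegraph-frontend-0:
    environment:
      - PGHOST=postgres.internal.example.com
      - PGPORT=5432
      - PGUSER=sourcegraph
      - PGPASSWORD=YOUR_SECURE_PASSWORD_HERE
      - PGDATABASE=sourcegraph
      - PGSSLMODE=require
```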
-#### version older than 4.5.0 +## Set environment variables -The following configuration in a docker-compose.override.yaml file shows how to list the endpoints for each replica service individually when the replica count for gitserver, searcher, symbols, and indexed-search has been increased to 2. This is done by using the environment variables specified for each service: +Add / override an environment variable on the sourcegraph-frontend-0 service: ```yaml # docker-compose.override.yaml -version: '2.4' services: sourcegraph-frontend-0: environment: - # List all replica endpoints for gitserver - - 'SRC_GIT_SERVERS=gitserver-0:3178 gitserver-1:3178' - # List all replica endpoints for indexed-search/zoekt-webserver - - 'INDEXED_SEARCH_SERVERS=zoekt-webserver-0:6070 zoekt-webserver-1:6070' - # List all replica endpoints for searcher - - 'SEARCHER_URL=http://searcher-0:3181 http://searcher-1:3181' - # List all replica endpoints for symbols - - 'SYMBOLS_URL=http://symbols-0:3184 http://symbols-1:3184' + - EXAMPLE_ENV_VAR=example_value ``` -The above configuration uses the environment variables SRC_GIT_SERVERS, INDEXED_SEARCH_SERVERS, SEARCHER_URL, and SYMBOLS_URL to specify the individual endpoints for each replica service. This is done by listing the hostname and port number for each replica, separated by a space. +See ["Environment variables in Compose"](https://docs.docker.com/compose/environment-variables/) for other ways to pass these environment variables to the relevant services (command line, .env file, etc.). + +## Enable HTTP tracing -#### version 4.5.0 or above +Sourcegraph supports HTTP tracing to help troubleshoot issues. See [Tracing](/admin/observability/tracing) for details. -In version 4.5.0 or above of Sourcegraph, it is possible to update the environment variables in the docker-compose.override.yaml file to automatically generate the endpoints based on the number of replicas provided. 
This eliminates the need to list each replica endpoint individually as in the previous example. +The base docker-compose.yaml file enables the bundled [otel-collector](https://sourcegraph.com/search?q=repo:%5Egithub%5C.com/sourcegraph/deploy-sourcegraph-docker$+file:docker-compose/docker-compose.yaml+content:%22++otel-collector:%22&patternType=keyword) by default, but a tracing backend needs to be deployed or configured to see HTTP traces. + +To enable tracing on your instance, you'll need to either: + +1. Deploy our bundled Jaeger backend, or +2. Configure an external tracing backend + +Once a tracing backend has been deployed, see our [Tracing](/admin/observability/tracing) page for next steps, including required changes to your Site Configuration to enable traces. + +### Deploy the bundled Jaeger + +To deploy the bundled Jaeger web UI to see HTTP trace data, add [Jaeger's docker-compose.yaml override file](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/docker-compose/jaeger/docker-compose.yaml) to your deployment command. + +```bash +docker compose \ + -f docker-compose/docker-compose.yaml \ + -f docker-compose/jaeger/docker-compose.yaml \ + -f docker-compose/docker-compose.override.yaml \ + up -d --remove-orphans +``` + +### Configure an external tracing backend + +The bundled otel-collector can be configured to export HTTP traces to an OTel-compatible backend of your choosing. 
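For reference, a minimal collector configuration that receives OTLP traces from Sourcegraph and forwards them to a hypothetical external OTLP endpoint might look like the following sketch (`tempo.example.com:4317` is a placeholder for your backend's address):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  otlp:
    endpoint: 'tempo.example.com:4317'

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp]
```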
+To customize the otel-collector config file: +- Create a copy of the default config in [otel-collector/config.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/main/otel-collector/config.yaml) +- Follow the [OpenTelemetry collector configuration guidance](/admin/observability/opentelemetry) +- Edit your `docker-compose.override.yaml` file to mount your custom config file to the `otel-collector` container: +```yaml +services: + otel-collector: + command: ['--config', '/etc/otel-collector/config.yaml'] + volumes: + - '~/deploy-docker-compose/otel-collector/custom-config.yaml:/etc/otel-collector/config.yaml' +``` + +## Git configuration + +### Git SSH configuration + +Provide your `gitserver` container with SSH / Git configuration needed to connect to some code hosts, by mounting a directory that contains the needed config files into the `gitserver` container, e.g. + +- `.ssh/config` +- `.ssh/id_rsa.pub` +- `.ssh/id_rsa` +- `.ssh/known_hosts` + +You can also provide other files like `.netrc`, `.gitconfig`, etc. at their respective paths, if needed. + +```yaml +# docker-compose.override.yaml +services: + gitserver-0: + volumes: + - 'gitserver-0:/data/repos' + - '~/sg/volume-maps/gitserver/.ssh:/home/sourcegraph/.ssh' +``` + +> WARNING: The permissions on your SSH / Git configuration must be set to be readable by the user in the `gitserver` container. For example, run `chmod -v -R 600 ~/sg/volume-maps/gitserver/.ssh` in the folder on the host machine. + +### Git HTTP(S) basic username + password authentication + +For code hosts that require basic authentication, the easiest way to provide the username and password is to include them in the clone URL itself, e.g. `https://user:password@example.com/my/repo`. These credentials won't be displayed to non-admin users.
+If you must use a `.netrc` file to store these credentials instead, follow the previous example for mounting SSH configuration, to mount a `.netrc` file from the host to `/home/sourcegraph/.netrc` in the `gitserver` container. + +## Add replicas + +When adding replicas for `gitserver`, `indexed-search`, `searcher`, or `symbols`, you must update the corresponding environment variables (`SRC_GIT_SERVERS`, `INDEXED_SEARCH_SERVERS`, `SEARCHER_URL`, and `SYMBOLS_URL`) on each of the frontend services in your docker-compose.override.yaml file, setting each to the number of replicas for the respective service. Sourcegraph will then automatically infer the endpoints for each replica. ```yaml # docker-compose.override.yaml -version: '2.4' services: + sourcegraph-frontend-0: environment: - # To generate replica endpoints for gitserver - 'SRC_GIT_SERVERS=2' - # To generate replica endpoints for indexed-search/zoekt-webserver - 'INDEXED_SEARCH_SERVERS=2' - # To generate replica endpoints for searcher - 'SEARCHER_URL=1' - # To generate replica endpoints for symbols + - 'SYMBOLS_URL=1' + + sourcegraph-frontend-internal: + environment: + - 'SRC_GIT_SERVERS=2' + - 'INDEXED_SEARCH_SERVERS=2' + - 'SEARCHER_URL=1' - 'SYMBOLS_URL=1' ``` -In the above example, the value of the environment variables `SRC_GIT_SERVERS`, `INDEXED_SEARCH_SERVERS`, `SEARCHER_URL`, and `SYMBOLS_URL` are set to the number of replicas for each respective service. This allows Sourcegraph to automatically generate the endpoints for each replica, eliminating the need to list them individually. This can be a useful feature when working with large numbers of replicas. +## Shard gitserver -### Create multiple gitserver shards +If you find that your gitserver container is performing poorly, you can shard it into multiple containers. This is especially helpful when your Docker Compose host can mount multiple storage volumes, and each gitserver shard can use its own storage IOPS limit.
-Split gitserver across multiple shards: +To split gitserver across multiple shards: ```yaml # docker-compose.override.yaml -version: '2.4' services: # Adjust resources for gitserver-0 # And then create an anchor to share with the replica gitserver-0: &gitserver cpus: 8 - mem_limit: '26g' + mem_limit: '32g' # Create a new service called gitserver-1, # which is an extension of gitserver-0 gitserver-1: - # Extend the original gitserver-0 to get the image values etc + # Extend the original gitserver-0 to reuse most values extends: file: docker-compose.yaml service: gitserver-0 @@ -120,8 +184,6 @@ services: # Set the following environment variables to generate the replica endpoints environment: &env_gitserver - 'SRC_GIT_SERVERS=2' - # IMPORTANT: For version below 4.3.1, you must list the endpoints individually - # - &env_gitserver 'SRC_GIT_SERVERS=gitserver-0:3178 gitserver-1:3178' # Use the same override values as sourcegraph-frontend-0 above sourcegraph-frontend-internal: <<: *frontend @@ -134,95 +196,30 @@ volumes: gitserver-1: ``` -### Disable a service +## Disable a service -You can "disable services" by assigning them to one or more [profiles](https://docs.docker.com/compose/profiles/), so that when running the `docker compose up` command, services assigned to profiles will not be started unless explicitly specified in the command (e.g., `docker compose --profile disabled up`). +You can disable services by assigning them to one or more [profiles](https://docs.docker.com/compose/profiles/), so that when running the `docker compose up` command, services assigned to profiles will not be started unless explicitly specified in the command (e.g., `docker compose --profile disabled up`). 
-For example, when you need to disable the internal codeintel-db in order to use an external database, you can assign `codeintel-db` to a profile called `disabled`: +For example, when you need to disable the bundled databases to use external databases, you can assign the bundled database containers to a profile called `disabled`: ```yaml # docker-compose.override.yaml -version: '2.4' services: codeintel-db: profiles: - disabled ``` -### Enable tracing - -Tracing should be enabled in the `docker-compose.yaml` file by default. +## Expose debug port -If not, you can enable it by setting the environment variable to `SAMPLING_STRATEGIES_FILE=/etc/jaeger/sampling_strategies.json` in the `jaeger` container: +To generate [pprof profiling data](/admin/pprof), you must configure your deployment to expose port 6060 on one of your frontend containers, for example: ```yaml # docker-compose.override.yaml -version: '2.4' -services: - jaeger: - environment: - - 'SAMPLING_STRATEGIES_FILE=/etc/jaeger/sampling_strategies.json' -``` - -### Git configuration - -#### Git SSH configuration - -Provide your `gitserver` instance with your SSH / Git configuration (e.g. `.ssh/config`, `.ssh/id_rsa`, `.ssh/id_rsa.pub`, and `.ssh/known_hosts`. You can also provide other files like `.netrc`, `.gitconfig`, etc. if needed) by mounting a directory that contains this configuration into the `gitserver` container. - -For example, in the `gitserver-0` container configuration in your `docker-compose.yaml` file or `docker-compose.override.yaml`, add the volume listed in the following example, while replacing `~/path/on/host/` with the path on the host machine to the `.ssh` directory: - -```yaml -# docker-compose.override.yaml -version: '2.4' -services: - gitserver-0: - volumes: - - 'gitserver-0:/data/repos' - - '~/path/on/host/.ssh:/home/sourcegraph/.ssh' -``` - -> WARNING: The permissions on your SSH / Git configuration must be set to be readable by the user in the `gitserver` container. 
For example, run `chmod -v -R 600 ~/path/to/.ssh` in the folder on the host machine. - -#### Git HTTP(S) authentication - -The easiest way to specify HTTP(S) authentication for repositories is to include the username and password in the clone URL itself, such as `https://user:password@example.com/my/repo`. These credentials won't be displayed to non-admin users. - -Otherwise, follow the previous steps for mounting SSH configuration to mount a host directory containing the desired `.netrc` file to `/home/sourcegraph/` in the `gitserver` container. - -### Expose debug port - -To [generate pprof profiling data](/admin/pprof), you must configure your deployment to expose port 6060 on one of your frontend containers, for example: - -```yaml -# docker-compose.override.yaml -version: '2.4' services: sourcegraph-frontend-0: ports: - '0.0.0.0:6060:6060' ``` -For specific ports that can be exposed, see the [debug ports section](/admin/pprof#debug-ports) of Sourcegraphs's [generate pprof profiling data](/admin/pprof) docs. - -### Set environment variables - -Add/modify the environment variables to all of the sourcegraph-frontend-* services and the sourcegraph-frontend-internal service in the [Docker Compose YAML file](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml): - -```yaml -# docker-compose.override.yaml -version: '2.4' -services: - sourcegraph-frontend-0: - environment: - - (YOUR CODE) -``` - -See ["Environment variables in Compose"](https://docs.docker.com/compose/environment-variables/) for other ways to pass these environment variables to the relevant services (including from the command line, a .env file, etc.). - - -### Use an external database - -The Docker Compose configuration has its own internal PostgreSQL and Redis databases. - -You can alternatively configure Sourcegraph to [use external services](/admin/external_services/). 
+For specific ports that can be exposed, see the [debug ports](/admin/pprof#debug-ports) section of the [pprof profiling data](/admin/pprof) page. diff --git a/docs/admin/deploy/docker-compose/operations.mdx b/docs/admin/deploy/docker-compose/operations.mdx index 5b51526d2..96ac382b8 100644 --- a/docs/admin/deploy/docker-compose/operations.mdx +++ b/docs/admin/deploy/docker-compose/operations.mdx @@ -1,7 +1,7 @@ # Management Operations -> ⚠️ We recommend new users use our [machine image](/admin/deploy/machine-images/) or [script-install](/admin/deploy/single-node/script) instructions, which are easier and offer more flexibility when configuring Sourcegraph. Existing customers can reach out to our Customer Engineering team support@sourcegraph.com if they wish to migrate to these deployment models. +> ⚠️ We recommend using our [machine image](/admin/deploy/machine-images/), which is much easier and offers more flexibility when configuring Sourcegraph. Existing customers can reach out to our Customer Support Engineering team for assistance with migrating. --- @@ -27,7 +27,7 @@ docker exec -it codeinsights-db psql -U postgres #access codeinsights-db contain The `frontend` container in the `docker-compose.yaml` file will automatically run on startup and migrate the databases if any changes are required, however administrators may wish to migrate their databases before upgrading the rest of the system when working with large databases. Sourcegraph guarantees database backward compatibility to the most recent minor point release so the database can safely be upgraded before the application code. -To execute the database migrations independently, follow the [docker-compose instructions on how to manually run database migrations](/admin/updates/migrator/migrator-operations#docker-compose). Running the `up` (default) command on the `migrator` of the *version you are upgrading to* will apply all migrations required by the next version of Sourcegraph. 
+To execute the database migrations independently, follow the [docker-compose instructions on how to manually run database migrations](/admin/updates/migrator/migrator-operations#docker-compose). Running the `up` (default) command on the `migrator` of the *version you are upgrading to* will apply all migrations required by that version of Sourcegraph. ## Backup and restore @@ -239,35 +239,3 @@ You can monitor the health of a deployment in several ways: - Using [Sourcegraph's built-in observability suite](/admin/observability/), which includes dashboards and alerting for Sourcegraph services. - Using [`docker ps`](https://docs.docker.com/engine/reference/commandline/ps/) to check on the status of containers within the deployment (any tooling designed to work with Docker containers and/or Docker Compose will work too). - This requires direct access to your instance's host machine. - -## OpenTelemetry Collector - -Learn more about Sourcegraph's integrations with the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) in our [OpenTelemetry documentation](/admin/observability/opentelemetry). - -### Configure a tracing backend - -[Tracing](/admin/observability/tracing) export can be configured via the [OpenTelemetry collector](/admin/observability/opentelemetry) deployed by default in all Sourcegraph docker-compose deployments. -To get started, edit the mounted configuration file in `otel-collector/config.yaml` based on the [OpenTelemetry collector configuration guidance](/admin/observability/opentelemetry) and edit your `docker-compose.yaml` file to have the `otel-collector` service use the mounted configuration: - -```yaml -services: - # ... - otel-collector: - # ... 
- command: ['--config', '/etc/otel-collector/config.yaml'] - volumes: - - '/admin/deploy/otel-collector/config.yaml:/etc/otel-collector/config.yaml' -``` - -#### Enable the bundled Jaeger deployment - -Alternatively, you can use the `jaeger` overlay to easily deploy Sourcegraph with some default configuration that exports traces to a standalone Jaeger instance: - -```sh -docker-compose \ - -f docker-compose/docker-compose.yaml \ - -f docker-compose/jaeger/docker-compose.yaml \ - up -``` - -Once a tracing backend has been set up, refer to the [tracing guidance](/admin/observability/tracing) for more details. diff --git a/docs/admin/deploy/kubernetes/configure.mdx b/docs/admin/deploy/kubernetes/configure.mdx index 988ed29d1..f7f97ea30 100644 --- a/docs/admin/deploy/kubernetes/configure.mdx +++ b/docs/admin/deploy/kubernetes/configure.mdx @@ -174,30 +174,20 @@ Following these steps will allow Prometheus to successfully scrape metrics from ## Tracing -Sourcegraph exports traces in OpenTelemetry format. The OpenTelemetry collector, which must be configured as part of the deployment using the [otel component](#deploy-opentelemetry-collector), [collects and exports traces](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/docker-images/opentelemetry-collector/configs/logging.yaml). +Sourcegraph supports HTTP tracing to help troubleshoot issues. See [Tracing](/admin/observability/tracing) for details. -By default, Sourcegraph supports exporting traces to multiple backends including Jaeger. +To enable tracing on your Kustomize instance, you'll need to either: -### Deploy OpenTelemetry Collector +1. Deploy our bundled OpenTelemetry Collector with our bundled Jaeger backend, or +2. 
Deploy our bundled OpenTelemetry Collector and configure an external tracing backend -Include the `otel` component to deploy OpenTelemetry Collector: - -```yaml -# instances/$INSTANCE_NAME/kustomization.yaml - components: - # Deploy OpenTelemetry Collector - - ../../components/monitoring/otel -``` - -Learn more about Sourcegraph's integrations with the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) in our [OpenTelemetry documentation](/admin/observability/opentelemetry). +Once a tracing backend has been deployed, see our [Tracing](/admin/observability/tracing) page for next steps, including required changes to your Site Configuration to enable traces. -### Deploy OpenTelemetry Collector with Jaeger as tracing backend +### Deploy the bundled OpenTelemetry Collector and Jaeger -If you do not have an external backend available for the OpenTelemetry Collector to export tracing data to, you can deploy the Collector with the Jaeger backend to store and view traces using the `tracing component` as described below. +The quickest way to get started with HTTP tracing is by deploying our bundled OTel and Jaeger containers together. -#### Enable the bundled Jaeger deployment - -**Step 1**: Include the `tracing` component to deploy both OpenTelemetry and Jaeger. The component also configures the following services: +Include the `tracing` component to deploy both OpenTelemetry and Jaeger together. 
This component also configures the following services: - `otel-collector` to export to this Jaeger instance - `grafana` to get metrics from this Jaeger instance @@ -209,28 +199,22 @@ If you do not have an external backend available for the OpenTelemetry Collector - ../../components/monitoring/tracing ``` -**Step 2**: In your Site configuration, add the following to: +### Deploy OpenTelemetry Collector to use an external tracing backend + +#### Deploy OpenTelemetry Collector -- sends Sourcegraph traces to OpenTelemetry Collector -- send traces from OpenTelemerty to Jaeger +Include the `otel` component to deploy OpenTelemetry Collector: -```json -{ - "observability.client": { - "openTelemetry": { - "endpoint": "/-/debug/otlp" - } - }, - "observability.tracing": { - "type": "opentelemetry", - "urlTemplate": "{{ .ExternalURL }}/-/debug/jaeger/trace/{{ .TraceID }}" - } -} +```yaml +# instances/$INSTANCE_NAME/kustomization.yaml + components: + # Deploy OpenTelemetry Collector + - ../../components/monitoring/otel ``` -### Configure a tracing backend +#### Configure a tracing backend -Follow these steps to add configure OpenTelementry to use a different backend: +Follow these steps to configure the otel-collector to export traces to an external OTel-compatible backend: 1. Create a subdirectory called 'patches' within the directory of your overlay 2. Copy and paste the [base/otel-collector/otel-collector.ConfigMap.yaml file](https://sourcegraph.com/github.com/sourcegraph/deploy-sourcegraph-k8s@master/-/tree/base/otel-collector/otel-collector.ConfigMap.yaml) to the new [patches subdirectory](/admin/deploy/kubernetes/kustomize/#patches-directory) @@ -247,7 +231,7 @@ Follow these steps to add configure OpenTelementry to use a different backend: The component will update the `command` for the `otel-collector` container to `"--config=/etc/otel-collector/conf/config.yaml"`, which is now pointing to the mounted config. 
-Please refer to [OpenTelemetry](/admin/observability/opentelemetry) for detailed descriptions on how to configure your backend of choice.
+See the [OpenTelemetry](/admin/observability/opentelemetry) page for details on how to configure your backend of choice.

---

@@ -1062,7 +1046,7 @@ To create a kubernetes secret you can use the following command:
kubectl create secret generic pgsql-secret --from-literal=password=YOUR_SECURE_PASSWORD_HERE
```

-Then replace the password in the yaml files it's located in, based on the deployment method you are using.
+Then replace the password in the yaml files where it's located, based on the deployment method you are using. Below is an example Helm deployment file modified to reference this secret.

```yaml
@@ -1082,7 +1066,7 @@ spec:

You can then drop the environment variable `PGPASSWORD` from the default deployment.

-Similar changes will be required for other pods and services, depending on the secret being used. It's recommended to read the [official documentation](https://kubernetes.io/docs/concepts/configuration/secret/) to understand how Kubernetes secrets work.
+Similar changes will be required for other pods and services, depending on the secret being used. It's recommended to read the [official documentation](https://kubernetes.io/docs/concepts/configuration/secret/) to understand how Kubernetes secrets work. 

### External Postgres

diff --git a/docs/admin/deploy/kubernetes/index.mdx b/docs/admin/deploy/kubernetes/index.mdx
index 527f9ef17..b5f364174 100644
--- a/docs/admin/deploy/kubernetes/index.mdx
+++ b/docs/admin/deploy/kubernetes/index.mdx
@@ -338,15 +338,29 @@ An example of a subchart is shown in the [examples/subchart](https://github.com/

More details on how to create and configure a subchart can be found in the [helm documentation](https://helm.sh/docs/chart_template_guide/subcharts_and_globals). 
-### OpenTelemetry Collector
+### Tracing

-Learn more about Sourcegraph's integrations with the [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) in our [OpenTelemetry documentation](/admin/observability/opentelemetry).
+Sourcegraph supports HTTP tracing to help troubleshoot issues. See [Tracing](/admin/observability/tracing) for details.

-#### Configure a tracing backend
+To enable tracing on your Helm instance, you'll need to either:

-Sourcegraph currently supports exporting tracing data to several backends. Refer to [OpenTelemetry](/admin/observability/opentelemetry) for detailed descriptions on how to configure your backend of choice.
+1. Deploy our bundled Jaeger backend, or
+2. Configure an external tracing backend

-You can add the following values in your `override.yaml` to configure trace exporting:
+Once a tracing backend has been deployed, see our [Tracing](/admin/observability/tracing) page for next steps, including required changes to your Site Configuration to enable traces.
+
+#### Enable the bundled Jaeger deployment
+
+Sourcegraph bundles a Jaeger instance, but it is disabled by default. You can enable it by either adding this to your Helm values override file, or by passing the [jaeger/override.yaml](https://github.com/sourcegraph/deploy-sourcegraph-helm/blob/main/charts/sourcegraph/examples/jaeger/override.yaml) file to your Helm upgrade command.
+
+```yaml
+jaeger:
+  enabled: true
+```
+
+#### Configure OpenTelemetry Collector to use an external tracing backend
+
+To configure the bundled otel-collector to export traces to an external OTel-compatible backend, you can customize the otel-collector's config file directly in your Helm values `override.yaml` file:

```yaml
openTelemetry:
@@ -355,12 +369,11 @@ openTelemetry:
    traces:
      exporters:
        ...
-      processors: ...
``` -As an example, to configure the collector to export to an external Jaeger instance, add the following to your [override.yaml](https://github.com/sourcegraph/deploy-sourcegraph-helm/tree/main/charts/sourcegraph/examples/opentelemetry-exporter/override.yaml): +To use an external Jaeger instance, copy and customize the configs from the [opentelemetry-exporter/override.yaml](https://github.com/sourcegraph/deploy-sourcegraph-helm/tree/main/charts/sourcegraph/examples/opentelemetry-exporter/override.yaml) file, and add them to your Helm values override file: ```yaml openTelemetry: @@ -379,9 +392,9 @@ openTelemetry: #### Configure a tracing backend with TLS enabled -If you require a TLS connection to export trace data, you need to first add the certificate data to a Secret. The following snippet demonstrates how you can achieve this: +If you require a TLS connection to export trace data, you need to first add the certificate data to a secret. The following snippet demonstrates how you can achieve this: -Do NOT commit the secret manifest into your Git repository unless you are okay with storing sensitive information in plaintext and your repository is private. +> Do NOT commit the secret manifest into your Git repository unless you are okay with storing sensitive information in plaintext and your repository is private. 
```yaml apiVersion: v1 @@ -393,7 +406,7 @@ data: file.key: "<.key data>" ``` -After applying the secret to your cluster, you can [override](https://github.com/sourcegraph/deploy-sourcegraph-helm/tree/main/charts/sourcegraph/examples/opentelemetry-exporter/override-tls.yaml) the value `openTelemetry.gateway.config.traces.exportersTlsSecretName` to mount the certificate data in the Collector and instruct the exporter to use TLS: +After applying the secret to your cluster, you can use the [opentelemetry-exporter/override-tls.yaml](https://github.com/sourcegraph/deploy-sourcegraph-helm/tree/main/charts/sourcegraph/examples/opentelemetry-exporter/override-tls.yaml) example, and configure the value `openTelemetry.gateway.config.traces.exportersTlsSecretName` in your Helm values override file to mount the certificate data in the otel-collector, and instruct the exporter to use TLS: ```yaml openTelemetry: @@ -414,9 +427,7 @@ openTelemetry: #### Configure trace sampling -Review the [trace sampling documentation](/admin/observability/opentelemetry#sampling-traces) to understand how to configure sampling. - -Add your config to your [override.yaml](https://github.com/sourcegraph/deploy-sourcegraph-helm/tree/main/charts/sourcegraph/examples/opentelemetry-exporter/override-processor.yaml) as follows: +Review the [trace sampling](/admin/observability/opentelemetry#sampling-traces) documentation, and the [opentelemetry-exporter/override-processor.yaml](https://github.com/sourcegraph/deploy-sourcegraph-helm/tree/main/charts/sourcegraph/examples/opentelemetry-exporter/override-processor.yaml) example, then add the configs to your Helm values override file: ```yaml openTelemetry: @@ -429,17 +440,6 @@ openTelemetry: sampling_percentage: 10.0 # (default = 0): Percentage at which traces are sampled; >= 100 samples all traces ``` -#### Enable the bundled Jaeger deployment - -Sourcegraph ships with a bundled Jaeger instance that is disabled by default. 
If you do not wish to make use of an external observability backend, you can enable this instance by adding the following to your overrides:
-
-```yaml
-jaeger:
-  enabled: true
-```
-
-This will also configure the OpenTelemetry Collector to export trace data to this instance. No further configuration is required.
-
 ## Cloud providers guides

 This section is aimed at providing high-level guidance on deploying Sourcegraph via Helm on major Cloud providers. In general, you need the following to get started:
diff --git a/docs/admin/deploy/scale.mdx b/docs/admin/deploy/scale.mdx
index 6c22d673f..0d674b73d 100644
--- a/docs/admin/deploy/scale.mdx
+++ b/docs/admin/deploy/scale.mdx
@@ -276,7 +276,7 @@ A Jaeger instance for end-to-end distributed tracing
| `Factors` | Number of Site Admins |
| `Guideline` | Memory depends on the size of buffers, like the number of traces and the size of the queue for example |

-	The jaeger service does not have to be enabled for Sourcegraph work, however, the ability to troubleshoot the system will be disabled.
+	The Jaeger service is not mandatory for basic Sourcegraph functionality; however, the ability to troubleshoot issues is vastly improved with it.

---

diff --git a/docs/admin/observability/alerts.mdx b/docs/admin/observability/alerts.mdx
index 6c5407162..f8dc74a5c 100644
--- a/docs/admin/observability/alerts.mdx
+++ b/docs/admin/observability/alerts.mdx
@@ -6021,7 +6021,6 @@ Generated query for warning alert: `max((sum(rate(resolve_revision_seconds_sum[5

- View error rates on gitserver and frontend to identify root cause.
- Rollback frontend/gitserver deployment if due to a bad code change.
-- View error logs for `getIndexOptions` via net/trace debug interface. For example click on a `indexed-search-indexer-` on https://sourcegraph.com/-/debug/. Then click on Traces. Replace sourcegraph.com with your instance address. 
- More help interpreting this metric is available in the [dashboards reference](dashboards#zoekt-get-index-options-error-increase).

- **Silence this alert:** If you are aware of this alert and want to silence notifications for it, add the following to your site configuration and set a reminder to re-evaluate the alert:

diff --git a/docs/admin/observability/opentelemetry.mdx b/docs/admin/observability/opentelemetry.mdx
index 069ad2887..5d4750877 100644
--- a/docs/admin/observability/opentelemetry.mdx
+++ b/docs/admin/observability/opentelemetry.mdx
@@ -1,41 +1,41 @@
# OpenTelemetry

->NOTE: This feature is supported on Sourcegraph 4.0 and later.
+> This page is a deep dive into OpenTelemetry and customizing it. To get started with HTTP Tracing, see the [Tracing](/admin/observability/tracing) page.

-> WARNING: Sourcegraph is actively working on implementing [OpenTelemetry](https://opentelemetry.io/) for all observability data. **The first—and currently only—[signal](https://opentelemetry.io/docs/concepts/signals/) to be fully integrated is [tracing](/admin/observability/tracing)**.
+[OpenTelemetry](https://opentelemetry.io/) (OTel) is an industry-standard toolset for handling observability data, e.g. metrics, logs, and traces.

-Sourcegraph exports OpenTelemetry data to a bundled [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) instance.
-This service can be configured to ingest, process, and then export observability data to an observability backend of choice.
-This approach offers a great deal of flexibility.
+To handle this data, Sourcegraph deployments include a bundled [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) (otel-collector) container, which can be configured to ingest, process, and export observability data to a backend of your choice. This approach offers great flexibility.

-## Configuration
+> NOTE: Sourcegraph currently uses OTel for HTTP Traces, and plans to use it for metrics and logs in the future. 
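The ingest, process, and export stages map directly onto sections of the collector's configuration file. The following is a generic, illustrative sketch only; the component names here are common OTel collector components, not Sourcegraph defaults:

```yaml
receivers:        # Ingest: how data enters the collector
  otlp:
    protocols:
      grpc:
processors:       # Process: optional transforms, e.g. batching, sampling, filtering
  batch:
exporters:        # Export: where data is sent
  logging:
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
```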
-Sourcegraph's OpenTelemetry Collector is deployed with a [custom image, `sourcegraph/opentelemetry-collector`](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/tree/docker-images/opentelemetry-collector), and is configured with a configuration YAML file.
-By default, `sourcegraph/opentelemetry-collector` is configured to not do anything with the data it receives, but [exporters to various backends](#exporters) can be configured for each signal we currently support—**currently, only [traces data](#tracing) is supported**.
+For an in-depth explanation of the parts that compose a full collector pipeline, see OpenTelemetry's [documentation](https://opentelemetry.io/docs/collector/configuration/).

-Refer to the [documentation](https://opentelemetry.io/docs/collector/configuration/) for an in-depth explanation of the parts that compose a full collector pipeline.
+## Deployment and Configuration

-For more details on configuring the OpenTelemetry collector for your deployment method, refer to the deployment-specific guidance:
+Sourcegraph's bundled otel-collector is deployed via a Docker image, and is configured via a YAML configuration file. By default, it is not configured to do anything with the data it receives, but [exporters](#exporters) to various backends can be configured.

-- [Kubernetes with Kustomize](/admin/deploy/kubernetes/configure#tracing)
-- [Kubernetes with Helm](/admin/deploy/kubernetes#opentelemetry-collector)
+For details on how to deploy the otel-collector, and where to find its configuration file, refer to the docs page specific to your deployment type:
+
+- [Kubernetes via Helm](/admin/deploy/kubernetes#opentelemetry-collector)
+- [Kubernetes via Kustomize](/admin/deploy/kubernetes/configure#tracing)
- [Docker Compose](/admin/deploy/docker-compose/operations#opentelemetry-collector)

-## Tracing
+## HTTP Tracing Backends

-Sourcegraph traces are exported in OpenTelemetry format to the bundled OpenTelemetry collector. 
-To learn more about Sourcegraph traces in general, refer to our [tracing documentation](/admin/observability/tracing).
+Sourcegraph containers export HTTP traces in OTel format to the bundled otel-collector.
+For more information about HTTP traces, see the [Tracing](/admin/observability/tracing) page.

-`sourcegraph/opentelemetry-collector` includes the following exporters that support traces:
+The bundled otel-collector includes the following exporters, which support HTTP traces in OTel format:

-- [OTLP-compatible backends](#otlp-compatible-backends) (includes services like Honeycomb and Grafana Tempo)
+- [OTLP-compatible backends](#otlp-compatible-backends), e.g. Honeycomb, Grafana Tempo
- [Jaeger](#jaeger)
- [Google Cloud](#google-cloud)

-> NOTE: In case you require an additional exporter from the [`opentelemetry-collector-contrib` repository](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter), please [open an issue](https://github.com/sourcegraph/sourcegraph/issues).
+Basic configuration for each tracing backend type is described below.
+
+> NOTE: If you require an additional exporter from the [opentelemetry-collector-contrib](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter) repository, please contact Sourcegraph Customer Support.

-Basic configuration for each tracing backend type is described below. Note that just adding a backend to the `exporters` block does not enable it—it must also be added to the `service` block. 
-Refer to the next snippet for a basic but complete example, which is the [default out-of-the-box configuration](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/docker-images/opentelemetry-collector/configs/logging.yaml):
+To enable a backend, it must be added to both the `exporters` block and the `service` block:

```yaml
receivers:
@@ -45,7 +45,7 @@ receivers:
      http:

exporters:
-  logging:
+  logging: # Export HTTP traces as log events
    loglevel: warn
    sampling_initial: 5
    sampling_thereafter: 200
@@ -59,17 +59,17 @@ service:
      - logging # The exporter name must be added here to enable it
```

-### Sampling traces
+## Sampling traces

-To reduce the volume of traces being exported, the collector can be configured to apply sampling to the exported traces. Sourcegraph bundles the [probabilistic sampler](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/probabilisticsamplerprocessor) and the [tail sampler](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README) as part of it's default collector container image.
+To reduce the volume of traces exported, the collector can be configured to apply sampling. Sourcegraph includes the [probabilistic](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/probabilisticsamplerprocessor) and [tail](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README) samplers in the bundled collector.

-If enabled, this sampling mechanism will be applied to all traces, regardless if a request was explictly marked as to be traced.
+> NOTE: If sampling is enabled, the sampling mechanism will be applied to all traces, regardless of whether a request was explicitly marked for tracing.

-#### Probabilistic sampler
+### Probabilistic sampler

-The probabilistic sampler hashes Trace IDs and determines whether a trace should be sampled based on this hash. 
Note that semantic convention of tags on a trace take precedence over Trace ID hashing when deciding whether a trace should be sampled or not.
+The probabilistic sampler hashes TraceIDs and determines whether a trace should be sampled based on this hash. Note that semantic-convention tags on a trace take precedence over TraceID hashing when deciding whether a trace should be sampled or not.

-Refer to the next snippet for an example on how to update the configuration to enable sampling using the probabilistic sampler.
+To enable probabilistic sampling, add the following to the `processors` block:

```yaml
exporters:
@@ -92,16 +92,17 @@ service:

The tail sampler samples traces according to policies and the sampling decision of whether a trace should be sampled is determined at the _tail end_ of a pipeline. For more information on the supported policies and other configuration options of the sampler see [tail sampler configuration](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README).

-The sampler waits for a certain amount of spans before making applying the configured policy. Due to it keeping a certain amount of spans in memory the sampler incurs as slight performance cost compared to the Probabilistic sampler. For a better comparison on probabilistic vs tail sampling processors see [here](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README#probabilistic-sampling-processor-compared-to-the-tail-sampling-processor-with-the-probabilistic-policy)
+The sampler waits for a certain number of spans before applying the configured policy. Because it keeps a number of spans in memory, the sampler incurs a slight performance cost compared to the probabilistic sampler.

-Refer to the next snippet for an example on how to update the configuration to enable tail sampling with a particular policy. 
+To understand how the policies are applied, see the open-telemetry code [here](https://sourcegraph.com/github.com/open-telemetry/opentelemetry-collector-contrib@71dd19d2e59cd1f8aa9844461089d5c17efaa0ca/-/blob/processor/tailsamplingprocessor/processor.go?L214). + +To enable tail sampling, and customize the policies, add the following to the `processors` block: ```yaml receivers: # ... exporters: # ... - processors: tail_sampling: # Wait time since the first span of a trace before making a sampling decision @@ -110,13 +111,10 @@ processors: num_traces: 50000 # default value = 50000 # Expected number of new traces (helps in allocating data structures) expected_new_traces_per_sec: 10 # default value = 0 - # Recommended reading to understand how the policies are applied: - # https://sourcegraph.com/github.com/open-telemetry/opentelemetry-collector-contrib@71dd19d2e59cd1f8aa9844461089d5c17efaa0ca/-/blob/processor/tailsamplingprocessor/processor.go?L214 policies: [ { - # If a span contains `sampling_retain: true`, it will always be sampled (not dropped), - # regardless of the probabilistic sampling. + # If a span contains `sampling_retain: true`, it will always be included name: policy-retain, type: string_attribute, string_attribute: {key: sampling.retain, values: ['true']}, @@ -136,51 +134,52 @@ service: processors: [tail_sampling] ``` -### Filtering traces +## Filtering traces -As part of the default container image Sourcegraph bundles the [filter processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/filterprocessor/README). By configuring a pipeline to have a filter processor one is able to include or exclude (depending on configuration!) on whether a trace should be allowed through the pipeline and be exported. - -Refer to the following snippet where a filter processor is configured to only allow traces with the service name "foobar" to continue through the pipeline. 
All other traces that do not have this service name will be dropped. +The bundled otel-collector also includes the [filter processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/filterprocessor/README). The following example only allows traces with the service name "foobar". All other traces will be dropped. ```yaml exporters: # ... - receivers: # ... - processors: - filter/foobar: # the format is / - note a name is NOT required + filter/foobar: # Format is / - name is optional spans: include: - match_type: strict # regexp is also a supported type + match_type: strict # Also supports regex services: - "foobar" service: pipelines: - traces: # pipeline accepts all traces - traces/foobar: # pipeline that only export foobar traces + traces: # This pipeline exports all traces + traces/foobar: # This pipeline only exports traces from the foobar service # ... processors: [filter/foobar] ``` + ## Exporters -Exporters send observability data from OpenTelemetry collector to desired backends. -Each exporter can support one, or several, OpenTelemetry signals. +Exporters send observability data from the otel-collector to the needed backend(s). +Each exporter can support one or more OTel signals. -This section outlines some common configurations for exporters—for more details, refer to the [official OpenTelemetry exporters documentation](https://opentelemetry.io/docs/collector/configuration/#exporters). +This section outlines some common exporter configurations. For details, see OpenTelemetry's [exporters](https://opentelemetry.io/docs/collector/configuration/#exporters) page. -> NOTE: In case you require an additional exporter from the [`opentelemetry-collector-contrib` repository](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter), please [open an issue](https://github.com/sourcegraph/sourcegraph/issues). 
+> NOTE: If you require an additional exporter from the [opentelemetry-collector-contrib](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/exporter) repository, please contact Sourcegraph Customer Support. ### OTLP-compatible backends -Backends compatible with the [OpenTelemetry protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp) include services like [Honeycomb](https://docs.honeycomb.io/getting-data-in/opentelemetry-overview/) and [Grafana Tempo](https://grafana.com/blog/2021/04/13/how-to-send-traces-to-grafana-clouds-tempo-service-with-opentelemetry-collector/). -OTLP-compatible backends typically accept the [OTLP gRPC protocol](#otlp-grpc-backends), but they can also implement the [OTLP HTTP protocol](#otlp-http-backends). +Backends compatible with the [OpenTelemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp) include services such as: + +- [Honeycomb](https://docs.honeycomb.io/getting-data-in/opentelemetry-overview/) +- [Grafana Tempo](https://grafana.com/blog/2021/04/13/how-to-send-traces-to-grafana-clouds-tempo-service-with-opentelemetry-collector/) + +OTLP-compatible backends typically accept the [OTLP gRPC protocol](#otlp-grpc-backends), but may require the [OTLP HTTP protocol](#otlp-http-backends) instead. #### OTLP gRPC backends -Refer to the [`otlp` exporter documentation](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README) for all available options. +Refer to the [otlp exporter](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README) documentation for available options. 
```yaml exporters: @@ -197,7 +196,7 @@ exporters: #### OTLP HTTP backends -Refer to the [`otlphttp` exporter documentation](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter/README) for all available options. +Refer to the [otlphttp exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter/README) documentation for available options. ```yaml exporters: @@ -207,37 +206,34 @@ exporters: ### Jaeger -Refer to the [`jaeger` exporter documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/jaegerexporter/README) for all options. - -Most Sourcegraph deployment methods still ship with an opt-in Jaeger instance—to set this up, follow the relevant deployment guides, which will also set up the appropriate configuration for you: +If you're looking for information about Sourcegraph's bundled Jaeger instance, head back to the [Tracing](/admin/observability/tracing) page to find the instructions for your deployment method. -- [Kubernetes with Kustomize](/admin/deploy/kubernetes/configure#deploy-opentelemetry-collector-with-jaeger-as-tracing-backend) -- [Kubernetes with Helm](/admin/deploy/kubernetes/helm#enable-the-bundled-jaeger-deployment) -- [Docker Compose](/admin/deploy/docker-compose/operations#enable-the-bundled-jaeger-deployment) +Refer to the [jaeger exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/jaegerexporter/README) documentation for options. -If you wish to do additional configuration or connect to your own Jaeger instance, the deployed Collector image is bundled with a [basic configuration with Jaeger exporting](https://sourcegraph.com/github.com/sourcegraph/sourcegraph/-/blob/docker-images/opentelemetry-collector/configs/jaeger.yaml). 
-If this configuration serves your needs, you do not have to provide a separate config—the Collector startup command can be set to `/bin/otelcol-sourcegraph --config=/etc/otel-collector/configs/jaeger.yaml`. Note that this requires the environment variable `$JAEGER_HOST` to be set on the Collector instance (i.e. the container in Kubernetes or Docker Compose):
+If you must use your own Jaeger instance, and the bundled otel-collector's basic configuration with the Jaeger OTel exporter meets your needs, set the otel-collector's startup command to `/bin/otelcol-sourcegraph --config=/etc/otel-collector/configs/jaeger.yaml`. Note that this requires the environment variable `$JAEGER_HOST` to be set on the otel-collector service / container:

```yaml
+# otel-collector config.yaml
exporters:
  jaeger:
    # Default Jaeger gRPC server
    endpoint: "$JAEGER_HOST:14250"
    tls:
      insecure: true
+
+# Deployment environment variables (example; replace the hostname with your Jaeger instance):
+#   JAEGER_HOST=jaeger.example.com
```

### Google Cloud

-Refer to the [`googlecloud` exporter documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/googlecloudexporter/README) for all available options.
-
-If you run Sourcegraph on a GCP workload, all requests will be authenticated automatically. The documentation describes other authentication methods.
+If you run Sourcegraph in GCP and wish to export your HTTP traces to Google Cloud Trace, otel-collector can use project authentication. See the [googlecloud exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/googlecloudexporter/README) documentation for available options. 
```yaml
exporters:
  googlecloud:
    # See docs
-    project: project-name # or fetched from credentials
+    project: project-name # Project name can also be fetched from secrets
    retry_on_failure:
      enabled: false
```
diff --git a/docs/admin/observability/tracing.mdx b/docs/admin/observability/tracing.mdx
index 1759af372..9973b76ce 100644
--- a/docs/admin/observability/tracing.mdx
+++ b/docs/admin/observability/tracing.mdx
@@ -1,104 +1,119 @@
-# Tracing
+# HTTP Tracing

-In site configuration, you can enable tracing globally by configuring a sampling mode in `observability.tracing`.
-There are currently three modes:
+HTTP traces are a powerful debugging tool to help you see how your Sourcegraph requests are processed under the hood, like having X-ray vision into how long each part takes and where errors occur.

-* `"sampling": "selective"` (default) will cause a trace to be recorded only when `trace=1` is present as a URL parameter (though background jobs may still emit traces).
-* `"sampling": "all"` will cause a trace to be recorded on every request.
-* `"sampling": "none"` will disable all tracing.
+To enable HTTP traces on your Sourcegraph instance:

-`"selective"` is the recommended default, because collecting traces on all requests can be quite memory- and network-intensive.
-If you have a large Sourcegraph instance (e.g,. more than 10k repositories), turn this on with caution.
-Note that the policies above are implemented at an application level—to sample all traces, please configure your tracing backend directly.
+1. Deploy and/or configure a tracing backend

-We support the following tracing backend types:
+2. Configure tracing in your Site Configuration settings to match your tracing backend

-* [`"type": "opentelemetry"`](#opentelemetry) (default)
-* [`"type": "jaeger"`](#jaeger)
+## Backends

-In addition, we also export some tracing [via net/trace](#nettrace).
+The quickest way to get started with HTTP tracing is to deploy our bundled Jaeger backend. 
You can also configure an external, OpenTelemetry-compatible backend of your choice. -## How to use traces - -Tracing is a powerful debugging tool that can break down where time is spent over the lifecycle of a -request and help pinpoint the source of high latency or errors. -To get started with using traces, you must first [configure a tracing backend](#tracing-backends). - -We generally follow the following algorithm to root-cause issues with traces: - -1. Reproduce a slower user request (e.g., a search query that takes too long or times out) and acquire a trace: - 1. [Trace a search query](#trace-a-search-query) - 2. [Trace a GraphQL request](#trace-a-graphql-request) -2. Explore the breakdown of the request tree in the UI of your [tracing backend](#tracing-backends), such as Honeycomb or Jaeger. Look for: - 1. items near the leaves that take up a significant portion of the overall request time. - 2. spans that have errors attached to them - 3. [log entries](/admin/observability/logs) that correspond to spans in the trace (using the `TraceId` and `SpanId` fields) -3. Report this information to Sourcegraph (via [issue](https://github.com/sourcegraph/sourcegraph/issues/new) or [reaching out directly](https://about.sourcegraph.com/contact/request-info/)) by screenshotting the relevant trace or sharing the trace JSON. +### Jaeger -### Trace a search query +To deploy our bundled Jaeger backend, follow the instructions for your deployment type: -To trace a search query, run a search on your Sourcegraph instance with the `?trace=1` query parameter. 
-A link to the [exported trace](#tracing-backends) should be show up in the search results:
+- [Kubernetes with Helm](/admin/deploy/kubernetes/helm#enable-the-bundled-jaeger-deployment)
+- [Kubernetes with Kustomize](/admin/deploy/kubernetes/configure#deploy-opentelemetry-collector-with-jaeger-as-tracing-backend)
+- [Docker Compose](/admin/deploy/docker-compose/configuration#enable-http-tracing)

-![link to trace](https://user-images.githubusercontent.com/23356519/184953302-099bcb62-ccdb-4eed-be5d-801b7fe16d97.png)
+Then configure your Site Configuration:

-Note that getting a trace URL requires `urlTemplate` to be configured.
+1. Ensure your `externalURL` is configured
+2. Configure `urlTemplate`
+3. Optionally, configure `observability.client` so that Sourcegraph clients (e.g. the `src` CLI) also report traces

-### Trace a GraphQL request
+```json
+  "externalURL": "https://your-sourcegraph-instance.example.com",
+  "observability.tracing": {
+    "urlTemplate": "{{ .ExternalURL }}/-/debug/jaeger/trace/{{ .TraceID }}"
+  },
+  "observability.client": {
+    "openTelemetry": {
+      "endpoint": "/-/debug/otlp"
+    }
+  },
+```

-To receive a traceID on a GraphQL request, include the header `X-Sourcegraph-Should-Trace: true` with the request.
-The response headers of the response will now include an `x-trace-url` entry, which will have a URL the [exported trace](#tracing-backends).
+Where:

-Note that getting a trace URL requires `urlTemplate` to be configured.
+- `{{ .ExternalURL }}` is the value of the `externalURL` setting in your Sourcegraph instance's Site Configuration
+- `{{ .TraceID }}` is the TraceID which gets generated while processing the request

-## Tracing backends
+Once deployed, the Jaeger web UI will be accessible at `/-/debug/jaeger`.

-Tracing backends can be configured for Sourcegraph to export traces to.
-We support exporting traces via [OpenTelemetry](#opentelemetry) (recommended), or directly to [Jaeger](#jaeger). 
+The Sourcegraph frontend automatically proxies Jaeger's web UI to make it available at `/-/debug/jaeger`. You can proxy your own Jaeger instance instead by configuring the `JAEGER_SERVER_URL` environment variable on the `frontend` containers, and the `QUERY_BASE_PATH='/-/debug/jaeger'` environment variable on your `jaeger` container.

-### OpenTelemetry
+### External OpenTelemetry-Compatible Platforms

-To learn about exporting traces to various backends using OpenTelemetry, review our [OpenTelemetry documentation](/admin/observability/opentelemetry).
-Once configured, you can set up a `urlTemplate` that points to your traces backend, which allows you to use the following variables:
+If you prefer to use an external, OTel-compatible platform, you can configure Sourcegraph to export traces to it instead. See our [OpenTelemetry documentation](/admin/observability/opentelemetry) for further details.

-* `{{ .TraceID }}` is the full trace ID
-* `{{ .ExternalURL }}` is the external URL of your Sourcegraph instance
+Once your OTel backend is set up, configure the `urlTemplate` to link to your tracing backend.

-For example, if you [export your traces to Honeycomb](/admin/observability/opentelemetry#otlp-compatible-backends), your configuration might look like:
+For example, if you [export your traces to Honeycomb](/admin/observability/opentelemetry#otlp-compatible-backends), your Site Configuration may look like:

```json
-{ "observability.tracing": {
-    "type": "opentelemetry",
-    "urlTemplate": "https://ui.honeycomb.io/$ORG/environments/$DATASET/trace?trace_id={{ .TraceID }}"
+  "observability.tracing": {
+    "urlTemplate": "https://ui.honeycomb.io/YOUR-HONEYCOMB-ORG/environments/YOUR-HONEYCOMB-DATASET/trace?trace_id={{ .TraceID }}"
   }
-}
```

-You can test the exporter by [tracing a search query](#trace-a-search-query).
+Where:

-### Jaeger

+- `{{ .TraceID }}` is the TraceID which gets generated while processing the request
+
+## How to use traces
+
+We generally use the following approach when using traces to help root-cause an issue:
+
+1. Reproduce the problematic user request with the `trace=1` parameter in the URL
+2. Get the link to the trace in the tracing backend from the `x-trace-url` response header
+3. Explore the request tree in the tracing backend's UI, and take note of:
+    1. Items near the leaves which take up a significant portion of the overall request time
+    2. Spans which have errors attached to them
+4. Search your Sourcegraph instance [logs](/admin/observability/logs) for events which include the corresponding `TraceId` or `SpanId` values
+5. Include this information in your Sourcegraph support ticket by attaching the trace JSON file and/or screenshots
+
+### Trace a search query
+
+To trace a search query, run a search on your Sourcegraph instance with the `?trace=1` parameter in the URL.
+
+Depending on your Sourcegraph instance version, a link to the exported trace may appear in the UI:

-There are two ways to export traces to Jaeger:

 ![link to trace](https://user-images.githubusercontent.com/23356519/184953302-099bcb62-ccdb-4eed-be5d-801b7fe16d97.png)
+
+### Trace a GraphQL request
+
+To trace a GraphQL request, include an `X-Sourcegraph-Should-Trace: true` header when you send the request.

-1. **Recommended:** Configuring the [OpenTelemetry Collector](/admin/observability/opentelemetry) (`"type": "opentelemetry"` in `observability.tracing`) to [send traces to a Jaeger instance](/admin/observability/opentelemetry#jaeger).
-2. Using the legacy `"type": "jaeger"` configuration in `observability.tracing` to send spans directly to Jaeger.
+The response will include an `x-trace-url` header, which will include a URL to the exported trace.
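As a sketch, assuming the response headers from such a request were saved to a file (the header values below are fabricated placeholders, e.g. captured with `curl -sS -D headers.txt -H "X-Sourcegraph-Should-Trace: true" ...`), the trace link can be extracted like so:

```shell
# Fabricated response headers from a traced GraphQL request, for illustration only
cat > headers.txt <<'EOF'
HTTP/2 200
content-type: application/json
x-trace: 4bf92f3577b34da6a3ce929d0e0e4736
x-trace-url: https://sourcegraph.example.com/-/debug/jaeger/trace/4bf92f3577b34da6a3ce929d0e0e4736
EOF

# Header names are case-insensitive, so match with grep -i
TRACE_URL=$(grep -i '^x-trace-url:' headers.txt | awk '{print $2}')
echo "$TRACE_URL"
# Prints: https://sourcegraph.example.com/-/debug/jaeger/trace/4bf92f3577b34da6a3ce929d0e0e4736
```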
-We strongly recommend using option 1 to use Jaeger, which is supported via opt-in mechanisms for each of our core deployment methods—to learn more, refer to the [Jaeger exporter documentation](/admin/observability/opentelemetry#jaeger). +## Trace Formats -To use option 2 instead, which enables behaviour similar to how Sourcegraph exported traces before Sourcegraph 4.0, [Jaeger client environment variables](https://github.com/jaegertracing/jaeger-client-go#environment-variables) must be set on all services for traces to export to Jaeger correctly using `"observability.tracing": { "type": "jaeger" }`. +As the OTel (OpenTelemetry) HTTP trace format has gained broad industry adoption, we've centralized our support for HTTP traces on the OTel format, whether with our bundled Jaeger, or an external backend of your choice. -A mechanism within Sourcegraph is available to reverse-proxy a Jaeger instance by setting the `JAEGER_SERVER_URL` environment variable on the `frontend` service, which allows you to access Jaeger using `/-/debug/jaeger`. -The Jaeger instance will also need `QUERY_BASE_PATH='/-/debug/jaeger'` to be configured. -Once set up, you can use the following URL template for traces exported to Jaeger: +As Jaeger has also switched to the OTel format, we've removed support for Jaeger's deprecated format. +We've also removed support for Go's net/trace format. + +## Basic sampling modes + +Three basic sampling modes are available in the `observability.tracing` Site Configuration: ```json -{ "observability.tracing": { - // set "type" to "opentelemetry" for option 1, "jaeger" for option 2 - "urlTemplate": "{{ .ExternalURL }}/-/debug/jaeger/trace/{{ .TraceID }}" + "urlTemplate": "{{ .ExternalURL }}/-/debug/jaeger/trace/{{ .TraceID }}", + "sampling": "selective" } -} ``` -You can test the exporter by [tracing a search query](#trace-a-search-query). 
+- `selective` + - Default + - Only exports a trace when the `trace=1` parameter is in the request URL +- `all` + - Exports traces for all requests + - Not recommended, as it can be memory and network intensive, while very few traces are actually needed +- `none` + - Disables tracing diff --git a/docs/admin/observability/troubleshooting.mdx b/docs/admin/observability/troubleshooting.mdx index dd09106b6..79c96225f 100644 --- a/docs/admin/observability/troubleshooting.mdx +++ b/docs/admin/observability/troubleshooting.mdx @@ -109,8 +109,6 @@ find a repro if possible. If that isn't possible, file an issue with the followi components of the request are slow. Remember that many Sourcegraph API requests identify the Jaeger trace ID in the `x-trace` HTTP response header, which makes it easy to look up the trace corresponding to a particular request. - 1. If Jaeger is unavailable or unreliable, you can collect trace data from [the Go net/trace - endpoint](#examine-go-net-trace). 1. Copy the [Sourcegraph configuration](#copy-configuration) to the error report. #### Scenario: the issue is performance-related and there is NOT a consistent reproduction @@ -127,10 +125,8 @@ find a repro if possible. If that isn't possible, try the following: around a certain time, [check the logs](#examine-logs) around that time. 1. If the issue is ongoing or if you know the time during which the issue occurred, [search Jaeger](#collect-a-jaeger-trace) for long-running request traces in the appropriate time window. - 1. If Jaeger is unavailable, you can alternatively use the Go net/trace endpoint. (You will have - to scan the traces for each service to look for slow traces.) 1. If tracing points to a specific service as the source of high latency, [examine the - logs](#examine-logs) and [net/trace info](#examine-go-net-trace) for that service. + logs](#examine-logs) for that service. 
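When tracing points at a specific service, its logs can be narrowed to the trace in question by filtering on the `TraceId` field. A minimal sketch, using a fabricated log file and trace ID (real log lines would come from `docker logs` or `kubectl logs`):

```shell
TRACE_ID="4bf92f3577b34da6a3ce929d0e0e4736"

# Fabricated sample log lines, for illustration only
cat > frontend.log <<EOF
{"SeverityText":"WARN","Body":"slow search request","TraceId":"$TRACE_ID","SpanId":"00f067aa0ba902b7"}
{"SeverityText":"INFO","Body":"unrelated event","TraceId":"ffffffffffffffff","SpanId":"ffffffffffffffff"}
EOF

# Keep only the events that belong to the trace under investigation
grep "\"TraceId\":\"$TRACE_ID\"" frontend.log
# Prints only the WARN line carrying the matching TraceId
```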
#### Scenario: multiple actions are slow or Sourcegraph as a whole feels sluggish @@ -164,8 +160,6 @@ If Sourcegraph feels sluggish overall, the likely culprit is resource allocation when loading search results. This can be a problem when dealing with large monorepos. 1. If it is unclear which service is underallocated, [examine Jaeger](#collect-a-jaeger-trace) to identify long-running traces and see which services take up the most time. - 1. Alternatively, you can use the [Go net/trace endpoint](#examine-go-net-trace) to pull trace - data. #### Scenario: Prometheus scraping metrics outside Sourcegraph Kubernetes namespace @@ -345,29 +339,6 @@ If you are using Kubernetes, * You can tail logs for all pods associated with a given deployment: `kubectl logs -f deployment/sourcegraph-frontend --container=frontend --since=10m` - -### Examine Go net/trace - -Each core service has an endpoint which displays traces using Go's -[net/trace](https://pkg.go.dev/golang.org/x/net/trace) package. - -To access this data, - -1. First ensure you are logged in as a site admin. -1. Go to the URL path `/-/debug`. This page should show a list of links with the names of each core - service (e.g., `frontend`, `gitserver`, etc.) -1. Click on the service you'd like to examine. -1. Click "Requests`. This brings you to a page where you can view traces for that service. - * You can filter to traces by duration or error state. - * You can show histograms of durations by minute, hour, or in total (since the process started) - -On older versions of Sourcegraph on Kubernetes, the `/-/debug` URL path may be inaccessible. If this -is the case, you'll need to forward port 6060 on the main container of a given pod to access its -traces. For example, to access to traces of the first gitserver shard, - -1. `kubectl port-forward gitserver-0 6060` -1. Go to `http://localhost:6060` in your browser, and click on "Requests". 
-
 ### Copy configuration

Go to the URL path `/site-admin/report-bug` to obtain an all-in-one text box of all Sourcegraph
From 1576b508090a6572507198dc406d29d0a34fe6e5 Mon Sep 17 00:00:00 2001
From: Marc LeBlanc
Date: Tue, 17 Dec 2024 11:44:22 -0700
Subject: [PATCH 2/4] Fixing email address format

---
 docs/admin/deploy/docker-compose/configuration.mdx | 2 +-
 docs/admin/deploy/docker-compose/operations.mdx | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/admin/deploy/docker-compose/configuration.mdx b/docs/admin/deploy/docker-compose/configuration.mdx
index ceaef1cf0..2ff227f05 100644
--- a/docs/admin/deploy/docker-compose/configuration.mdx
+++ b/docs/admin/deploy/docker-compose/configuration.mdx
@@ -1,6 +1,6 @@
 # Configuration

-> ⚠️ We recommend using our [machine image](/admin/deploy/machine-images/), which is much easier and offers more flexibility when configuring Sourcegraph. Existing customers can reach out to our Customer Support Engineering team for assistance with migrating.
+> ⚠️ We recommend using our [machine image](/admin/deploy/machine-images/), which is much easier and offers more flexibility when configuring Sourcegraph. Existing customers can reach out to our Customer Support Engineering team at [support@sourcegraph.com](mailto:support@sourcegraph.com) for assistance with migrating.

 You can find the default base [docker-compose.yaml](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose/docker-compose.yaml) file inside the [deploy-sourcegraph-docker](https://github.com/sourcegraph/deploy-sourcegraph-docker/blob/master/docker-compose) repository. We strongly recommend using an override file, instead of modifying the base docker-compose.yaml file.
diff --git a/docs/admin/deploy/docker-compose/operations.mdx b/docs/admin/deploy/docker-compose/operations.mdx
index 96ac382b8..2b16e509b 100644
--- a/docs/admin/deploy/docker-compose/operations.mdx
+++ b/docs/admin/deploy/docker-compose/operations.mdx
@@ -1,7 +1,7 @@
 # Management Operations

-> ⚠️ We recommend using our [machine image](/admin/deploy/machine-images/), which is much easier and offers more flexibility when configuring Sourcegraph. Existing customers can reach out to our Customer Support Engineering team for assistance with migrating.
+> ⚠️ We recommend using our [machine image](/admin/deploy/machine-images/), which is much easier and offers more flexibility when configuring Sourcegraph. Existing customers can reach out to our Customer Support Engineering team at [support@sourcegraph.com](mailto:support@sourcegraph.com) for assistance with migrating.

 ---

From f456ca46702ba8d8fc16d447c4e245d15fa70b0d Mon Sep 17 00:00:00 2001
From: Marc LeBlanc
Date: Tue, 17 Dec 2024 13:27:20 -0700
Subject: [PATCH 3/4] Fixing issues for this PR, remaining issues to be resolved in next PR

---
 docs/admin/deploy/docker-compose/operations.mdx | 2 +-
 docs/admin/deploy/kubernetes/index.mdx | 8 +++++---
 docs/admin/observability/opentelemetry.mdx | 2 +-
 3 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/docs/admin/deploy/docker-compose/operations.mdx b/docs/admin/deploy/docker-compose/operations.mdx
index 2b16e509b..54ba1aa00 100644
--- a/docs/admin/deploy/docker-compose/operations.mdx
+++ b/docs/admin/deploy/docker-compose/operations.mdx
@@ -27,7 +27,7 @@ docker exec -it codeinsights-db psql -U postgres #access codeinsights-db contain

 The `frontend` container in the `docker-compose.yaml` file will automatically run on startup and migrate the databases if any changes are required, however administrators may wish to migrate their databases before upgrading the rest of the system when working with large databases.
Sourcegraph guarantees database backward compatibility to the most recent minor point release so the database can safely be upgraded before the application code.
-To execute the database migrations independently, follow the [docker-compose instructions on how to manually run database migrations](/admin/updates/migrator/migrator-operations#docker-compose). Running the `up` (default) command on the `migrator` of the *version you are upgrading to* will apply all migrations required by that version of Sourcegraph.
+To execute the database migrations independently, follow the [docker-compose instructions on how to manually run database migrations](/admin/updates/migrator/migrator-operations#docker-compose). Running the `up` (default) command on the `migrator` of the *version you are upgrading to* will apply all migrations required by the next version of Sourcegraph.

 ## Backup and restore

diff --git a/docs/admin/deploy/kubernetes/index.mdx b/docs/admin/deploy/kubernetes/index.mdx
index b5f364174..f7eb3c910 100644
--- a/docs/admin/deploy/kubernetes/index.mdx
+++ b/docs/admin/deploy/kubernetes/index.mdx
@@ -360,7 +360,9 @@ jaeger:

 #### Configure OpenTelemetry Collector to use an external tracing backend

-To configure the bundled otel-collector to export traces to an external OTel-compatible backend, you you can customize the otel-collector's config file directly in your Helm values `override.yaml` file:
+To configure the bundled otel-collector to export traces to an external OTel-compatible backend, you can customize the otel-collector's config file directly in your Helm values `override.yaml` file.
+
+For the specific configurations to set, see our [OpenTelemetry](/admin/observability/opentelemetry) page.

 ```yaml
 openTelemetry:
@@ -368,9 +370,9 @@ openTelemetry:
   config:
     traces:
       exporters:
-      ...
+      # Your exporter configuration here
      processors:
-      ...
+ # Your processor configuration here ``` To use an external Jaeger instance, copy and customize the configs from the [opentelemetry-exporter/override.yaml](https://github.com/sourcegraph/deploy-sourcegraph-helm/tree/main/charts/sourcegraph/examples/opentelemetry-exporter/override.yaml) file, and add them to your Helm values override file: diff --git a/docs/admin/observability/opentelemetry.mdx b/docs/admin/observability/opentelemetry.mdx index 5d4750877..ca4db075b 100644 --- a/docs/admin/observability/opentelemetry.mdx +++ b/docs/admin/observability/opentelemetry.mdx @@ -6,7 +6,7 @@ To handle this data, Sourcegraph deployments include a bundled [OpenTelemetry Collector](https://opentelemetry.io/docs/collector/) (otel-collector) container, which can be configured to ingest, process, and export observability data to a backend of your choice. This approach offers great flexibility. -> NOTE: Sourcegraph currently uses OTel for HTTP Traces, and plans to use it for metrics and logs in the future. +> NOTE: Sourcegraph currently uses OTel for HTTP Traces, and may use it for metrics and logs in the future. For an in-depth explanation of the parts that compose a full collector pipeline, see OpenTelemetry's [documentation](https://opentelemetry.io/docs/collector/configuration/). 
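For illustration only, a filled-in override might export traces over OTLP/gRPC. The `otlp` exporter and `batch` processor shown below are common otel-collector components, but the endpoint is a placeholder and the exact keys you need depend on your backend and chart version; treat this as a sketch and consult the OpenTelemetry page referenced above:

```yaml
openTelemetry:
  gateway:
    config:
      traces:
        exporters:
          otlp:
            endpoint: "otel-backend.example.com:4317"
        processors:
          batch:
            timeout: 5s
```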
From 2387515b7f7969fedccc2a9ad791bf1bf24546e7 Mon Sep 17 00:00:00 2001 From: Marc LeBlanc Date: Tue, 17 Dec 2024 13:56:16 -0700 Subject: [PATCH 4/4] Adding title to Docker Compose page --- docs/admin/deploy/docker-compose/index.mdx | 2 ++ 1 file changed, 2 insertions(+) diff --git a/docs/admin/deploy/docker-compose/index.mdx b/docs/admin/deploy/docker-compose/index.mdx index 73024e4e3..da6c41673 100644 --- a/docs/admin/deploy/docker-compose/index.mdx +++ b/docs/admin/deploy/docker-compose/index.mdx @@ -1,3 +1,5 @@ +# Docker Compose + Setting up Docker applications with [multiple containers](https://www.docker.com/resources/what-container) like Sourcegraph using Docker Compose allows us to start all the applications with a single command. It also makes configuring the applications easier through updating the docker-compose.yaml and docker-compose.override.yaml files. Please see the [official Docker Compose docs](https://docs.docker.com/compose/) to learn more about Docker Compose. This guide will take you through how to install Sourcegraph with Docker Compose on a server, which could be the local machine, a server on a local network, or cloud-hosted server. You can also follow one of the available *cloud-specific guides* listed below to prepare and install Sourcegraph on a supported cloud environment: