Merged
4 changes: 2 additions & 2 deletions docs/admin/deploy/kubernetes/azure.mdx
Original file line number Diff line number Diff line change
@@ -1,7 +1,7 @@
# Sourcegraph with Kubernetes on Azure

> WARNING: This guide applies exclusively to a Kubernetes deployment **without** Helm.
> If you have not deployed Sourcegraph yet, it is higly recommended to use Helm as it simplifies the configuration and greatly simplifies the later upgrade process. See our guidance on [using Helm to deploy to Azure AKS](/admin/deploy/kubernetes#configure-sourcegraph-on-azure-managed-kubernetes-service-aks).
> If you have not deployed Sourcegraph yet, it is highly recommended to use Helm as it simplifies the configuration and greatly simplifies the later upgrade process. See our guidance on [using Helm to deploy to Azure AKS](/admin/deploy/kubernetes#configure-sourcegraph-on-azure-managed-kubernetes-service-aks).

Install the [Azure CLI tool](https://docs.microsoft.com/en-us/cli/azure/install-azure-cli?view=azure-cli-latest) and log in:

@@ -63,5 +63,5 @@ az aks browse --resource-group sourcegraphResourceGroup --name sourcegraphCluste
Set up a load balancer to make the main web server accessible over the network to external users:

```
kubectl expose deployment sourcegraph-frontend --type=LoadBalancer --name=sourcegraphloadbalancer --port=80 --target-port=3080
kubectl expose deployment sourcegraph-frontend --type=LoadBalancer --name=sourcegraph-load-balancer --port=80 --target-port=3080
```
2 changes: 1 addition & 1 deletion docs/admin/deploy/kubernetes/configure.mdx
@@ -1137,7 +1137,7 @@ Sourcegraph will clone repositories using SSH credentials when the `id_rsa` and

To mount the files through Kustomize:

**Step 1:** Copy the required files to the `configs` folder at the same level as your overylay's kustomization.yaml file
**Step 1:** Copy the required files to the `configs` folder at the same level as your overlay's kustomization.yaml file

**Step 2:** Include the following in your overlay to [generate secrets](https://kubernetes.io/docs/tasks/configmap-secret/managing-secret-using-kustomize/) that base64-encode the values in those files
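
A minimal sketch of such a secretGenerator, assuming the secret is named `gitserver-ssh` and the files from Step 1 were copied into `configs/`:

```yaml
# instances/my-sourcegraph/kustomization.yaml (sketch; secret name and file names assumed)
secretGenerator:
  - name: gitserver-ssh
    files:
      - configs/id_rsa
      - configs/known_hosts
```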

10 changes: 5 additions & 5 deletions docs/admin/deploy/kubernetes/index.mdx
@@ -156,12 +156,12 @@ Although not recommended, credentials can also be configured directly in the hel

```yaml
pgsql:
enabled: false # disable internal pgsql database
enabled: false # Disable internal pgsql database
auth:
database: "customdb"
host: pgsql.database.company.com # external pgsql host
user: "newuser"
password: "newpassword"
database: "custom-db"
host: pgsql.database.company.com # External pgsql host
user: "new-user"
password: "new-password"
port: "5432"
```
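
Rather than inlining plaintext values, the same credentials can be kept in a Kubernetes Secret; a generic sketch (Secret name and key layout assumed):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sourcegraph-pgsql-credentials  # name assumed
type: Opaque
stringData:
  database: custom-db
  host: pgsql.database.company.com
  user: new-user
  password: new-password
  port: "5432"
```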

2 changes: 1 addition & 1 deletion docs/admin/deploy/kubernetes/kustomize.mdx
@@ -85,7 +85,7 @@ A storage class must be created and configured before deploying Sourcegraph. SSD

#### Option 1: Create a new storage class

We recommend using a preconfigured storage class component for your cloud provider if you can create cluster-wide resources:
We recommend using a pre-configured storage class component for your cloud provider if you can create cluster-wide resources:

```yaml
# instances/my-sourcegraph/kustomization.yaml
33 changes: 16 additions & 17 deletions docs/admin/observability/opentelemetry.mdx
@@ -16,9 +16,9 @@ Sourcegraph's bundled otel-collector is deployed via Docker image, and is config

For details on how to deploy the otel-collector, and where to find its configuration file, refer to the docs page specific to your deployment type:

- [Kubernetes via Helm](/admin/deploy/kubernetes#opentelemetry-collector)
- [Kubernetes via Kustomize](/admin/deploy/kubernetes/configure#tracing)
- [Docker Compose](/admin/deploy/docker-compose/operations#opentelemetry-collector)
- [Kubernetes with Helm](/admin/deploy/kubernetes#configure-opentelemetry-collector-to-use-an-external-tracing-backend)
- [Kubernetes with Kustomize](/admin/deploy/kubernetes/configure#deploy-opentelemetry-collector-to-use-an-external-tracing-backend)
- [Docker Compose](/admin/deploy/docker-compose/configuration#configure-an-external-tracing-backend)

## HTTP Tracing Backends

@@ -61,7 +61,7 @@ service:

## Sampling traces

To reduce the volume of traces exported, the collector can be configured to apply sampling. Sourcegraph includes the [probabilistic](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/probabilisticsamplerprocessor) and [tail](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README) samplers in the bundled collector.
To reduce the volume of traces exported, the collector can be configured to apply sampling. Sourcegraph includes the [probabilistic](https://github.com/open-telemetry/opentelemetry-collector-contrib/tree/main/processor/probabilisticsamplerprocessor) and [tail](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md) samplers in the bundled collector.

> NOTE: If sampling is enabled, the sampling mechanism is applied to all traces, regardless of whether a request explicitly asked to be traced.
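
For instance, a minimal sketch enabling the probabilistic sampler in the collector configuration (the percentage is arbitrary):

```yaml
processors:
  probabilistic_sampler:
    # Keep roughly 10% of traces; adjust to taste.
    sampling_percentage: 10
```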

@@ -90,7 +90,7 @@ service:

### Tail sampling

The tail sampler samples traces according to policies and the sampling decision of whether a trace should be sampled is determined at the _tail end_ of a pipeline. For more information on the supported policies and other configuration options of the sampler see [tail sampler configuration](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README).
The tail sampler samples traces according to policies, and the decision of whether a trace should be sampled is made at the _tail end_ of a pipeline. For more information on the supported policies and other configuration options of the sampler, see [tail sampler configuration](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md).

The sampler waits for a certain number of spans before applying the configured policy. Because it keeps these spans in memory, the sampler incurs a slight performance cost compared to the probabilistic sampler.

@@ -121,7 +121,7 @@ processors:
},
{
# Only keep 10% of the traces.
name: policy-probalistic,
name: policy-probabilistic,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
@@ -136,7 +136,7 @@ service:

## Filtering traces

The bundled otel-collector also includes the [filter processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/filterprocessor/README). The following example only allows traces with the service name "foobar". All other traces will be dropped.
The bundled otel-collector also includes the [filter processor](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/filterprocessor/README.md). The following example only allows traces with the service name "foobar". All other traces will be dropped.

```yaml
exporters:
@@ -170,7 +170,7 @@ This section outlines some common exporter configurations. For details, see Open

### OTLP-compatible backends

Backends compatible with the [OpenTelemetry Protocol (OTLP)](https://github.com/open-telemetry/opentelemetry-specification/blob/main/specification/protocol/otlp) include services such as:
Backends compatible with the [OpenTelemetry Protocol (OTLP)](https://opentelemetry.io/docs/specs/otlp/) include services such as:

- [Honeycomb](https://docs.honeycomb.io/getting-data-in/opentelemetry-overview/)
- [Grafana Tempo](https://grafana.com/blog/2021/04/13/how-to-send-traces-to-grafana-clouds-tempo-service-with-opentelemetry-collector/)
@@ -179,24 +179,24 @@ OTLP-compatible backends typically accept the [OTLP gRPC protocol](#otlp-grpc-ba

#### OTLP gRPC backends

Refer to the [otlp exporter](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README) documentation for available options.
Refer to the [otlp exporter](https://github.com/open-telemetry/opentelemetry-collector/blob/main/exporter/otlpexporter/README.md) documentation for available options.

```yaml
exporters:
otlp:
endpoint: otelcol2:4317
endpoint: secure-otel-collector:4317
tls:
cert_file: file.cert
key_file: file.key
otlp/2:
endpoint: otelcol2:4317
endpoint: insecure-otel-collector:4317
tls:
insecure: true
```
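
An exporter only takes effect once it is listed in a pipeline; a sketch assuming the two exporters above plus the collector's stock `otlp` receiver and `batch` processor:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp, otlp/2]
```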

#### OTLP HTTP backends

Refer to the [otlphttp exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter/README) documentation for available options.
Refer to the [otlphttp exporter](https://github.com/open-telemetry/opentelemetry-collector/tree/main/exporter/otlphttpexporter/README.md) documentation for available options.

```yaml
exporters:
@@ -208,7 +208,7 @@ exporters:

If you're looking for information about Sourcegraph's bundled Jaeger instance, head back to the [Tracing](/admin/observability/tracing) page to find the instructions for your deployment method.

Refer to the [jaeger exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/jaegerexporter/README) documentation for options.
Refer to the [Jaeger](https://opentelemetry.io/docs/languages/js/exporters/#jaeger) documentation for options.

If you must use your own Jaeger instance, and if the bundled otel-collector's basic configuration with the Jaeger OTel exporter enabled meets your needs, configure the otel-collector's startup command to `/bin/otelcol-sourcegraph --config=/etc/otel-collector/configs/jaeger.yaml`. Note that this requires the environment variable `$JAEGER_HOST` to be set on the otel-collector service / container:

@@ -220,14 +220,13 @@ exporters:
endpoint: "$JAEGER_HOST:14250"
tls:
insecure: true

# Deployment environment variables:

```

The Sourcegraph frontend automatically proxies Jaeger's web UI to make it available at `/-/debug/jaeger`. You can proxy your own Jaeger instance instead by configuring the `JAEGER_SERVER_URL` environment variable on the `frontend` containers, and the `QUERY_BASE_PATH='/-/debug/jaeger'` environment variable on your `jaeger` container.
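
In a Docker Compose deployment, for example, that proxy configuration might be sketched as follows — the service names here are assumptions:

```yaml
# docker-compose.override.yaml (sketch; service names assumed)
services:
  sourcegraph-frontend-0:
    environment:
      # Point the frontend at your own Jaeger query service
      - JAEGER_SERVER_URL=http://jaeger:16686
  jaeger:
    environment:
      # Serve the Jaeger UI under the proxied base path
      - QUERY_BASE_PATH=/-/debug/jaeger
```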

### Google Cloud

If you run Sourcegraph in GCP and wish to export your HTTP traces to Google Cloud Trace, otel-collector can use project authentication. See the [googlecloud exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/googlecloudexporter/README) documentation for available options.
If you run Sourcegraph in GCP and wish to export your HTTP traces to Google Cloud Trace, otel-collector can use project authentication. See the [Google Cloud Exporter](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/googlecloudexporter/README.md) documentation for available options.

```yaml
exporters:
28 changes: 17 additions & 11 deletions docs/admin/observability/tracing.mdx
@@ -16,18 +16,18 @@ The quickest way to get started with HTTP tracing is to deploy our bundled Jaege

To deploy our bundled Jaeger backend, follow the instructions for your deployment type:

- [Kubernetes with Helm](/admin/deploy/kubernetes/helm#enable-the-bundled-jaeger-deployment)
- [Kubernetes with Kustomize](/admin/deploy/kubernetes/configure#deploy-opentelemetry-collector-with-jaeger-as-tracing-backend)
- [Docker Compose](/admin/deploy/docker-compose/configuration#enable-http-tracing)
- [Kubernetes with Helm](/admin/deploy/kubernetes#enable-the-bundled-jaeger-deployment)
- [Kubernetes with Kustomize](/admin/deploy/kubernetes/configure#deploy-the-bundled-opentelemetry-collector-and-jaeger)
- [Docker Compose](/admin/deploy/docker-compose/configuration#deploy-the-bundled-jaeger)

Then configure your Site Configuration:

1. Ensure your `externalURL` is configured
2. Configure `urlTemplate`
2. Configure `observability.tracing` > `urlTemplate`
3. Optionally, configure `observability.client` so that Sourcegraph clients (e.g. the `src` CLI) also report traces

```json
"externalURL": "https://your-sourcegraph-instance.example.com",
"externalURL": "https://sourcegraph.example.com",
"observability.tracing": {
"urlTemplate": "{{ .ExternalURL }}/-/debug/jaeger/trace/{{ .TraceID }}"
},
@@ -43,22 +43,28 @@ Where:
- `{{ .ExternalURL }}` is the value of the `externalURL` setting in your Sourcegraph instance's Site Configuration
- `{{ .TraceID }}` is the trace ID generated while processing the request

Once deployed, the Jaeger web UI will be accessible at `/-/debug/jaeger`.

The Sourcegraph frontend automatically proxies Jaeger's web UI to make it available at `/-/debug/jaeger`. You can proxy your own Jaeger instance instead by configuring the `JAEGER_SERVER_URL` environment variable on the `frontend` containers, and the `QUERY_BASE_PATH='/-/debug/jaeger'` environment variable on your `jaeger` container.
Once deployed, the Jaeger web UI will be accessible at `/-/debug/jaeger`.

### External OpenTelemetry-Compatible Platforms

If you prefer to use an external, OTel-compatible platform, you can configure Sourcegraph to export traces to it instead. See our [OpenTelemetry documentation](/admin/observability/opentelemetry) for further details.

Once your OTel backend is configured, configure the `urlTemplate` to link to your tracing backend.
Then configure your Site Configuration:

1. Configure `observability.tracing` > `urlTemplate`
2. Optionally, configure `observability.client` so that Sourcegraph clients (e.g. the `src` CLI) also report traces

For example, if you [export your traces to Honeycomb](/admin/observability/opentelemetry#otlp-compatible-backends), your Site Configuration may look like:
For example, if you export your traces to [Honeycomb](/admin/observability/opentelemetry#otlp-compatible-backends), your Site Configuration may look like:

```json
"observability.tracing": {
"urlTemplate": "https://ui.honeycomb.io/YOUR-HONEYCOMB-ORG/environments/YOUR-HONEYCOMB-DATASET/trace?trace_id={{ .TraceID }}"
}
},
"observability.client": {
"openTelemetry": {
"endpoint": "/-/debug/otlp"
}
},
```

Where: