From 9b4a35db17d7972597806bb2bd5f72be40b04113 Mon Sep 17 00:00:00 2001
From: Ivana Kellyer
Date: Thu, 16 Oct 2025 09:46:51 +0200
Subject: [PATCH] chore(python): Remove 3.x pages

---
 .../python/configuration/sampling__v3.x.mdx   | 127 ------
 .../integrations/aiohttp/index__v3.x.mdx      | 118 ------
 .../python/integrations/asgi/index__v3.x.mdx  | 133 -------
 .../integrations/celery/index__v3.x.mdx       | 266 -------------
 .../python/integrations/rq/index__v3.x.mdx    | 220 ----------
 .../integrations/tornado/index__v3.x.mdx      |  85 ----
 .../python/integrations/wsgi/index__v3.x.mdx  | 120 ------
 .../platforms/python/migration/2.x-to-3.x.mdx | 375 ------------------
 .../configure-sampling/index__v3.x.mdx        | 337 ----------------
 .../tracing/span-lifecycle/index__v3.x.mdx    | 247 ------------
 .../tracing/span-metrics/index__v3.x.mdx      |  99 -----
 .../tracing/troubleshooting/index__v3.x.mdx   |  95 -----
 .../python/troubleshooting__v3.x.mdx          | 219 ----------
 13 files changed, 2441 deletions(-)
 delete mode 100644 docs/platforms/python/configuration/sampling__v3.x.mdx
 delete mode 100644 docs/platforms/python/integrations/aiohttp/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/integrations/asgi/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/integrations/celery/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/integrations/rq/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/integrations/tornado/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/integrations/wsgi/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/migration/2.x-to-3.x.mdx
 delete mode 100644 docs/platforms/python/tracing/configure-sampling/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/tracing/span-lifecycle/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/tracing/span-metrics/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/tracing/troubleshooting/index__v3.x.mdx
 delete mode 100644 docs/platforms/python/troubleshooting__v3.x.mdx

diff --git a/docs/platforms/python/configuration/sampling__v3.x.mdx b/docs/platforms/python/configuration/sampling__v3.x.mdx
deleted file mode 100644
index 38eb9fa7fe878..0000000000000
--- a/docs/platforms/python/configuration/sampling__v3.x.mdx
+++ /dev/null
@@ -1,127 +0,0 @@
---
title: Sampling
description: "Learn how to configure the volume of error and transaction events sent to Sentry."
sidebar_order: 60
---

Adding Sentry to your app gives you a great deal of valuable information about errors and performance you wouldn't otherwise get. And lots of information is good -- as long as it's the right information, at a reasonable volume.

## Sampling Error Events

To send a representative sample of your errors to Sentry, set the `sample_rate` option in your SDK configuration to a number between `0.0` (0% of errors sent) and `1.0` (100% of errors sent). This is a static rate, which will apply equally to all errors. For example, to sample 25% of your errors, set `sample_rate=0.25`.

The error sample rate defaults to `1.0`, meaning all errors are sent to Sentry.

Changing the error sample rate requires re-deployment. In addition, setting an SDK sample rate limits visibility into the source of events. Setting a [rate limit](/pricing/quotas/manage-event-stream-guide/#rate-limiting) for your project (which only drops events when volume is high) may better suit your needs.

### Dynamically Sampling Error Events

To sample error events dynamically, set the `error_sampler` option to a function that returns the desired sample rate for the event. The `error_sampler` takes two arguments, `event` and `hint`: `event` is the [Event](https://github.com/getsentry/sentry-python/blob/master/sentry_sdk/_types.py) that will be sent to Sentry, while `hint` includes Python's [sys.exc_info()](https://docs.python.org/3/library/sys.html#sys.exc_info) information in `hint["exc_info"]`.

Your function **must return a valid value**. A valid value is either:

- A **floating-point number** between `0.0` and `1.0` (inclusive) indicating the probability an error gets sampled, **or**
- A **boolean** indicating whether or not to sample the error.

One potential use case for the `error_sampler` is to apply different sample rates for different exception types. For instance, if you would like to sample some exception called `MyException` at 50%, discard all events of another exception called `MyIgnoredException`, and sample all other exception types at 100%, you could use the following code when initializing the SDK:
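Sketched out, that might look like the following (a minimal example, with `MyException` and `MyIgnoredException` standing in for your own exception classes):

```python
import sentry_sdk

class MyException(Exception):
    ...

class MyIgnoredException(Exception):
    ...

def my_error_sampler(event, hint):
    exc_info = hint.get("exc_info")
    if exc_info is None:
        # No exception attached to the event: sample at 100%
        return 1.0

    error = exc_info[1]
    if isinstance(error, MyIgnoredException):
        return 0.0  # discard these entirely
    if isinstance(error, MyException):
        return 0.5  # sample at 50%
    return 1.0      # sample all other exception types at 100%

sentry_sdk.init(
    dsn="...",
    error_sampler=my_error_sampler,
)
```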
You can define at most one of the `error_sampler` and the `sample_rate` options. If both are set, the `error_sampler` will control sampling, and the `sample_rate` will be ignored.

## Sampling Transaction Events

We recommend sampling your transactions for two reasons:

1. Capturing a single trace involves minimal overhead, but capturing traces for _every_ page load or _every_ API request may add an undesirable load to your system.
2. Enabling sampling allows you to better manage the number of events sent to Sentry, so you can tailor your volume to your organization's needs.

Choose a sampling rate with the goal of balancing performance and volume concerns against data accuracy. You don't want to collect _too_ much data, but you want to collect sufficient data from which to draw meaningful conclusions. If you're not sure what rate to choose, start with a low value and gradually increase it as you learn more about your traffic patterns and volume.

## Configuring the Transaction Sample Rate

The Sentry SDKs have two configuration options to control the volume of transactions sent to Sentry, allowing you to take a representative sample:

1. Uniform sample rate (`traces_sample_rate`), which:
   - Provides an even cross-section of transactions, no matter where in your app or under what circumstances they occur.
   - Uses default [inheritance](#inheritance) and [precedence](#precedence) behavior
2. Sampling function (`traces_sampler`), which:
   - Samples different transactions at different rates
   - Filters out some transactions entirely
   - Modifies default [precedence](#precedence) and [inheritance](#inheritance) behavior

By default, neither of these options is set, meaning no transactions will be sent to Sentry. You must set one of them to start sending transactions.

### Setting a Uniform Sample Rate

### Setting a Sampling Function

## Sampling Context Data

### Default Sampling Context Data

The information contained in the `sampling_context` object passed to the `traces_sampler` when a transaction is created varies by integration.

### Custom Sampling Context Data

When using custom instrumentation to create a transaction, you can add data to the `sampling_context` by providing additional `attributes` to `start_span()`.

All span attributes provided at span start are accessible via the `sampling_context` and will also ultimately be sent to Sentry. If you want to exclude an attribute, you can filter it out in a `before_send`.

## Inheritance

Whatever a transaction's sampling decision is, that decision will be passed to its child spans and from there to any transactions they subsequently cause in other services.

(See Distributed Tracing for more about how that propagation is done.)

If the transaction currently being created is one of those subsequent transactions (in other words, if it has a parent transaction), the upstream (parent) sampling decision will be included in the sampling context data. Your `traces_sampler` can use this information to choose whether to inherit that decision. In most cases, inheritance is the right choice, to avoid breaking distributed traces. A broken trace will not include all your services.
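For instance, here is a sketch of a `traces_sampler` that respects the upstream decision (this assumes the parent decision is exposed under the `parent_sampled` key of the sampling context, as it was in SDK 2.x):

```python
import sentry_sdk

def my_traces_sampler(sampling_context):
    # Inherit the upstream decision if this transaction has a parent
    parent_sampled = sampling_context.get("parent_sampled")
    if parent_sampled is not None:
        return parent_sampled

    # Otherwise, start 10% of new traces
    return 0.1

sentry_sdk.init(
    dsn="...",
    traces_sampler=my_traces_sampler,
)
```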
In some SDKs, for convenience, the `traces_sampler` function can return a boolean, so that a parent's decision can be returned directly if that's the desired behavior.

If you're using a `traces_sample_rate` rather than a `traces_sampler`, the decision will always be inherited. The configured `traces_sample_rate` will only be used to generate a sampling decision if there is no decision coming in from upstream.

## Forcing a Sampling Decision

If you know at span creation time whether or not you want the transaction sent to Sentry, you also have the option of passing a sampling decision directly in the `start_span` API. If you do that, the transaction won't be subject to the `traces_sample_rate`, nor will the `traces_sampler` be run, so you can count on the decision that's passed not to be overwritten.

## Precedence

There are multiple ways for a transaction to end up with a sampling decision.

- Random sampling according to a static sample rate set in `traces_sample_rate`
- Random sampling according to a sample rate returned by `traces_sampler`
- Absolute decision (100% chance or 0% chance) returned by `traces_sampler`
- If the transaction has a parent, inheriting its parent's sampling decision
- Absolute decision passed to `start_span`

When there's the potential for more than one of these to come into play, the following precedence rules apply:

1. If a sampling decision is passed to `start_span`, that decision will be used, overriding everything else.
2. If `traces_sampler` is defined, its decision will be used. It can choose to keep or ignore any parent sampling decision, use the sampling context data to make its own decision, or choose a sample rate for the transaction. We advise against overriding the parent sampling decision because it will break distributed traces.
3. If `traces_sampler` is not defined, but there's a parent sampling decision, the parent sampling decision will be used.
4. If `traces_sampler` is not defined and there's no parent sampling decision, `traces_sample_rate` will be used.
diff --git a/docs/platforms/python/integrations/aiohttp/index__v3.x.mdx b/docs/platforms/python/integrations/aiohttp/index__v3.x.mdx
deleted file mode 100644
index e4aa80c0ce39c..0000000000000
--- a/docs/platforms/python/integrations/aiohttp/index__v3.x.mdx
+++ /dev/null
@@ -1,118 +0,0 @@
---
title: AIOHTTP
description: "Learn about using Sentry with AIOHTTP."
---

The AIOHTTP integration adds support for the [AIOHTTP server web framework](https://docs.aiohttp.org/en/stable/web.html).

If you use AIOHTTP as your HTTP client and want to instrument outgoing HTTP requests, have a look at the AIOHTTP client documentation.

## Install

Install `sentry-sdk` from PyPI with the `aiohttp` extra:

```bash {tabTitle:pip}
pip install "sentry-sdk[aiohttp]"
```
```bash {tabTitle:uv}
uv add "sentry-sdk[aiohttp]"
```

## Configure

If you have the `aiohttp` package in your dependencies, the AIOHTTP integration will be enabled automatically when you initialize the Sentry SDK.

## Verify

```python
import sentry_sdk
from aiohttp import web

sentry_sdk.init(...)
# same as above

async def hello(request):
    1 / 0  # raises an error
    return web.Response(text="Hello, world")

app = web.Application()
app.add_routes([web.get('/', hello)])

web.run_app(app)
```

When you point your browser to [http://localhost:8080/](http://localhost:8080/), a transaction will be created in the Performance section of [sentry.io](https://sentry.io). Additionally, an error event will be sent to [sentry.io](https://sentry.io) and will be connected to the transaction.

It takes a couple of moments for the data to appear in [sentry.io](https://sentry.io).

## Behavior

- The Sentry Python SDK will install the AIOHTTP integration for all of your apps.
- All exceptions leading to an Internal Server Error are reported.
- _The AIOHTTP integration currently does not attach the request body_, see [GitHub issue](https://github.com/getsentry/sentry-python/issues/220).
- Logging with any logger will create breadcrumbs when the [Logging](/platforms/python/integrations/logging/) integration is enabled (done by default).

### Tracing

A set of predefined span attributes will be attached to AIOHTTP transactions by default. These can also be used for sampling since they will also be accessible via the `sampling_context` dictionary in the [`traces_sampler`](/platforms/python/configuration/options/#traces_sampler).

- `url.path`
- `url.query`
- `url.scheme`
- `url.full`
- `http.request.method`
- `http.request.header.{header}`
- `server.address`
- `server.port`

These attributes will also be sent to Sentry. If you don't want that, you can filter them out using a custom [`before_send`](/platforms/python/configuration/options/#before_send) function.

## Options

By adding `AioHttpIntegration` to your `sentry_sdk.init()` call explicitly, you can set options for `AioHttpIntegration` to change its behavior:

```python
import sentry_sdk
from sentry_sdk.integrations.aiohttp import AioHttpIntegration

sentry_sdk.init(
    # same as above
    integrations=[
        AioHttpIntegration(
            transaction_style="...",  # type: str
            failed_request_status_codes={...},  # type: collections.abc.Set[int]
        ),
    ],
)
```

You can pass the following keyword arguments to `AioHttpIntegration()`:

### `transaction_style`

Configure the way Sentry names transactions:

- `GET /path/{id}` if you set `transaction_style="method_and_path_pattern"`
- `.hello` if you set `transaction_style="handler_name"`

The default is `"handler_name"`.

### `failed_request_status_codes`

A `set` of integers that determines when an `HTTPException` should be reported to Sentry. The `HTTPException` is reported to Sentry if its status code is contained in the `failed_request_status_codes` set.

Examples of valid `failed_request_status_codes`:

- `{500}` will only report `HTTPException` with status 500 (i.e. `HTTPInternalServerError`).
- `{400, *range(500, 600)}` will report `HTTPException` with status 400 (i.e. `HTTPBadRequest`) as well as those in the 5xx range.
- `set()` (the empty set) will not report any `HTTPException` to Sentry.

The default is `{*range(500, 600)}`, meaning that any `HTTPException` with a status in the 5xx range is reported to Sentry.

Regardless of how `failed_request_status_codes` is set, any exceptions raised by the handler that are not of type `HTTPException` (or a subclass) are reported to Sentry. For example, if your request handler raises an unhandled `AttributeError`, the `AttributeError` gets reported to Sentry, even if you have set `failed_request_status_codes=set()`.
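As a concrete sketch, reusing the example values above to report bad requests alongside the default 5xx range:

```python
import sentry_sdk
from sentry_sdk.integrations.aiohttp import AioHttpIntegration

sentry_sdk.init(
    # same as above
    integrations=[
        AioHttpIntegration(
            # Report HTTPBadRequest (400) plus every 5xx HTTPException
            failed_request_status_codes={400, *range(500, 600)},
        ),
    ],
)
```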
## Supported Versions

- AIOHTTP: 3.5+
- Python: 3.7+
diff --git a/docs/platforms/python/integrations/asgi/index__v3.x.mdx b/docs/platforms/python/integrations/asgi/index__v3.x.mdx
deleted file mode 100644
index 9cfaf40686200..0000000000000
--- a/docs/platforms/python/integrations/asgi/index__v3.x.mdx
+++ /dev/null
@@ -1,133 +0,0 @@
---
title: ASGI
description: "Learn about the ASGI integration and how it adds support for ASGI applications."
---

The ASGI middleware can be used to instrument any [ASGI](https://asgi.readthedocs.io/en/latest/)-compatible web framework to attach request data for your events.

Please check our list of [supported integrations](/platforms/python/integrations/) as there might already be a specific integration (like [FastAPI](/platforms/python/integrations/fastapi/) or [Sanic](/platforms/python/integrations/sanic/)) that is easier to use and captures more useful information than our generic ASGI middleware. If that's the case, you should use the specific integration instead of this middleware.

## Install

```bash {tabTitle:pip}
pip install "sentry-sdk"
```
```bash {tabTitle:uv}
uv add "sentry-sdk"
```

## Configure

In addition to capturing errors, you can monitor interactions between multiple services or applications by [enabling tracing](/concepts/key-terms/tracing/). You can also collect and analyze performance profiles from real users with [profiling](/product/explore/profiling/).

Select which Sentry features you'd like to install in addition to Error Monitoring to get the corresponding installation and configuration instructions below.

```python
import sentry_sdk
from sentry_sdk.integrations.asgi import SentryAsgiMiddleware

from my_asgi_app import app

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    # Add data like request headers and IP for users, if applicable;
    # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info
    send_default_pii=True,
    # ___PRODUCT_OPTION_START___ performance
    # Set traces_sample_rate to 1.0 to capture 100%
    # of transactions for tracing.
    traces_sample_rate=1.0,
    # ___PRODUCT_OPTION_END___ performance
    # ___PRODUCT_OPTION_START___ profiling
    # To collect profiles for all profile sessions,
    # set `profile_session_sample_rate` to 1.0.
    profile_session_sample_rate=1.0,
    # Profiles will be automatically collected while
    # there is an active span.
    profile_lifecycle="trace",
    # ___PRODUCT_OPTION_END___ profiling
)

app = SentryAsgiMiddleware(app)
```

The middleware supports both ASGI 2 and ASGI 3.

## Verify

Trigger an error in your code and see it show up in [sentry.io](https://sentry.io).

```python
import sentry_sdk
from sentry_sdk.integrations.asgi import SentryAsgiMiddleware

sentry_sdk.init(...)  # same as above

def app(scope):
    async def get_body():
        return f"The number is: {1/0}"  # raises an error!

    async def asgi(receive, send):
        await send(
            {
                "type": "http.response.start",
                "status": 200,
                "headers": [[b"content-type", b"text/plain"]],
            }
        )
        await send({"type": "http.response.body", "body": await get_body()})

    return asgi

app = SentryAsgiMiddleware(app)
```

Run your ASGI app with uvicorn:

```bash
uvicorn main:app --port 8000
```

Point your browser to [http://localhost:8000](http://localhost:8000) to trigger the error, which is then sent to Sentry.
- -Additionally, a transaction will show up in the "Performance" section on [sentry.io](https://sentry.io). - -## Behavior - -- Request data is attached to all events: **HTTP method, URL, headers**. Sentry excludes raw bodies and multipart file uploads. Sentry also excludes personally identifiable information (such as user ids, usernames, cookies, authorization headers, IP addresses) unless you set `send_default_pii` to `True`. - -- Each request has a separate scope. Changes to the scope within a view, for example setting a tag, will only apply to events sent as part of the request being handled. - -- The ASGI middleware does not behave like a regular integration. It is not initialized through an extra parameter to `init` and is not attached to a client. When capturing or supplementing events, it just uses the currently active scopes. - -### Default Span Attributes - -A set of predefined span attributes will be attached to ASGI transactions by default. These can also be used for sampling since they will also be accessible via the `sampling_context` dictionary in the [`traces_sampler`](/platforms/python/configuration/options/#traces_sampler). - - | Span Attribute | Description | - | ------------------------------- | -------------------------------------------------------- | - | `network.protocol.name` | `type` on ASGI scope | - | `url.scheme` | `scheme` on ASGI scope | - | `url.path` | `path` on ASGI scope | - | `url.query` | `query` on ASGI scope | - | `network.protocol.version` | `http_version` on ASGI scope | - | `http.request.method` | `method` on ASGI scope | - | `server.address`, `server.port` | `server` on ASGI scope | - | `client.address`, `client.port` | `client` on ASGI scope | - | `url.full` | full URL, reconstructed from individual ASGI scope parts | - | `http.request.header.{header}` | `headers` on ASGI scope | - -These attributes will also be sent to Sentry. If you don't want that, you can filter them out using a custom [`before_send`](/platforms/python/configuration/options/#before_send) function. - -## Supported Versions - -- Python: 3.7+ diff --git a/docs/platforms/python/integrations/celery/index__v3.x.mdx b/docs/platforms/python/integrations/celery/index__v3.x.mdx deleted file mode 100644 index a75517db2ded3..0000000000000 --- a/docs/platforms/python/integrations/celery/index__v3.x.mdx +++ /dev/null @@ -1,266 +0,0 @@ ---- -title: Celery -description: "Learn about using Sentry with Celery." ---- - -The Celery integration adds support for the [Celery Task Queue System](https://docs.celeryq.dev/). - -## Install - -Install `sentry-sdk` from PyPI with the `celery` extra: - -```bash {tabTitle:pip} -pip install "sentry-sdk[celery]" -``` -```bash {tabTitle:uv} -uv add "sentry-sdk[celery]" -``` - -## Configure - -If you have the `celery` package in your dependencies, the Celery integration will be enabled automatically when you initialize the Sentry SDK. - - -Make sure you call `sentry_sdk.init()` when the worker process starts, - not just in the module that defines your tasks. Otherwise, the - initialization may happen too late, causing events to go unreported. - - -### Set up Celery Without Django - -When using Celery without Django, you'll need to initialize the Sentry SDK in both your application and the Celery worker processes spawned by the Celery daemon. - -In addition to capturing errors, you can use Sentry for [distributed tracing](/concepts/key-terms/tracing/) and [profiling](/product/explore/profiling/). 
Select what you'd like to install to get the corresponding installation and configuration instructions below. - -#### Set up Sentry in Celery Daemon or Worker Processes - - - -```python {filename:tasks.py} -from celery import Celery, signals -import sentry_sdk - -# Initializing Celery -app = Celery("tasks", broker="...") - -# Initialize Sentry SDK on Celery startup -@signals.celeryd_init.connect -def init_sentry(**_kwargs): - sentry_sdk.init( - dsn="___PUBLIC_DSN___", - # Add request headers and IP for users, - # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info - send_default_pii=True, - # ___PRODUCT_OPTION_START___ performance - # Set traces_sample_rate to 1.0 to capture 100% - # of transactions for tracing. - traces_sample_rate=1.0, - # ___PRODUCT_OPTION_END___ performance - # ___PRODUCT_OPTION_START___ profiling - # To collect profiles for all profile sessions, - # set `profile_session_sample_rate` to 1.0. - profile_session_sample_rate=1.0, - # Profiles will be automatically collected while - # there is an active span. - profile_lifecycle="trace", - # ___PRODUCT_OPTION_END___ profiling - ) - -# Task definitions go here -@app.task -def add(x, y): - return x + y -``` - -The [`celeryd_init`](https://docs.celeryq.dev/en/stable/userguide/signals.html?#celeryd-init) signal is triggered when the Celery daemon starts, before the worker processes are spawned. If you need to initialize Sentry for each individual worker process, use the [`worker_init`](https://docs.celeryq.dev/en/stable/userguide/signals.html?#worker-init) signal instead. - -#### Set up Sentry in Your Application - - - -```python {filename:main.py} -from tasks import add -import sentry_sdk - -def main(): - # Initializing Sentry SDK in our process - sentry_sdk.init( - dsn="___PUBLIC_DSN___", - # Add data like request headers and IP for users, if applicable; - # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info - send_default_pii=True, - # ___PRODUCT_OPTION_START___ performance - # Set traces_sample_rate to 1.0 to capture 100% - # of transactions for tracing. - traces_sample_rate=1.0, - # ___PRODUCT_OPTION_END___ performance - # ___PRODUCT_OPTION_START___ profiling - # To collect profiles for all profile sessions, - # set `profile_session_sample_rate` to 1.0. - profile_session_sample_rate=1.0, - # Profiles will be automatically collected while - # there is an active span. - profile_lifecycle="trace", - # ___PRODUCT_OPTION_END___ profiling - ) - - # Enqueueing a task to be processed by Celery - with sentry_sdk.start_transaction(name="calling-a-celery-task"): - result = add.delay(4, 4) - -if __name__ == "__main__": - main() -``` - -### Set up Celery With Django - -If you're using Celery with Django in a typical setup, have initialized the SDK in your `settings.py` file (as described in the [Django integration documentation](/platforms/python/integrations/django/#configure)), and have your Celery configured to use the same settings as [`config_from_object`](https://docs.celeryq.dev/en/stable/django/first-steps-with-django.html), there's no need to initialize the Celery SDK separately. - -## Verify - -To confirm that your SDK is initialized on worker start, pass `debug=True` to `sentry_sdk.init()`. This will add extra output to your Celery logs when the SDK is initialized. If you see the output during worker startup, and not just after a task has started, then it's working correctly. 
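For example, a minimal sketch of enabling that in the `celeryd_init` handler shown above:

```python
@signals.celeryd_init.connect
def init_sentry(**_kwargs):
    sentry_sdk.init(
        dsn="___PUBLIC_DSN___",
        debug=True,  # prints SDK initialization details to the worker logs
    )
```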
- -The snippet below includes an intentional `ZeroDivisionError` in the Celery task that will be captured by Sentry. To trigger the error call `debug_sentry.delay()`: - -```python {filename:tasks.py} -from celery import Celery, signals -import sentry_sdk - -app = Celery("tasks", broker="...") - -@signals.celeryd_init.connect -def init_sentry(**_kwargs): - sentry_sdk.init(...) # same as above - -@app.task -def debug_sentry(): - 1/0 -``` - - - -Sentry uses custom message headers for distributed tracing. For Celery versions 4.x, with [message protocol of version 1](https://docs.celeryq.dev/en/stable/internals/protocol.html#version-1), this functionality is broken, and Celery fails to propagate custom headers to the worker. Protocol version 2, which is the default since Celery version 4.0, is not affected. - -The fix for the custom headers propagation issue was introduced to Celery project ([PR](https://github.com/celery/celery/pull/6374)) starting with version 5.0.1. However, the fix was not backported to versions 4.x. - - - -## Options - -To set options on `CeleryIntegration` to change its behavior, add it explicitly to your `sentry_sdk.init()`: - -```python -import sentry_sdk -from sentry_sdk.integrations.celery import CeleryIntegration - -sentry_sdk.init( - # same as above - integrations=[ - CeleryIntegration( - monitor_beat_tasks=True, - exclude_beat_tasks=[ - "unimportant-task", - "payment-check-.*" - ], - ), - ], -) -``` - -You can pass the following keyword arguments to `CeleryIntegration()`: - -- `propagate_traces` - - Propagate Sentry tracing information to the Celery task. This makes it possible to link Celery task errors to the function that triggered the task. - - If this is set to `False`: - - - errors in Celery tasks won't be matched to the triggering function. - - your Celery tasks will start a new trace and won't be connected to the trace in the calling function. - - The default is `True`. - - See [Distributed Traces](#distributed-traces) below to learn how to get more fine-grained control over distributed tracing in Celery tasks. - -- `monitor_beat_tasks`: - - Turn auto-instrumentation on or off for Celery Beat tasks using Sentry Crons. - - See Celery Beat Auto Discovery to learn more. - - The default is `False`. - -- `exclude_beat_tasks`: - - A list of Celery Beat tasks that should be excluded from auto-instrumentation using Sentry Crons. Only applied if `monitor_beat_tasks` is set to `True`. - - The list can contain strings with the names of tasks in the Celery Beat schedule to be excluded. It can also include regular expressions to match multiple tasks. For example, if you include `"payment-check-.*"` every task starting with `payment-check-` will be excluded from auto-instrumentation. - - See Celery Beat Auto Discovery to learn more. - - The default is `None`. - -## Behavior - -### Distributed Traces - -Distributed tracing connects the trace of your Celery task to the trace of the code that started the task, giving you a complete view of the entire workflow. - -You can disable this globally with the `propagate_traces` parameter, documented above. If you set `propagate_traces` to `False`, all Celery tasks will start their own trace. 
If you want to have more fine-grained control over trace distribution, you can override the `propagate_traces` option by passing the `sentry-propagate-traces` header when starting the Celery task:

**Note:** The `CeleryIntegration` does not utilize the `traces_sample_rate` config option for deciding if a trace should be propagated into a Celery task.

```python
import sentry_sdk
from sentry_sdk.integrations.celery import CeleryIntegration

# Enable global distributed traces (this is the default, just to be explicit)
sentry_sdk.init(
    # same as above
    integrations=[
        CeleryIntegration(
            propagate_traces=True
        ),
    ],
)

# This will propagate the trace:
my_task_a.delay("some parameter")

# This will propagate the trace:
my_task_b.apply_async(
    args=("some_parameter", )
)

# This will NOT propagate the trace. The task will start its own trace:
my_task_b.apply_async(
    args=("some_parameter", ),
    headers={"sentry-propagate-traces": False},
)

# Note: overriding the tracing behaviour using `task_x.delay()` is not possible.
```

### Default Span Attributes

A set of predefined span attributes will be attached to Celery transactions by default. These can also be used for sampling since they will also be accessible via the `sampling_context` dictionary in the [`traces_sampler`](/platforms/python/configuration/options/#traces_sampler).

 | Span Attribute | Description |
 | --------------------------- | ----------------------------------------------------- |
 | `celery.job.args.{index}` | Positional arguments provided to the task, serialized |
 | `celery.job.kwargs.{kwarg}` | Keyword arguments provided to the task, serialized |
 | `celery.job.task` | Task name |

These attributes will also be sent to Sentry. If you don't want that, you can filter them out using a custom [`before_send`](/platforms/python/configuration/options/#before_send) function.

## Supported Versions

- Celery: 4.4.7+
- Python: 3.7+
diff --git a/docs/platforms/python/integrations/rq/index__v3.x.mdx b/docs/platforms/python/integrations/rq/index__v3.x.mdx
deleted file mode 100644
index 279d8f5753c8b..0000000000000
--- a/docs/platforms/python/integrations/rq/index__v3.x.mdx
+++ /dev/null
@@ -1,220 +0,0 @@
---
title: RQ (Redis Queue)
description: "Learn about using Sentry with RQ."
---

The RQ integration adds support for the [RQ job queue system](https://python-rq.org/).

## Install

Install `sentry-sdk` from PyPI with the `rq` extra:

```bash {tabTitle:pip}
pip install "sentry-sdk[rq]"
```
```bash {tabTitle:uv}
uv add "sentry-sdk[rq]"
```

## Configure

If you have the `rq` package in your dependencies, the RQ integration will be enabled automatically when you initialize the Sentry SDK.

Create a file called `mysettings.py` with the following content:

In addition to capturing errors, you can monitor interactions between multiple services or applications by [enabling tracing](/concepts/key-terms/tracing/). You can also collect and analyze performance profiles from real users with [profiling](/product/explore/profiling/).

Select which Sentry features you'd like to install in addition to Error Monitoring to get the corresponding installation and configuration instructions below.
- - - -```python {filename:mysettings.py} -# mysettings.py -import sentry_sdk - -sentry_sdk.init( - dsn="___PUBLIC_DSN___", - # Add data like request headers and IP for users, if applicable; - # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info - send_default_pii=True, - # ___PRODUCT_OPTION_START___ performance - # Set traces_sample_rate to 1.0 to capture 100% - # of transactions for tracing. - traces_sample_rate=1.0, - # ___PRODUCT_OPTION_END___ performance - # ___PRODUCT_OPTION_START___ profiling - # To collect profiles for all profile sessions, - # set `profile_session_sample_rate` to 1.0. - profile_session_sample_rate=1.0, - # Profiles will be automatically collected while - # there is an active span. - profile_lifecycle="trace", - # ___PRODUCT_OPTION_END___ profiling -) -``` - -Start your worker with: - -```shell -rq worker \ - -c mysettings \ # module name of mysettings.py - --sentry-dsn="___PUBLIC_DSN___" # only necessary for RQ < 1.0 -``` - -The integration will automatically report errors from all RQ jobs. - -Generally, make sure that the **call to `init` is loaded on worker startup**, and not only in the module where your jobs are defined. Otherwise, the initialization happens too late and events might end up not being reported. - -In addition, make sure that **`init` is called only once** in your app. For example, if you have a `Flask` app and a worker that depends on the app, we recommend only initializing Sentry once. Note that because the Flask integration is enabled automatically, you don't need to change the configuration shown above. - - -```python {filename:app.py} -# app.py -import sentry_sdk - -sentry_sdk.init( - dsn="___PUBLIC_DSN___", - # Add data like request headers and IP for users, if applicable; - # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info - send_default_pii=True, - # ___PRODUCT_OPTION_START___ performance - # Set traces_sample_rate to 1.0 to capture 100% - # of transactions for tracing. - traces_sample_rate=1.0, - # ___PRODUCT_OPTION_END___ performance - # ___PRODUCT_OPTION_START___ profiling - # To collect profiles for all profile sessions, - # set `profile_session_sample_rate` to 1.0. - profile_session_sample_rate=1.0, - # Profiles will be automatically collected while - # there is an active span. - profile_lifecycle="trace", - # ___PRODUCT_OPTION_END___ profiling -) -``` - -The worker configuration `mysettings.py` then becomes: - -```python {filename:mysettings.py} -# mysettings.py -# This import causes the Sentry SDK to be initialized -import app -``` - -## Verify - -To verify, create a `main.py` script that enqueues a function in RQ, then start an RQ worker to run the function: - -### Job definition: - -```python {filename:jobs.py} -# jobs.py -def hello(name): - 1 / 0 # raises an error - return f"Hello {name}!" -``` - -### Settings for worker - -```python {filename:mysettings.py} -# mysettings.py -import sentry_sdk - -# Sentry configuration for RQ worker processes -sentry_sdk.init( - dsn="___PUBLIC_DSN___", - # Add data like request headers and IP for users, if applicable; - # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info - send_default_pii=True, - # ___PRODUCT_OPTION_START___ performance - # Set traces_sample_rate to 1.0 to capture 100% - # of transactions for tracing. 
- traces_sample_rate=1.0, - # ___PRODUCT_OPTION_END___ performance - # ___PRODUCT_OPTION_START___ profiling - # To collect profiles for all profile sessions, - # set `profile_session_sample_rate` to 1.0. - profile_session_sample_rate=1.0, - # Profiles will be automatically collected while - # there is an active span. - profile_lifecycle="trace", - # ___PRODUCT_OPTION_END___ profiling -) -``` - -### Main Python Script - -```python {filename:main.py} -# main.py -from redis import Redis -from rq import Queue - -from jobs import hello - -import sentry_sdk - -# Sentry configuration for main.py process -sentry_sdk.init( - dsn="___PUBLIC_DSN___", - # Add data like request headers and IP for users, if applicable; - # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info - send_default_pii=True, - # ___PRODUCT_OPTION_START___ performance - # Set traces_sample_rate to 1.0 to capture 100% - # of transactions for tracing. - traces_sample_rate=1.0, - # ___PRODUCT_OPTION_END___ performance - # ___PRODUCT_OPTION_START___ profiling - # To collect profiles for all profile sessions, - # set `profile_session_sample_rate` to 1.0. - profile_session_sample_rate=1.0, - # Profiles will be automatically collected while - # there is an active span. - profile_lifecycle="trace", - # ___PRODUCT_OPTION_END___ profiling -) - -q = Queue(connection=Redis()) -with sentry_sdk.start_transaction(name="testing_sentry"): - result = q.enqueue(hello, "World") -``` - -When you run `python main.py` a transaction named `testing_sentry` will be created in the Performance section of [sentry.io](https://sentry.io) and spans for the enqueueing will be created. - -If you run the RQ worker with `rq worker -c mysettings`, a transaction for the execution of `hello()` will be created. Additionally, an error event will be sent to [sentry.io](https://sentry.io) and will be connected to the transaction. - -It takes a couple of moments for the data to appear in [sentry.io](https://sentry.io). - -## The `--sentry-dsn` CLI option - -Passing `--sentry-dsn=""` to RQ forcibly disables [RQ's shortcut for using Sentry](https://python-rq.org/patterns/sentry/). For RQ versions before 1.0 this is necessary to avoid conflicts, because back then RQ would attempt to use the `raven` package instead of this SDK. Since RQ 1.0 it's possible to use this CLI option and the associated RQ settings for initializing the SDK. - -We still recommend against using those shortcuts because it would be harder to provide options to the SDK at a later point. See [the GitHub issue about RQ's Sentry integration](https://github.com/rq/rq/issues/1003) for discussion. - -## Default Span Attributes - -A set of predefined span attributes will be attached to RQ transactions by default. These can also be used for sampling since they will also be accessible via the `sampling_context` dictionary in the [`traces_sampler`](/platforms/python/configuration/options/#traces_sampler). - - | Span Attribute | Description | - | ---------------------------- | ------------------------------- | - | `rq.job.args.{index}` | Positional job args, serialized | - | `rq.job.kwargs.{kwarg}` | Keyword job args, serialized | - | `rq.job.func` | Job `func`, serialized | - | `messaging.destination.name` | Queue name | - | `messaging.message.id` | Job ID | - -These attributes will also be sent to Sentry. If you don't want that, you can filter them out using a custom [`before_send`](/platforms/python/configuration/options/#before_send) function. 
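One possible sketch of such a filter, assuming the span attributes show up under each span's `data` payload on the transaction event (`rq.job.args.0` is just an illustrative key to drop):

```python
import sentry_sdk

def my_before_send(event, hint):
    # Remove a specific span attribute before the event leaves the process
    for span in event.get("spans", []):
        span.get("data", {}).pop("rq.job.args.0", None)
    return event

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    before_send=my_before_send,
)
```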
## Supported Versions

- RQ: 0.6+
- Python: 3.7+
diff --git a/docs/platforms/python/integrations/tornado/index__v3.x.mdx b/docs/platforms/python/integrations/tornado/index__v3.x.mdx
deleted file mode 100644
index 1b2e1a8a4479a..0000000000000
--- a/docs/platforms/python/integrations/tornado/index__v3.x.mdx
+++ /dev/null
@@ -1,85 +0,0 @@
---
title: Tornado
description: "Learn about using Sentry with Tornado."
---

The Tornado integration adds support for the [Tornado web framework](https://www.tornadoweb.org/).

## Install

Install `sentry-sdk` from PyPI with the `tornado` extra:

```bash {tabTitle:pip}
pip install "sentry-sdk[tornado]"
```
```bash {tabTitle:uv}
uv add "sentry-sdk[tornado]"
```

## Configure

If you have the `tornado` package in your dependencies, the Tornado integration will be enabled automatically when you initialize the Sentry SDK.

## Verify

```python
import asyncio

import sentry_sdk
import tornado

sentry_sdk.init(...)  # same as above

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        1 / 0  # raises an error
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

async def main():
    app = make_app()
    app.listen(8888)
    await asyncio.Event().wait()

asyncio.run(main())
```

When you point your browser to [http://localhost:8888/](http://localhost:8888/), a transaction will be created in the Performance section of [sentry.io](https://sentry.io). Additionally, an error event will be sent to [sentry.io](https://sentry.io) and will be connected to the transaction.

It takes a couple of moments for the data to appear in [sentry.io](https://sentry.io).

## Behavior

- The Tornado integration will be installed for all of your apps and handlers.
- All exceptions leading to an Internal Server Error are reported.
- Request data is attached to all events: **HTTP method, URL, headers, form data, JSON payloads**. Sentry excludes raw bodies and multipart file uploads. Sentry also excludes personally identifiable information (such as user ids, usernames, cookies, authorization headers, IP addresses) unless you set `send_default_pii` to `True`.
- Each request has a separate scope. Changes to the scope within a view, for example setting a tag, will only apply to events sent as part of the request being handled.
- Logging with any logger will create breadcrumbs when the [Logging](/platforms/python/integrations/logging/) integration is enabled (done by default).

### Tracing

A set of predefined span attributes will be attached to Tornado transactions by default. These can also be used for sampling since they will also be accessible via the `sampling_context` dictionary in the [`traces_sampler`](/platforms/python/configuration/options/#traces_sampler).

- `url.path`
- `url.query`
- `url.scheme`
- `url.full`
- `http.request.method`
- `http.request.header.{header}`
- `server.address`
- `server.port`
- `network.protocol.name`
- `network.protocol.version`

These attributes will also be sent to Sentry. If you don't want that, you can filter them out using a custom [`before_send`](/platforms/python/configuration/options/#before_send) function.
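Since these attributes surface in the sampling context, a `traces_sampler` can key off them. A sketch (assuming the attributes appear as top-level keys in `sampling_context`; `/healthz` is a hypothetical health-check route):

```python
import sentry_sdk

def my_traces_sampler(sampling_context):
    # Never trace health checks; sample everything else at 50%
    if sampling_context.get("url.path") == "/healthz":
        return 0.0
    return 0.5

sentry_sdk.init(
    dsn="___PUBLIC_DSN___",
    traces_sampler=my_traces_sampler,
)
```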
- -## Supported Versions - -- Tornado: 6+ -- Python: 3.8+ - - diff --git a/docs/platforms/python/integrations/wsgi/index__v3.x.mdx b/docs/platforms/python/integrations/wsgi/index__v3.x.mdx deleted file mode 100644 index 3e2c2d1cac502..0000000000000 --- a/docs/platforms/python/integrations/wsgi/index__v3.x.mdx +++ /dev/null @@ -1,120 +0,0 @@ ---- -title: WSGI -description: "Learn about the WSGI integration and how it adds support for WSGI applications." ---- - -If you use a WSGI framework not directly supported by the SDK, or you wrote a raw WSGI app, you can use this generic WSGI middleware. It captures errors and attaches a basic amount of information for incoming requests. - - - -Please check our list of [supported integrations](/platforms/python/integrations/) as there might already be a specific integration (like [Django](/platforms/python/integrations/django/) or [Flask](/platforms/python/integrations/flask/)) that is easier to use and captures more useful information than our generic WSGI middleware. If that's the case, you should use the specific integration instead of this middleware. - - -## Install - -```bash {tabTitle:pip} -pip install "sentry-sdk" -``` -```bash {tabTitle:uv} -uv add "sentry-sdk" -``` - -## Configure - -In addition to capturing errors, you can monitor interactions between multiple services or applications by [enabling tracing](/concepts/key-terms/tracing/). You can also collect and analyze performance profiles from real users with [profiling](/product/explore/profiling/). - -Select which Sentry features you'd like to install in addition to Error Monitoring to get the corresponding installation and configuration instructions below. - - - -```python -import sentry_sdk -from sentry_sdk.integrations.wsgi import SentryWsgiMiddleware - -from my_wsgi_app import app - -sentry_sdk.init( - dsn="___PUBLIC_DSN___", - # Add data like request headers and IP for users, if applicable; - # see https://docs.sentry.io/platforms/python/data-management/data-collected/ for more info - send_default_pii=True, - # ___PRODUCT_OPTION_START___ performance - # Set traces_sample_rate to 1.0 to capture 100% - # of transactions for tracing. - traces_sample_rate=1.0, - # ___PRODUCT_OPTION_END___ performance - # ___PRODUCT_OPTION_START___ profiling - # To collect profiles for all profile sessions, - # set `profile_session_sample_rate` to 1.0. - profile_session_sample_rate=1.0, - # Profiles will be automatically collected while - # there is an active span. - profile_lifecycle="trace", - # ___PRODUCT_OPTION_END___ profiling -) - -app = SentryWsgiMiddleware(app) -``` - -## Verify - -This minimal WSGI application will create a transaction and send it to Sentry as long as you have tracing enabled. The error will also be sent to Sentry and associated with the transaction: - -```python -import sentry_sdk -from sentry_sdk.integrations.wsgi import SentryWsgiMiddleware - -sentry_sdk.init(...) # same as above - -def app(env, start_response): - start_response('200 OK', [('Content-Type', 'text/plain')]) - response_body = 'Hello World' - 1 / 0 # this raises an error - return [response_body.encode()] - -app = SentryWsgiMiddleware(app) - -# Run the application in a mini WSGI server. -from wsgiref.simple_server import make_server -make_server('', 8000, app).serve_forever() -``` - -## Behavior - - - -- Request data is attached to all events: **HTTP method, URL, headers**. Sentry excludes raw bodies and multipart file uploads. 
Sentry also excludes personally identifiable information (such as user IDs, usernames, cookies, authorization headers, IP addresses) unless you set `send_default_pii` to `True`. - -Each request has a separate scope. Changes to the scope within a view, for example setting a tag, will only apply to events sent as part of the request being handled. - -- The WSGI middleware does not behave like a regular integration. It is not initialized through an extra parameter to `init` and is not attached to a client. When capturing or supplementing events, it just uses the currently active scopes. - -### Default Span Attributes - -A set of predefined span attributes will be attached to WSGI transactions by default. These can also be used for sampling since they will also be accessible via the `sampling_context` dictionary in the [`traces_sampler`](/platforms/python/configuration/options/#traces_sampler). - - | Span Attribute | Description | - | ------------------------------------------------- | ---------------------------------------------------------- | - | `url.path` | `PATH_INFO` from WSGI environ | - | `url.query` | `QUERY_STRING` from WSGI environ | - | `http.request.method` | `REQUEST_METHOD` from WSGI environ | - | `server.address` | `SERVER_NAME` from WSGI environ | - | `server.port` | `SERVER_PORT` from WSGI environ | - | `server.protocol.name`, `server.protocol.version` | `SERVER_PROTOCOL` from WSGI environ | - | `url.scheme` | `wsgi.url_scheme` from WSGI environ | - | `url.full` | full URL, reconstructed from individual WSGI environ parts | - | `http.request.header.{header}` | `HTTP_*` from WSGI environ | - -These attributes will also be sent to Sentry. If you don't want that, you can filter them out using a custom [`before_send`](/platforms/python/configuration/options/#before_send) function. - -## Supported Versions - -- Python: 3.7+ - - diff --git a/docs/platforms/python/migration/2.x-to-3.x.mdx b/docs/platforms/python/migration/2.x-to-3.x.mdx deleted file mode 100644 index 9f32d108facaa..0000000000000 --- a/docs/platforms/python/migration/2.x-to-3.x.mdx +++ /dev/null @@ -1,375 +0,0 @@ ---- -title: Migrate from 2.x to 3.x -sidebar_order: 8997 -description: "Learn about migrating from sentry-python 2.x to 3.x" ---- - - - -Version 3.0 of the Sentry Python SDK is currently in pre-release. If you feel like giving it a spin, check out [our most recent releases](https://pypi.org/project/sentry-sdk/#history). Your feedback at this stage is invaluable, so please let us know about your experience, whether positive or negative, [on GitHub](https://github.com/getsentry/sentry-python/discussions/3936) or [on Discord](https://discord.com/invite/Ww9hbqr): How did the migration go? Did you encounter any issues? Is everything working as expected? - - - -This guide describes the common patterns involved in migrating to version `3.x` of the `sentry-python` SDK. For the full list of changes, check out the [detailed migration guide in the repository](https://github.com/getsentry/sentry-python/blob/potel-base/MIGRATION_GUIDE.md). - - -## Python Version Support - -Sentry Python SDK `3.x` only supports Python 3.7 and higher. If you're on an older Python version, you'll need to stay on an older version of the SDK: - -- Python 2.7-3.5: SDK `1.x` -- Python 3.6: SDK `2.x` - - -## Configuration - -The `enable_tracing` option was removed. 
Use [`traces_sample_rate`](/platforms/python/configuration/options/#traces_sample_rate) directly, or configure a [`traces_sampler`](/platforms/python/configuration/options/#traces_sampler) for more fine-grained control over which spans should be sampled.

```python diff
 sentry_sdk.init(
-    enable_tracing=True,
+    traces_sample_rate=1.0,
 )
```

The deprecated `propagate_traces` option was removed. Use [`trace_propagation_targets`](/platforms/python/configuration/options/#trace_propagation_targets) instead.

```python diff
 sentry_sdk.init(
     # don't propagate trace info downstream
-    propagate_traces=False,
+    trace_propagation_targets=[],
 )
```

Note that this only affects the global SDK option. The [`propagate_traces`](/platforms/python/integrations/celery/#options) option of the Celery integration remains unchanged.

The `profiles_sample_rate` and `profiler_mode` options previously nested under `_experiments` have been removed. They're replaced by top-level options of the same name:

```python diff
 sentry_sdk.init(
-    _experiments={
-        "profiles_sample_rate": 1.0,
-        "profiler_mode": "thread",
-    },
+    profiles_sample_rate=1.0,
+    profiler_mode="thread",
 )
```

The `enable_logs` and `before_send_log` options previously nested under `_experiments` have been removed. They're replaced by top-level options of the same name:

```python diff
 def my_before_send_log(log, hint):
     ...

 sentry_sdk.init(
-    _experiments={
-        "enable_logs": True,
-        "before_send_log": my_before_send_log,
-    },
+    enable_logs=True,
+    before_send_log=my_before_send_log,
 )
```

## API Changes

`add_attachment()` is now part of the top-level API and should be imported and used directly from `sentry_sdk`.

```python diff
 import sentry_sdk

-scope = sentry_sdk.get_current_scope()
-scope.add_attachment(bytes=b"Hello World!", filename="attachment.txt")
+sentry_sdk.add_attachment(bytes=b"Hello World!", filename="attachment.txt")
```

Using `sentry_sdk.add_attachment()` directly also makes sure the attachment is added to the correct scope internally.

### Tracing

Tracing in the Sentry Python SDK `3.x` is powered by [OpenTelemetry](https://opentelemetry.io/) in the background, which also means we're moving away from the Sentry-specific concept of transactions and towards a span-only future. `sentry_sdk.start_transaction()` is now deprecated in favor of `sentry_sdk.start_span()`.

```python diff
-with sentry_sdk.start_transaction():
+with sentry_sdk.start_span():
     ...
```

If you start a span, it will automatically become the child of the currently active span. If you want to create a span that should instead start its own trace, use the `new_trace()` context manager.

```python
with sentry_sdk.start_span(name="parent"):
    with sentry_sdk.start_span(name="child-of-parent"):
        with sentry_sdk.new_trace():
            # The first span started in this context manager will become
            # a new transaction (root span) with its own trace
            with sentry_sdk.start_span(name="new-parent"):
                with sentry_sdk.start_span(name="child-of-new-parent"):
                    ...
```

Any spans without a parent span will become transactions by default. If you want to avoid promoting a span without a parent to a transaction, you can pass the `only_as_child_span=True` keyword argument to `sentry_sdk.start_span()`.
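A short sketch of that opt-out:

```python
import sentry_sdk

# Without an active parent span, this span would normally be promoted to a
# transaction; only_as_child_span=True opts out of that promotion.
with sentry_sdk.start_span(name="cleanup-step", only_as_child_span=True):
    ...
```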
- -`sentry_sdk.start_transaction()` and `sentry_sdk.start_span()` no longer take the following arguments: `trace_id`, `baggage`, `span_id`, `parent_span_id`, `custom_sampling_context` (see below). Use `sentry_sdk.continue_trace()` for propagating trace data. - -`sentry_sdk.continue_trace()` no longer returns a `Transaction` and is now a context manager. To continue a trace from headers or environment variables, start a new span inside `sentry_sdk.continue_trace()`: - -```python diff -- transaction = sentry_sdk.continue_trace({...}) -- with sentry_sdk.start_transaction(transaction=transaction): -- ... -+ with sentry_sdk.continue_trace({...}): -+ with sentry_sdk.start_span(): -+ ... -``` - -The functions `continue_from_headers`, `continue_from_environ` and `from_traceparent` have been removed. Use the `sentry_sdk.continue_trace()` context manager instead. - - -## Span Data - -In OpenTelemetry, there is no concept of separate categories of data on a span: everything is simply a span attribute. This is a concept the Sentry SDK is also adopting. We deprecated `set_data()` and added a new span method called `set_attribute()`: - -```python diff - with sentry_sdk.start_span(...) as span: -- span.set_data("my_attribute", "my_value") -+ span.set_attribute("my_attribute", "my_value") -``` - -You can also set attributes directly when creating the span. This has the advantage that these initial attributes will be accessible in the sampling context in your `traces_sampler`/`profiles_sampler` (see also the [Sampling section](#sampling)). - -```python -with sentry_sdk.start_span(attributes={"my_attribute": "my_value"}): - ... -``` - - - -There are important type restrictions to consider when setting attributes on a span via `span.set_attribute()` and `start_span(attributes={...})`. The keys must be non-empty strings and the values can only be several primitive types (excluding `None`) or a list of a single primitive type. See [the OpenTelemetry specification](https://opentelemetry.io/docs/specs/otel/common/#attribute) for details. - -Note that since the SDK is now exclusively using span attributes, this restriction applies to other ways of setting data on a span as well like `span.set_data()`, `span.set_measurement()`, `span.set_context()`. - - - - -## Sampling - -It's no longer possible to change the sampling decision of a span by setting `span.sampled` directly after the span has been created. Use either a custom `traces_sampler` (preferred) or the `sampled` argument to `start_span()` for determining whether a span should be sampled. - -```python -with sentry_sdk.start_span(sampled=True) as span: - ... -``` - - - -Both `traces_sampler` and the `sampled` argument will only influence whether root spans (transactions) are sampled. They can't be used for sampling child spans. - - - -The `sampling_context` argument of `traces_sampler` and `profiles_sampler` has changed considerably for spans coming from our auto-instrumented integrations. As a consequence of using OpenTelemetry under the hood, spans can only carry specific, primitive types of data. This prevents us from making custom objects, for example, the `Request` object for several web frameworks, accessible on the span. - - - The AIOHTTP integration doesn't add the `aiohttp_request` object anymore. 
Instead, some of the individual properties of the request are accessible, if available, as follows: - - | Request property | Sampling context key(s) | - | ----------------- | ------------------------------- | - | `path` | `url.path` | - | `query_string` | `url.query` | - | `method` | `http.request.method` | - | `host` | `server.address`, `server.port` | - | `scheme` | `url.scheme` | - | full URL | `url.full` | - | `request.headers` | `http.request.header.{header}` | - - - - The Celery integration doesn't add the `celery_job` dictionary anymore. Instead, the individual keys are now available as: - - | Dictionary keys | Sampling context key | Example | - | ---------------------- | --------------------------- | ------------------------------ | - | `celery_job["args"]` | `celery.job.args.{index}` | `celery.job.args.0` | - | `celery_job["kwargs"]` | `celery.job.kwargs.{kwarg}` | `celery.job.kwargs.kwarg_name` | - | `celery_job["task"]` | `celery.job.task` | | - - - - The Tornado integration doesn't add the `tornado_request` object anymore. Instead, some of the individual properties of the request are accessible, if available, as follows: - - | Request property | Sampling context key(s) | - | ----------------- | --------------------------------------------------- | - | `path` | `url.path` | - | `query` | `url.query` | - | `protocol` | `url.scheme` | - | `method` | `http.request.method` | - | `host` | `server.address`, `server.port` | - | `version` | `network.protocol.name`, `network.protocol.version` | - | full URL | `url.full` | - | `request.headers` | `http.request.header.{header}` | - - - - The WSGI integration doesn't add the `wsgi_environ` object anymore. Instead, the individual properties of the environment are accessible, if available, as follows: - - | Env property | Sampling context key(s) | - | ----------------- | ------------------------------------------------- | - | `PATH_INFO` | `url.path` | - | `QUERY_STRING` | `url.query` | - | `REQUEST_METHOD` | `http.request.method` | - | `SERVER_NAME` | `server.address` | - | `SERVER_PORT` | `server.port` | - | `SERVER_PROTOCOL` | `server.protocol.name`, `server.protocol.version` | - | `wsgi.url_scheme` | `url.scheme` | - | full URL | `url.full` | - | `HTTP_*` | `http.request.header.{header}` | - - - - The ASGI integration doesn't add the `asgi_scope` object anymore. Instead, the individual properties of the scope, if available, are accessible as follows: - - | Scope property | Sampling context key(s) | - | -------------- | ------------------------------- | - | `type` | `network.protocol.name` | - | `scheme` | `url.scheme` | - | `path` | `url.path` | - | `query` | `url.query` | - | `http_version` | `network.protocol.version` | - | `method` | `http.request.method` | - | `server` | `server.address`, `server.port` | - | `client` | `client.address`, `client.port` | - | full URL | `url.full` | - | `headers` | `http.request.header.{header}` | - - - - The RQ integration doesn't add the `rq_job` object anymore. 
Instead, the individual properties of the job and the queue, if available, are accessible as follows:
-
-  | RQ property     | Sampling context key         | Example                  |
-  | --------------- | ---------------------------- | ------------------------ |
-  | `rq_job.args`   | `rq.job.args.{index}`        | `rq.job.args.0`          |
-  | `rq_job.kwargs` | `rq.job.kwargs.{kwarg}`      | `rq.job.kwargs.my_kwarg` |
-  | `rq_job.func`   | `rq.job.func`                |                          |
-  | `queue.name`    | `messaging.destination.name` |                          |
-  | `rq_job.id`     | `messaging.message.id`       |                          |
-
-  Note that `rq.job.args`, `rq.job.kwargs`, and `rq.job.func` are serialized, not the actual objects on the job.
-
-
-
-  The AWS Lambda integration doesn't add the `aws_event` and `aws_context` objects anymore. Instead, the following, if available, is accessible:
-
-  | AWS property                                | Sampling context key(s)        |
-  | ------------------------------------------- | ------------------------------ |
-  | `aws_event["httpMethod"]`                   | `http.request.method`          |
-  | `aws_event["queryStringParameters"]`        | `url.query`                    |
-  | `aws_event["path"]`                         | `url.path`                     |
-  | full URL                                    | `url.full`                     |
-  | `aws_event["headers"]["X-Forwarded-Proto"]` | `network.protocol.name`        |
-  | `aws_event["headers"]["Host"]`              | `server.address`               |
-  | `aws_context["function_name"]`              | `faas.name`                    |
-  | `aws_event["headers"]`                      | `http.request.header.{header}` |
-
-
-
-  The GCP integration doesn't add the `gcp_env` and `gcp_event` keys anymore. Instead, the following, if available, is accessible:
-
-  | Old sampling context key          | New sampling context key       |
-  | --------------------------------- | ------------------------------ |
-  | `gcp_env["function_name"]`        | `faas.name`                    |
-  | `gcp_env["function_region"]`      | `faas.region`                  |
-  | `gcp_env["function_project"]`     | `gcp.function.project`         |
-  | `gcp_env["function_identity"]`    | `gcp.function.identity`        |
-  | `gcp_env["function_entry_point"]` | `gcp.function.entry_point`     |
-  | `gcp_event.method`                | `http.request.method`          |
-  | `gcp_event.query_string`          | `url.query`                    |
-  | `gcp_event.headers`               | `http.request.header.{header}` |
-
-
-The ability to set `custom_sampling_context` on `start_transaction` was removed. If there is custom data that you want to have accessible in the `sampling_context` of a `traces_sampler` or `profiles_sampler`, set it on the span via the `attributes` argument, as all span attributes are now included in the `sampling_context` by default:
-
-```python diff
-- with start_transaction(custom_sampling_context={"custom_attribute": "custom_value"}):
-+ with start_span(attributes={"custom_attribute": "custom_value"}) as span:
-      # custom_attribute will now be accessible in the sampling context
-      # of your traces_sampler/profiles_sampler
-      ...
-```
-
-
-
-As mentioned above, span attribute keys must be non-empty strings and values can only be several primitive types (excluding `None`) or a list of a single primitive type. See [the OpenTelemetry specification](https://opentelemetry.io/docs/specs/otel/common/#attribute) for details.
-
-
-
-
-## Errors
-
-We've updated how we handle `ExceptionGroup`s. You will now get more data if `ExceptionGroup`s appear in chained exceptions. As an indirect consequence, you might notice a change in how issues are grouped in Sentry.
-
-
-## Integrations
-
-Additional integrations will now be activated automatically if the SDK detects the respective package is installed: Ariadne, ARQ, asyncpg, Chalice, clickhouse-driver, GQL, Graphene, huey, Loguru, PyMongo, Quart, Starlite, Strawberry.
You can [opt-out of specific integrations with the `disabled_integrations` option](/platforms/python/integrations/#disabling-integrations).
-
-We no longer support Django older than 2.0, trytond older than 5.0, and Falcon older than 3.0.
-
-### Logging
-
-The logging integration, which implements out-of-the-box support for the Python standard library `logging` framework, doesn't capture error logs as events anymore by default. The original behavior can still be achieved by providing a custom `event_level` to the `LoggingIntegration`:
-
-```python
-import sentry_sdk
-from sentry_sdk.integrations.logging import LoggingIntegration
-
-sentry_sdk.init(
-    integrations=[
-        # capture error, critical, exception logs
-        # and send them to Sentry as errors
-        LoggingIntegration(event_level="ERROR"),
-    ],
-)
-```
-
-### Threading
-
-The parameter `propagate_hub` has been removed from `ThreadingIntegration`. Use the new `propagate_scope` parameter instead. (If you had `ThreadingIntegration(propagate_hub=True)`, you can remove the parameter.)
-
-
-### clickhouse-driver
-
-The query being executed is now available under the `db.query.text` span attribute (only if `send_default_pii` is `True`).
-
-### PyMongo
-
-The PyMongo integration no longer sets tags automatically. The data is still accessible via span attributes.
-
-The PyMongo integration doesn't set `operation_ids` anymore. The individual IDs (`operation_id`, `request_id`, `session_id`) are now accessible as separate span attributes.
-
-### Redis
-
-In Redis pipeline spans, the single `span["data"]["redis.commands"]` dictionary (for example, `{"count": 3, "first_ten": ["cmd1", "cmd2", ...]}`) no longer exists. Instead, there are two separate keys: `span["data"]["redis.commands.count"]` (containing `3`) and `span["data"]["redis.commands.first_ten"]` (containing `["cmd1", "cmd2", ...]`).
-
-
-## Measurements
-
-The `set_measurement()` API was removed. You can set custom attributes on the span instead with `set_attribute()`.
-
-
-## Sessions
-
-The `auto_session_tracking()` context manager was removed. Use `track_session()` instead.
-
-
-## Scope
-
-Setting `Scope.user` directly is no longer supported. Use `Scope.set_user()` instead.
-
-
-## Metrics
-
-The `sentry_sdk.metrics` API doesn't exist anymore in SDK `3.x` as the [metrics beta has come to an end](https://sentry.zendesk.com/hc/en-us/articles/26369339769883-Metrics-Beta-Coming-to-an-End). The associated experimental options `enable_metrics`, `before_emit_metric`, and `metric_code_locations` have been removed as well.
-
-
-## Internals
-
-There is no concept of a hub anymore and all APIs and attributes that were connected to hubs have been removed.
diff --git a/docs/platforms/python/tracing/configure-sampling/index__v3.x.mdx b/docs/platforms/python/tracing/configure-sampling/index__v3.x.mdx
deleted file mode 100644
index 342365d2a26ba..0000000000000
--- a/docs/platforms/python/tracing/configure-sampling/index__v3.x.mdx
+++ /dev/null
@@ -1,337 +0,0 @@
----
-title: Configure Sampling
-description: "Learn how to configure sampling in your app."
-sidebar_order: 40
----
-
-If you find that Sentry's tracing functionality is generating too much data (for example, if your spans quota is quickly being exhausted), you can choose to sample your traces.
-
-Effective sampling is key to getting the most value from Sentry's performance monitoring while minimizing overhead. The Python SDK provides two ways to control the sampling rate. You can review the options and [examples](#traces-sampler-examples) below.
-
-## Sampling Configuration Options
-
-### 1. Uniform Sample Rate (`traces_sample_rate`)
-
-`traces_sample_rate` is a floating-point value between `0.0` and `1.0`, inclusive, which controls the probability with which each transaction will be sampled:
-
-
-
-With `traces_sample_rate` set to `0.25`, each transaction in your application is randomly sampled with a probability of `0.25`, so you can expect that one in every four transactions will be sent to Sentry.
-
-### 2. Sampling Function (`traces_sampler`)
-
-For more granular control, you can provide a `traces_sampler` function. This approach allows you to:
-
-- Apply different sampling rates to different types of transactions
-- Filter out specific transactions entirely
-- Make sampling decisions based on transaction data
-- Control the inheritance of sampling decisions in distributed traces
-- Use custom attributes to modify sampling
-
-
-
-When using a custom `traces_sampler`, we strongly recommend respecting the parent sampling decision; this ensures your traces will be complete.
-
-
-
-In distributed systems, honoring the propagated sampling decision when trace information flows between services ensures consistent sampling across your entire distributed trace.
-
-
-
-
-
-#### Traces Sampler Examples
-
-1. Prioritizing Critical User Flows
-
-```python
-import sentry_sdk
-from sentry_sdk.types import SamplingContext
-
-def traces_sampler(sampling_context: SamplingContext) -> float:
-    # Use the parent sampling decision if we have an incoming trace.
-    # Note: we strongly recommend respecting the parent sampling decision,
-    # as this ensures your traces will be complete!
-    parent_sampling_decision = sampling_context["parent_sampled"]
-    if parent_sampling_decision is not None:
-        return float(parent_sampling_decision)
-
-    transaction_ctx = sampling_context["transaction_context"]
-    name = transaction_ctx["name"]
-    op = transaction_ctx["op"]
-
-    # Sample all checkout transactions
-    if name and ('/checkout' in name or op == 'checkout'):
-        return 1.0
-
-    # Sample 50% of login transactions
-    if name and ('/login' in name or op == 'login'):
-        return 0.5
-
-    # Sample 10% of everything else
-    return 0.1
-
-sentry_sdk.init(
-    dsn="your-dsn",
-    traces_sampler=traces_sampler,
-)
-```
-
-2. Handling Different Environments and Error Rates
-
-```python
-import os
-
-import sentry_sdk
-from sentry_sdk.types import SamplingContext
-
-def traces_sampler(sampling_context: SamplingContext) -> float:
-    # Use the parent sampling decision if we have an incoming trace.
-    # Note: we strongly recommend respecting the parent sampling decision,
-    # as this ensures your traces will be complete!
-    parent_sampling_decision = sampling_context["parent_sampled"]
-    if parent_sampling_decision is not None:
-        return float(parent_sampling_decision)
-
-    environment = os.environ.get("ENVIRONMENT", "development")
-
-    # Sample all transactions in development
-    if environment == "development":
-        return 1.0
-
-    # Sample more transactions if there are recent errors
-    # Note: hasRecentErrors is a custom attribute that needs to be set
-    if sampling_context.get("hasRecentErrors") is True:
-        return 0.8
-
-    # Sample based on environment
-    if environment == "production":
-        return 0.05  # 5% in production
-    elif environment == "staging":
-        return 0.2  # 20% in staging
-
-    # Default sampling rate
-    return 0.1
-
-# Initialize the SDK with the sampling function
-sentry_sdk.init(
-    dsn="your-dsn",
-    traces_sampler=traces_sampler,
-)
-
-# Custom attributes need to be set on transaction start via the `attributes`
-# argument in order to be available in the traces_sampler
-with sentry_sdk.start_span(
-    name="GET /api/users",
-    op="http.request",
-    attributes={"hasRecentErrors": True},
-) as span:
-    # Your code here
-```
-
-3. Controlling Sampling Based on User and Transaction Properties
-
-```python
-import sentry_sdk
-from sentry_sdk.types import SamplingContext
-
-def traces_sampler(sampling_context: SamplingContext) -> float:
-    # Use the parent sampling decision if we have an incoming trace.
-    # Note: we strongly recommend respecting the parent sampling decision,
-    # as this ensures your traces will be complete!
- parent_sampling_decision = sampling_context["parent_sampled"] - if parent_sampling_decision is not None: - return float(parent_sampling_decision) - - # Always sample for premium users - # Note: user.tier is a custom attribute that needs to be set - if sampling_context.get("user.tier") == "premium": - return 1.0 - - # Sample more transactions for users experiencing errors - # Note: hasRecentErrors is a custom attribute - if sampling_context.get("hasRecentErrors") is True: - return 0.8 - - # Sample less for high-volume, low-value paths - transaction_ctx = sampling_context.get("transaction_context") - if transaction_ctx and transaction_ctx["name"].startswith("/api/metrics"): - return 0.01 - - # Default sampling rate - return 0.2 - -# Initialize the SDK with the sampling function -sentry_sdk.init( - dsn="your-dsn", - traces_sampler=traces_sampler, -) - -# To set custom attributes for this example: -with sentry_sdk.start_span( - name="GET /api/users", - op="http.request", - attributes={"user.tier": "premium", "hasRecentErrors": True}, -) as span: - # Your code here -``` - -4. Complex Business Logic Sampling - -```python -import sentry_sdk -from sentry_sdk.types import SamplingContext - -def traces_sampler(sampling_context: SamplingContext) -> float: - # Use the parent sampling decision if we have an incoming trace. - # Note: we strongly recommend respecting the parent sampling decision, - # as this ensures your traces will be complete! - parent_sampling_decision = sampling_context["parent_sampled"] - if parent_sampling_decision is not None: - return float(parent_sampling_decision) - - # Always sample critical business operations - transaction_ctx = sampling_context["transaction_context"] - if transaction_ctx["op"] in ["payment.process", "order.create", "user.verify"]: - return 1.0 - - # Sample based on user segment - # Note: user.segment is a custom attribute - user_segment = sampling_context.get("user.segment") - if user_segment == "enterprise": - return 0.8 - elif user_segment == "premium": - return 0.5 - - # Sample based on transaction value - # Note: transaction.value is a custom attribute - transaction_value = sampling_context.get("transaction.value") - if transaction_value is not None and transaction_value > 1000: - return 0.7 - - # Sample based on error rate in the service - # Note: service.error_rate is a custom attribute - error_rate = sampling_context.get("service.error_rate") - if error_rate is not None and error_rate > 0.05: - return 0.9 - - # Default sampling rate - return 0.1 - -# Initialize the SDK with the sampling function -sentry_sdk.init( - dsn="your-dsn", - traces_sampler=traces_sampler, -) - -# To set custom attributes for this example: -with sentry_sdk.start_span( - name="Process Payment", - op="payment.process", - attributes={"user.segment": "enterprise", "transaction.value": 1500, "service.error_rate": 0.03}, -) as span: - # Your code here -``` - -5. Performance-Based Sampling - -```python -import sentry_sdk -from sentry_sdk.types import SamplingContext - -def traces_sampler(sampling_context: SamplingContext) -> float: - # Use the parent sampling decision if we have an incoming trace. - # Note: we strongly recommend respecting the parent sampling decision, - # as this ensures your traces will be complete! 
- parent_sampling_decision = sampling_context["parent_sampled"] - if parent_sampling_decision is not None: - return float(parent_sampling_decision) - - # Sample more transactions with high memory usage - # Note: memory_usage_mb is a custom attribute - memory_usage = sampling_context.get("memory_usage_mb") - if memory_usage is not None and memory_usage > 500: - return 0.8 - - # Sample more transactions with high CPU usage - # Note: cpu_percent is a custom attribute - cpu_percent = sampling_context.get("cpu_percent") - if cpu_percent is not None and cpu_percent > 80: - return 0.8 - - # Sample more transactions with high database load - # Note: db_connections is a custom attribute - db_connections = sampling_context.get("db_connections") - if db_connections is not None and db_connections > 100: - return 0.7 - - # Default sampling rate - return 0.1 - -# Initialize the SDK with the sampling function -sentry_sdk.init( - dsn="your-dsn", - traces_sampler=traces_sampler, -) - -# To set custom attributes for this example: -with sentry_sdk.start_span( - name="Process Data", - op="data.process", - attributes={"memory_usage_mb": 600, "cpu_percent": 85, "db_connections": 120}, -) as span: - # Your code here -``` -
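-
-6. Minimal Baseline Sampler
-
-If you only need a fixed rate plus complete distributed traces, a minimal sampler is enough. The following sketch uses only the SDK-provided `parent_sampled` key shown in the examples above; the rate and DSN are placeholders:
-
-```python
-import sentry_sdk
-from sentry_sdk.types import SamplingContext
-
-def traces_sampler(sampling_context: SamplingContext) -> float:
-    # Respect the parent sampling decision to keep distributed traces complete
-    parent_sampling_decision = sampling_context["parent_sampled"]
-    if parent_sampling_decision is not None:
-        return float(parent_sampling_decision)
-
-    # Otherwise, sample 10% of root spans
-    return 0.1
-
-sentry_sdk.init(
-    dsn="your-dsn",
-    traces_sampler=traces_sampler,
-)
-```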
-
-## The Sampling Context Object
-
-When the `traces_sampler` function is called, the Sentry SDK passes a `sampling_context` object with information from the relevant span to help make sampling decisions:
-
-```python
-{
-    "transaction_context": {
-        "name": str,  # transaction title at creation time (SDK-provided)
-        "op": str,  # short description of transaction type (SDK-provided)
-        "data": Optional[dict[str, Any]]
-    },
-    "parent_sampled": Optional[bool],  # whether the parent transaction was sampled (SDK-provided)
-    "parent_sample_rate": Optional[float],  # the sample rate used by the parent (SDK-provided)
-    ...  # extra attributes provided to start_span
-}
-```
-
-### SDK-Provided vs. Custom Attributes
-
-The sampling context contains both SDK-provided attributes and custom attributes:
-
-**SDK-Provided Attributes:**
-- `transaction_context.name`: The name of the transaction
-- `transaction_context.op`: The operation type
-- `parent_sampled`: Whether the parent transaction was sampled
-- `parent_sample_rate`: The sample rate used by the parent
-
-**Custom Attributes:**
-- Any data you add via the `attributes` parameter of `start_span`. Use this for data that you want to use in sampling decisions and also send to Sentry. Read more about the [sampling context](/platforms/python/configuration/sampling/#sampling-context).
-
-## Sampling Decision Precedence
-
-When multiple sampling mechanisms could apply, Sentry follows this order of precedence:
-
-1. If a sampling decision is passed to `start_span` of a root span (transaction), that decision is used.
-2. If `traces_sampler` is defined, its decision is used. Although the `traces_sampler` can override the parent sampling decision, most users will want to respect it.
-3. If no `traces_sampler` is defined but there is a parent sampling decision from an incoming distributed trace, the parent sampling decision is used.
-4. If neither of the above applies, `traces_sample_rate` is used.
-5. If none of these options are set, no transactions are sampled. This is equivalent to setting `traces_sample_rate=0.0`.
-
-## How Sampling Propagates in Distributed Traces
-
-Sentry uses a "head-based" sampling approach:
-
-- A sampling decision is made in the originating service (the "head")
-- This decision is propagated to all downstream services
-
-The two key headers are:
-- `sentry-trace`: Contains the trace ID, span ID, and sampling decision
-- `baggage`: Contains additional trace metadata, including the sample rate
-
-The Sentry Python SDK automatically attaches these headers to outgoing HTTP requests when using auto-instrumentation with libraries like `requests`, `urllib3`, or `httpx`. For other communication channels, you can propagate trace information manually. Learn more about customizing tracing in [custom trace propagation](/platforms/python/tracing/distributed-tracing/custom-trace-propagation/).
diff --git a/docs/platforms/python/tracing/span-lifecycle/index__v3.x.mdx b/docs/platforms/python/tracing/span-lifecycle/index__v3.x.mdx
deleted file mode 100644
index cebccbdfeb7bb..0000000000000
--- a/docs/platforms/python/tracing/span-lifecycle/index__v3.x.mdx
+++ /dev/null
@@ -1,247 +0,0 @@
----
-title: Span Lifecycle
-description: "Learn how to add attributes to spans in Sentry to monitor performance and debug applications."
-sidebar_order: 10
----
-
-
-
-To capture transactions and spans customized to your organization's needs, you must first set up tracing.
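-
-For reference, a minimal tracing setup could look like the following sketch (the DSN is a placeholder):
-
-```python
-import sentry_sdk
-
-sentry_sdk.init(
-    dsn="your-dsn",  # placeholder DSN
-    # Send 100% of transactions; consider a lower rate in production
-    traces_sample_rate=1.0,
-)
-```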
- - - -To add custom performance data to your application, you need to add custom instrumentation in the form of [spans](/concepts/key-terms/tracing/distributed-tracing/#traces-transactions-and-spans). Spans are a way to measure the time it takes for a specific action to occur. For example, you can create a span to measure the time it takes for a function to execute. - - - -## Span Lifecycle - -In Python, spans are typically created using a context manager, which automatically manages the span's lifecycle. When you create a span using a context manager, the span automatically starts when entering the context and ends when exiting it. This is the recommended approach for most scenarios. - -```python -import sentry_sdk - -# Start a span for a task -with sentry_sdk.start_span(op="task", name="Create User"): - # The span will automatically end when exiting this block - user = create_user(email="user@example.com") - send_welcome_email(user) - # The span automatically ends here when the 'with' block exits -``` - -You can call the context manager's `__enter__` and `__exit__` methods to more explicitly control the span's lifecycle. - -## Span Context and Nesting - -When you create a span, it becomes the child of the current active span. This allows you to build a hierarchy of spans that represent the execution path of your application: - -```python -import sentry_sdk - -with sentry_sdk.start_span(op="process", name="Process Data"): - # This code is tracked in the "Process Data" span - - with sentry_sdk.start_span(op="task", name="Validate Input"): - # This is now a child span of "Process Data" - validate_data() - - with sentry_sdk.start_span(op="task", name="Transform Data"): - # Another child span - transform_data() -``` - -## Span Starting Options - -The following options can be used when creating spans: - -| Option | Type | Description | -| ------------- | --------------- | ----------------------------------------------- | -| `op` | `string` | The operation of the span. | -| `name` | `string` | The name of the span. | -| `start_timestamp` | `datetime/float`| The start time of the span. | - -## Using the Context Manager - -For most scenarios, we recommend using the context manager approach with `sentry_sdk.start_span()`. This creates a new span that automatically starts when entering the context and ends when exiting it. 
- -```python -import sentry_sdk - -with sentry_sdk.start_span(op="db", name="Query Users") as span: - # Perform a database query - users = db.query("SELECT * FROM users") - - # You can set an attribute on the span - span.set_attribute("user_count", len(users)) -``` - -The context manager also correctly handles exceptions, marking the span as failed if an exception occurs: - -```python -import sentry_sdk - -try: - with sentry_sdk.start_span(op="http", name="Call External API"): - # If this raises an exception, the span will be marked as failed - response = requests.get("https://api.example.com/data") - response.raise_for_status() -except Exception: - # The span is already marked as failed and has ended - pass -``` - -## Getting the Current Span - -You can access the currently active span using `sentry_sdk.get_current_span()`: - -```python -import sentry_sdk - -# Get the current active span -current_span = sentry_sdk.get_current_span() -if current_span: - current_span.set_attribute("key", "value") -``` - -## Working with Transactions - -[Transactions](/product/insights/overview/transaction-summary/#what-is-a-transaction), also known as root spans, are a special type of span that represent a complete operation in your application, such as a web request. A top-level span (without any parent spans in the current service) will automatically become a transaction: - -```python -import sentry_sdk - -with sentry_sdk.start_span(name="Background Task", op="task") as transaction: - # Your code here - - # You can add child spans to the transaction - with sentry_sdk.start_span(op="subtask", name="Data Processing"): - # Process data - pass -``` - -If you want to prevent a span from becoming a transaction, you can use -the `only_as_child_span` argument: - -```python -import sentry_sdk - -# This span will never be promoted to a transaction, but if there is a running -# span when this span starts, it'll be attached to it as a child span -with sentry_sdk.start_span(name="Background Task", op="task", only_as_child_span=True): - # Your code here -``` - -## Improving Span Data - -### Adding Span Attributes - -Span attributes customize information you can get through tracing. This information can be found in the traces views in Sentry, once you drill into a span. You can capture additional context with span attributes. These are key-value pairs where the keys are non-empty strings and the values are either primitive Python types (excluding `None`), or a list of a single primitive Python type. 
- -```python -import sentry_sdk - -with sentry_sdk.start_span(op="db", name="Query Users") as span: - # Execute the query - users = db.query("SELECT * FROM users WHERE active = true") - - # You can add more data during execution - span.set_attribute("result_count", len(users)) -``` - -You can also add attributes to an existing span: - -```python -import sentry_sdk - -# Get the current span -span = sentry_sdk.get_current_span() -if span: - # Set individual data points - span.set_attribute("user_id", user.id) - span.set_attribute("request_size", len(request.body)) -``` - -### Adding Attributes to All Spans - -To add attributes to all spans, use the `before_send_transaction` callback: - -```python -import sentry_sdk -from sentry_sdk.types import Event, Hint - -def before_send_transaction(event: Event, hint: Hint) -> Event | None: - # Add attributes to the root span (transaction) - if "trace" in event.get("contexts", {}): - if "data" not in event["contexts"]["trace"]: - event["contexts"]["trace"]["data"] = {} - - event["contexts"]["trace"]["data"].update({ - "app_version": "1.2.3", - "environment_region": "us-west-2" - }) - - # Add attributes to all child spans - for span in event.get("spans", []): - if "data" not in span: - span["data"] = {} - - span["data"].update({ - "component_version": "2.0.0", - "deployment_stage": "production" - }) - - return event - -sentry_sdk.init( - # ... - before_send_transaction=before_send_transaction -) -``` - -### Adding Span Operations ("op") - -Spans can have an operation associated with them, which helps Sentry understand the context of the span. For example, database related spans have the `db` operation, while HTTP requests use `http.client`. - -Sentry maintains a [list of well-known span operations](https://develop.sentry.dev/sdk/performance/span-operations/#list-of-operations) that you should use when applicable: - -```python -import sentry_sdk - -# HTTP client operation -with sentry_sdk.start_span(op="http.client", name="Fetch User Data"): - response = requests.get("https://api.example.com/users") - -# Database operation -with sentry_sdk.start_span(op="db", name="Save User"): - db.execute( - "INSERT INTO users (name, email) VALUES (%s, %s)", - (user.name, user.email), - ) - -# File I/O operation -with sentry_sdk.start_span(op="file.read", name="Read Config"): - with open("config.json", "r") as f: - config = json.load(f) -``` - -### Updating the Span Status - -You can update the status of a span to indicate whether it succeeded or failed: - -```python -import sentry_sdk - -with sentry_sdk.start_span(op="task", name="Process Payment") as span: - try: - result = process_payment(payment_id) - if result.success: - # Mark the span as successful - span.set_status("ok") - else: - # Mark the span as failed - span.set_status("error") - span.set_attribute("error_reason", str(result.error)) - except Exception: - # Span will automatically be marked as failed when an exception occurs - raise -``` diff --git a/docs/platforms/python/tracing/span-metrics/index__v3.x.mdx b/docs/platforms/python/tracing/span-metrics/index__v3.x.mdx deleted file mode 100644 index 9e715e5996bfa..0000000000000 --- a/docs/platforms/python/tracing/span-metrics/index__v3.x.mdx +++ /dev/null @@ -1,99 +0,0 @@ ---- -title: Sending Span Metrics -description: "Learn how to add attributes to spans in Sentry to monitor performance and debug applications " -sidebar_order: 20 ---- - - - -To use span metrics, you must first configure tracing in your application. 
-
-
-Span metrics allow you to extend the default metrics collected by tracing with custom performance data and debugging information for your application's traces. There are two main approaches to instrumenting metrics:
-
-1. [Adding metrics to existing spans](#adding-metrics-to-existing-spans)
-2. [Creating dedicated spans with custom metrics](#creating-dedicated-metric-spans)
-
-## Adding Metrics to Existing Spans
-
-You can enhance existing spans with custom metrics by setting attributes on them. This is useful when you want to augment automatic instrumentation or add contextual data to spans you've already created.
-
-```python
-import sentry_sdk
-
-span = sentry_sdk.get_current_span()
-if span:
-    # Add individual metrics
-    span.set_attribute("database.rows_affected", 42)
-    span.set_attribute("cache.hit_rate", 0.85)
-    span.set_attribute("memory.heap_used", 1024000)
-    span.set_attribute("queue.length", 15)
-    span.set_attribute("processing.duration_ms", 127)
-```
-
-### Best Practices for Span Data
-
-When adding metrics as span data:
-
-- Use consistent naming conventions (for example, `category.metric_name`)
-- Keep attribute names concise but descriptive
-- Use appropriate data types (string, number, boolean, or an array containing only one of these types)
-
-## Creating Dedicated Metric Spans
-
-To track specific operations, tasks, or processes in more detail, you can create dedicated spans that focus on the metrics or attributes you want to track. This approach provides better discoverability and more precise span configurations. However, it can also create more noise in your trace waterfall.
-
-```python
-with sentry_sdk.start_span(
-    op="db.metrics",
-    name="Database Query Metrics"
-) as span:
-    # Set metrics after creating the span
-    span.set_attribute("db.query_type", "SELECT")
-    span.set_attribute("db.table", "users")
-    span.set_attribute("db.execution_time_ms", 45)
-    span.set_attribute("db.rows_returned", 100)
-    span.set_attribute("db.connection_pool_size", 5)
-    # Your database operation here
-    pass
-```
-
-## Adding Metrics to All Spans
-
-To consistently add metrics across all spans in your application, you can use the `before_send_transaction` callback:
-
-```python
-import sentry_sdk
-from sentry_sdk.types import Event, Hint
-
-def before_send_transaction(event: Event, hint: Hint) -> Event | None:
-    # Add metrics to the root span
-    if "trace" in event.get("contexts", {}):
-        if "data" not in event["contexts"]["trace"]:
-            event["contexts"]["trace"]["data"] = {}
-
-        event["contexts"]["trace"]["data"].update({
-            "app.version": "1.2.3",
-            "environment.region": "us-west-2"
-        })
-
-    # Add metrics to all child spans
-    for span in event.get("spans", []):
-        if "data" not in span:
-            span["data"] = {}
-
-        span["data"].update({
-            "app.component_version": "2.0.0",
-            "app.deployment_stage": "production"
-        })
-
-    return event
-
-sentry_sdk.init(
-    # ...
-    before_send_transaction=before_send_transaction
-)
-```
-
-For detailed examples of how to implement span metrics in common scenarios, see our Span Metrics Examples guide.
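-
-As a quick illustration of the `category.metric_name` naming convention described above, a hypothetical cache lookup could be instrumented as a dedicated span like this (the attribute names and the in-memory cache stand-in are illustrative, not a fixed schema):
-
-```python
-import sentry_sdk
-
-# Stand-in for a real cache client; illustrative only
-cache = {"profile:42": {"name": "Ada"}}
-user_id = 42
-
-with sentry_sdk.start_span(op="cache.get", name="Fetch User Profile") as span:
-    profile = cache.get(f"profile:{user_id}")
-    span.set_attribute("cache.hit", profile is not None)
-    span.set_attribute("cache.key", f"profile:{user_id}")
-```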
diff --git a/docs/platforms/python/tracing/troubleshooting/index__v3.x.mdx b/docs/platforms/python/tracing/troubleshooting/index__v3.x.mdx
deleted file mode 100644
index d200ca951ccf7..0000000000000
--- a/docs/platforms/python/tracing/troubleshooting/index__v3.x.mdx
+++ /dev/null
@@ -1,95 +0,0 @@
----
-title: Troubleshooting
-description: "Learn how to troubleshoot your tracing setup."
-sidebar_order: 60
----
-
-If you need help managing transactions, you can read more here. If you need additional help, you can ask on GitHub. Customers on a paid plan may also contact support.
-
-## Group Transactions
-
-When Sentry captures transactions, they are assigned a transaction name. This name is generally auto-generated by the Sentry SDK based on the framework integrations you are using. If you can't leverage the automatic transaction generation (or want to customize how transaction names are generated), you can use the scope API to set transaction names or use a custom event processor.
-
-For example, to set a transaction name:
-
-```python
-import sentry_sdk
-
-# Setting the transaction name directly on the current scope
-scope = sentry_sdk.get_current_scope()
-scope.set_transaction_name("UserListView")
-```
-
-You can define a custom `before_send_transaction` callback to modify transaction names:
-
-```python
-import re
-
-import sentry_sdk
-from sentry_sdk.integrations.django import DjangoIntegration
-from sentry_sdk.types import Event, Hint
-
-def transaction_processor(event: Event, hint: Hint) -> Event | None:
-    if event.get("type") == "transaction":
-        # Extract path from transaction name
-        transaction_name = event.get("transaction", "")
-
-        # Remove variable IDs from URLs to reduce cardinality
-        if "/user/" in transaction_name:
-            # Convert /user/123/ to /user/:id/
-            event["transaction"] = re.sub(r'/user/\d+/', '/user/:id/', transaction_name)
-
-    return event
-
-sentry_sdk.init(
-    dsn="your-dsn",
-    integrations=[DjangoIntegration()],
-    traces_sample_rate=1.0,
-    # Add your event processor during SDK initialization
-    before_send_transaction=transaction_processor,
-)
-```
-
-## Control Data Truncation
-
-Currently, every tag has a maximum character limit of 200 characters. Tags over the 200 character limit will be truncated, losing potentially important information. To retain this data, you can split the data over several tags instead.
-
-For example, a 200+ character tag like this:
-
-`https://empowerplant.io/api/0/projects/ep/setup_form/?user_id=314159265358979323846264338327&tracking_id=EasyAsABC123OrSimpleAsDoReMi&product_name=PlantToHumanTranslator&product_id=161803398874989484820458683436563811772030917980576`
-
-...will be truncated to:
-
-`https://empowerplant.io/api/0/projects/ep/setup_form/?user_id=314159265358979323846264338327&tracking_id=EasyAsABC123OrSimpleAsDoReMi&product_name=PlantToHumanTranslator&product_id=1618033988749894848`
-
-Using `span.set_tag` for shorter values, in combination with `span.set_attribute` for longer ones, maintains the details.
-
-```python
-import sentry_sdk
-
-# ...
-
-base_url = "https://empowerplant.io"
-endpoint = "/api/0/projects/ep/setup_form"
-parameters = {
-    "user_id": 314159265358979323846264338327,
-    "tracking_id": "EasyAsABC123OrSimpleAsDoReMi",
-    "product_name": "PlantToHumanTranslator",
-    "product_id": 161803398874989484820458683436563811772030917980576,
-}
-
-with sentry_sdk.start_span(op="request", name="setup form") as span:
-    span.set_tag("base_url", base_url)
-    span.set_tag("endpoint", endpoint)
-    for param, value in parameters.items():
-        span.set_attribute(f"parameters.{param}", value)
-
-    make_request(
-        "{base_url}{endpoint}/".format(
-            base_url=base_url,
-            endpoint=endpoint,
-        ),
-        data=parameters,
-    )
-
-    # ...
-```
diff --git a/docs/platforms/python/troubleshooting__v3.x.mdx b/docs/platforms/python/troubleshooting__v3.x.mdx
deleted file mode 100644
index 3a1ad15f76ab9..0000000000000
--- a/docs/platforms/python/troubleshooting__v3.x.mdx
+++ /dev/null
@@ -1,219 +0,0 @@
----
-title: Troubleshooting
-description: "While we don't expect most users of our SDK to run into these issues, we document edge cases here."
-sidebar_order: 9000
----
-
-## General
-
-Use the information in this page to help answer these questions:
-
-- "What do I do if scope data is leaking between requests?"
-- "What do I do if my transaction has nested spans when they should be parallel?"
-- "What do I do if the SDK has trouble sending events to Sentry?"
-
-
-  The short answer to the first two questions: make sure your `contextvars` work and
-  that your isolation scope was cloned for each concurrency unit.
-
-  Python supports several distinct approaches to concurrency, including threads and
-  coroutines.
-
-  The Python SDK does its best to figure out how contextual data such as tags set
-  with `sentry_sdk.set_tags` is supposed to flow along your control flow. In most
-  cases it works perfectly, but in a few situations some special care must be
-  taken. This is especially true when working with a code base doing concurrency
-  outside of the provided framework integrations.
-
-  The general recommendation is to have one isolation scope per "concurrency unit"
-  (thread, coroutine, etc.). The SDK ensures every thread has an independent scope via the `ThreadingIntegration`.
-  If you do concurrency with `asyncio` coroutines, make sure to use the `AsyncioIntegration`,
-  which will clone the correct scope in your `Task`s.
-
-  The general pattern for creating a new isolation scope is:
-
-  ```python
-  with sentry_sdk.isolation_scope() as scope:
-      # In this block, scope refers to a new fork of the original isolation scope,
-      # with the same client and the same initial scope data.
-      ...
-  ```
-
-  See the Threading section
-  for a more complete example that involves forking the isolation scope.
-
-
-
-  If you are using `gevent` (older than 20.5) or `eventlet` in your application and
-  have configured it to monkeypatch the stdlib, the SDK will abstain from using
-  `contextvars` even if it is available.
-
-  The reason for that is that both of those libraries will monkeypatch only the
-  `threading` module, and not the `contextvars` module.
-
-  The real-world use case where this actually comes up is if you're using Django
-  3.0 within a `gunicorn+gevent` worker on Python 3.7. In such a scenario the
-  monkeypatched `threading` module will honor the control flow of a gunicorn
-  worker while the unpatched `contextvars` will not.
- - It gets more complicated if you're using Django Channels in the same app, but a - separate server process, as this is a legitimate usage of `asyncio` for which - `contextvars` behaves more correctly. Make sure that your channels websocket - server does not import or use gevent at all (and much less call - `gevent.monkey.patch_all`), and you should be good. - - Even then there are still edge cases where this behavior is flat-out broken, - such as mixing asyncio code with gevent/eventlet-based code. In that case there - is no right, _static_ answer as to which context library to use. Even then - gevent's aggressive monkeypatching is likely to interfere in a way that cannot - be fixed from within the SDK. - - This [issue has been fixed with gevent 20.5](https://github.com/gevent/gevent/issues/1407) but continues to be one for - eventlet. - - - - -Your SDK might have issues sending events to Sentry. You might see -`"Remote end closed connection without response"`, `"Connection aborted"`, -`"Connection reset by peer"`, or similar error messages in your logs. -In case of errors and tracing data, this manifests as errors and transactions -missing in Sentry. In case of cron monitors, you might be seeing crons -marked as timed out in Sentry when you know they've run successfully. The SDK -itself might be logging errors about the connection getting reset or the -server closing the connection without response. - -If you experience this, try turning on the keep-alive configuration option, available in SDK -versions `1.43.0` and up. - -```python -import sentry_sdk - -sentry_sdk.init( - # your usual options - keep_alive=True, -) -``` - -If you need more fine-grained control over the behavior of the socket, check out -socket-options. - - - - -If you're on Python version 3.12 or greater, you might see the following deprecation warning on Linux environments since the SDK spawns several threads. - -``` -DeprecationWarning: This process is multi-threaded, use of fork() may lead to deadlocks in the child. -``` - -To remove this deprecation warning, set the [multiprocessing start method to `spawn` or `forkserver`](https://docs.python.org/3.14/library/multiprocessing.html#contexts-and-start-methods). -Remember to do this only in the `__main__` block. - -```python -import sentry_sdk -import multiprocessing -import concurrent.futures - -sentry_sdk.init() - -if __name__ == "__main__": - multiprocessing.set_start_method("spawn") - pool = concurrent.futures.ProcessPoolExecutor() - pool.submit(sentry_sdk.capture_message, "world") -``` - - - - - Currently, every tag has a maximum character limit of 200 characters. Tags over the 200 character limit will become truncated, losing potentially important information. To retain this data, you can split data over several tags instead. - - For example, a 200+ character tagged request: - - `https://empowerplant.io/api/0/projects/ep/setup_form/?user_id=314159265358979323846264338327&tracking_id=EasyAsABC123OrSimpleAsDoReMi&product_name=PlantToHumanTranslator&product_id=161803398874989484820458683436563811772030917980576` - - The 200+ character request above will become truncated to: - - `https://empowerplant.io/api/0/projects/ep/setup_form/?user_id=314159265358979323846264338327&tracking_id=EasyAsABC123OrSimpleAsDoReMi&product_name=PlantToHumanTranslator&product_id=1618033988749894848` - - - - - - -## Profiling - - - -If you don't see any profiling data in [sentry.io](https://sentry.io), you can try the following: - -- Ensure that Tracing is enabled. 
-- Ensure that the automatic instrumentation is sending performance data to Sentry by going to the **Performance** page in [sentry.io](https://sentry.io). -- If the automatic instrumentation is not sending performance data, try using custom instrumentation. -- Enable debug mode in the SDK and check the logs. - -### Upgrading From Older SDK Versions - -#### Transaction-Based Profiling - -The transaction-based profiling feature was experimental prior to version `1.18.0`. To update your SDK to the latest version, remove `profiles_sample_rate` from `_experiments` and set it in the top-level options. - -```python diff -sentry_sdk.init( - dsn="___PUBLIC_DSN___", - traces_sample_rate=1.0, -- _experiments={ -- "profiles_sample_rate": 1.0, # for versions before 1.18.0 -- }, -+ profiles_sample_rate=1.0, -) -``` - -#### Continuous Profiling - -The continuous profiling feature was experimental prior to version `2.24.1`. To upgrade your SDK to the latest version: - -- Remove `continuous_profiling_auto_start` from `_experiments` and set `profile_lifecycle="trace"` in the top-level options. -- Add `profile_session_sample_rate` to the top-level options. - - - - -## Crons - - - - - - You may not have linked errors to your monitor. - - - - - - The SDK might be experiencing network issues. Learn more about troubleshooting network issues. - - - - - - You may not have set up alerts for your monitor. - - - - - - Our current data retention policy is 90 days. - - - - - - Currently, we only support crontab expressions with five fields. - - - - - - Yes, just make sure you're using SDK version `1.44.1` or higher since that's when support for monitoring async functions was added. - -
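-
-  For reference, a minimal sketch of monitoring an async task with the crons decorator (the DSN and monitor slug are placeholders you would configure in Sentry):
-
-  ```python
-  import asyncio
-
-  import sentry_sdk
-  from sentry_sdk.crons import monitor
-
-  sentry_sdk.init(dsn="your-dsn")  # placeholder DSN
-
-  @monitor(monitor_slug="nightly-cleanup")  # placeholder slug
-  async def nightly_cleanup():
-      # Your async job logic here
-      await asyncio.sleep(1)
-
-  asyncio.run(nightly_cleanup())
-  ```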