2 changes: 1 addition & 1 deletion docs/cli/configuration.mdx
@@ -27,7 +27,7 @@ You can manually create an [Organization Token](https://sentry.io/orgredirect/or
You can also sign in to your Sentry account (if you're not already) and create an Auth Token directly from this page.

<Alert>
Some CLI functionality, such as [Crons Monitoring](/product/crons/getting-started/cli/), is dependent on [Data Source Name (DSN)](/concepts/key-terms/dsn-explainer/) authentication.
Some CLI functionality, such as [Crons Monitoring](/product/monitors/crons/getting-started/cli/), is dependent on [Data Source Name (DSN)](/concepts/key-terms/dsn-explainer/) authentication.
</Alert>

You can create an Auth Token from this page in one of the following three ways:
2 changes: 1 addition & 1 deletion docs/cli/crons.mdx
@@ -84,7 +84,7 @@ sentry-cli monitors run -s "0 * * * *" --check-in-margin 10 --max-runtime 5 --ti

### Specifying Monitor Environments (Optional)

If your cron monitor runs in multiple environments you can use the `-e` flag to specify which [Monitor Environment](/product/crons/job-monitoring/#multiple-environments) to send check-ins to.
If your cron monitor runs in multiple environments you can use the `-e` flag to specify which [Monitor Environment](/product/monitors/crons/job-monitoring/#multiple-environments) to send check-ins to.

```bash {tabTitle: Node.JS}
sentry-cli monitors run -e dev my-monitor-slug -- node path/to/file.js
2 changes: 1 addition & 1 deletion docs/platforms/dotnet/common/crons/hangfire/index.mdx
@@ -4,7 +4,7 @@ description: "Learn more about how to monitor your Hangfire jobs."
sidebar_order: 5001
---

The .NET SDK provides an integration with [Hangfire](https://www.hangfire.io/) to monitor your jobs by automatically [creating check-ins for them](/product/crons/job-monitoring/). The SDK relies on job filters that are set up when you call `UseSentry`. For example:
The .NET SDK provides an integration with [Hangfire](https://www.hangfire.io/) to monitor your jobs by automatically [creating check-ins for them](/product/monitors/crons/job-monitoring/). The SDK relies on job filters that are set up when you call `UseSentry`. For example:

```csharp
using Hangfire;
2 changes: 1 addition & 1 deletion docs/platforms/java/common/crons/troubleshooting.mdx
@@ -6,7 +6,7 @@ sidebar_order: 9000

<Expandable title="How do I send an attachment with a check-in (such as a log output)?">

Attachments aren't supported by our Java SDK yet. For now, you can use the [check-in attachments API](/product/crons/getting-started/http/#check-in-attachment-optional).
Attachments aren't supported by our Java SDK yet. For now, you can use the [check-in attachments API](/product/monitors/crons/getting-started/http/#check-in-attachment-optional).

</Expandable>

@@ -13,7 +13,7 @@ This integration only works in the Deno runtime.

_Import name: `Sentry.denoCronIntegration`_

[Sentry Crons](/product/crons/) allows you to monitor the uptime and performance of any scheduled, recurring job in your application.
[Sentry Crons](/product/monitors/crons/) allows you to monitor the uptime and performance of any scheduled, recurring job in your application.

The DenoCron integration sets up automatic monitoring for your cron jobs created by [`Deno.cron`](https://docs.deno.com/deploy/kv/manual/cron). It captures check-ins and sends them to Sentry.

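For context, enabling this integration looks roughly like the following minimal sketch; the `npm:@sentry/deno` import specifier, DSN placeholder, job name, and schedule are illustrative assumptions, not part of this diff:

```javascript
// Minimal sketch: register the DenoCron integration so Deno.cron jobs are monitored.
import * as Sentry from "npm:@sentry/deno";

Sentry.init({
  dsn: "__YOUR_DSN__", // placeholder DSN
  integrations: [Sentry.denoCronIntegration()],
});

// Jobs created with Deno.cron are instrumented automatically;
// each run sends check-ins to Sentry.
Deno.cron("sample-job", "*/10 * * * *", () => {
  // job logic goes here (hypothetical)
});
```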
2 changes: 1 addition & 1 deletion docs/platforms/javascript/guides/nextjs/manual-setup.mdx
@@ -492,7 +492,7 @@ module.exports = withSentryConfig(nextConfig, {

## Step 6: Instrument Vercel Cron Jobs (Optional)

You can automatically create [Cron Monitors](/product/crons/) in Sentry if you have configured [Vercel cron jobs](https://vercel.com/docs/cron-jobs).
You can automatically create [Cron Monitors](/product/monitors/crons/) in Sentry if you have configured [Vercel cron jobs](https://vercel.com/docs/cron-jobs).

Update `withSentryConfig` in your `next.config.(js|mjs)` file with the following option:

4 changes: 2 additions & 2 deletions docs/product/index.mdx
@@ -50,11 +50,11 @@ Our [**AI Agents Monitoring**](/product/insights/ai/agents/) feature gives you i

### Uptime Monitoring

Sentry's [**Uptime Monitoring**](/product/uptime-monitoring/) helps you maintain uptime for your web services by monitoring relevant URLs. It continuously tracks configured URLs, delivering alerts and insights to quickly identify downtime and troubleshoot issues. By leveraging [distributed tracing](/product/uptime-monitoring/uptime-tracing/), Sentry enables you to pinpoint any errors that occur during an uptime check, simplifying triage and accelerating root cause analysis. Uptime monitoring includes [uptime request spans](/product/uptime-monitoring/#uptime-request-spans) by default. These act as the root of any uptime issue's trace, giving you better context for faster debugging.
Sentry's [**Uptime Monitoring**](/product/monitors/uptime-monitoring/) helps you maintain uptime for your web services by monitoring relevant URLs. It continuously tracks configured URLs, delivering alerts and insights to quickly identify downtime and troubleshoot issues. By leveraging [distributed tracing](/product/monitors/uptime-monitoring/uptime-tracing/), Sentry enables you to pinpoint any errors that occur during an uptime check, simplifying triage and accelerating root cause analysis. Uptime monitoring includes [uptime request spans](/product/monitors/uptime-monitoring/#uptime-request-spans) by default. These act as the root of any uptime issue's trace, giving you better context for faster debugging.

### Recurring Job Monitoring

[**Cron Monitors**](/product/crons/) allows you to monitor the uptime and performance of any scheduled, recurring job in Sentry. Once implemented, it'll allow you to get alerts and metrics to help you solve errors, detect timeouts, and prevent disruptions to your service.
[**Cron Monitors**](/product/monitors/crons/) allows you to monitor the uptime and performance of any scheduled, recurring job in Sentry. Once implemented, it'll allow you to get alerts and metrics to help you solve errors, detect timeouts, and prevent disruptions to your service.

### Visibility Into Your Data Across Environments

4 changes: 2 additions & 2 deletions docs/product/issues/issue-details/uptime-issues/index.mdx
@@ -9,7 +9,7 @@ og_image: /og-images/product-issues-issue-details-uptime-issues.png

An uptime issue is a grouping of detected downtime events for a specific URL. A downtime event is generated by
active uptime alerts when HTTP requests fail to meet our
[uptime check criteria](/product/uptime-monitoring/#uptime-check-criteria).
[uptime check criteria](/product/monitors/uptime-monitoring/#uptime-check-criteria).

![Uptime issue details](./img/uptime-issue-details.png)

@@ -20,4 +20,4 @@ Uptime checks made against web services configured with one of Sentry's supporte

## Issue Lifecycle

Uptime issues are grouped by the monitored URL and created upon the first detected downtime. Sentry automatically resolves an ongoing uptime issue when the monitored URL returns to a healthy status and meets our [uptime check criteria](/product/uptime-monitoring/#uptime-check-criteria). If the URL experiences subsequent downtime, the issue's status will change to regressed.
Uptime issues are grouped by the monitored URL and created upon the first detected downtime. Sentry automatically resolves an ongoing uptime issue when the monitored URL returns to a healthy status and meets our [uptime check criteria](/product/monitors/uptime-monitoring/#uptime-check-criteria). If the URL experiences subsequent downtime, the issue's status will change to regressed.
55 changes: 55 additions & 0 deletions docs/product/issues/monitors-and-alerts/index.mdx
@@ -0,0 +1,55 @@
---
title: Monitors and Alerts
sidebar_order: 10
description: "Learn how Monitors and Alerts help you customize your issue management process."
---

[Monitors](/product/monitors/) and [Alerts](/product/alerts/) work together to help you create issues and take action when problems occur in your project. While they serve different purposes, they're designed to work as a team.

## How They Work Together

**Monitors** are the "detectors" - they watch for specific conditions and create issues when those conditions are met. **Alerts** are the "responders" - they take action when issues are created or change state and match the alert's filter criteria.

Here's the typical flow:

1. **Monitor detects a problem** → Creates an issue
2. **Issue triggers Alert** → Takes external action (sends notifications, creates tickets, calls webhooks)

![Monitors and alerts flow chart](./img/monitors-and-alerts-flow-chart.png)

## Monitors: Creating Issues

Monitors let you customize when errors and performance problems become issues. They can track:

- **Custom metrics** and span attributes
- **Scheduled jobs** (cron monitors)
- **HTTP endpoints** (uptime monitors)
- **Default behaviors** (errors, replays, traces, profiles)

[See all Monitor types](/product/monitors/#types-of-monitors)

When a monitor's conditions are met, it automatically creates an issue with the specified priority, assignee, and other attributes.
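For example, a cron monitor receives check-ins from a wrapped job, and a missed or failed check-in is what causes it to create an issue. Here's a minimal sketch reusing the `sentry-cli monitors run` command from the `docs/cli/crons.mdx` hunk earlier in this diff; the monitor slug and script path are placeholders:

```bash
# Wrap a scheduled job so the monitor receives a check-in for each run;
# "my-monitor-slug" and the script path are placeholders for your own values.
sentry-cli monitors run my-monitor-slug -- node path/to/file.js
```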

## Alerts: Taking Action

Alerts respond to issue state changes by performing external actions like:

- Sending notifications to Slack, email, or other channels
- Creating tickets in Jira or other project management tools
- Calling webhooks or integrations

## The Connection

Alerts must be connected to Monitors to run. This connection ensures that:

- Issues created by Monitors can trigger appropriate responses
- You have full control over both detection and response
- Actions are taken only for the issues you care about

## Getting Started

1. **Create a Monitor** to define when issues should be created
2. **Create an Alert** to define what actions to take
3. **Connect them** so the Alert responds to issues from that Monitor

Using Monitors and Alerts gives you a complete workflow from problem detection to team notification, ticket creation, and more.
93 changes: 93 additions & 0 deletions docs/product/new-monitors-and-alerts/alerts/best-practices.mdx
@@ -0,0 +1,93 @@
---
title: Alerts Best Practices
keywords: ["best practice", "alerting", "manage noise"]
sidebar_order: 30
description: "Learn best practices for creating alerts."
---

<Alert>
New Monitors and Alerts is currently in <strong>beta</strong>. Beta features are still a work in progress and may have bugs. We recognize the irony. Help improve this feature by providing feedback on our [GitHub discussion](https://github.com/getsentry/sentry/discussions/101960).
</Alert>

Alerts should notify you when there's an important problem with your application. But they shouldn't be too noisy, because that can lead to alert fatigue. The following best practices will help you create relevant alerts that notify the right people — that is, the people equipped to fix the problem.

There are two types of alerts: [issue alerts](/product/alerts/alert-types/#issue-alerts) and [metric alerts](/product/alerts/alert-types/#metric-alerts). Most of our alerting best practices are specific to issue alerts; however, the [alert conditions best practices](#alert-conditions-best-practices) apply to both issue and metric alerts.

## Issue Alerts Best Practices

An issue alert is triggered when an individual issue meets some criteria. These criteria (or "triggers") can be based on state changes or frequency. The best practices that follow cover alerts based on state and frequency changes, as well as reducing noise and routing alerts effectively.

### State-Change Alerts

The following triggers, or “when” conditions, capture state changes in issues:

- A new issue is created
- The issue changes state from resolved to unresolved, or a regression occurs
- The issue changes state from archived to escalating

Your first instinct might be to set an alert for every state change. However, this is likely to result in too many alerts if you're running an app with a significant number of users. In particular, regressions will be more common than you expect because Sentry auto-resolves issues after 14 days of silence (configurable), and many issues keep coming back after the 14-day window.

To deal with this, the **Issues** page includes the [**Review List**](/product/issues/states-triage/#mark-reviewed) (in the "For Review" tab), containing only issues that have had a state change in the last seven days. We recommend that you review this list once a day. If you need real-time notifications for particular types of issues, such as those affecting your enterprise customers, you can always create alerts with those filters.

### Frequency-Based Alerts

Below, we describe best practices for setting alerts using the following frequency-based triggers:

- **Number of events in an issue**: This is a very commonly used trigger, but remember that frequency isn't everything: a low-frequency error can be more important than a high-frequency one if it's in a more important part of your app.
- **Number of users affected by an issue**: Sometimes a very small number of users create a lot of errors, so in some cases alerting on users affected can be more important than error frequency. However, remember that not every error with a user count in Sentry is actually user-facing, and vice versa.
- **Percent of sessions affected by an issue**: Thresholds based on error counts or users affected require constant manual adjustment as your traffic patterns change, and they don't handle seasonality well (for example, fewer errors on the weekend). It can also be hard to assess the impact of an issue from error counts or counts of users affected. In such cases, if you've configured your project to capture session data, you can opt to alert when an issue affects a certain percentage of user sessions.

### Reducing Noise

One way to keep alerts from becoming too noisy is to use filters, or “if” conditions, as part of your alert configuration. Below, we describe best practices for setting alerts using the following noise-reducing filtering options:

- **Prioritize high priority issues**: If you're getting too many notifications about non-error or low priority issues, add the 'high priority' condition to your alert configuration. That way, you'll only get alerts for high-priority issues.
- **Prioritize using tags**: Filter issue alerts based on important tags, such as `customer_type=enterprise` or `url=/very/important/page`. You can find the list of tags available in your project under **[Project] > Settings > Tags**. The list is an aggregation of all tag keys (default and custom) encountered in events for that project.
- **Prioritize new issues**: If you're frequently getting alerted about old issues, filter your alerts to issues created in the last few days using the `The issue is older or newer than...` filter.
- **Filter transient issues**: Many issues exhibit a short burst of events that can trip your frequency-based alerts. To filter out these issues, use the `Issue has happened at least {X} times` filter.
- **Prioritize the latest release**: Use the `The event is from the latest release` filter to make your issue alert only apply to the latest release.
- **Archive noisy issues**: If you're seeing alerts from the same issue repeatedly, [archive the issue](/product/issues/states-triage/#archive). (This is not an alert configuration setting.)

### Routing

These routing best practices ensure that you alert the right people about a problem in your application.

- **Ownership Rules**: Use [ownership rules and code owners](/product/issues/ownership-rules/) to let Sentry automatically send alerts to the right people, as well as to ease the configuration burden. You can configure ownership in **[Project] > Settings > Ownership Rules**. When there are no matching owners, the alert goes to all project members by default. If this is too broad and you'd like a specific owner to be the fallback, end your ownership rules with a rule like `*:<owner>` (see the sketch after this list).
- **Delivery methods for different priorities**: Use different delivery methods to separate alerts of different priorities. For example, you might route from highest to lowest priority like so:
- High priority: Page (for example, PagerDuty or OpsGenie)
- Medium priority: Notification (for example, Slack)
- Low priority: Email
- **Review List**: Found in the "For Review" tab of **Issues**, the [**Review List**](/product/issues/states-triage/#mark-reviewed) is where you can check on your lowest priority issues without receiving any alerts.
- **Build an integration**: If you would like to route alert notifications to solutions with which Sentry doesn't yet have an out-of-the-box integration, you can use our [integration platform](/organization/integrations/integration-platform/). When you create an integration, it will be available in the alert actions menu. You might want to use your own integration for:
- Sending alerts to integrations not supported natively
- Aggregating alerts from your different monitoring systems
- Writing custom rules in the webhook handler to route alerts more intelligently
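
To illustrate the fallback pattern mentioned in the ownership rules item above, here's a hypothetical set of rules ending in a catch-all owner; the paths, URL pattern, email, and team names are placeholders, not defaults:

```
path:src/checkout/* #checkout-team
url:*/api/payments/* payments@example.com
*:#platform-team
```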

## Alert Conditions Best Practices

Both frequency-based issue alerts and metric alerts can notify you in two ways:

- When they cross a [fixed threshold](#fixed-thresholds)
- When they deviate from their historical behavior, based on a [dynamic threshold](#dynamic-thresholds-change-alerts), or what we call a _change alert_

### Fixed Thresholds

Fixed thresholds are most effective when you have a clear idea of what constitutes good or bad performance. They're the type of threshold you'll use most often when setting up alerts. Some examples of fixed thresholds are:

- When your app's crash rate exceeds 1%
- When your app's transaction volume drops to zero
- When any issue affects more than 100 enterprise users in a day
- When the response time of a key transaction exceeds 500 ms

### Dynamic Thresholds: Change Alerts

Dynamic thresholds help you detect when a metric deviates significantly from its “normal” range. For example, the percentage of sessions affected by an issue in the last 24 hours is _20% greater than one week ago_ (dynamic), rather than the percentage of sessions affected is simply _greater than 20%_ (fixed).

Dynamic thresholds are good for when it’s cumbersome to create fixed thresholds for every metric of interest, or when you don’t have an expected value for a metric, such as in the following scenarios:

- **Seasonal fluctuations**: Seasonal metrics, such as number of transactions (which fluctuates daily), are more accurately monitored by comparing them to the previous day or week, rather than a fixed value.
- **Unpredictable growth**: Fixed-threshold alerts may require continuous manual adjustment as traffic patterns change, such as with a fast-growing app. Dynamic thresholds work regardless of changing traffic patterns.

You may want to **complement** (more common) rather than **replace** (less common) fixed thresholds with dynamic thresholds.

Learn more about [change alerts for issue alerts](/product/alerts/create-alerts/issue-alert-config/#change-alerts) and [change alerts for metric alerts](/product/alerts/create-alerts/metric-alert-config/#change-alerts-percent-change) in the full documentation.