Automate the conversion, testing, and deployment of Sigma rules to Grafana Alerting rules with GitHub Actions, following a Detection as Code approach driven by a declarative configuration file.
- Config Validator: Validates configuration files against the JSON schema to ensure proper structure and required fields before processing.
- Sigma Rule Validation: Before converting and deploying your Sigma rules, we strongly recommend validating them to ensure they conform to the Sigma specification. Use the SigmaHQ Sigma Rules Validator GitHub Action to automatically validate your rules in your CI/CD pipeline (a workflow sketch follows this list).
- Sigma Rule Converter: Converts Sigma rules to target query languages using sigma-cli. Supports dynamic plugin installation, custom configurations, and output management, producing JSON output files containing converted queries and rule metadata.
- Grafana Query Integrator: Processes the JSON output from the Sigma Rule Converter and generates Grafana-compatible alert rule configurations, bridging the gap between converted Sigma rules and Grafana alerting.
- Sigma Rule Deployer: Deploys alert rule files to Grafana, supporting both incremental deployments (only changed files) and fresh deployments (complete replacement).
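As a minimal sketch of the validation step, the following workflow runs the validator on pull requests. The action reference and input name here are assumptions, not a verified interface; check the SigmaHQ Sigma Rules Validator marketplace listing for the exact name and inputs.

```yaml
# Hypothetical validation workflow; the action reference and its inputs
# below are assumptions - consult the SigmaHQ Sigma Rules Validator
# documentation for the actual interface.
name: Validate Sigma rules
on:
  pull_request:
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: SigmaHQ/sigma-rules-validator@main  # assumed action reference
        with:
          path: rules/  # assumed input: directory containing the Sigma rules
```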
- Create a GitHub repository and add the Sigma rules and pipelines you want to convert
  - Following the main SigmaHQ/sigma convention, we put our rules into folders starting with `rules`, and we put our Sigma pipelines in a `pipelines` folder
  - Note that any Sigma correlation rules you want to convert must have the rules they reference in the same file (see the FAQ)
- Create a Grafana service account token and add it as a secret to your GitHub repository
- Ensure the service account is either an Editor and/or has the following RBAC roles:
  - Alerting: Access to alert rules provisioning API
  - Alerting: Rules Reader
  - Alerting: Rules Writer
  - Alerting: Set provisioning status
  - Data sources: Reader
- Create a configuration file that defines one or more conversions and add it to the repository (an illustrative sketch follows this list)
  - See the sample configuration file
  - See also the configuration file schema for more details
- Add a workflow to run the conversion/integration Actions on a PR commit or issue comment
  - See the reusable workflow convert-integrate.yml
- Add a workflow to run the deployment Action on a push to main (a caller workflow sketch also follows this list)
  - See the reusable workflow deploy.yml
- Create a PR that adds or modifies a converted Sigma rule, and add a comment `sigma convert all` to the PR to see the conversion and integration process in action
- Once you're happy with the results, merge the PR into main, which will trigger the deployer to provision the Alerting rules to your Grafana instance
- With the alert rules successfully provisioned, set up Alerting notifications for the relevant folder and/or groups to directly contact affected users. Alternatively, you can connect them to Grafana IRM and use it to manage on-call rotas and simplify alert routing
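As a purely illustrative sketch of a configuration file, the following shows the general shape a conversion definition might take. All key names here are assumptions based on the concepts described in this document (rule folders, pipelines, backends, data source UIDs); refer to the sample configuration file and the JSON schema for the real field names.

```yaml
# Hypothetical configuration sketch - every key name below is an assumption;
# see the linked sample configuration file and schema for the actual
# structure expected by the Actions.
conversions:
  - name: linux-auth-loki              # assumed: a name for this conversion
    rules: rules/linux/                # assumed: folder of Sigma rules to convert
    pipelines:
      - pipelines/loki-pipeline.yml    # assumed: Sigma pipeline(s) to apply
    backend: loki                      # assumed: sigma-cli conversion backend
    datasource_uid: <your-datasource-uid>  # assumed: UID of the target data source
```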
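A caller workflow for the deployment step might look like the sketch below. The version tag, input name, and secret name are assumptions; check the reusable workflow deploy.yml for its actual interface.

```yaml
# Hypothetical caller workflow for the reusable deploy.yml; the input and
# secret names are assumptions - see deploy.yml for its actual interface.
name: Deploy alert rules
on:
  push:
    branches: [main]
jobs:
  deploy:
    uses: grafana/sigma-rule-deployment/.github/workflows/deploy.yml@vX.X.X
    with:
      config_path: sigma-deploy.yml                   # assumed input name
    secrets:
      grafana_token: ${{ secrets.GRAFANA_SA_TOKEN }}  # assumed secret name
```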
These Actions can convert rules using any Sigma backend and produce valid alert rules for any data source; however, to date they have only been thoroughly tested with Loki and Elasticsearch. In particular, converting log queries into metric queries so they can be used correctly with Grafana Managed Alerting depends on the backend supporting that option, or on modifying the generated queries using the query_model option.
Relevant conversion backends and data sources that can be used in Grafana include:
| Sigma Backend | Data Source | Supported Integration Method |
|---|---|---|
| Grafana Loki | Loki data source | Native |
| Elasticsearch | Elasticsearch data source | Native |
| Azure KQL | Azure Monitor data source | Custom Model |
| Datadog | Datadog data source | Custom Model |
| QRadar AQL | IBM Security QRadar data source | Custom Model |
| OpenSearch | OpenSearch data source | Custom Model |
| Splunk | Splunk data source | Custom Model |
| SQLite | SQLite data source | Custom Model |
| SurrealQL | SurrealDB data source | Custom Model |
- Native: The data source plugin is supported by the integrate action and the query model is generated automatically.
- Custom Model: The data source plugin is supported by the integrate action but the query model must be passed as a custom model in the conversion configuration.
Important
Alert rules only work with metric queries, not log queries.
Data source plugins vary in their support for metric queries, and for Sigma rules the query generated by the convert action will often be a log query, not a metric query. In contrast, a converted Sigma Correlation rule will generally produce a metric query, which can be used directly in the alert rule.
- Native support: Some data sources, such as Loki, can apply metric functions to log queries (for example, wrapping a log query in count_over_time(... [5m]) turns it into a metric query)
- Limited support: Other data sources, including the Elasticsearch data source, do not support metric queries through their native query language, but their log query response can include metric metadata (e.g., counts)
For data sources that lack native metric query support, you must provide a custom query model using the query_model configuration option (see How can I use a custom query model for a data source?).
The query model is a JSON object that defines the structure of the query Grafana sends to the data source for execution.
To ensure the data source plugin can execute your queries, you may need to provide a bespoke query_model in the conversion configuration. You do this by specifying a fmt.Sprintf-formatted JSON string, which receives the following arguments:
- the ref ID for the query
- the UID for the data source
- the query, escaped as a JSON string
An example query model would be:
query_model: '{"refId":"%s","datasource":{"type":"my_data_source_type","uid":"%s"},"query":"%s","customKey":"customValue"}'Or for Elasticsearch:
query_model: '{"refId":"%s","datasource":{"type":"elasticsearch","uid":"%s"},"query":"%s","alias":"","metrics":[{"type":"count","id":"1"}],"bucketAggs":[{"type":"date_histogram","id":"2","settings":{"interval":"auto"}}],"intervalMs":2000,"maxDataPoints":1354,"timeField":"@timestamp"}'Other than the refId and datasource (which are required by Grafana), the keys used for the query model are data source dependent. They can be identified by testing a query against the data source with the Query inspector open, going to the Query tab, and examining the items used in the request.data.queries list.
The main restriction is that they need to be valid Sigma rules, including the id and title metadata fields. If you are using Correlation rules, the rule files must contain all the referenced rules within the rule file (using YAML's multiple-document feature, i.e., documents combined with ---); a sketch follows the note below.
Important
The Sigma Rules Validator action does not currently work with multiple documents in a single YAML file, and hence we recommend storing such rules in a separate directory from the other Sigma rules. More info can be found here
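As a minimal sketch, a correlation rule file bundling its referenced rule might look like the following. The titles, IDs, and detection fields are made up for illustration; see the Sigma correlation rule specification for the full format.

```yaml
# Hypothetical example: a base rule and the correlation rule that references
# it, stored together in one file as two YAML documents separated by ---.
title: Failed login                     # made-up base rule
id: 1e3bc4a6-5a2e-4b5f-9d6e-0f1a2b3c4d5e
name: failed_login                      # the name the correlation rule refers to
status: experimental
logsource:
  product: linux
  service: auth
detection:
  selection:
    event_outcome: failure              # made-up field for illustration
  condition: selection
---
title: Multiple failed logins by user   # made-up correlation rule
id: 7c9d0e1f-2a3b-4c5d-8e9f-0a1b2c3d4e5f
status: experimental
correlation:
  type: event_count
  rules:
    - failed_login
  group-by:
    - user
  timespan: 10m
  condition:
    gte: 10
```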
This should be the UID (Unique IDentifier) of the data source, not the data source name. You can find the UID for a data source by opening the Explore page, selecting the relevant data source, and examining the page URL for the text "datasource":"XXX" - that value (i.e., XXX) is the UID.
The pySigma Loki backend supports two optional boolean flags:
- add_line_filters: adds an additional line filter to each query without one, using the longest values being searched for, to help reduce the volume of results being parsed
- case_sensitive: changes the default behaviour of Sigma string matches to be case sensitive
The impacts of these two flags are different:
- Line filters can be enabled in essentially all contexts - they are a performance enhancement that should never affect the results a query brings back
- Changing the case sensitivity of Sigma rules carries some risk. Whilst some logs, like audit logs, should be case sensitive, others may not be, meaning certain rules could miss logs with it enabled, and some rules may not bring back any results. In general, if there's any possibility the values being searched for in the rules are user-entered, we would strongly recommend using `case_sensitive: false` (which is also the default); otherwise it can usually be true, as its queries will be more performant (but you may want to test it with a known example). A sketch of passing these flags to sigma-cli follows this list.
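As a minimal sketch, assuming the flags are passed as sigma-cli backend options with `-O` (in this project they would normally be set via the conversion configuration instead):

```yaml
# Hypothetical workflow step invoking sigma-cli directly; the paths and the
# use of -O backend options here are assumptions for illustration only.
- name: Convert Linux rules to LogQL
  run: |
    sigma convert --target loki \
      --pipeline pipelines/loki-pipeline.yml \
      -O add_line_filters=true \
      -O case_sensitive=false \
      rules/linux/
```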
Detection as Code (DaC) is a practice where security detection rules are:
- Stored as structured, human-readable files
- Managed in a version-controlled environment (like Git) to track all changes
- Deployed through automated pipelines to ensure consistency and traceability
The goal is to manage the entire lifecycle, from developing accurate detection rules to deploying the database queries and configuring alert systems, all within a single, versioned environment.
This project helps you achieve Detection as Code using Sigma rules, GitHub, and Grafana:
- Sigma rules provide a standardized format for storing detection logic with thousands of community examples
- Sigma CLI converts these rules into queries compatible with multiple database systems (Loki, Elasticsearch, etc.)
- Grafana executes those queries on a schedule and triggers alerts via Grafana IRM when detections occur
Sigma Rule Deployment automates this workflow: it provides GitHub Actions to convert Sigma rules to queries, validates their functionality, and provisions them to Grafana as alert rules; making security monitoring more reliable and maintainable.
To release new versions of sigma-rule-deployment, we use Git tags to denote an officially released version, and automation to push the appropriately tagged Docker image. When the main branch is in a state that is ready to release, the process is as follows:
- Determine the correct version number using the Semantic Versioning methodology. All version numbers should be in the format `\d+\.\d+\.\d+(-[0-9A-Za-z-]+)?`
- Create a PR to update all the version tags used in the reusable workflows convert-integrate.yml and deploy.yml to the new version, and merge it into `main` once it is approved, e.g.:
```yaml
uses: grafana/sigma-rule-deployment/actions/convert@vX.X.X
```
- Checkout `main` and create a signed tag for the release, named the version number prefixed with a v, e.g., `git tag --sign --message="Release vX.X.X" vX.X.X`
- Push the tag to GitHub, e.g., `git push --tags`
- Create a release in GitHub against the appropriate tag. If the version number starts with `v0`, or ends with `-alpha`/`beta`/`rcX` etc., remember to mark it as a pre-release
- Validate that the "Build Consolidated Image" action, which pushes the tagged image to the GitHub Container Repository (GHCR), has completed successfully for the Release action
