This repository is an open source project that provides scripts which can be consumed within dbt and pushed out to Snowflake. It is also possible to remove the Jinja tags and use the scripts as plain SQL. The scripts can be run and scheduled with Snowflake Alerts or Snowflake Tasks.
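As a hedged illustration of the scheduling option above (not taken from this package), a monitor query can be wrapped in a Snowflake Alert; the warehouse name, notification integration name, recipient, and one-hour window below are all placeholders:

```sql
-- Illustrative only: MONITOR_WH and MY_EMAIL_INT are placeholder objects
-- you would create yourself before running this.
CREATE OR REPLACE ALERT failed_logins_alert
  WAREHOUSE = MONITOR_WH
  SCHEDULE = '60 MINUTE'
  IF (EXISTS (
    SELECT 1
    FROM snowflake.account_usage.login_history
    WHERE is_success = 'NO'
      AND event_timestamp > DATEADD(hour, -1, CURRENT_TIMESTAMP())
  ))
  THEN CALL SYSTEM$SEND_EMAIL(
    'MY_EMAIL_INT',
    'admin@example.com',
    'Failed logins detected',
    'Check ACCOUNT_USAGE.LOGIN_HISTORY for details.'
  );

-- Alerts are created in a suspended state and must be resumed.
ALTER ALERT failed_logins_alert RESUME;
```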
The quickest and easiest way to consume and deploy these Snowflake monitors is with the Snowflake monitoring service monitorial.io (https://manager.monitorial.io). It provides a wizard where you select the Snowflake monitors you want; monitorial.io then deploys them for you and sends alerts to the destinations of your choice, such as Slack, Teams, Webhook, Splunk, Azure Log Analytics, AWS CloudWatch, and Email.
Add the following to your packages.yml file:

```yaml
  - git: https://github.com/monitorial-io/monitorial-monitors.git
    revision: "1.0.8"
```
The project namespace for this package is dbt_monitorialio_monitors, and its macros can be used as follows:

```sql
{{ dbt_monitorialio_monitors.monitor_name() }}
```

For example:

```sql
{{ dbt_monitorialio_monitors.failed_logins(time_filter=1400) }}
```
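For instance, a dbt model can wrap a packaged monitor directly; the file path below is illustrative, not prescribed by the package:

```sql
-- models/monitors/failed_logins.sql (illustrative path)
{{ dbt_monitorialio_monitors.failed_logins(time_filter=1400) }}
```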
To contribute to this project, please fork the repository and submit a pull request.
Please ensure the yml documentation has been updated to include:
```yaml
macros:
  - name: <<name of your macro, which matches your filename>>
    description: <<description of your macro>>
    docs:
      show: true
    monitorial:
      version: 1.0.3
      defaults:
        schedule: <<CRON schedule this should be run on>>
        severity: "<<severity level to be associated when a result is returned>>"
        message: "<<message to be sent out in the notification>>"
        message_type: "<<type of message this represents (eg security)>>"
        environment: "<<environment in which the monitor is to be deployed>>"
        column_filters:
          datatypes: []
    arguments:
      - name: <<name of your argument>>
        type: <<data type>>
        description: <<description of your argument>>
      - name: ...
        type: ...
        description: ...
```
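As a hedged illustration, a completed entry for a hypothetical failed_logins macro might look like this; the schedule, severity, messages, and argument details are invented for the example:

```yaml
macros:
  - name: failed_logins
    description: Returns failed login attempts within the given time window
    docs:
      show: true
    monitorial:
      version: 1.0.3
      defaults:
        schedule: "0 * * * *"            # hourly (illustrative)
        severity: "high"
        message: "Failed logins detected"
        message_type: "security"
        environment: "production"
        column_filters:
          datatypes: []
    arguments:
      - name: time_filter
        type: integer
        description: Lookback window in minutes
```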
name | Scanner | description |
---|---|---|
ensure_sso_configured | CIS-1.1 | Federated authentication enables users to connect to Snowflake using secure SSO (single sign-on). With SSO enabled, users authenticate through an external (SAML 2.0-compliant or OAuth 2.0) identity provider (IdP). Once authenticated by an IdP, users can access their Snowflake account for the duration of their IdP session without having to authenticate to Snowflake again. Users can choose to initiate their sessions from within the interface provided by the IdP or directly in Snowflake. Snowflake offers native support for federated authentication and SSO through Okta and Microsoft ADFS. Snowflake also supports most SAML 2.0-compliant vendors as an IdP, including Google G Suite, Microsoft Azure Active Directory, OneLogin, and Ping Identity PingOne. To use an IdP other than Okta or ADFS, you must define a custom application for Snowflake in the IdP. There are two ways to configure SAML: - By creating the security integration (recommended) - By setting the SAML_IDENTITY_PROVIDER account parameter (deprecated, a violation will still be reported if this method is used) |
ensure_scim_integration_configured | CIS-1.2 | The System for Cross-domain Identity Management (SCIM) is an open specification designed to help facilitate the automated management of user identities and groups (i.e. roles) in cloud applications using RESTful APIs. Snowflake supports SCIM 2.0 integration with Okta, Microsoft Azure AD and custom identity providers. Users and groups from the identity provider can be provisioned into Snowflake, which functions as the service provider. |
ensure_snowflake_password_unset_sso_users | CIS-1.3 | Ensure that Snowflake password is unset for SSO users. |
ensure_mfa_enabled_all_human_users | CIS-1.4 | Multi-factor authentication (MFA) is a security control used to add an additional layer of login security. It works by requiring the user to present two or more proofs (factors) of user identity. An MFA example would be requiring a password and a verification code delivered to the user's phone during user sign-in. The MFA feature for Snowflake users is powered by the Duo Security service. |
ensure_minimum_password_length | CIS-1.5 | To mitigate the risk of unauthorized access to a Snowflake account through easily guessable passwords, Snowflake enforces the following password policy as a minimum requirement while using the ALTER USER command and the web interface: - Must be at least 8 characters long. - Must contain at least 1 digit. - Must contain at least 1 uppercase letter and 1 lowercase letter. Snowflake password policies can be used to specify and enforce further constraints on password length and complexity. Snowflake supports setting a password policy for your Snowflake account and for individual users. Only one password policy can be set at any given time for your Snowflake account or a user. If a password policy exists for the Snowflake account and another password policy is set for a user in the same Snowflake account, the user-level password policy takes precedence over the account-level password policy. The password policy applies to new passwords that are set in your Snowflake account. To ensure that users with existing passwords meet the password policy requirements, require users to change their password during their next login to Snowflake as shown in Step 6: Require a Password Change. |
ensure_service_accounts_key_pair_authentication | CIS-1.6 | A service account is an identity used by scripts, jobs, applications, pipelines, etc. to talk to Snowflake. It is also sometimes known as "application user", "service principal", "system account", or "daemon user". On the platform level Snowflake does not differentiate between Snowflake users created for and used by humans and Snowflake users created for and used by services. Password-based authentication used by humans can be augmented by a second factor (MFA), e.g. a hardware token, or a security code pushed to a mobile device. Services and automation cannot be easily configured to authenticate with a second factor. Instead, for such use cases, Snowflake supports using key pair authentication as a more secure alternative to password-based authentication. Note that password-based authentication for a service account can be enabled along with key-based authentication. To ensure that only key-based authentication is enabled for a service account, the PASSWORD parameter for that Snowflake user must be set to null. |
ensure_authentication_key_pairs_rotated | CIS-1.7 | Snowflake supports using RSA key pair authentication as an alternative to password authentication and as a primary way to authenticate service accounts. Authentication key pair rotation is a process of replacing an existing authentication key pair with a freshly generated key pair. Snowflake supports two active authentication key pairs to allow for uninterrupted key rotation. Rotate and replace your authentication key pairs based on the expiration schedule at least once every 180 days. |
ensure_inactive_users_disabled | CIS-1.8 | Access grants tend to accumulate over time unless explicitly set to expire. Regularly revoking unused access grants and disabling inactive user accounts is a good countermeasure to this dynamic. If credentials of an inactive user account are leaked or stolen, it may take longer to discover the compromise. In Snowflake a user account can be disabled by users with the ACCOUNTADMIN role. |
ensure_idle_session_timeout_with_accountadmin_securityadmin | CIS-1.9 | A session begins when a user connects to Snowflake and authenticates successfully using a Snowflake programmatic client, Snowsight, or the classic web interface. A session is maintained indefinitely with continued user activity. After a period of inactivity in the session, known as the idle session timeout, the user must authenticate to Snowflake again. Session policies can be used to modify the idle session timeout period. The idle session timeout has a maximum value of four hours. |
ensure_limit_users_accountadmin_securityadmin | CIS-1.10 | By default, ACCOUNTADMIN is the most powerful role in a Snowflake account. Users with the SECURITYADMIN role grant can trivially escalate their privileges to that of ACCOUNTADMIN. Following the principle of least privilege that prescribes limiting user's privileges to those that are strictly required to do their jobs, the ACCOUNTADMIN and SECURITYADMIN roles should be assigned to a limited number of designated users (e.g., less than 10, but at least 2 to ensure that access can be recovered if one ACCOUNTADMIN user is having login difficulties). |
ensure_users_accountadmin_email_address | CIS-1.11 | Every Snowflake user can be assigned an email address. The email addresses are then used by Snowflake features like notification integration, resource monitor and support cases to deliver email notifications to Snowflake users. In trial Snowflake accounts these email addresses are used for password reset functionality. The email addresses assigned to ACCOUNTADMIN users are used by Snowflake to notify administrators about important events related to their accounts. For example, ACCOUNTADMIN users are notified about impending expiration of SAML2 certificates or SCIM access tokens. |
ensure_no_users_accountadmin_securityadmin_default_role | CIS-1.12 | The ACCOUNTADMIN system role is the most powerful role in a Snowflake account and is intended for performing initial setup and managing account-level objects. Users with the SECURITYADMIN role can trivially escalate their privileges to that of ACCOUNTADMIN. Neither of these roles should be used for performing daily non-administrative tasks in a Snowflake account. Instead, users should be assigned custom roles containing only those privileges that are necessary for successfully completing their job responsibilities. |
ensure_no_accountadmin_securityadmin_ganted_to_custom_role | CIS-1.13 | The principle of least privilege requires that every identity is only given privileges that are necessary to complete its tasks. The ACCOUNTADMIN system role is the most powerful role in a Snowflake account and is intended for performing initial setup and managing account-level objects. Users with the SECURITYADMIN role can trivially escalate their privileges to that of ACCOUNTADMIN. Neither of these roles should be used for performing daily non-administrative tasks in a Snowflake account. |
ensure_no_tasks_owned_by_accountadmin_securityadmin | CIS-1.14 | The ACCOUNTADMIN system role is the most powerful role in a Snowflake account and is intended for performing initial setup and managing account-level objects. Users with the SECURITYADMIN role can trivially escalate their privileges to that of ACCOUNTADMIN. Neither of these roles should be used for running Snowflake tasks. A task should be run using a custom role containing only those privileges that are necessary for successful execution of the task. Snowflake executes tasks with the privileges of the task owner; the role that has the OWNERSHIP privilege on the task owns it. To avoid granting a task inappropriate privileges, the OWNERSHIP privilege on a task run as owner should be assigned to a custom role containing only those privileges that are necessary for successful execution of the task. |
ensure_no_tasks_run_with_accountadmin_securityadmin | CIS-1.15 | The ACCOUNTADMIN system role is the most powerful role in a Snowflake account and is intended for performing initial setup and managing account-level objects. Users with the SECURITYADMIN role can trivially escalate their privileges to that of ACCOUNTADMIN. Neither of these roles should be used for running Snowflake tasks. A task should be run using a custom role containing only those privileges that are necessary for successful execution of the task. |
ensure_no_stored_procedures_owned_by_accountadmin_securityadmin | CIS-1.16 | The ACCOUNTADMIN system role is the most powerful role in a Snowflake account and is intended for performing initial setup and managing account-level objects. Users with the SECURITYADMIN role can trivially escalate their privileges to that of ACCOUNTADMIN. Neither of these roles should be used for running Snowflake stored procedures. A stored procedure should be run using a custom role containing only those privileges that are necessary for successful execution of the stored procedure. Snowflake executes stored procedures with the privileges of the stored procedure owner or the caller. The role that has the OWNERSHIP privilege on the stored procedure owns it. To avoid granting a stored procedure inappropriate privileges, the OWNERSHIP privilege on a stored procedure run as owner should be assigned to a custom role containing only those privileges that are necessary for successful execution of the stored procedure. |
ensure_no_stored_procedures_run_with_accountadmin_securityadmin | CIS-1.17 | The ACCOUNTADMIN system role is the most powerful role in a Snowflake account; it is intended for performing initial setup and managing account-level objects. Users and stored procedures with the SECURITYADMIN role can escalate their privileges to ACCOUNTADMIN. Snowflake stored procedures should not run with the ACCOUNTADMIN or SECURITYADMIN roles. Instead, stored procedures should be run using a custom role containing only those privileges that are necessary for successful execution of the stored procedure. |
monitoring_accountadmin_securityadmin_role_grants | CIS-2.1 | By default, ACCOUNTADMIN is the most powerful role in a Snowflake account and users with the SECURITYADMIN role grant can trivially escalate their privileges to that of ACCOUNTADMIN. Following the principle of least privilege that prescribes limiting user's privileges to those that are strictly required to do their jobs, the ACCOUNTADMIN and SECURITYADMIN roles should be assigned to a limited number of designated users. Any new ACCOUNTADMIN and SECURITYADMIN role grants should be scrutinized. |
monitoring_manage_grants | CIS-2.2 | The MANAGE GRANTS privilege is one of the most powerful privileges in the Snowflake environment. This privilege gives the ability to grant or revoke privileges on any object as if the invoking role were the owner of the object. A custom role with the MANAGE GRANTS privilege on account level will not be able to grant privileges on the account level as that privilege is implicitly reserved for the ACCOUNTADMIN and SECURITYADMIN roles. However, such custom roles will be able to grant any privileges on any objects below the account level. Following the principle of least privilege and given how powerful the MANAGE GRANTS privilege is, any new MANAGE GRANTS privilege grants should be scrutinized. |
monitoring_password_signins_sso_users | CIS-2.3 | The security benefit of SSO is to relieve users from having to set up and manage distinct sets of credentials for distinct applications and services. It also allows security administrators to focus on hardening and defending only one identity storage and limited number of user credentials. |
monitoring_password_signin_without_mfa | CIS-2.4 | Multi-factor authentication (MFA) is a security control used to add an additional layer of login security. It works by requiring the user to present two or more proofs (factors) of user identity. An MFA example would be requiring a password and a verification code delivered to the user's phone during user sign-in. The MFA feature for Snowflake users is powered by the Duo Security service. |
monitoring_security_integrations | CIS-2.5 | A security integration object is used to configure SSO and SCIM integrations. |
monitoring_network_policies | CIS-2.6 | Network policies allow restricting access to a Snowflake account based on source IP addresses. A network policy can be configured either on the account level, for all users of the account, or on the user level, for a specific user. In the presence of both account-level and user-level policies the latter takes precedence. A network policy can also be configured on the SCIM and Snowflake OAuth security integrations to restrict the list of source IP addresses allowed when exchanging an authorization code for an access or refresh token and when using a refresh token to obtain a new access token. If network policy is not set on the security integration of the aforementioned types, the account-level network policy, if any, is used. |
monitoring_scim_token_creation | CIS-2.7 | The System for Cross-domain Identity Management (SCIM) is an open specification designed to help facilitate the automated management of user identities and groups (i.e. roles) in cloud applications using RESTful APIs. Snowflake supports SCIM 2.0 integration with Okta, Microsoft Azure AD and custom identity providers. Users and groups from the identity provider can be provisioned into Snowflake, which functions as the service provider. SCIM access token is a bearer token used by SCIM clients to authenticate to Snowflake SCIM server. |
monitoring_new_share_exposures | CIS-2.8 | Snowflake tables, views and UDFs can be shared across Snowflake accounts using share objects created by data providers and imported by data consumers. To expose a share to another account, the share provider account needs to add or set consumer accounts on a share using the ALTER SHARE command. The consumer account can then import the share using the CREATE DATABASE FROM SHARE command. |
monitoring_unsupported_snowflake_connector | CIS-2.9 | Snowflake provides client software (drivers, connectors, etc.) for connecting to Snowflake and using certain Snowflake features (e.g. Apache Kafka for loading data, Apache Hive metadata for external tables). The clients must be installed on each local workstation or system from which you wish to connect. The Snowflake Connector for Python, JDBC and ODBC drivers are some of the most used Snowflake clients. Old versions of drivers and connectors may contain security vulnerabilities that have been fixed in the latest version. To ensure that only up-to-date software is used, you should actively monitor session logins coming from unsupported clients and upgrade those to the latest available versions. |
network_policy_configured_to_allow_acces_from_trusted_ip_addresses | CIS-3.1 | Network policies allow restricting access to a Snowflake account based on source IP addresses. A network policy can be configured either on the account level, for all users of the account, or on the user level, for a specific user. In the presence of both account-level and user-level policies, the user-level policies take precedence. A network policy can also be configured on the SCIM and Snowflake OAuth security integrations to restrict the list of source IP addresses allowed when exchanging an authorization code for an access or refresh token and when using a refresh token to obtain a new access token. If a network policy is not set on a security integration of the aforementioned types, the account-level network policy, if any, is used. |
network_policy_configured_for_service_accounts | CIS-3.2 | Network policies allow restricting access to a Snowflake account based on source IP addresses. A network policy can be configured either on the account level, for all users of the account, or on the user level, for a specific user. In the presence of both account-level and user-level policies, the user-level policies take precedence. A service account is a Snowflake user whose credentials are used by scripts, jobs, applications, pipelines, etc. to talk to Snowflake. Other names include "application user", "service principal", "system account", or "daemon user". Service account is not a Snowflake specific term. |
rekeying_enabled_for_account | CIS-4.1 | All Snowflake customer data is encrypted by default using the latest security standards and best practices. Snowflake uses strong AES 256-bit encryption with a hierarchical key model rooted in a hardware security module. All Snowflake-managed keys are automatically rotated when they are more than 30 days old. Furthermore, data can be automatically re-encrypted ("rekeyed") on a yearly basis. Data encryption and key rotation is entirely transparent and requires no configuration or management. Key rotation transitions an active encryption key to a retired state. Practically this means transitioning the active encryption key from being used for encrypting new data and decrypting data encrypted with that key to only decrypting data encrypted with that key. Rekeying transitions a retired encryption key to being destroyed. Practically this means re-encryption of the data encrypted by a retired key with a new key and disposing of the retired key. |
aes_encryption_size | CIS-4.2 | All ingested data stored in Snowflake tables is encrypted using 256-bit long AES encryption keys. However, data uploaded to internal stages is by default encrypted with 128-bit long AES encryption keys. |
data_retention_time_in_days | CIS-4.3 | Snowflake Time Travel enables accessing historical data (i.e., data that has been changed or deleted) at any point within a defined period. It relies on configuring a data retention period for your critical data assets. The DATA_RETENTION_TIME_IN_DAYS object parameter is used to set the data retention period on the account, database, schema, or table level. When the MIN_DATA_RETENTION_TIME_IN_DAYS parameter is set at the account level, the effective minimum data retention period for an object is determined by MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS). |
min_data_retention_time_in_days | CIS-4.4 | The MIN_DATA_RETENTION_TIME_IN_DAYS account parameter can be set by users with the ACCOUNTADMIN role to set a minimum retention period for the account. This parameter does not alter or replace the DATA_RETENTION_TIME_IN_DAYS parameter value. However it may change the effective data retention time. When this parameter is set at the account level, the effective minimum data retention period for an object is determined by MAX(DATA_RETENTION_TIME_IN_DAYS, MIN_DATA_RETENTION_TIME_IN_DAYS). |
require_storage_integration_for_stage_creation | CIS-4.5 | Ensure that creating an external stage to access a private cloud storage location requires referencing a storage integration object as cloud credentials. |
require_storage_integration_for_stage_operation | CIS-4.6 | Ensure that loading data from or unloading data to a private cloud storage location requires using a named external stage that references a storage integration object. If this parameter is not set, then users can specify the explicit cloud provider credentials directly in the COPY statement. |
external_stages_have_storage_integrations | CIS-4.7 | External stage is a Snowflake object used for loading data from external storage locations into Snowflake tables and unloading data from Snowflake tables into external storage locations. Currently supported external storage locations are Amazon S3 buckets, Google Cloud Storage buckets and Microsoft Azure containers. Storage integration is a Snowflake object that encapsulates external storage authentication configuration as well as an optional set of allowed or blocked storage locations. When configuring an external stage, a storage integration can be referenced in lieu of storage service credentials. |
prevent_unload_to_inline_url | CIS-4.8 | Prevent ad hoc data unload operations to external cloud storage by enabling the PREVENT_UNLOAD_TO_INLINE_URL account parameter. |
tri_secret_secure_enabled | CIS-4.9 | Tri-Secret Secure is the combination of a Snowflake-maintained key and a customer-managed key in the cloud provider platform that hosts your Snowflake account to create a composite master key to protect your Snowflake data. The composite master key acts as an account master key and wraps all of the keys in the hierarchy; however, the composite master key never encrypts raw data. |
data_masking_enabled_for_sensitive_data | CIS-4.10 | Data masking policy is a fine-grained access control used to protect sensitive data from unauthorized access by selectively masking plain-text data in table and view columns at query time. |
row_level_policies_configured_for_sensitive_data | CIS-4.11 | Row access policies are used to determine which rows to return in the query result. Row access policies can include conditions and functions in the policy expression to transform the data at query runtime when those conditions are met. |
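To give one concrete remediation example for the checks above: the minimum password length scanner (CIS-1.5) maps to a Snowflake password policy. This is a minimal sketch; the database, schema, policy name, and the 14-character minimum are placeholders, not values prescribed by this package:

```sql
-- Placeholder object: SECURITY_DB.POLICIES.ACCOUNT_PASSWORD_POLICY
CREATE PASSWORD POLICY IF NOT EXISTS security_db.policies.account_password_policy
  PASSWORD_MIN_LENGTH = 14
  PASSWORD_MIN_UPPER_CASE_CHARS = 1
  PASSWORD_MIN_LOWER_CASE_CHARS = 1
  PASSWORD_MIN_NUMERIC_CHARS = 1;

-- Apply at account level; a user-level policy, if set, takes precedence.
ALTER ACCOUNT SET PASSWORD POLICY security_db.policies.account_password_policy;
```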
name | description |
---|---|
blocked_ip_address_events | Blocked IP address login failures (this requires Network Policies to be configured)
login_failures_by_ip_address | Count of login failures by IP address
blocked_ip_address_aggregate | Blocked IP address login failures aggregated by username, IP address, driver and authentication type (this requires Network Policies to be configured)
changes_to_network_policies | Monitor changes to Network Policies and associated objects |
network_policy_exists | Monitors for the presence of network policy |
altered_client_sessions | Monitor for client applications that are keeping sessions open longer than desired by policy |
public_role_grants | The public role should have the fewest possible grants (read: none). Every user in a Snowflake account has the public role granted to them. Monitor QUERY_HISTORY for alterations or grants to the public role
unauthorized_privilege_grants | Snowflake recommends using a designated role for all user management tasks. Monitor that all user and role grants originate from this role, and that this role is only granted to appropriate users |
admin_roles_query_check | Monitor for all instances of a user using the default Snowflake admin roles to ensure their use is appropriate |
user_creation | Monitors for the creation of users |
user_creation_non_admin | Monitors for user creation by non admin roles |
user_altered | Monitors occurrences of altered users |
user_altered_key_pair | Monitors occurrences of altered users key pair auth removal |
user_altered_mfa_bypass | Monitors occurrences of altered users mfa bypass time period |
enabled_user_previously_disabled | Monitors instances where a previously disabled user has been enabled |
user_altered_to_plaintext_password | Monitor for the enablement of plaintext user passwords |
scim_api_calls | Applicable if SCIM user-provisioning via the REST API is configured. Monitor SCIM API calls to ensure API requests comply with policy. See https://docs.snowflake.com/en/user-guide/scim-intro#auditing-with-scim
high_privilege_grants | Monitors high privilege query activity that involves elevated privileges in your Snowflake Account |
accountadmin_role_grants | The Snowflake role ACCOUNTADMIN should be closely monitored for granting to new users |
authentication_method_by_user | Monitors the number of times each user authenticated and the authentication method they used |
not_using_sso_auth | Monitor if users who have used SSO before are using other authentication methods instead. After users successfully authenticate using SSO, they should not be using other methods
by_key_pair_auth | Monitor the use of key pair authentication by querying login attempts
has_key_pair_and_password | Monitor if exclusive Key Pair authentication users are configured to use other authentication methods. Users who have key pair authentication should be using it exclusively |
has_key_pair_using_other | Monitor if exclusive Key Pair authentication users are configured to use other authentication methods. Users who have key pair authentication should be using it exclusively |
has_key_pair_using_password | Monitor if users who have used key pair authentication before are using password methods instead. After users successfully authenticate using key pair, they should not be using passwords
frequently_authenticated_users | Identifying users who log in frequently can help spot anomalies or unexpected behavior
scim_token_creation | SCIM access tokens have a six-month lifespan, so it is important to track how many were generated. This monitor needs ACCOUNTADMIN rights to run, so careful planning is required to implement it
failed_login_attempts_concurrent | The following approach returns results based on either the FAILED_LOGINS count or the log in failure rate (AVERAGE_SECONDS_BETWEEN_LOGIN_ATTEMPTS). This approach helps distinguish a brute force attack from a human who is struggling to remember their password. There are inline comments on how to adjust the query to limit results |
failed_login_attempts | Failed login monitor grouped by user and first auth method |
mfa_auth_stats | Multi factor authentication stats |
password_login_with_mfa | Most recent logins with password when MFA is enabled |
periodic_rekey_enabled | Checks that automatic data rekeying is turned on to provide additional data security |
periodic_rekey_changes | Changes to this setting are rare and deserving of scrutiny |
integration_object_changes | Because integrations can enable a new means of access to Snowflake data, closely monitor for new integrations or the modification of existing integrations |
security_integration_changes | Because security integrations can enable a new means of access to Snowflake data, closely monitor for new integrations or the modification of existing security integrations |
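To sketch the kind of query the failed_login_attempts_concurrent monitor above describes, here is a simplified, illustrative version (not the packaged macro); the 24-hour window and both thresholds are arbitrary and would be tuned per account:

```sql
-- Flag users with many failed logins, or failures arriving very fast,
-- using the SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY view.
SELECT
    user_name,
    COUNT(*) AS failed_logins,
    DATEDIFF(second, MIN(event_timestamp), MAX(event_timestamp))
        / NULLIF(COUNT(*) - 1, 0) AS average_seconds_between_login_attempts
FROM snowflake.account_usage.login_history
WHERE is_success = 'NO'
  AND event_timestamp > DATEADD(hour, -24, CURRENT_TIMESTAMP())
GROUP BY user_name
HAVING COUNT(*) >= 10                      -- many failures, or ...
    OR (COUNT(*) > 1
        AND DATEDIFF(second, MIN(event_timestamp), MAX(event_timestamp))
            / (COUNT(*) - 1) < 2);         -- ... failures arriving very fast
```

A low average-seconds-between-attempts with a modest failure count suggests automation (brute force), while a high average with a few failures looks more like a forgetful human.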
name | description |
---|---|
scim_token_expiry | Checks when the SCIM access token was created and sends a notification if it is older than the specified warn_after_days
orphaned_roles | Checks to see if there are any orphaned roles in your account excluding those specified. If any are found, a notification will be sent. |
login_failures_by_username_detailed | Count of login failures by username |
login_attempts_suspect_clients | Logins detected from suspect clients |
login_attempts_suspect_ip_addresses | Logins detected from suspect IP addresses
login_attempts_unseen_ip_address_password | Logins detected (with password) from a previously unseen IP address
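A simplified sketch of the previously-unseen-IP idea above (illustrative only; the packaged macro may differ in its baseline window and filters):

```sql
-- Password logins in the last day from IPs never seen before that day,
-- using the SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY view.
WITH known_ips AS (
    SELECT DISTINCT client_ip
    FROM snowflake.account_usage.login_history
    WHERE event_timestamp < DATEADD(day, -1, CURRENT_TIMESTAMP())
)
SELECT l.user_name, l.client_ip, l.event_timestamp
FROM snowflake.account_usage.login_history l
WHERE l.event_timestamp >= DATEADD(day, -1, CURRENT_TIMESTAMP())
  AND l.first_authentication_factor = 'PASSWORD'
  AND NOT EXISTS (SELECT 1 FROM known_ips k WHERE k.client_ip = l.client_ip);
```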
name | description |
---|---|
long_running_queries | Returns a list of queries that have been running for more than the specified timeframe |
rbar_detection | Detect row-by-row processing queries repeatedly executed within the same session as these are a high priority for tuning. Ref: https://www.analytics.today/blog/top-3-snowflake-performance-tuning-tactics |
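A minimal sketch of a long-running-query check like the one above, using the INFORMATION_SCHEMA QUERY_HISTORY table function; the 30-minute threshold is arbitrary:

```sql
-- Queries still running after more than 30 minutes.
SELECT query_id, user_name, warehouse_name,
       DATEDIFF(minute, start_time, CURRENT_TIMESTAMP()) AS minutes_running
FROM TABLE(information_schema.query_history())
WHERE execution_status = 'RUNNING'
  AND start_time < DATEADD(minute, -30, CURRENT_TIMESTAMP());
```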
name | description |
---|---|
pipe_channel_error | Checks to see if an error message has been produced when attempting to read messages from the associated Google Cloud Pub/Sub queue or Microsoft Azure Event Grid storage queue. If there is an error then a notification will be sent |
pipe_freshness | Checks to see when the most recent file was loaded successfully by Snowpipe into the destination table. If the file is older than the freshness_threshold then a notification will be sent |
pipe_outstanding_messages | Checks the number of messages in the queue that have been queued but not received yet and number of files queued for loading by the pipe. If either of these values exceed the threshold then a notification will be sent |
pipe_status | Checks the current execution state of a pipe. Any status except those in the exception list will trigger a notification to be sent
streams_gone_invalid | Checks to see if there are any streams that cannot be queried successfully for the given databases. If any are found, a notification will be sent.
streams_with_invalid_tables | Checks to see if there are any streams that have invalid base tables for the given databases. If any are found, a notification will be sent.
streams_gone_stale | Checks to see if there are any streams that have gone stale for the given databases. If any are found, a notification will be sent.
streams_going_stale | Checks to see if there are any streams which may become stale if they aren't consumed from for the given databases. If any are found, a notification will be sent. |
not_null | Checks for the presence of a null value. If the results contain a null then a notification will be sent
unique_check | Checks to see if there are non-unique records in a table. If the results contain non-unique records then a notification will be sent
source_freshness | Checks to see when data was last retrieved; if the time exceeds the specified expectation then a notification will be sent
expect_column_values_to_be_between | Checks to see if the column has a value between those specified; if any rows exceed the limits then a notification will be sent
expect_column_value_lengths_to_be_between | Checks that column entries are strings with length between a min_length value and a max_length value (inclusive). If any rows fall outside this range then a notification will be sent
expect_column_value_lengths_to_equal | Checks that column entries are strings with a specific length. If any rows don't match then a notification will be sent
name | description |
---|---|
omnata_sync_failures | Checks for failures in the Omnata data pipeline and sends a notification if any failures are detected |
omnata_sync_incomplete | Checks for incomplete syncs in the Omnata data pipeline and sends a notification if any are detected