We should add the ability to check for recent data at a grouping level via a group_by parameter.
For example, a user might expect recent rows for every product in a fact table. So, a grouped recency check should flag each product_id whose latest row does not meet the recency requirement.
e.g.
with latest_grouped_timestamps as (

    select
        product_id,
        max(timestamp_column) as latest_timestamp_column
    from
        fact_table
    group by 1

),
validation_errors as (

    select * from latest_grouped_timestamps
    where latest_timestamp_column < { threshold_timestamp }

)
select count(*) from validation_errors
Note: this would only cover grouped values that already exist in the model being tested. So, for example, products that have never sold would not be checked. Covering those would require a left join to a dimension table, which is currently out of scope for this test.
@jeffwhite-indeed yes, this works with any single table, as long as you have a column you want to group by and a timestamp or date column to check for recent values within each group. That covers fact tables, type 2 dimensions in a dimensional model, and other tables like that.
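For instance, here is a minimal sketch of the same pattern applied to a type 2 dimension, assuming a hypothetical dim_customers table with a customer_id grouping column and an updated_at timestamp, and a hard-coded one-day threshold in place of the { threshold_timestamp } placeholder above (interval syntax varies by warehouse):

with latest_grouped_timestamps as (

    -- one row per customer with its most recent timestamp
    select
        customer_id,
        max(updated_at) as latest_updated_at
    from
        dim_customers
    group by 1

),
validation_errors as (

    -- customers whose latest row is older than the allowed interval
    select * from latest_grouped_timestamps
    where latest_updated_at < current_timestamp - interval '1 day'

)
-- the check passes when this count is zero
select count(*) from validation_errors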
Name: expect_grouped_row_values_to_have_recent_data?
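A minimal sketch of how such a generic test macro might look, assuming the name suggested above; the parameter names (group_by, timestamp_column, threshold_timestamp) are illustrative, and this follows the count-based test macro convention used by the query in the issue rather than any final signature:

{% macro test_expect_grouped_row_values_to_have_recent_data(model, group_by, timestamp_column, threshold_timestamp) %}

with latest_grouped_timestamps as (

    -- one row per group with its most recent timestamp
    select
        {{ group_by }},
        max({{ timestamp_column }}) as latest_timestamp_column
    from
        {{ model }}
    group by 1

),
validation_errors as (

    -- groups whose latest row is older than the allowed threshold
    select * from latest_grouped_timestamps
    where latest_timestamp_column < {{ threshold_timestamp }}

)
-- a non-zero count fails the test
select count(*) from validation_errors

{% endmacro %}

The test would then be attached to a model (e.g. a fact table) with the grouping column, timestamp column, and threshold passed as arguments, and it passes when the final count is zero.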