Utility functions for dbt projects.


This dbt package contains macros that can be (re)used across dbt projects.



current_timestamp (source)

This macro returns the current timestamp.


{{ dbt_utils.current_timestamp() }}
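For example, to stamp each row with a load time in a model (a minimal sketch; the model and column names are hypothetical):

select
  order_id,
  {{ dbt_utils.current_timestamp() }} as loaded_at
from {{ ref('orders') }}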

dateadd (source)

This macro adds a time/day interval to the supplied date/timestamp. Note: The datepart argument is database-specific.


{{ dbt_utils.dateadd(datepart='day', interval=1, from_date_or_timestamp="'2017-01-01'") }}
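Because the arguments are inlined as SQL, SQL expressions work too. A sketch using a rolling cutoff in a where clause (model and column names hypothetical):

select *
from {{ ref('subscriptions') }}
where expires_at < {{ dbt_utils.dateadd(datepart='day', interval=30, from_date_or_timestamp='current_date') }}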

datediff (source)

This macro calculates the difference between two dates.


{{ dbt_utils.datediff("'2018-01-01'", "'2018-01-20'", 'day') }}
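Column references also work, since the arguments are inlined as SQL. A sketch with hypothetical columns:

select
  order_id,
  {{ dbt_utils.datediff('ordered_at', 'shipped_at', 'day') }} as days_to_ship
from {{ ref('orders') }}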

split_part (source)

This macro splits a string of text using the supplied delimiter and returns the supplied part number (1-indexed).


{{ dbt_utils.split_part(string_text='1,2,3', delimiter_text=',', part_number=1) }}
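The arguments are inlined as SQL, so column references and quoted string literals both work. A sketch with hypothetical columns:

select
  {{ dbt_utils.split_part(string_text='page_path', delimiter_text="'/'", part_number=2) }} as section
from {{ ref('page_views') }}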

date_trunc (source)

Truncates a date or timestamp to the specified datepart. Note: The datepart argument is database-specific.


{{ dbt_utils.date_trunc(datepart, date) }}
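For example, to roll a timestamp up to its month for an aggregate (model and column names hypothetical):

select
  {{ dbt_utils.date_trunc('month', 'created_at') }} as created_month,
  count(*) as user_count
from {{ ref('users') }}
group by 1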

last_day (source)

Gets the last day for a given date and datepart. Notes:

  • The datepart argument is database-specific.
  • This macro currently only supports dateparts of month and quarter.


{{ dbt_utils.last_day(date, datepart) }}
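A sketch computing a month-end date (model and column names hypothetical):

select
  user_id,
  {{ dbt_utils.last_day('created_at', 'month') }} as month_end_date
from {{ ref('users') }}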


date_spine (source)

This macro returns the SQL required to build a date spine.


{{ dbt_utils.date_spine(
    datepart="day",
    start_date="to_date('01/01/2016', 'mm/dd/yyyy')",
    end_date="dateadd(week, 1, current_date)"
   )
}}


haversine_distance (source)

This macro calculates the haversine distance between a pair of x/y coordinates.


{{ dbt_utils.haversine_distance(lat1=<float>,lon1=<float>,lat2=<float>,lon2=<float>) }}
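Since the arguments are inlined as SQL, column references work as well. A sketch with hypothetical columns:

select
  trip_id,
  {{ dbt_utils.haversine_distance(lat1='pickup_lat', lon1='pickup_lon', lat2='dropoff_lat', lon2='dropoff_lon') }} as trip_distance
from {{ ref('trips') }}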

Schema Tests

equality (source)

This schema test asserts the equality of two relations.


version: 2

models:
  - name: model_name
    tests:
      - dbt_utils.equality:
          compare_model: ref('other_table_name')

expression_is_true (source)

This schema test asserts that a valid SQL expression is true for all records. This is useful when checking integrity across columns, for example, that a total is equal to the sum of its parts, or that at least one column is true.


version: 2

models:
  - name: model_name
    tests:
      - dbt_utils.expression_is_true:
          expression: "col_a + col_b = total"

recency (source)

This schema test asserts that there is data in the referenced model at least as recent as the defined interval prior to the current timestamp.


version: 2

models:
  - name: model_name
    tests:
      - dbt_utils.recency:
          datepart: day
          field: created_at
          interval: 1

at_least_one (source)

This schema test asserts that a column has at least one value.


version: 2

models:
  - name: model_name
    columns:
      - name: col_name
        tests:
          - dbt_utils.at_least_one

not_constant (source)

This schema test asserts that a column does not have the same value in all rows.


version: 2

models:
  - name: model_name
    columns:
      - name: column_name
        tests:
          - dbt_utils.not_constant

cardinality_equality (source)

This schema test asserts that values in a given column have exactly the same cardinality as values from a different column in a different model.


version: 2

models:
  - name: model_name
    columns:
      - name: from_column
        tests:
          - dbt_utils.cardinality_equality:
              field: other_column_name
              to: ref('other_model_name')

SQL helpers

get_column_values (source)

This macro returns the unique values for a column in a given table.


-- Returns a list of the top 50 states in the `users` table
{% set states = dbt_utils.get_column_values(table=ref('users'), column='state', max_records=50) %}

{% for state in states %}
    ...
{% endfor %}


get_tables_by_prefix (source)

This macro returns a list of tables that match a given prefix, with an optional exclusion pattern. It's particularly handy paired with union_tables.


-- Returns a list of tables that match schema.prefix%
{% set tables = dbt_utils.get_tables_by_prefix('schema', 'prefix') %}

-- Returns a list of tables as above, excluding any with underscores
{% set tables = dbt_utils.get_tables_by_prefix('schema', 'prefix', '%_%') %}
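For example, pairing it with union_tables (the schema and prefix below are hypothetical):

{% set event_tables = dbt_utils.get_tables_by_prefix('analytics', 'event_') %}

{{ dbt_utils.union_tables(tables=event_tables) }}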

group_by (source)

This macro builds a group by statement for fields 1...N.


{{ dbt_utils.group_by(n=3) }} --> group by 1,2,3
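In context (model and column names hypothetical):

select
  city,
  state,
  count(*) as user_count
from {{ ref('users') }}
{{ dbt_utils.group_by(n=2) }}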

star (source)

This macro generates a list of all fields that exist in the from relation, excluding any fields listed in the except argument. The construction is identical to select * from {{ref('my_model')}}, replacing star (*) with the star macro. This macro also has an optional relation_alias argument that will prefix all generated fields with an alias.


select
{{ dbt_utils.star(from=ref('my_model'), except=["exclude_field_1", "exclude_field_2"]) }}
from {{ ref('my_model') }}
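A sketch of the optional relation_alias argument mentioned above (the alias name is hypothetical):

select
{{ dbt_utils.star(from=ref('my_model'), relation_alias='m') }}
from {{ ref('my_model') }} as m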

union_tables (source)

This macro implements an "outer union." The list of tables provided to this macro will be unioned together, and any columns exclusive to a subset of these tables will be filled with null where not present. The column_override argument is used to explicitly assign the column type for a set of columns.


{{ dbt_utils.union_tables(
    tables=[ref('table_1'), ref('table_2')],
    column_override={"some_field": "varchar(100)"},
) }}

generate_series (source)

This macro implements a cross-database mechanism to generate an arbitrarily long list of numbers. Specify the maximum number you'd like in your list and it will create a 1-indexed SQL result set.


{{ dbt_utils.generate_series(upper_bound=1000) }}
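For example, to materialize the series in a CTE (names hypothetical):

with month_numbers as (

  {{ dbt_utils.generate_series(upper_bound=12) }}

)

select *
from month_numbers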

surrogate_key (source)

Implements a cross-database way to generate a hashed surrogate key using the fields specified.


{{ dbt_utils.surrogate_key('field_a', 'field_b'[,...]) }}
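A sketch with hypothetical columns, hashing two fields into one key:

select
  {{ dbt_utils.surrogate_key('user_id', 'session_started_at') }} as session_key,
  user_id,
  session_started_at
from {{ ref('sessions') }}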

pivot (source)

This macro pivots values from rows to columns.


{{ dbt_utils.pivot(<column>, <list of values>) }}


Input: public.test

| size | color |
| S    | red   |
| S    | blue  |
| S    | red   |
| M    | red   |

select
  size,
  {{ dbt_utils.pivot('color', dbt_utils.get_column_values('public.test',
                                                          'color')) }}
from public.test
group by size


Output:

| size | red | blue |
| S    | 2   | 1    |
| M    | 1   | 0    |


Arguments:

- column: Column name, required
- values: List of row values to turn into columns, required
- alias: Whether to create column aliases, default is True
- agg: SQL aggregation function, default is sum
- cmp: SQL value comparison, default is =
- prefix: Column alias prefix, default is blank
- suffix: Column alias postfix, default is blank
- then_value: Value to use if comparison succeeds, default is 1
- else_value: Value to use if comparison fails, default is 0
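As a sketch, the optional arguments above can be combined, e.g. prefixing the generated columns (this would yield color_red and color_blue in the example output):

select
  size,
  {{ dbt_utils.pivot('color', dbt_utils.get_column_values('public.test', 'color'), prefix='color_') }}
from public.test
group by size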

unpivot (source)

This macro "un-pivots" a table from wide format to long format. Functionality is similar to pandas melt function.


{{ dbt_utils.unpivot(table=ref('table_name'), cast_to='datatype', exclude=[<list of columns to exclude from unpivot>]) }}


Input: orders

| date       | size | color | status     |
| 2017-01-01 | S    | red   | complete   |
| 2017-03-01 | S    | red   | processing |

{{ dbt_utils.unpivot(ref('orders'), cast_to='varchar', exclude=['date','status']) }}


Output:

| date       | status     | field_name | value |
| 2017-01-01 | complete   | size       | S     |
| 2017-01-01 | complete   | color      | red   |
| 2017-03-01 | processing | size       | S     |
| 2017-03-01 | processing | color      | red   |


Arguments:

- table: Table name, required
- cast_to: The data type to cast the unpivoted values to, default is varchar
- exclude: A list of columns to exclude from the unpivot.


get_url_parameter (source)

This macro extracts a URL parameter from a column containing a URL.


{{ dbt_utils.get_url_parameter(field='page_url', url_parameter='utm_source') }}
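In a model (the table name is hypothetical):

select
  page_url,
  {{ dbt_utils.get_url_parameter(field='page_url', url_parameter='utm_source') }} as utm_source
from {{ ref('page_views') }}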


insert_by_period (source)

insert_by_period allows dbt to insert records into a table one period (e.g. day, week) at a time.

This materialization is appropriate for event data that can be processed in discrete periods. It is similar in concept to the built-in incremental materialization, but has the added benefit of building the model in chunks even during a full refresh, making it particularly useful for models where the initial run can be problematic.

Should a run of a model using this materialization be interrupted, a subsequent run will continue building the target table from where it was interrupted (provided the --full-refresh flag is omitted).

Progress is logged in the command line for easy monitoring.


    materialized = "insert_by_period",
    period = "day",
    timestamp_field = "created_at",
    start_date = "2018-01-01",
    stop_date = "2018-06-01")

with events as (

  select *
  from {{ ref('events') }}
  where __PERIOD_FILTER__ -- This will be replaced with a filter in the materialization code


....complex aggregates here....

Configuration values:

  • period: period to break the model into, must be a valid datepart (default='Week')
  • timestamp_field: the column name of the timestamp field that will be used to break the model into smaller queries
  • start_date: literal date or timestamp - generally choose a date that is earlier than the start of your data
  • stop_date: literal date or timestamp (default=current_timestamp)


Caveats:

  • This materialization is compatible with dbt 0.10.1.
  • This materialization has been written for Redshift.
  • This materialization can only be used for a model where records are not expected to change after they are created.
  • Any model post-hooks that use {{ this }} will fail using this materialization. For example:
        post-hook: "grant select on {{ this }} to db_reader"

A useful workaround is to change the above post-hook to:

        post-hook: "grant select on {{ this.schema }}.{{ this.name }} to db_reader"


Contributing

We welcome contributions to this repo! To contribute a new feature or a fix, please open a Pull Request with 1) your changes, 2) updated documentation for the README.md file, and 3) a working integration test. See this page for more information.

Getting started with dbt

Code of Conduct

Everyone interacting in the dbt project's codebases, issue trackers, chat rooms, and mailing lists is expected to follow the PyPA Code of Conduct.