Releases: dbt-labs/dbt-core

dbt 0.5.2

25 Oct 01:56

Patch release fixing a bug that arises when profiles are overridden on the command line with the --profile flag.

See https://github.com/analyst-collective/dbt/releases/tag/v0.5.1

dbt version 0.5.1

21 Oct 17:11

0. tl;dr

  1. Raiders of the Lost Archive -- version your raw data to make historical queries more accurate
  2. Column type resolution for incremental models (no more Value too long for character type errors)
  3. Postgres support
  4. Top-level configs applied to your project + all dependencies
  5. --threads CLI option + better multithreaded output

1. Source table archival #183

Commonly, analysts need to "look back in time" at some previous state of data in their mutable tables. Imagine a users table which is synced to your data warehouse from a production database. This users table is a representation of what your users look like now. Consider what happens if you need to look at revenue by city for each of your users trended over time. Specifically, what happens if a user moved from, say, Philadelphia to New York? To do this correctly, you need to archive snapshots of the users table on a recurring basis. With this release, dbt now provides an easy mechanism to store such snapshots.

To use this new feature, declare the tables you want to archive in your dbt_project.yml file:

archive:
    - source_schema: synced_production_data  # schema to look for tables in (declared below)
      target_schema: dbt_archive             # where to archive the data to
      tables:                                # list of tables to archive
        - source_table: users                # table to archive
          target_table: users_archived       # table to insert archived data into
          updated_at: updated_at             # used to determine when data has changed
          unique_key: id                     # used to generate archival query

        - source_table: some_other_table
          target_table: some_other_table_archive
          updated_at: "updatedAt"
          unique_key: "expressions || work || LOWER(too)"

    - source_schema: some_other_schema
      ....

The archived tables will mirror the schema of the source tables they're generated from. In addition, three fields are added to the archive table:

  1. valid_from: The timestamp when this archived row was inserted (and first considered valid)
  2. valid_to: The timestamp when this archived row became invalidated. The first archived record for a given unique_key has valid_to = NULL. When newer data is archived for that unique_key, the valid_to field of the old record is set to the valid_from field of the new record!
  3. scd_id: A unique key generated for each archive record. Scd = Slowly Changing Dimension.

dbt models can be built on top of these archived tables. The most recent record for a given unique_key is the one where valid_to is null.
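
For example, a minimal sketch of pulling the current state of each user out of the archive, using the dbt_archive.users_archived table configured above (illustrative only, not SQL that dbt generates):

-- current row for each user: the one that hasn't been invalidated yet
select *
from dbt_archive.users_archived
where valid_to is null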

To run this archive process, use the command dbt archive. After testing and confirming that the archival works, you should schedule this process through cron (or similar).

2. Incremental column expansion #175

Incremental tables are a powerful dbt feature, but there was at least one edge case which made working with them difficult. During the first run of an incremental model, Redshift will infer a type for every column in the table. Subsequent runs can insert new data which does not conform to the expected type. One example is a varchar(16) value which is inserted into a varchar(8) field.
In practice, this error looks like:

Value too long for character type
DETAIL:
  -----------------------------------------------
  error:  Value too long for character type
  code:      8001
  context:   Value too long for type character varying(8)
  query:     3743263
  location:  funcs_string.hpp:392
  process:   query4_35 [pid=18194]
  -----------------------------------------------

With this release, dbt will detect when column types are incongruent and will attempt to reconcile these different types if possible. Specifically, dbt will alter the incremental model table schema from character varying(x) to character varying(y) for some y > x. This should drastically reduce the occurrence of this class of error.
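
As a rough illustration of what "expanding" a column entails, here is one way to widen a varchar(8) column to varchar(16) on Redshift. The table and column names are hypothetical and this is a hedged sketch, not necessarily the statements dbt issues:

-- add a wider column, copy the data across, then swap the names
alter table analytics.my_incremental_model add column city__widened varchar(16);
update analytics.my_incremental_model set city__widened = city;
alter table analytics.my_incremental_model drop column city;
alter table analytics.my_incremental_model rename column city__widened to city;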

3. First-class Postgres support #183

With this release, Postgres became a first-class dbt target. You can configure a postgres database target in your ~/.dbt/profiles.yml file:

warehouse:
  outputs:
    dev:
      type: postgres    # configure a target for Postgres
      host: localhost
      user: Drew
      ....
  run-target: dev

While Redshift is built on top of Postgres, the two are subtly different. For instance, Redshift supports sort and dist keys, while Postgres does not! dbt will use the database target type parameter to generate the appropriate SQL for the target database.
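
To illustrate why the type parameter matters, consider the Redshift-only distkey and sortkey clauses below. The table names are hypothetical and this is a sketch of the difference, not dbt's literal output:

-- valid on Redshift, but a syntax error on Postgres
create table analytics.events
  distkey (user_id)
  sortkey (collector_tstamp)
as select * from public.raw_events;

-- the Postgres version simply omits the Redshift-specific clauses
create table analytics.events as
select * from public.raw_events;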

4. Root-level configs #161

Configurations in dbt_project.yml can now be declared at the models: level. These configurations will apply to the primary project, as well as any dependency projects. This feature is particularly useful for setting pre- or post- hooks that run for every model. In practice, this looks like:

name: 'My DBT Project'

models:
    post-hook:
        - "grant select on {{this}} to looker_user"     # Applied to 'My DBT Project' and 'Snowplow' dependency
    'My DBT Project':
        enabled: true
    'Snowplow':
        enabled: true

5. --threads CLI option #143

The number of threads that dbt uses can now be overridden with a CLI argument. The number of threads used must be between 1 and 8.

dbt run --threads 1    # fine
# or
dbt run --threads 4    # great
# or
dbt run --threads 42    # too many!

In addition to this new CLI argument, the output from multi-threaded dbt runs should be a little more orderly now. Models won't show as STARTed until they're actually queued to run. Previously, the output here was a little confusing. Happy threading!

Upgrading

To upgrade to version 0.5.1 of dbt, run:

pip install --upgrade dbt

And another thing

  • Join us on slack with questions or comments

Made with ♥️ by 🐟🏙 📈

dbt version 0.5.0

28 Sep 01:17

0. tl;dr

  • use a temp table when executing incremental models
  • arbitrary configuration (using config variables)
  • specify branches for dependencies
  • more & better docs

1. new incremental model generation #138

In previous versions of dbt, an edge case existed which caused the sql_where query to select different rows in the delete and insert steps. As a result, it was possible to construct incremental models which would insert duplicate records into the specified table. With this release, dbt uses a temp table which will 1) circumvent this issue and 2) improve query performance. For more information, check out the GitHub issue: #138
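
Conceptually, the new flow stages the new records once and then reuses that staged set for both the delete and the insert, so the two steps can no longer disagree. A hedged sketch with hypothetical names, not the literal SQL dbt generates:

-- stage the new records a single time
create temporary table sessions__dbt_incremental_tmp as (
    select * from (
        -- compiled model definition
    ) s
    where session_start_time > (select max(session_start_time) from analytics.sessions)
);

-- if a unique_key is configured, existing matches are removed using the same staged set
delete from analytics.sessions
where session_id in (select session_id from sessions__dbt_incremental_tmp);

-- ...and the staged rows are inserted
insert into analytics.sessions (
    select * from sessions__dbt_incremental_tmp
);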

2. Arbitrary configuration #146

Configuration in dbt is incredibly powerful: it is what allows models to change their behavior without changing their code. Previously, all configuration was done using built-in parameters, which limited how much users could actually configure.

With this release, you can inject variables from dbt_project.yml into your top-level and dependency models. In practice, variables work like this:

# dbt_project.yml

models:
  my_project:
    vars:
      exclude_ip: '192.168.1.1'

-- filtered_events.sql

-- source code
select * from public.events where ip_address != '{{ var("exclude_ip") }}'

-- compiles to
select * from public.events where ip_address != '192.168.1.1'

The vars parameter in dbt_project.yml is compiled, so you can use jinja templating there as well! The primary use case for this is specifying "input" models to a dependency.

Previously, dependencies used ref(...) to select from a project's base models. That interface was brittle, and the idea that dependency code had unbridled access to all of your top-level models made us a little uneasy. As of this release, we're deprecating the ability for dependencies to ref(...) top-level models. Instead, the recommended way for this to work is with vars! An example:

-- dbt_modules/snowplow/models/events.sql

select * from {{ var('snowplow_events_table') }}

and

models:
  Snowplow:
    vars:
      snowplow_events_table: "{{ ref('base_events') }}"

This effectively mirrors the previous behavior, but is much more explicit about what's happening under the hood!

3. specify a dependency branch #165

With this release, you can point dbt to a specific branch of a dependency repo. The syntax looks like this:

repositories:
    - https://github.com/analyst-collective/dbt-audit.git@development # use the "development" branch

4. More & Better Docs!

Check em out! And let us know if there's anything you think we can improve upon!

Upgrading

To upgrade to version 0.5.0 of dbt, run:

pip install --upgrade dbt

dbt version 0.4.7

20 Sep 16:17

0. tl;dr

  • --version command
  • pre- and post- run hooks
  • windows support
  • event tracking

1. --version #135

The --version command was added to aid debugging. Further, organizations can use it to ensure that everyone in their org is up-to-date with dbt.

$ dbt --version
installed version: 0.4.7
   latest version: 0.4.7
Up to date!

2. Pre- and post-hooks #147

With this release, you can now specify pre- and post- hooks that are run before and after a model is run, respectively. Hooks are useful for running grant statements, inserting a log of runs into an audit table, and more! Here's an example of a grant statement implemented using a post-hook:

models:
  my_project:
    post-hook: "grant select on table {{this}} to looker_user"
    my_model:
       materialized: view
    some_model:
      materialized: table
      post-hook: "insert into my_audit_table (model_name, run_at) values ({{this.name}}, getdate())"

Hooks are recursively appended, so the my_model model will only receive the grant select... hook, whereas the some_model model will receive both the grant select... and insert into... hooks.

Finally, note that the grant statement uses the (hopefully familiar) {{this}} syntax whereas the insert statement uses the {{this.name}} syntax. When dbt creates a model:

  • A temp table is created
  • The original model is dropped
  • The temp table is renamed to the final model name

dbt will intelligently use the right table/view name when you invoke {{this}}, but you have a couple of more specific options available if you need them:

{{this}} : "schema"."table__dbt_tmp"
{{this.schema}}: "schema"
{{this.table}}: "table__dbt_tmp"
{{this.name}}: "table"

3. Event tracking #89

We want to build the best version of dbt possible, and a crucial part of that is understanding how users work with dbt. To this end, we've added some really simple event tracking to dbt (using Snowplow). We do not track credentials, model contents, or model names (we consider these private, and frankly none of our business). This release includes basic event tracking that reports 1) when dbt is invoked, 2) when models are run, and 3) basic platform information (OS + python version). The schemas for these events can be seen here.

You can opt out of event tracking at any time by adding the following to the top of your ~/.dbt/profiles.yml file:

config:
    send_anonymous_usage_stats: False

4. Windows support #154


dbt version 0.4.1

22 Aug 04:49

dbt v0.4.1 provides improvements to incremental models, performance improvements, and ssh support for db connections.

0. tl;dr

  • slightly modified dbt command structure
  • unique_key setting for incremental models
  • connect to your db over ssh
  • no more model-defaults
  • multithreaded schema tests

If you encounter an SSL/cryptography error while upgrading to this version of dbt, check that your version of pip is up-to-date:

pip install -U pip
pip install -U dbt

1. new dbt command structure #109

# To run models
dbt run # same as before

# to dry-run models 
dbt run --dry # previously dbt test

# to run schema tests
dbt test # previously dbt test --validate

2. Incremental model improvements #101

Previously, dbt calculated "new" incremental records to insert by querying for rows which matched some sql_where condition defined in the model configuration. This works really well for atomic datasets like a clickstream event log -- once inserted, these records will never change. Other datasets, like a sessions table comprised of many pageviews for many users, can change over time. Consider the following scenario:

User 1 Session 1 Event 1 @ 12:00
User 1 Session 1 Event 2 @ 12:01
-- dbt run --
User 1 Session 1 Event 3 @ 12:02

In this scenario, there are two possible outcomes depending on the sql_where chosen: 1) Event 3 does not get included in the Session 1 record for User 1 (bad), or 2) Session 1 is duplicated in the sessions table (bad). Both of these outcomes are inadequate!

With this release, you can now add a unique_key expression to an incremental model config. Records matching the unique_key will be deleted from the incremental table, then inserted as usual. This makes it possible to maintain data accuracy without recalculating the entire table on every run.

The unique_key can be any expression which uniquely defines the row, eg:

sessions:
  materialized: incremental
  sql_where: "session_end_tstamp > (select max(session_end_tstamp) from {{this}})"
  unique_key: user_id || session_index
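
Under the hood, a unique_key like the one above roughly implies deleting matching rows before the usual insert. A hedged sketch with a hypothetical schema name, not dbt's exact SQL:

-- remove existing rows that collide with incoming rows on the unique_key expression
delete from analytics.sessions
where (user_id || session_index) in (
    select user_id || session_index
    from (
        -- compiled model definition, filtered by sql_where
    ) new_rows
);
-- ...then the new rows are inserted as usual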

3. Run schema validations concurrently #100

The threads run-target config now applies to schema validations too. Try it with dbt test.

4. Connect to database over ssh #93

Add an ssh-host parameter to a run-target to connect to a database over ssh. The ssh-host parameter should be the name of a Host in your ~/.ssh/config file.

warehouse:
  outputs:
    dev:
      type: redshift
      host: my-redshift.amazonaws.com
      port: 5439
      user: my-user
      pass: my-pass
      dbname: my-db
      schema: dbt_dbanin
      threads: 8
      ssh-host: ssh-host-name  # <------ Add this line 
  run-target: dev

5. Remove the model-defaults config #111

The model-defaults config doesn't make sense in a dbt world with dependencies. To apply default configs to your package, add the configs immediately under the package definition:

models:
    My_Package:
        enabled: true
        materialized: table
        snowplow:
            ...

dbt version 0.4.0

16 Aug 07:11

dbt v0.4.0

dbt v0.4.0 provides new ways to materialize models in your database.

0. tl;dr

  • new types of materializations: incremental and ephemeral
  • if upgrading, change materialized: true|false to materialized: table|view|incremental|ephemeral
  • optionally specify model configs within the SQL file

1. Feature: {{this}} template variable #81

The {{this}} template variable expands to the name of the model being compiled. For example:

-- my_model.sql
select 'the fully qualified name of this model is {{ this }}'
-- compiles to
select 'the fully qualified name of this model is "the_schema"."my_model"'

2. Feature: materialized: incremental #90

After initially creating a table, incremental models will insert new records into the table on subsequent runs. This drastically speeds up execution time for large, append-only datasets.

Each execution of dbt run will:

  • create the model table if it doesn't exist
  • insert new records into the table

New records are identified by a sql_where model configuration option. In practice, this looks like:

sessions:
    materialized: incremental
    sql_where: "session_start_time > (select max(session_start_time) from {{this}})"

There are a couple of new things here. Previously, materialized could either be set to true or false. Now, the valid options include view, table, incremental, and ephemeral (more on this last one below). Also note that incremental models generally require use of the {{this}} template variable to identify new records.

The sql_where field is supplied as a where condition on a subquery containing the model definition. This resultset is then inserted into the target model. This looks something like:

insert into schema.model (
    select * from (
        -- compiled model definition
    ) where {{sql_where}}
)

3. Feature: materialized: ephemeral #78

Ephemeral models are injected as CTEs (with statements) into any model that references them. Ephemeral models are part of the dependency graph and generally function like any other model, except ephemeral models are not compiled to their own files or directly created in the database. This is useful for intermediary models which are shared by other downstream models, but shouldn't be queried directly from outside of dbt.

To make a model ephemeral:

employees:
    materialized: ephemeral

Suppose you wanted to exclude employees from your users table, but you don't want to clutter your analytics schema with an employees table.

-- employees.sql
select * from public.employees where is_deleted = false

-- users.sql
select *
from {{ref('users')}}
where email not in (select email from {{ref('employees')}})

The compiled SQL would look something like:

with __dbt__CTE__employees as (
  select * from public.employees where is_deleted = false
)
select *
from users
where email not in (select email from __dbt__CTE__employees)

Ephemeral models play nice with other ephemeral models, incremental models, and regular table/view models. Feel free to mix and match different materialization options to optimize for performance and simplicity.

4. Feature: In-model configs #88

Configurations can now be specified directly inside of models. These in-model configs work exactly the same as configs inside of the dbt_project.yml file.

An in-model-config looks like this:

-- users.sql

-- python function syntax
{{ config(materialized="incremental", sql_where="id > (select max(id) from {{this}})") }}
-- OR json syntax
{{ 
    config({"materialized:" "incremental", "sql_where" : "id > (select max(id) from {{this}})"})
}}

select * from public.users

The config resolution order is:

  1. dbt_project.yml model-defaults
  2. in-model config
  3. dbt_project.yml models config

5. Fix: dbt seed null values #102

Previously, dbt seed would insert empty CSV cells as "None", whereas they should have been NULL. Not anymore!

dbt version 0.3.0

04 Aug 04:26

dbt v0.3.0

Version 0.3.0 comes with the following updates:

1. Parallel model creation #83

dbt analyzes the model dependency graph and creates models in parallel where possible. In practice, this can significantly speed up the amount of time it takes to complete dbt run. The number of threads dbt uses must be between 1 and 8. To configure the number of threads dbt uses, add the threads key to your dbt target in ~/.dbt/profiles.yml, eg:

user:
  outputs:
    my-redshift:
      type: redshift
      threads: 4         # execute up to 4 models concurrently
      host: localhost
      ...
  run-target: my-redshift

For a complete example, check out a sample profiles.yml file

2. Fail only within a single dependency chain #63

If a model cannot be created, it won't crash the entire dbt run process. The errant model will fail and all of its descendants will be "skipped". Other models which do not depend on the failing model (or its descendants) will still be created.

3. Logging #64, #65

dbt will log output from the dbt run and dbt test commands to a configurable logging directory. By default, this directory is called logs/. The log filename is dbt.log and it is rotated on a daily basis. Logs are kept for 7 days.

To change the name of the logging directory, add the following line to your dbt_project.yml file:

log-path: "my-logging-directory" # will write logs to my-logging-directory/dbt.log

4. Minimize time models are unavailable in the database #68

Previously, dbt would create models by:

  1. dropping the existing model
  2. creating the new model

This resulted in a significant amount of time in which the model was inaccessible to the outside world. Now, dbt creates models by:

  1. creating a temporary model {model-name}__dbt_tmp
  2. dropping the existing model
  3. renaming the tmp model name to the actual model name
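
In SQL terms, the sequence looks roughly like this (hypothetical names; a sketch rather than dbt's exact statements):

create table analytics.sessions__dbt_tmp as (
    -- compiled model definition
);
drop table if exists analytics.sessions;
alter table analytics.sessions__dbt_tmp rename to sessions;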

5. Arbitrarily deep nesting #50

Previously, all models had to be located in a directory matching models/{model group}/{model_name}.sql. Now, these models can be nested arbitrarily deeply within a given dbt project. For instance, models/snowplow/sessions/transformed/transformed_sessions.sql is a totally valid model location with this release.

To configure these deeply-nested models, just nest the config options within the dbt_project.yml file. The only caveat is that you need to specify the dbt project name as the first key under the models object, ie:

models:
  'Your Project Name':
    snowplow:
      sessions:
        transformed:
          transformed_sessions:
            enabled: true

More information is available on the issue and in the sample dbt_project.yml file

6. don't try to create a schema if it already exists #66

dbt run would execute create schema if not exists {schema}. This would fail if the dbt user didn't have sufficient permissions to create the schema, even if the schema already existed! Now, dbt checks for the schema existence and only attempts to create the schema if it doesn't already exist.
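
A hedged sketch of the check; the exact catalog query dbt runs may differ:

-- look the schema up first...
select count(*) from pg_namespace where nspname = 'analytics';

-- ...and only issue this if the count above was zero
create schema analytics;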

7. Semantic Versioning

The previous release of dbt was v0.2.3.0, which isn't a semantic version. This and all future dbt releases will conform to semantic versioning in the format {major}.{minor}.{patch}.

dbt version 0.2.3.0

15 Jul 17:12

dbt v0.2.3.0

Version 0.2.3.0 of dbt comes with the following updates:

1. Fix: Flip referential integrity arguments (breaking)

Referential integrity validations in a schema.yml file were previously defined relative to the parent table:

account:
  constraints:
    relationships:
      - {from: id, to: people, field: account_id}

Now, these validations are specified relative to the child table:

people:
  constraints:
    relationships:
      - {from: account_id, to: accounts, field: id}

For more information, run dbt test -h

2. Feature: seed tables from a CSV

Previously, auxiliary data needed to be shoehorned into a view comprised of union statements, eg.

select 22 as "type", 'Chat Transcript' as type_name, 'chatted via olark' as event_name union all
select 21, 'Custom Redirect', 'clicked a custom redirect' union all
select 6, 'Email', 'email sent' union all
...

That's not a scalable solution. Now you can load CSV files into your data warehouse:

  1. Add a CSV file (with a header) to the data/ directory
  2. Run dbt seed to create a table from the CSV file!
  3. The table name will be the filename (sans .csv) and it will be placed in your run-target's schema

Subsequent calls to dbt seed will truncate the seeded tables (if they exist) and re-insert the data. If the table schema changes, you can run dbt seed --drop-existing to drop the table and recreate it.

For more information, run dbt seed -h

3. Feature: compile analytical queries

Versioning your SQL models with dbt is a great practice, but did you know that you can also version your analyses? Any SQL files in the analysis/ dir will be compiled (ie. table names will be interpolated) and placed in the target/build-analysis/ directory. These analytical queries will not be run against your data warehouse with dbt run -- you should copy/paste them into the data analysis tool of your choice.

4. Feature: accepted values validation

In your schema.yml file, you can now add accepted-values validations:

accounts:
  constraints:
    accepted-values:
      - {field: type, values: ['paid', 'free']}

This test will determine how many records in the accounts model have a type other than paid or free.
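
Roughly, the compiled validation counts the offending rows, something like this sketch (not necessarily dbt's exact SQL):

select count(*)
from "analytics"."accounts"
where "type" not in ('paid', 'free')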

5. Feature: switch profiles and targets on the command line

Switch between profiles with --profile [profile-name] and switch between run-targets with --target [target-name].

Targets should be something like "prod" or "dev", and profiles should be something like "my-org" or "my-side-project".

side-project:
  outputs:
    prod:
      type: redshift
      host: localhost
      port: 5439
      user: Drew
      pass:
      dbname: data_generator
      schema: ac_drew
    dev:
      type: redshift
      host: localhost
      port: 5439
      user: Drew
      pass:
      dbname: data_generator
      schema: ac_drew_dev
  run-target: dev

To compile models using the dev environment of my side-project profile:
$ dbt compile --profile side-project --target dev
or for prod:
$ dbt compile --profile side-project --target prod

You can also add a "profile" config to the dbt_project.yml file to fix a dbt project to a specific profile:

...
test-paths: ["test"]
data-paths: ["data"]

# Fix this project to the "side-project" profile
# You can still use --target to switch between environments!
profile: "side-project"

model-defaults:
....