compiled dbt test output could be more helpful #517

Open
drewbanin opened this issue Aug 22, 2017 · 5 comments

@drewbanin (Contributor) commented Aug 22, 2017

Consider compiling a simple select statement (w/ a limit) that can be run directly to find test failures. Right now, you need to change a select count(*) into a select *, and sometimes remove group bys, add limits, etc.
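To make that concrete, here's a rough sketch of the rewrite being described, assuming a standard unique test on an orders.order_id column (table and column names are just illustrative):

    -- roughly what the compiled unique test looks like today: a single count compared against 0
    select count(*)
    from (
        select order_id
        from analytics.orders
        where order_id is not null
        group by order_id
        having count(*) > 1
    ) validation_errors;

    -- the query you actually want when it fails: the offending keys, directly runnable, with a limit
    select order_id, count(*) as n_duplicates
    from analytics.orders
    where order_id is not null
    group by order_id
    having count(*) > 1
    limit 100;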

@mikekaminsky (Contributor) commented Oct 29, 2018

Not sure if this belongs here or in a separate issue, but it'd be great if the test output were printed in DAG order (and grouped by the model each test references): tests for the base models at the top, and tests for the most downstream (and more abstract) models at the bottom.

@sagarvelagala commented Jul 16, 2019

Might not be applicable for all use cases, but it could also be useful to write the test results out as a dbt output (with appropriate "refs") into a "Test Results" folder. I can see this being helpful if a user wants to:

  • build a report in a BI tool on top of the failing test results
  • include all the ids from a test failure in an "exclusion" query or some other exclusion logic (see the sketch below). A production workflow could then be something like dbt run --models base, dbt test --models base, dbt run --models mart.test_exclusions, etc.
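For the second bullet, a hypothetical mart.test_exclusions model could look something like this; the relation it refs (a persisted table of failing ids) doesn't exist in dbt today and is purely illustrative:

    -- models/mart/test_exclusions.sql (hypothetical)
    -- collect the ids that failed upstream tests so downstream models can filter them out
    select distinct order_id
    from {{ ref('base_orders_test_failures') }}  -- hypothetical relation holding failing rows

    -- a downstream model could then exclude them with something like:
    --   where order_id not in (select order_id from {{ ref('test_exclusions') }})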
@joshtemple (Contributor) commented Jul 25, 2019

Some great thoughts on this Discourse post that I wanted to expand on.

To me, the fundamental issue is that the query used to test is often different from the query needed to debug (I think "debug" is a more accurate term for this than "audit").

This means the dbt user has to take a lot of steps to get to the bottom of a test failure:

  1. Navigate to the compiled test SQL (can be sped up with knowledge of command-line operations, like piping to the clipboard or a command-line SQL client)
  2. Copy it into a SQL client of some kind
  3. Modify the SQL into a debug query that returns the actual primary keys or rows. For some tests this is trivial; for others, it can be time-consuming.
  4. Run the debug query and examine the results

This is especially problematic when debugging a CI failure, where the compiled test queries aren't directly available to the user unless they have saved them as artifacts of the build.

It would be great if dbt did all of this and presented the test failures clearly!

For example (rough SQL sketches of each case follow this list):

  • For a uniqueness test failure, I want to see the count of rows with duplicated values for the column, and potentially the count of duplicates for each primary key
  • For an accepted values test failure, I want to see which unacceptable values were present and the count of rows where they occurred
  • For a not null test failure, I want to see the count of rows in my table that are null for the tested column
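Rough sketches of what those three debug queries could look like (table, column, and value names invented for illustration):

    -- uniqueness failure: which keys are duplicated, and how many times
    select order_id, count(*) as n_rows
    from analytics.orders
    group by order_id
    having count(*) > 1;

    -- accepted values failure: which unexpected values appeared, and how often
    select status, count(*) as n_rows
    from analytics.orders
    where status not in ('placed', 'shipped', 'returned')
    group by status;

    -- not null failure: how many rows are null for the tested column
    select count(*) as n_null_rows
    from analytics.orders
    where customer_id is null;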

@drewbanin, you suggested generating a separate debug query alongside the test query, either as a CTE or as a comment. I think this is a great idea. However, that only eliminates step 3 above. Why not set this up so dbt runs the debug query on test failure, returns the first 2-3 results to the command line, and logs the full result to an audit table (re: #903)?

Something like this?

dbt test --models orders

Failure in test unique_orders_order_id (models/orders/schema.yml)

  Got 2 results, expected 0.
 
  order_id           count  
  ------------------------
  1745291294829100   2      
  1384819819499118   10     
  
  compiled test SQL at target/compiled/.../unique_orders_order_id.sql
  compiled debug SQL at target/compiled/.../unique_orders_order_id.sql

Complete test results logged to table dbt.test_failures
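For the dbt.test_failures table referenced above, one possible shape (purely illustrative; dbt doesn't create anything like this today):

    create table if not exists dbt.test_failures (
        invocation_id  varchar,    -- identifies the dbt test run
        test_name      varchar,    -- e.g. unique_orders_order_id
        model_name     varchar,
        failed_at      timestamp,
        failure_key    varchar,    -- offending key or serialized row
        n_failures     bigint
    );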
@foundinblank commented Jul 29, 2019

I second @joshtemple's idea of having a debug query that gets kicked off when a test fails. It would be especially useful with time-based data tests. I have several tests that only look at the last 24 hours' data, so it's a pain to convert the test SQL into debug SQL AND fix the dates so they cover the same 24-hour period as the failed test (which might've been a few days ago). A debug query kicking off right after a failed test would be able to look at the exact same time period and retain the results.
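A sketch of that situation, with invented names and Snowflake/Redshift-style date functions; because the test runs against a rolling 24-hour window, a manual debug query written days later has to be pinned back to the window of the failed run:

    -- the data test: fail if any order in the last 24 hours has a negative amount
    select count(*)
    from analytics.orders
    where created_at >= dateadd(hour, -24, current_timestamp)
      and amount < 0;

    -- debugging a few days later means re-anchoring the window to the failed run's timestamp
    select order_id, created_at, amount
    from analytics.orders
    where created_at >= dateadd(hour, -24, '2019-07-26 03:00:00'::timestamp)
      and created_at <  '2019-07-26 03:00:00'::timestamp
      and amount < 0;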

@drewbanin (Contributor, Author) commented Jul 29, 2019

@joshtemple I'm relatively happy to have dbt run a query here and print the results to stdout, though I do think it could get kind of messy for some types of (mostly custom) tests. The example shown here (uniqueness) makes a ton of sense, but it's less clear to me what the output should be for not_null tests! Maybe that particular test is the odd one out; I just want to make the claim that it won't always be obvious what the "results" of a failed test should be.

I still love the idea of persisting test failures in tables, but I'm unsure about the mechanisms dbt should employ to persist and manage them. Databases like BQ support table expiration, but on other databases, dbt will need to clean up these test tables in order not to make a really big mess of things.
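On the BQ point, expiration could be attached when the failure table is created, so cleanup happens automatically there; a rough sketch with invented names:

    -- BigQuery: let the warehouse expire the failure table after a week
    create table `analytics__tests.unique_orders_order_id`
    options (expiration_timestamp = timestamp_add(current_timestamp(), interval 7 day)) as
    select order_id, count(*) as n_rows
    from `analytics.orders`
    group by order_id
    having count(*) > 1;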

Some very practical (and tractable) questions for us to consider:

  • dbt's tests have auto-generated names that probably won't be useful as table names in a database. How should we name these tables?
  • should these test tables "expire"?
    • dbt can create the test tables all in a specific schema (like {{ model.schema }}__tests) -- should it drop that schema before invoking tests?
    • should we stick a timestamp in the schema names, then delete the schemas after some expiration? (a rough sketch of this follows the list)
  • how should users configure which test failures get persisted in tables? Should we add a config to schema.yml? How do we make this environment aware (do it in prod but not in dev, or similar)?
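As a sketch of the timestamped-schema idea above (schema and table names invented; none of this exists in dbt today):

    -- write this run's failures into a timestamped test schema...
    create schema if not exists analytics__tests_20190729;
    create table analytics__tests_20190729.unique_orders_order_id as
    select order_id, count(*) as n_rows
    from analytics.orders
    group by order_id
    having count(*) > 1;

    -- ...and later drop schemas older than the expiration window
    drop schema if exists analytics__tests_20190722 cascade;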

Curious about your collective thoughts on these questions, or any other ideas/questions that are conjured up as you think about this one!
