
Think about how to support test queries by not having to implement the code macros ourselves and perhaps just use the dbt API for that. #65

Closed

bastienboutonnet opened this issue Jan 24, 2021 · 3 comments

@bastienboutonnet (Member) commented Jan 24, 2021

As part of the dbt-sugar doc flow, we want to allow people to add tests to columns. While we can stick with just populating their yaml files, it would be great if we could test their model for them at add-time.

For example, if a user were to say "I want column foo to be unique" when we're about to add it to their model's schema.yml file we would actually fire the test query. If the test fails, we would tell them that it failed and not add it.

The issue with doing that is that we could end up going down the rabbit hole of re-implementing dbt's builtins, and we wouldn't be able to easily integrate or support other custom tests which users may write as macros or import via packages.

There is probably a way to use dbt as an API, or to peek into its code to find the SQL templates and test that way.

One potentially dirty way would be to generate subprocess calls that trigger something like `dbt test -m model` with a temporarily generated yaml. It would work and wouldn't make us rely too much on the dbt Python API, which isn't yet marked as stable, but it's dirty and involved because we'd have to parse logs etc.
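A rough sketch of that subprocess idea, purely illustrative: write a throwaway `schema.yml` carrying the requested test, shell out to `dbt test`, and read the exit code instead of parsing logs. All helper names and the file layout here are hypothetical, not dbt-sugar's actual implementation.

```python
import subprocess
from pathlib import Path

# Hypothetical schema.yml template for a single column test.
SCHEMA_TEMPLATE = """version: 2
models:
  - name: {model}
    columns:
      - name: {column}
        tests:
          - {test}
"""


def build_dbt_test_command(model):
    """`dbt test -m <model>` runs only the tests attached to that model."""
    return ["dbt", "test", "-m", model]


def run_column_test(model, column, test, project_dir):
    """Render a temporary schema.yml, run the test, and report pass/fail.

    A non-zero exit code from `dbt test` signals at least one failing test,
    which avoids having to parse dbt's log output.
    """
    schema = SCHEMA_TEMPLATE.format(model=model, column=column, test=test)
    schema_path = Path(project_dir) / "models" / "schema.yml"
    schema_path.write_text(schema)
    result = subprocess.run(build_dbt_test_command(model), cwd=project_dir)
    return result.returncode == 0
```

Relying on the exit code keeps the coupling to dbt minimal, at the cost of losing detail about *which* test failed.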

@jtcohen6 commented Feb 9, 2021

@bastienboutonnet Interesting! Idea being, you want to warn users who are trying to add tests that currently fail?

I would definitely discourage you from hooking too deeply into dbt's tasks or Python methods. As you say, they're undocumented and liable to change; we're also likely to change the innards of tests in particular for v0.20.0.

That said, you could probably get away with wrapping the main point of entry, handle_and_check, in a way that's similar to how we do it for integration tests: run_dbt_and_check and run_dbt, the latter offering the extra sugar of expect_pass = False. We're actually thinking of wrapping and exposing the integration testing framework as a module for the benefit of dbt adapter maintainers (dbt-labs/dbt-adapter-tests#13).
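A minimal sketch of the wrapping suggested above, assuming the pre-1.0 shape of that entry point where `dbt.main.handle_and_check(args)` took a CLI-style argument list and returned `(results, success)`. That API was undocumented and unstable at the time, so treat this as illustrative only; the helper names are made up here.

```python
# Hedged sketch: drive `dbt test` in-process via its main entry point,
# mirroring in spirit the run_dbt helper from dbt's integration tests.

def build_test_args(model):
    """CLI-style argument list handed to handle_and_check."""
    return ["test", "--models", model]


def run_dbt_test(model, expect_pass=True):
    """Run `dbt test` for one model and compare the outcome to expectation.

    With expect_pass=False, a failing test is treated as the desired
    result, which is the "extra sugar" mentioned above.
    """
    # Imported lazily because this was an unstable, pre-1.0 internal API.
    from dbt.main import handle_and_check

    _results, success = handle_and_check(build_test_args(model))
    return success == expect_pass
```

Because the wrapper only touches a single entry point, a later dbt release that changes internals would break one function rather than a web of hooks.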

@bastienboutonnet (Member Author) commented:

@jtcohen6 Thanks a lot for chiming in on this! And thanks for the heads-up on v0.20.0; I think I'll indeed stay away from the insides as much as possible until things settle a bit.

I think what you propose may just work! I'll have a play around over the weekend or so; I'm not in a huge rush on this feature. But yeah, I think this would give us the ability to refer to a test that lives in dbt space or in the user's macros without having to reinvent the wheel in our code.

For example, for now we are doing things like this: https://github.com/bitpicky/dbt-sugar/pull/79/files#diff-66b424bf20ea5820411a33a2c495ac7c26846eae9d34071bb7632917e6b0ea34R130-R136, which boils down to writing the full SQL ourselves, and that just won't scale.
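To illustrate the kind of hand-rolled SQL that doesn't scale: for a `unique` test, building the query ourselves looks roughly like the sketch below. This mirrors the general shape of a uniqueness check (no value may appear more than once); the exact SQL in the linked PR differs, and the function name here is hypothetical.

```python
# Hand-building the test SQL for one test type. Every new test type
# (not_null, accepted_values, relationships, user macros, ...) would need
# its own template like this, which is why the approach doesn't scale.

def build_unique_test_sql(model, column):
    """A unique test passes when this query returns zero rows."""
    return (
        f"select {column}, count(*) as occurrences "
        f"from {model} "
        f"group by {column} "
        f"having count(*) > 1"
    )
```

Delegating to dbt instead means every builtin, package, and user-defined test comes for free.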

@bastienboutonnet (Member Author) commented:

Resolved in #108: we will be calling dbt via subprocess until the API is more stable and usable by third parties.
