Is this a new bug in dbt-snowflake?
I have searched the existing issues, and I could not find an existing issue for this bug
Current Behavior
We had errors with our dynamic tables on May 8 on dbt Cloud stating that a '__DBT_BACKUP' version of the dynamic table already exists.
We did some investigation in Snowflake and found that dbt was running these commands:

```sql
drop table if exists "OPR_THD"."DBT_THD"."OPR_PRO_CHANNEL_ORDER_DETAIL_DT__dbt_backup" cascade;
alter table "OPR_THD"."DBT_THD"."OPR_PRO_CHANNEL_ORDER_DETAIL_DT" rename to OPR_PRO_CHANNEL_ORDER_DETAIL_DT__dbt_backup;
```
It looks like dbt creates a backup version when running a 'create or replace' on a dynamic table. However, dbt did not qualify the new name with a database/schema in the rename, so the backup table ended up in a completely different database/schema from the original table, and the subsequent drop command never actually dropped it.
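A minimal sketch of the qualification difference described above (this is illustrative Python, not dbt-snowflake's actual macro code; the helper name is hypothetical). An unqualified `rename to` target resolves against the session's current database/schema, while a fully qualified target keeps the backup next to the original table where the `drop` expects it:

```python
def rename_sql(relation: str, backup_name: str, qualify: bool) -> str:
    """Build the ALTER TABLE ... RENAME TO statement dbt runs before
    replacing a relation. `relation` is the fully qualified source name.

    Hypothetical helper for illustration only.
    """
    database, schema, _ = relation.split(".")
    # An unqualified target lands wherever the session's current
    # database/schema points; a qualified one pins the location.
    target = f"{database}.{schema}.{backup_name}" if qualify else backup_name
    return f"alter table {relation} rename to {target}"

# What dbt emitted (unqualified target -> resolves to the session schema):
buggy = rename_sql('"OPR_THD"."DBT_THD"."T"', '"T__dbt_backup"', qualify=False)

# What the subsequent DROP expects (backup in the same database/schema):
fixed = rename_sql('"OPR_THD"."DBT_THD"."T"', '"T__dbt_backup"', qualify=True)
```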
In our case, the backups ended up in the STAGING1 database, while the production tables are in OPR_THD.
We also noticed that the backups keep the same 10-minute target lag as the production tables. Because the backups share the same warehouse and resources with the production tables, and Snowflake refreshes all the dynamic tables at the exact same timestamps, this is causing refresh delays for the production tables. The production table was meeting its target lag about 99% of the time before the backups were created.
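Until the rename bug is fixed, the stray backups can be cleaned up by hand. A hypothetical cleanup sketch, assuming the orphans live in STAGING1 and follow the `__dbt_backup` suffix dbt uses; it only generates the `drop dynamic table` statements to run manually and does not connect to Snowflake:

```python
def cleanup_statements(backups):
    """Given (database, schema, table) tuples for stray backups found via
    SHOW DYNAMIC TABLES, return DROP statements for the __dbt_backup ones."""
    return [
        f'drop dynamic table if exists "{db}"."{schema}"."{table}"'
        for (db, schema, table) in backups
        if table.endswith("__dbt_backup")
    ]

stmts = cleanup_statements([
    ("STAGING1", "DBT_THD", "OPR_PRO_CHANNEL_ORDER_DETAIL_DT__dbt_backup"),
])
```

Dropping the orphaned backups also stops them from consuming warehouse refresh capacity against their inherited 10-minute target lag.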
(Screenshots: refresh history for the production table and for the backup)
Expected Behavior
We expect the backups to be created in the same database and schema as the production tables, so that the drop command removes them as needed.
We expect the backups not to share the production target lag, so they do not affect the production refreshes.
Steps To Reproduce
In our 'Deployment_v1' environment, run the command `dbt build -s +models/operational_data_sets/opr_thd`
Relevant log output
No response
Environment
- OS: Linux
- Python: 3.10
- dbt-core: 1.7
- dbt-snowflake:
Additional Context
No response
@kylienhu Thanks for opening — there are actually two issues here, the first being the one you mention (unqualified rename), though the underlying cause was a change that Snowflake made to the return types of show terse objects.
This comment has the latest updates, as well as what I believe to be a viable workaround to unblock yourself in the meantime: