For writing to SQL we are already explicitly specifying the column types for the database, using dtypes from the SQLAlchemy metadata object that’s generated by Package.to_sql():
```python
# Load any tables that exist in our dictionary of dataframes into the
# corresponding table in the newly created database:
for table in md.sorted_tables:
    dfs[table.name].to_sql(
        table.name,
        engine,
        if_exists="append",
        index=False,
        dtype={c.name: c.type for c in table.columns},
    )
```
Pandas added a `dtype` argument to `read_sql_query()` in v1.3.0, which does not exist in the `read_sql()` wrapper or the `read_sql_table()` function. So if we are using an SQL query rather than reading the whole table, we can specify what data types we get in the resulting dataframe, using the same metadata structures we've already defined.
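A minimal sketch of what that read side could look like (assumes pandas >= 1.3.0; the table name, columns, and the in-memory SQLite database are made up for illustration, and the dtype mapping is written out by hand here, though in practice it could be derived from the SQLAlchemy metadata by translating each column's SQL type into a pandas dtype):

```python
import sqlite3

import pandas as pd

# Stand-in for a table we previously persisted with to_sql():
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE plants (plant_id INTEGER, capacity_mw REAL)")
con.execute("INSERT INTO plants VALUES (1, 100.5), (2, NULL)")

# read_sql_query()'s dtype argument pins the dtypes of the resulting
# dataframe, rather than leaving them to pandas' inference:
df = pd.read_sql_query(
    "SELECT * FROM plants",
    con,
    dtype={"plant_id": "Int64", "capacity_mw": "float64"},
)
# plant_id comes back as the nullable Int64 extension dtype, even though
# inference would otherwise promote it to float64 because of the NULL row.
```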
This would mean we also need to define metadata for the intermediary tables and columns (anything that's going to be persisted and read back out). But if the intent is for those tables to stick around for reference and re-use, then that's something we'd already be doing.
For many of these additional tables we would not need to have descriptions, foreign key relationships, primary keys etc. – those would only make sense in the context of the “real” normalized database tables.
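As a sketch of how lightweight such an intermediary table definition could be (the table and column names here are hypothetical, purely for illustration): bare columns with types, and none of the primary keys, foreign keys, or descriptions we'd attach to the "real" normalized tables.

```python
import sqlalchemy as sa

md = sa.MetaData()

# A hypothetical intermediary table: just enough structure for
# to_sql() / read_sql_query() to round-trip dtypes, no constraints.
interim = sa.Table(
    "interim_fuel_costs",
    md,
    sa.Column("plant_id", sa.Integer),
    sa.Column("fuel_cost", sa.Float),
)
```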
Question: is it just me or does it seem like well structured database tables with clear primary keys, constraints, foreign key relations, good normalization etc. are getting kinda kicked to the curb in the "Modern Data Stack" universe?
Compare with Prefect+dbt+SQL architecture
Persisting dfs / Prefect Results to DB: