
more destination postgres warnings (#38219)
evantahler committed May 15, 2024
1 parent 9c72d0e commit 5ecaef0
Showing 1 changed file with 4 additions and 3 deletions: docs/integrations/destinations/postgres.md
@@ -4,14 +4,15 @@ This page guides you through the process of setting up the Postgres destination
 
 :::caution
 
-Postgres, while an excellent relational database, is not a data warehouse.
+Postgres, while an excellent relational database, is not a data warehouse. Please only consider using postgres as a destination for small data volumes (e.g. less than 10GB) or for testing purposes. For larger data volumes, we recommend using a data warehouse like BigQuery, Snowflake, or Redshift.
 
 1. Postgres is likely to perform poorly with large data volumes. Even postgres-compatible
    destinations (e.g. AWS Aurora) are not immune to slowdowns when dealing with large writes or
-   updates over ~500GB. Especially when using normalization with `destination-postgres`, be sure to
+   updates over ~100GB. Especially when using [typing and deduplication](/using-airbyte/core-concepts/typing-deduping) with `destination-postgres`, be sure to
    monitor your database's memory and CPU usage during your syncs. It is possible for your
    destination to 'lock up', and incur high usage costs with large sync volumes.
-2. Postgres column [name length limitations](https://www.postgresql.org/docs/current/limits.html)
+2. When attempting to scale a postgres database to handle larger data volumes, scaling IOPS (disk throughput) is as important as increasing memory and compute capacity.
+3. Postgres column [name length limitations](https://www.postgresql.org/docs/current/limits.html)
    are likely to cause collisions when used as a destination receiving data from highly-nested and
    flattened sources, e.g. `{63 byte name}_a` and `{63 byte name}_b` will both be truncated to
    `{63 byte name}` which causes postgres to throw an error that a duplicate column name was
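
The first warning above says to monitor memory and CPU during syncs. OS-level metrics come from your host or cloud provider rather than from SQL, but a quick in-database check is also useful. Below is a minimal sketch (an illustration, not part of the committed docs) that lists the longest-running active statements via the standard `pg_stat_activity` view, which can help spot `destination-postgres` writes that are saturating the instance:

```sql
-- Longest-running non-idle statements while a sync is in progress.
-- Long-lived INSERT/UPDATE statements from destination-postgres suggest
-- the instance is under-provisioned for the sync volume.
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 80)     AS query_preview
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST
LIMIT 10;
```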
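
The third warning's name collision is easy to reproduce. The sketch below (using a hypothetical `collision_demo` table, again an illustration rather than docs content) builds two 65-byte column names that differ only after the 63rd byte; Postgres truncates identifiers to 63 bytes (`NAMEDATALEN` - 1), so both columns end up with the same name and the `CREATE TABLE` fails:

```sql
-- Two identifiers that share their first 63 bytes collide after truncation.
DO $$
BEGIN
  EXECUTE format(
    'CREATE TABLE collision_demo (%I integer, %I integer)',
    repeat('a', 63) || '_a',  -- truncated back to 63 a's
    repeat('a', 63) || '_b'   -- truncated to the same 63 a's
  );
END $$;
-- NOTICE:  identifier "aaa..._a" will be truncated
-- ERROR:  column "aaa...a" specified more than once
```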
