Hello,

Context: I'm migrating a database from MySQL to Postgres. Two of the tables in the DB contain lots of rows with some heavy columns (containing JSON).

When I try to migrate the DB, the job fails with a "heap exhausted" error.

The data in these tables could be skipped during the migration, because it's tracing data that will be regenerated quickly anyway.

I'm using the latest Docker image of pgloader.

What I tried:
1. Reducing the batch-size parameters. This didn't work: the rows are so big that I still couldn't get the tables to load properly (roughly what I tried is shown in the first sketch after this list).
2. Excluding the tables and creating materialized views based on them, with a query like `select * from heavy_table where false` (second sketch after this list). This works, but the materialized views don't have the indices and foreign keys of the original tables, so I'd need to create those manually, with a risk of human error if the tables change before the final production migration. The materialized view also loses the auto-increment, so the column isn't converted to serial in the Postgres schema.
3. Clearing all the data from these tables before migrating. This is what I do in the staging environment. I could do it in production too, but I don't like deleting production data, even though it's unlikely I'd ever need it.
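For option 1, this is roughly the kind of tuning I tried, using the batch options from the pgloader reference (connection strings and values are placeholders):

```
LOAD DATABASE
     FROM mysql://user:pass@mysql-host/mydb
     INTO postgresql://user:pass@pg-host/mydb

WITH batch rows = 1000,     -- far below the 25 000 row default
     batch size = 8MB,      -- cap the memory used per batch
     prefetch rows = 1000;  -- read fewer rows ahead of the batches
```

Even with values far below the defaults, the job still died with "heap exhausted".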
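And this is the shape of the materialized-view trick from option 2, if I got the MATERIALIZE VIEWS syntax right (names are placeholders):

```
LOAD DATABASE
     FROM mysql://user:pass@mysql-host/mydb
     INTO postgresql://user:pass@pg-host/mydb

-- an always-empty view over the heavy table, migrated in its place
MATERIALIZE VIEWS heavy_table_empty AS $$ select * from heavy_table where false $$

EXCLUDING TABLE NAMES MATCHING 'heavy_table';
```

The target table then comes out with the right columns but no rows; the problem is everything it loses along the way, as described above.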
Feature request: an option to migrate a table completely (schema, indices, foreign keys, type casting...) but skip all the data inside it.
I would also take a workaround like my second option, but without having to manually recreate part of the table (one idea is sketched below). Materialized views are a great mechanism in pgloader, but the missing auto-increment and indices are a problem. Maybe I just didn't find the correct way to do it?
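One workaround I'm considering, assuming I read the schema only / data only options in the reference correctly, is a two-pass migration: first migrate the whole schema with no data, then load data for everything except the heavy tables (connection strings and table names are placeholders):

```
-- pass 1: create the tables, indexes, foreign keys... but load no rows
LOAD DATABASE
     FROM mysql://user:pass@mysql-host/mydb
     INTO postgresql://user:pass@pg-host/mydb
WITH schema only;

-- pass 2: load the data, skipping the heavy tracing tables
LOAD DATABASE
     FROM mysql://user:pass@mysql-host/mydb
     INTO postgresql://user:pass@pg-host/mydb
WITH data only
EXCLUDING TABLE NAMES MATCHING 'heavy_table_1', 'heavy_table_2';
```

I haven't verified that schema only also creates the indexes and foreign keys in a single pass, so treat this as a sketch.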
Thanks in advance
I'm bumping into the same issue with rows that have some heavy columns (containing JSON). Did you find a solution, or did you go with your second option? I have a similar problem, but I need to preserve the data in those tables.