[BUG] pg_dump/pg_restore does not work #5630

The bug
When running pg_dump and later pg_restore, the restoration process throws an error. I think PR #5301 causes this issue.

The OS that Immich Server is running on
Ubuntu 22.04

Version of Immich Server
v1.89.0

Version of Immich Mobile App
v1.89.0

Platform with the issue

Your docker-compose.yml content
N/A.

Your .env content

Reproduction steps

Additional information
After some googling, I found this diogob/activerecord-postgres-earthdistance#30 (comment). Basically, we have to replace earth with public.earth.
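For illustration, a sketch of the failure mode that linked comment describes, assuming the extensions live in the public schema and the database is named immich (both assumptions): ll_to_earth's SQL body references earth and cube without a schema qualifier, so it stops resolving once the search_path is emptied, which is exactly what a dump's preamble does.

    # Hypothetical reproduction: clear the search_path the way pg_dump's preamble does,
    # then call an earthdistance helper; the call fails because the function body
    # references earth()/cube() without a schema qualifier
    psql -d immich -c "SET search_path TO ''; SELECT public.ll_to_earth(0, 0);"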
Comments
Cc @zackpollard
Came to report the same issue, as our Cloudron packaging tests hit this.
To add some more findings: looking at the postgres dump created from the database, the earthdistance objects are referenced without a schema qualifier, while all other occurrences are qualified (e.g. public.earth).
@vnghia What Postgres version were you using when performing pg_dump and pg_restore?
I'm using 16, but I think other versions are affected as well.
This also happens on Postgres 14.
Does the following guide work?
Comparing that with our case, it seems to be the same problem.
Reporting that streaming data out of Cloudnative PG and into a new cluster also runs into this same issue; it seems to fail on the same statement. My Cloudnative PG cluster config, trying to get data out of a PG14 database into a new cluster that has pgvecto.rs installed by default: https://github.com/ModestTG/jace-cluster/blob/main/kubernetes/apps/database/cloudnative-pg/cluster/cluster2.yaml
To add a bit more findings: adding the schemas back to the search_path makes the restore work. This behavior of unsetting the search_path comes from pg_dump itself. I guess the question is why this breaks only the earthdistance functions.
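Concretely, the preamble line being discussed is visible in any plain-format dump; assuming a dump file named dump.sql:

    # The dump pins an empty search_path for the whole restore session
    grep -n "set_config('search_path'" dump.sql
    # matches: SELECT pg_catalog.set_config('search_path', '', false);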
Just want to add that
I think it is worth noting that there are two separate things that need to happen:
1. The schema (tables, extensions, etc.) needs to be created.
2. The data needs to be restored.
Normally (1) happens via the migrations, which do correctly create the required extensions. When you run the pg_dump command you have the option to include only the data, only the schema, or both the schema and the data. The command in the docs exports the schema and the data together. We should maybe update the documentation to instead recommend:
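Perhaps something along these lines; the exact flags the maintainers would recommend are not preserved in this thread, so treat these as assumptions:

    # Sketch: let Immich's migrations create the schema and extensions,
    # then dump and restore only the data
    pg_dump --data-only --format=custom --username=postgres --file=immich_data.dump immich
    pg_restore --data-only --username=postgres --dbname=immich immich_data.dump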
The default recommended backup/restore path using pg_dumpall from the docs runs into this as well.
Based on this Stackoverflow answer and this thread, this is a known Postgres behavior rather than an Immich bug. The workaround is to move the extensions into the pg_catalog schema.

For a newly-created database: create the extensions directly in pg_catalog.

For a migrated database: relocate the existing extensions into pg_catalog.

This only needs to be done once: no need to modify the sqldump, and it works well with a binary dump. The easiest way to do this is adding a migration (see the sketch below), but moving something to pg_catalog is considered bad practice. I can not restore my dump otherwise. Happy to send a PR if needed 😄

Edit: On second thought, moving extensions will affect only the current database, so it has to be repeated for every database that needs it.
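The exact statements were lost from the comment above; a minimal sketch, assuming cube and earthdistance are the extensions in question and a superuser connection:

    # For a newly-created database: install the extensions directly into pg_catalog
    psql -U postgres -d immich \
      -c "CREATE EXTENSION cube SCHEMA pg_catalog;" \
      -c "CREATE EXTENSION earthdistance SCHEMA pg_catalog;"

    # For a migrated database: relocate the already-installed extensions
    psql -U postgres -d immich \
      -c "ALTER EXTENSION cube SET SCHEMA pg_catalog;" \
      -c "ALTER EXTENSION earthdistance SET SCHEMA pg_catalog;"

pg_catalog is always searched implicitly, which is why objects placed there still resolve even when the dump empties the search_path.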
I tried the SQL statements for a newly-created database and I didn't have any success. I'm bootstrapping with Cloudnative PG using initdb, and I'm still getting the same error. I know my use case is outside the defined docker-compose approach, but I think most people running K8S are doing something similar to what I'm doing. I'd like to stream the data into a new cluster instead of just drop-in replacing my PG 14 image with another image that has pgvecto.rs enabled. I'm (probably foolishly) running all of my self-hosted apps on a shared psql cluster, so I'd like to not disturb what's already working if I can. Thanks again for the discussion.
@ModestTG, can you try the statements for a migrated database, or the ones for a newly-created database, on your old PG cluster (the source database) and stream it again? Because the schema is defined inside the database, if you execute these statements on the new database it will just be overwritten with the schema from the old one.
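One assumed way to run that against the old cluster from outside (the service hostname and credentials here are placeholders):

    # Point psql at the OLD cluster's service and relocate the extensions there,
    # then re-run the stream into the new cluster
    psql -h postgres-rw.database.svc.cluster.local -U postgres -d immich \
      -c "ALTER EXTENSION cube SET SCHEMA pg_catalog;" \
      -c "ALTER EXTENSION earthdistance SET SCHEMA pg_catalog;"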
@vnghia That seems to have done the trick. The cluster has come up! Thank you so much!
So I guess the best solution right now is adding a migration with the ALTER EXTENSION statements above. WDYT @zackpollard?
I'll need to spend some time looking into this myself. It was mentioned above (by you) that moving things into pg_catalog is considered bad practice, so I would want to understand what implications this could have. |
So I migrated the data and I'm trying to get immich stood up in the new cluster, and I keep getting an error.

EDIT: I just noticed that all of the tables also have the wrong owner. I'll fix that. I would think that the bootstrap initdb from Cloudnative PG would keep all of that info, but I guess it doesn't. Maybe I'm not understanding it correctly.
For anyone wanting to understand the root cause a bit better, see the Stackoverflow answer and thread linked above. After all this I don't think it's a good idea to register the extension into pg_catalog. For a trusted environment, adding the schemas back to the search_path seems good enough.
I was able to get immich migrated from one Cloudnative PG cluster to another Cloudnative PG cluster. Here is the method I used:
spec:
  instances: 3
  imageName: ghcr.io/bo0tzz/cnpgvecto.rs:14-v0.1.10 # CNPG compatible image with pgvecto.rs extension installed
  bootstrap:
    initdb:
      import:
        type: microservice
        databases:
          - immich
        source:
          externalCluster: postgres # name of cluster where existing immich data lives
  (...)
  externalClusters:
    - name: postgres
      connectionParameters:
        host: postgres-ro.database.svc.cluster.local
        user: postgres
        dbname: immich
      password:
        name: cloudnative-pg-secret
        key: password
For Cloudron we have decided to remove the search_path line from the dump. From our side no immich changes are required then.
So I think we should add a warning in the "Backup and restore" section of the docs about this issue.
I'm testing disaster recovery, and I met the same problem when using pg_dump and pg_restore. I'm using postgres 15 from Debian and immich v1.90.2, with backup and restore along these lines (a sketch with assumed flags follows):
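A sketch of a round-trip of this shape; flags, paths, and the database name are assumptions, not the commenter's exact invocation:

    # Assumed backup/restore pair of the kind described above
    sudo -u postgres pg_dump --format=custom immich > /tmp/immich.dump
    sudo -u postgres pg_restore --clean --dbname=immich /tmp/immich.dump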
@wlmqpsc Please try to follow the guide here https://immich.app/docs/administration/backup-and-restore/
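For reference, the linked guide's approach is a full-cluster pg_dumpall round-trip through the Postgres container; a sketch of that shape (the container name and flags are assumptions, defer to the linked page for the exact commands):

    # Dump the whole cluster from the running container...
    docker exec -t immich_postgres pg_dumpall --clean --username=postgres | gzip > dump.sql.gz
    # ...and feed it back through psql on restore
    gunzip < dump.sql.gz | docker exec -i immich_postgres psql --username=postgres --dbname=postgres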
OK. I tried this. @alextran1502 For my install I need to use slightly different commands, and I got the same error.
I've also encountered this issue, but I've solved it by changing this line in my dump:

    SELECT pg_catalog.set_config('search_path', '', false);

changed to:

    SELECT pg_catalog.set_config('search_path', 'public, pg_catalog', true);

This can be applied with sed:

    sed -i "s/SELECT pg_catalog.set_config('search_path', '', false);/SELECT pg_catalog.set_config('search_path', 'public, pg_catalog', true);/g" dump.sql

To verify that the change has been successfully applied:

    grep "SELECT pg_catalog.set_config" dump.sql

I also tested removing the line altogether, as mentioned above. It worked without issues, but I was more comfortable just leaving it there and adding those two schemas.
I got a similar error while trying to restore a backup made with the documented commands; I think the documentation should be updated. /cc @jrasm91
Same error. I am on Immich v1.91.4 and I followed the guide https://immich.app/docs/administration/backup-and-restore almost to the letter (I had to make a small adjustment). What procedure can be followed when access to the source database is lost?
With the sed command suggested by #5630 (comment) this works. Change

    SELECT pg_catalog.set_config('search_path', '', false);

to:

    SELECT pg_catalog.set_config('search_path', 'public, pg_catalog', true);
Thank you so much @erikvanoosten, your solution worked right away.
This was not the case for me. I dumped with the suggested command and the restore went through unchanged. I haven't found anything broken in the restored instance though; should there have been some trouble?
Ran into this using Cloudnative-PG doing a v14 to v16 migration. My Postgres is not dedicated to Immich; I was able to list all databases minus Immich and they all cleanly migrated via the Cloudnative-PG bootstrap "monolith" method. Slightly different steps, but the same theme overall (the commands are sketched below):

Step 1 - Create a manual cluster database dump of PG14.

Step 2 - Use a script to extract the Immich database from the dumpall file, per https://nicolaiarocci.com/how-to-restore-a-single-postgres-database-from-a-pg_dumpall-dump/

Step 3 - Fix the search path with the sed command from the earlier comment.

Step 4 - Manually create the Immich database and extensions as others documented.

Step 5 - Import the Immich dump file.

This imported cleanly with no SQL errors.
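Pieced together, the five steps could look like this; every command below is a reconstruction under stated assumptions (cluster access, file names, an extraction one-liner standing in for the linked article's script, and the extension list), not the commenter's exact commands:

    # Step 1: manual dump of the whole PG14 cluster
    pg_dumpall -U postgres > dumpall.sql

    # Step 2: extract just the immich database (rough stand-in for the linked script:
    # print from "\connect immich" up to the next "\connect" line)
    sed -n '/^\\connect immich/,/^\\connect/p' dumpall.sql > immich.sql

    # Step 3: fix the search_path line, as in the earlier sed workaround
    sed -i "s/SELECT pg_catalog.set_config('search_path', '', false);/SELECT pg_catalog.set_config('search_path', 'public, pg_catalog', true);/g" immich.sql

    # Step 4: create the database and extensions by hand (extension list assumed)
    psql -U postgres -c "CREATE DATABASE immich;"
    psql -U postgres -d immich -c "CREATE EXTENSION cube;" -c "CREATE EXTENSION earthdistance;" -c "CREATE EXTENSION vectors;"

    # Step 5: import the extracted dump
    psql -U postgres -d immich -f immich.sql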