Error while upgrading database to 0.33.0 #1436
Oh no! It is probably chunks that are not referenced anymore. I think we need to run some SQL to move/delete these; then you would need to restore a backup, run the "patching SQL" again, and then try another upgrade. Can you first send me the output of
You know what they say: there are two kinds of people, those who do backups and those who don't do them yet. Yes, I was still in the second group, though automatic backups were on my todo list :)) The output of
Ok - yes, backups are really important - but only if you need them :) We should be able to solve it without one, but I would do a backup right now. Not sure how comfortable you are with postgres; here are some hints for when using docker. The rough idea is to first look at what state your database is in, then do the migration manually, and finally insert a row in
I would need the outputs of the following things:
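For the docker setup, both the backup and the ad-hoc queries boil down to invoking pg_dump and psql inside the database container. A small helper sketch - the container, user, and database names here are placeholders, not from this thread:

```python
import subprocess  # only needed if you actually run the commands

# Placeholder names - adjust container, user and database to your setup;
# none of these come from the thread itself.
CONTAINER = "docspell-db"
DB_USER = "docspell"
DB_NAME = "docspell"

def pg_dump_command() -> list[str]:
    """Build the docker command that writes a plain-SQL dump to stdout."""
    return ["docker", "exec", CONTAINER, "pg_dump", "-U", DB_USER, DB_NAME]

def psql_command(sql: str) -> list[str]:
    """Build the docker command that runs one SQL string against the database."""
    return ["docker", "exec", CONTAINER, "psql", "-U", DB_USER, "-d", DB_NAME, "-c", sql]
```

Running `subprocess.run(pg_dump_command(), stdout=open("backup.sql", "wb"), check=True)` would then write the backup file.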
You can also put it all in separate files and send me an e-mail.
I'm not that comfortable with it, but I'm learning as I go. And that link helped too, thanks! I've sent you the results of the queries by e-mail.
Thanks! It looks like the database is in a good state, matching the previous version. So if you want, you can run the previous version for the time being. I'll take a closer look and come back with instructions we can try out.
Thank you!
That is very strange then. The failed migration did not touch the
If you are ready to start, first create a dump 😄 Then I think we can start by applying the migration up to the point where it failed. That would be running these statements:

begin;
drop table if exists file_migration_temp;
create table file_migration_temp (
id bigserial primary key,
original_file varchar(254) not null unique,
cid varchar(254) not null,
category varchar(254) not null,
new_file varchar(254) not null unique
);
-- Source files
insert into file_migration_temp (original_file, cid, category, new_file)
select
rs.file_id as original_file,
i.cid,
'attachmentsource' as category,
i.cid || '/attachmentsource/' || rs.file_id as new_file
from attachment_source rs
inner join attachment ra on rs.id = ra.attachid
inner join item i on ra.itemid = i.itemid
;
-- Archive files
insert into file_migration_temp (original_file, cid, category, new_file)
select distinct
rs.file_id as original_file,
i.cid,
'attachmentsource' as category,
i.cid || '/attachmentsource/' || rs.file_id as new_file
from attachment_archive rs
inner join attachment ra on rs.id = ra.attachid
inner join item i on ra.itemid = i.itemid
;
-- Converted files
insert into file_migration_temp (original_file, cid, category, new_file)
select
ra.filemetaid as original_file,
i.cid,
'attachmentconvert' as category,
i.cid || '/attachmentconvert/' || ra.filemetaid as new_file
from attachment_source rs
inner join attachment ra on rs.id = ra.attachid
inner join item i on ra.itemid = i.itemid
where rs.file_id <> ra.filemetaid
;
-- Preview image
insert into file_migration_temp (original_file, cid, category, new_file)
select
ap.file_id as original_file,
i.cid,
'previewimage' as category,
i.cid || '/previewimage/' || ap.file_id as new_file
from attachment_preview ap
inner join attachment ra on ra.attachid = ap.id
inner join item i on i.itemid = ra.itemid
order by id
;
-- classifier
insert into file_migration_temp (original_file, cid, category, new_file)
select
file_id as original_file,
cid,
'classifier' as category,
cid || '/classifier/' || file_id as new_file
from classifier_model
;
-- save obsolete/orphaned files
drop table if exists obsolete_files;
create table obsolete_files(
file_id varchar(254) not null,
mimetype varchar(254) not null,
length bigint not null,
checksum varchar(254) not null,
created timestamp not null
);
with
missing_ids as (
select file_id from filemeta
except
select original_file as file_id from file_migration_temp)
insert into obsolete_files (file_id, mimetype, length, checksum, created)
select file_id, mimetype, length, checksum, created from filemeta
where file_id in (select file_id from missing_ids)
;

Then please execute the following and post or send me the results:

with
fchunk_files as (select 'chunk files all' as label, count(distinct file_id) as n from filechunk),
fmeta_files as (select 'meta all' as label, count(*) as n from filemeta),
linked_files as (select 'to migrate files', count(*) as n from file_migration_temp),
obsolete as (select 'obsolete files', count(*) as n from obsolete_files),
meta_no_chunks as (select 'meta no chunks' as label, count(file_id) as n from filemeta where file_id not in (select file_id from filechunk)),
chunk_no_meta as (select 'chunk no meta' as label, count(file_id) as n from filechunk where file_id not in (select file_id from filemeta)),
linked_chunks as (select 'to migrate chunks', count(distinct file_id) as n from filechunk where file_id in (select original_file from file_migration_temp)),
unlinked_chunks as (select 'not to migrate chunks', count(distinct file_id) as n from filechunk where file_id not in (select original_file from file_migration_temp)),
obsolete_chunks as (select 'obsolete chunks', count(distinct file_id) as n from filechunk where file_id in (select file_id from obsolete_files)),
orphaned_chunks as (select 'orphaned chunks', count(distinct file_id) from filechunk
where file_id not in (select original_file from file_migration_temp)
and file_id not in (select file_id from obsolete_files))
select * from fchunk_files
union
select * from fmeta_files
union
select * from linked_files
union
select * from obsolete
union
select * from meta_no_chunks
union
select * from chunk_no_meta
union
select * from linked_chunks
union
select * from unlinked_chunks
union
select * from obsolete_chunks
union
select * from orphaned_chunks
;

The transaction was started with the first
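The counts above all reduce to set differences between `filemeta`, `filechunk`, and the `file_migration_temp` mapping. Here is a toy sketch of the key anti-joins, using sqlite3 in place of PostgreSQL with simplified table layouts and invented data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Simplified stand-ins for the real tables.
cur.execute("create table filemeta (file_id text primary key)")
cur.execute("create table filechunk (file_id text)")
cur.execute("create table file_migration_temp (original_file text)")

# f1: referenced and migrated; f2: has metadata but no reference (obsolete);
# f4: a chunk without any metadata row (the "orphaned" case).
cur.executemany("insert into filemeta values (?)", [("f1",), ("f2",)])
cur.executemany("insert into filechunk values (?)", [("f1",), ("f2",), ("f4",)])
cur.execute("insert into file_migration_temp values ('f1')")

# obsolete files: metadata rows not referenced by the migration mapping
obsolete = [r[0] for r in cur.execute(
    "select file_id from filemeta except select original_file from file_migration_temp")]

# orphaned chunks: neither in the mapping nor among the obsolete files
orphaned = [r[0] for r in cur.execute(
    """select distinct file_id from filechunk
       where file_id not in (select original_file from file_migration_temp)
         and file_id not in (select file_id from filemeta
                             except select original_file from file_migration_temp)""")]

print(obsolete)  # ['f2']
print(orphaned)  # ['f4']
```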
I think the problem is the "orphaned chunks". I expect that the query returns a number greater than 0 for these; that's why the migration fails. Contrary to what I wrote above, I think it's easier to note the ids of these chunks and delete them. After this the migration would run on the next startup. Make sure you are in the state described at the end of the previous comment and that you actually have orphaned chunks.
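The "note the ids and delete them" step can be sketched like this, again with sqlite3 standing in for PostgreSQL and simplified tables (in the real database, `filechunk` has more columns and you would run the equivalent SQL in psql):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Columns simplified; the real filechunk table has more of them.
cur.execute("create table filechunk (file_id text, chunknr integer)")
cur.execute("create table file_migration_temp (original_file text)")
cur.execute("create table obsolete_files (file_id text)")
cur.executemany("insert into filechunk values (?, ?)",
                [("f1", 0), ("f1", 1), ("bad1", 0), ("bad2", 0)])
cur.execute("insert into file_migration_temp values ('f1')")

# First note the orphaned ids down, so they can be reported before deleting.
orphans = [r[0] for r in cur.execute(
    """select distinct file_id from filechunk
       where file_id not in (select original_file from file_migration_temp)
         and file_id not in (select file_id from obsolete_files)""")]
print(sorted(orphans))  # ['bad1', 'bad2']

# Then delete their chunks; in PostgreSQL, wrap this in begin/commit.
cur.executemany("delete from filechunk where file_id = ?", [(o,) for o in orphans])
remaining = [r[0] for r in cur.execute("select distinct file_id from filechunk")]
print(remaining)  # ['f1']
```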
The result of that query was:
Then deleting those 3 IDs resulted in
Many thanks for all this help!
Thanks for reporting back! I'll pin this issue for a while, so it's easier to discover should others run into this as well.
I also ran into this error while trying to update, after which I rolled back. Will this fix be included in a future version, so that one does not have to meddle with the database manually? Or maybe a maintenance script to delete orphaned chunks?
If it's possible to skip 0.33.0, marking it as buggy and letting users upgrade directly to 0.34.0, then I think including the fix in a future version is more user friendly.
Yes, good point. I can add a db migration to fix this for the next release. I wanted to first understand the problem a bit more. It is strange that my db doesn't have this problem. I don't know how many people are affected by this, so I'm not sure it's worth the effort to create a dedicated maintenance script. Well … the script is almost there - you only need to copy the stuff from above into a file and let it run :) It's good that postgres supports transactions for DDL; I don't think mariadb can do this… the script would need to be a bit different then.
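Transactional DDL is what makes the script above safe: the create/drop table statements roll back together with everything else if something fails. SQLite happens to support this as well, so the behavior can be seen with a small sqlite3 experiment (a demonstration only, not the actual migration):

```python
import sqlite3

# isolation_level=None disables the module's implicit transaction handling,
# so the BEGIN/ROLLBACK below are exactly what the database sees.
con = sqlite3.connect(":memory:", isolation_level=None)
cur = con.cursor()

cur.execute("BEGIN")
cur.execute("create table file_migration_temp (id integer primary key)")
cur.execute("ROLLBACK")  # the CREATE TABLE is undone together with the transaction

tables = [r[0] for r in cur.execute(
    "select name from sqlite_master where name = 'file_migration_temp'")]
print(tables)  # [] - the table never existed as far as the database is concerned
```

On databases without transactional DDL (MariaDB/MySQL implicitly commit on DDL statements), a failed migration leaves the half-created tables behind, which is why the script would have to look different there.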
I did not try to migrate before using your cleanup script. But I also deleted 1 obsolete chunk and then pulled the new docker image.
If it was an "orphaned" chunk, then you would have run into this issue… I think the idea from @LightTemplar is good: recommend waiting for the next release. I can adjust the release notes accordingly.
Ah yes, orphaned. But yeah, for people not willing to manually clean up, skipping might be a good option.
I pushed a fix now that is included in the current nightly. If someone is brave enough to test it on their data, that would be great! I can only run it against artificially created data. I would then create a release asap to get this out of the way :-) 🙏🏼
I have set up a quick test environment using a copy of my database. It does not seem to work, unfortunately :/
Thank you a lot for testing! Could you double check whether the image was fetched anew? The logs look a little bit like the previous ones. I changed them slightly; there should be lines like
I will check later. But in the
Thanks for checking, it's just to be extra sure. The nightly version is always
Sorry for the delay. I ran it again; here are the full logs:
No worries about delays. Thank you a lot for trying this! So… I forgot to add the fixes for mariadb and h2 🤦🏼 I just pushed them to master; it would be great if you could give it another try when you have time. It will be interesting to see what mariadb does now, because afaik it doesn't support transactions for DDL statements.
Fantastic news - it worked! 🎉
That's great, thank you!
I have just pulled the new 0.33.0 docker containers, and after starting them I can't connect to the website; I see the following in the logs.