connect to existing repository fails #6852
If borg reports "invalid segment magic" at offset 0, it either means that the contents of the file are corrupted (it does not start with the "BORG_SEG" byte sequence), or the file is just 0 bytes long and thus also not a valid borg segment file. Maybe confirm this by checking the affected segment files directly. If borg reports "Segment entry checksum mismatch", that means the crc32 check of some data contained in there failed. So it looks like there is a major malfunction causing a corrupted borg repository. File system, kernel, block device?
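For instance, both cases can be checked from a shell (a sketch: the repository path is the one used later in this thread, the segment file path is illustrative, and find/head/xxd from a typical Linux install are assumed):

```
# list any zero-length files under the repo's data directory
find /media/data1/backup1/data -type f -size 0

# show the first 8 bytes of a suspect segment file (path illustrative);
# a healthy segment starts with the magic bytes "BORG_SEG"
head -c 8 /media/data1/backup1/data/0/652 | xxd
```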
I did check. Only one file has 0 bytes. It's called iprecord.txt and it's in the root of the directory, along with the README file and the config file.

"'Segment entry checksum mismatch' means that the crc32 check of some data contained in there failed."

If the crc32 check fails on certain parts of the repository, does the repair mark those archives, so that 'good' archives are still recoverable?

"File system, kernel, block device?"

The file system used all round is ext4. This includes the failed hard drive, hard drive B and hard drive C. Kernel: 5.13.19_1. Block device: yes, all drive partitions were/are block devices.

Should I wait for the repair to complete and then retry 'connect to existing repository'?
borg does not put a file named iprecord.txt into the repository, so that file must have come from somewhere else.

The crc32 check is on chunk level, and due to deduplication, chunks are (potentially) shared between archives. Thus a missing or corrupt chunk may affect multiple archives. Everything that's good should still be recoverable.

What I meant with "block device" is that maybe the drive (hdd or ssd) could be malfunctioning. In general, you need to first make sure that your hardware and low-level software are working correctly - before trying borg check --repair (otherwise it might get worse).
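As an aside, a read-only pass can confirm the damage without touching anything; a minimal sketch, using the repository path from this thread:

```
# verify the on-disk repository structures only; this reads but never
# modifies the repo, unlike --repair
borg check --repository-only /media/data1/backup1

# only once the hardware is trusted, run the (dangerous) repair itself
borg check --repair /media/data1/backup1
```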
"What I meant with "block device" is that maybe the drive (hdd or ssd) could be malfunctioning."

As I mentioned, the hard drive that now houses the borg directory is a new hard drive. I've not seen or experienced anything untoward with that drive either.

"In general, you need to first make sure that your hardware and low-level software is working correctly - before trying borg check --repair (otherwise it might get worse)."

I'm running the repair now. With the current block device (hard drive C) I could also run a disk check.
OK, but if it is not 0-length, then what's in these files? If borg says the segment magic is invalid, it means the file does not start with "BORG_SEG" as usual. Have a look in there. It is binary, but maybe you can see what's in there.

External hard drives are usually OK, I would just avoid anything with SMR (shingled magnetic recording). That's easier said than done these days though, as many manufacturers avoid telling you that they are using SMR. CMR (conventional magnetic recording) is better: it is proven technology and faster. If done right, SMR should not really matter for borg (except maybe speed-wise), but I have the feeling that these kinds of drives might have other problems besides that. Had lots of trouble with some 5TB 2.5" Seagate drives myself.

I personally like 3.5" helium-filled CMR drives: they are fast, big, cool. Sadly they tend to be a bit on the expensive side (maybe have a look at the Toshiba MG08 - got some of these relatively inexpensively recently). Of course, 3.5" desktop/server drives are not very portable and need a separate power supply, so I wish there were some decent 2.5" USB3 drives, but I could not find any yet. If the data volume is rather low (< 1TB), external SSDs might also be an option. A bit more expensive, but small, portable, fast and no SMR.

With low-level software (and hardware) I mean all the stuff that is "below" borg. borg relies on these lower levels working correctly. If they don't and you move around a lot of data, it could corrupt the data. Malfunctioning hard drives are one example, faulty memory (RAM) is another. There are also misbehaving / buggy file system drivers (but ext4 is very stable, so an issue there is rather unlikely).
The repair is still running. When I checked at about 8 am this morning, I saw a prompt for the passphrase.

I entered the passphrase. That was about an hour ago. I can see a solid cursor (not flashing) and it's been like that ever since. In the task manager I can see two borg-related processes not really taking up any processing or memory resources. Then I searched for 'repair' and found another process: it's using 3.2 GiB of memory and 12% CPU.

Also, I had a look in the directories within the original borg directory (not the one being repaired - the repairs are done on the copy) using the Dolphin file manager. You're correct, there are some files with 0 bytes. Oddly enough, when I checked one of those files on the command line, from what I can make out it's 8 bytes; if I check the same file in Dolphin, it shows up as 0 bytes.

One interesting thing I noticed was that if I navigate to the same directory in the repository being repaired, there are no files there anymore. Could it be that borg check --repair has removed this 'inconsistent' directory (or the 'inconsistent' archive, because this directory happens to be holding the 'inconsistent' files relating to that archive) completely in order to rebuild the repository?

I checked the dates and times on the original copy (directory) and the files date from 20 June 2021 to 15 March 2022. Do these dates correspond to the dates of the archives in any way? E.g. do these files belong to an archive that was created on 15 March 2022, or can no such conclusions be derived from the time stamps?

By the way, if we don't get anywhere with the repair, I still have the older repository copy on the external hard drive to fall back on.
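One way to get byte-exact sizes instead of a file manager's rounded figures (a sketch, assuming GNU coreutils; the path is the original-copy directory mentioned later in this thread):

```
# print the exact size in bytes next to each file name
stat -c '%s %n' /media/data1/backup.orig/borg/data/10/*

# or a long listing sorted by size, largest first
ls -lS /media/data1/backup.orig/borg/data/10 | head
```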
Well, it sounds like the process you used for copying the files did not work OK, or the filesystem / device are not working reliably, or Dolphin is lying about the file sizes. borg usually does not produce 0 byte files, nor does it produce 8 byte files (iirc, the minimum valid segment file would be 17 bytes).

While borg check --repair is running, changes in the repo dirs are expected (like files with lower numbers getting deleted and files with higher numbers getting created). These segment files contain multiple chunks of data or metadata. There is a rough correspondence between archives created at some date and segment files having the same date, but it is not 1:1 because chunks can get moved to newer segment files due to segment compaction.

I don't think there is much else to try as far as borg/vorta are concerned. What you need to find out is the root cause of this kind of damage to your repository files; that is not normal. Either some hardware or some software is really misbehaving here.
Soon after my previous post, the repair completed. There were several of the 'Data integrity error' messages, and then at the finish of the console output came the completion message.

I then tried 'connect to existing repository'. It was looking promising, because I could see a progress message in the 'Vorta for Borg Backup' window. After the fetching and building of the archive index completed, I clicked on the 'Archives' tab and I could see archives up to 28 June 2022. The last good archive has date/time '2022-06-28 00:36', so early on 28 June 2022. This is fine, as I was able to recover the remaining files (from the faulty hard drive) covering the period between the time of the last available Vorta for borg archive (28 June at 00:36) and the time of the hard disk failure (29 June). I've mounted the '2022-06-28 00:36' archive and all seems good.

"Well, it sounds like the process you used for copying the files did not work ok or the filesystem / device are not working reliably or dolphin is lying about the file sizes."

I did check (using the Dolphin file explorer) several directories within /media/data1/backup.orig/borg/data/ and remember seeing many 17 byte files. Even now, when I check for example the directory /media/data1/backup.orig/borg/data/10/, there are 574 files within it and, from what I can make out, three files have the sizes 500.1 MiB, 500.2 MiB and 500.5 MiB; the remainder are all 17 bytes. If I now navigate to /media/data1/backup1/data/10/ (this 'backup1' directory is the one that has the copy of the repository and is the one that I ran the repair on), it has just three files, and those three files have the sizes 500.1 MiB, 500.2 MiB and 500.5 MiB.

"...because chunks can get moved to newer segment files due to segment compaction..."

Yes, the 'backup1' directory (and sub-directories) that has the repaired repository is now only 50.7 GiB, whereas the original is 64.7 GiB.

"What you need to find out is the root cause that caused this kind of damage to your repository files, that is not normal. Either some hw or some sw is really misbehaving here."

I've been able to mount and restore files from the repository from about two weeks before the disk failure (I think a restore of a few files was done then). So, it could be that since then the disk that failed wasn't writing data to the repository correctly (as it was about to fail). Luckily the borg check --repair process came out on top and I can see pretty much all the archives according to my backup/prune criteria, i.e. hourly, daily, weekly, monthly, yearly.

After this experience, I've decided to run 'smartctl' hard disk diagnostics often on the hard drive holding the Vorta for borg backups. That way, if a hard disk is about to fail, I have advance warning and can take steps to replace it.

By the way, now that I've installed a new hard drive, once I check the diagnostics and ensure that the new drive is running fine, is it okay to keep the repaired repository, or should I revert to the 'good' repository on the external hard drive (from 11 June 2022), or start a new repository? Could I simply run a 'check' on the repaired repository and, if the check doesn't find any errors, continue to use it?

Thanks for your support and guidance in resolving this, by the way.
I would be extremely careful with the repaired repo. There was major damage, and borg can not do wonders and bring back data that is gone. So, e.g. if a data chunk in a file is missing, it will replace it with a same-length all-zero chunk (borg check will tell you about this). In some cases, this might help (e.g. if it is a VM disk and that space is not really used or nothing important), but in other cases the lost data might well have been important. Also, just seeing some metadata that looks good, like an archive list or a file list, does not necessarily mean the contents are all good. If borg check told you about lots of troubles and lots of stuff it could not find, you have to expect lots of data being lost or damaged.

Using smartctl to monitor disk health is a good idea, just do not rely on that global "passed" status. This tends to be "passed" for a long time even after signs of issues. Guess it is the manufacturers not wanting you to RMA the disk early / while it still is under warranty. So better look for reallocated sectors and pending sectors.

If you have the space, keep the repaired repo, but only use it read-only and only for last-effort cases, if you do not have the data elsewhere in a better state.
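A sketch of the kind of smartctl invocation that surfaces those counters (the device name /dev/sdX is a placeholder; smartctl is part of smartmontools):

```
# dump the vendor attribute table and pick out the two counters that
# typically degrade long before the overall health verdict does
sudo smartctl -A /dev/sdX | grep -Ei 'Reallocated_Sector_Ct|Current_Pending_Sector'

# the global health status ("PASSED"/"FAILED"), for completeness
sudo smartctl -H /dev/sdX
```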
As per your suggestion, I'm now attempting to connect to the 'good' repository from 11 June 2022. See the original post:

"...Also, it's worth mentioning that I keep a backup of the repository on an external drive copied every 2 weeks or so. The last one was done on 11 June 2022. I've been able to copy that repository to the replacement hard drive (hard drive C) and connect to that repository without issues."

When I attempted connecting to that repository today, an error appeared:

"Error: Cache, or information obtained from the security directory is newer than repository - this is either an attack or unsafe (multiple repos with same ID)"

I then searched online and came across the FAQ entry "My repository is corrupt, how can I restore from an older copy of it?", which suggests:

borg delete --keep-security-info /path/to/repo

When I run it, the console output is:
|
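For reference, a plausible shape of that FAQ procedure (a sketch only, not the exact commands run here; the external-drive source path is a placeholder, the repo path is this thread's):

```
# forget the current (mismatched) repo but keep the local security info
borg delete --keep-security-info /media/data1/backup1

# put the older 11 June copy from the external drive in its place
rsync -a /path/to/external/copy/ /media/data1/backup1/

# verify before relying on it
borg check /media/data1/backup1
```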
Don't worry, I'm now connected to the 'good' repo. How? I uninstalled Vorta and then re-installed it. I have the flatpak version (as I'm running Void Linux) and did these steps:

Just in case I needed these directories later, I renamed ~/.var/app/com.borgbase.Vorta to ~/.var/app/com.borgbase.Vorta.old and renamed ~/.config/borg to ~/.config/borg.old (after installation I checked: there isn't a new borg directory created within ~/.config yet!).

Then I uninstalled Vorta. This did throw an error; however, when I listed the installed flatpak apps, none were showing (Vorta was the only flatpak app installed on my system). I then re-installed Vorta.

After the install, I tried connecting to the repo and it succeeded.
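The command sequence was probably along these lines (a sketch, assuming the standard flatpak CLI and the Flathub remote; com.borgbase.Vorta is the app ID visible in the ~/.var/app path above):

```
# remove the app (the step that threw an error here)
flatpak uninstall com.borgbase.Vorta

# confirm nothing is left installed
flatpak list --app

# re-install from Flathub
flatpak install flathub com.borgbase.Vorta
```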
Hardware: HP Pavilion Gaming - 15-ec0050ax
filesystems used: ext4
How much data is handled by borg? - Around 40 GB
Describe the problem you're observing: The hard disk where the borg repository resides failed. However, I have a copy of the repository (copied every 6 hours) on another hard drive (hard drive B). So, after installing the replacement hard drive (hard drive C), I copied the Borg/Vorta repository from hard drive B to the replacement hard drive (hard drive C), opened Vorta for Borg Backup and attempted to 'connect to existing repository'.
I can then see a message on screen saying 'validating existing repo'. After some time, perhaps about 10 minutes, that message is replaced with 'Unable to add your repository'.
Can you reproduce the problem? - Yes, I can reproduce the issue as mentioned above. Also, it's worth mentioning that I keep a backup of the repository on an external drive, copied every 2 weeks or so. The last one was done on 11 June 2022. I've been able to copy that repository to the replacement hard drive (hard drive C) and connect to it without issues. The latest data in that repository is much older though (the latest files are from 11 June 2022), whereas the repository that cannot be added has the latest backups. I checked the directories in the problematic repository and the latest is from 29 June, which is when the hard disk failure occurred. If I can recover files up to then, or even 28 June, that would be great; i.e. I don't need to recover all the archives, or even have them all available, as the repository from the external hard drive has most of the archives up to 11 June.
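For context, a periodic copy like the 6-hourly one described above could be as simple as the following sketch (source and destination paths are illustrative; the repository should not be written to while it is being copied):

```
# mirror the borg repository from hard drive B to hard drive C;
# -a preserves attributes, --delete keeps the destination an exact copy
rsync -a --delete /media/driveB/borg/ /media/driveC/borg/
```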
Early this morning at around 1 am I ran:
borg check /media/data1/backup1
This took about 1 hour to complete (please see the console output in the 'warning/errors/backtraces from the system logs + console output' section).
Then immediately afterward I started running:
borg check --repair /media/data1/backup1
It was started at about 2 am. Now it's 4:22 pm and it's still running, but comparing the output from the check and the repair I can see that it is near the finish, because the check finished with the sequence:
segments 28026, 28034, 28042, 28050 and 28062.
The repair command has now completed segment 28026.
Include any warning/errors/backtraces from the system logs + console output
Console output (please note the directories and contents of 'backup1' and 'backup.orig' are the same, i.e. an exact copy; I did this so that any repairs can be done on the copy):
borg check /media/data1/backup1
```
Data integrity error: Invalid segment magic [segment 652, offset 0]
Data integrity error: Invalid segment magic [segment 998, offset 0]
Data integrity error: Invalid segment magic [segment 1136, offset 0]
Data integrity error: Invalid segment magic [segment 1240, offset 0]
Data integrity error: Invalid segment magic [segment 2272, offset 0]
Data integrity error: Invalid segment magic [segment 2531, offset 0]
Data integrity error: Invalid segment magic [segment 2715, offset 0]
Data integrity error: Invalid segment magic [segment 2779, offset 0]
Data integrity error: Invalid segment magic [segment 2823, offset 0]
Data integrity error: Invalid segment magic [segment 2835, offset 0]
Data integrity error: Invalid segment magic [segment 2985, offset 0]
Data integrity error: Invalid segment magic [segment 3467, offset 0]
Data integrity error: Invalid segment magic [segment 13052, offset 0]
Data integrity error: Invalid segment magic [segment 14109, offset 0]
Data integrity error: Invalid segment magic [segment 14357, offset 0]
Data integrity error: Invalid segment magic [segment 14419, offset 0]
Data integrity error: Invalid segment magic [segment 14585, offset 0]
Data integrity error: Invalid segment magic [segment 14851, offset 0]
Data integrity error: Invalid segment magic [segment 15564, offset 0]
Data integrity error: Invalid segment magic [segment 15994, offset 0]
Data integrity error: Invalid segment magic [segment 16010, offset 0]
Data integrity error: Invalid segment magic [segment 16050, offset 0]
Data integrity error: Invalid segment magic [segment 16086, offset 0]
Data integrity error: Invalid segment magic [segment 16154, offset 0]
Data integrity error: Invalid segment magic [segment 16230, offset 0]
Data integrity error: Invalid segment magic [segment 16286, offset 0]
Data integrity error: Invalid segment magic [segment 16362, offset 0]
Data integrity error: Invalid segment magic [segment 16410, offset 0]
Data integrity error: Invalid segment magic [segment 16454, offset 0]
Data integrity error: Invalid segment magic [segment 16506, offset 0]
Data integrity error: Invalid segment magic [segment 16566, offset 0]
Data integrity error: Invalid segment magic [segment 16678, offset 0]
Data integrity error: Invalid segment magic [segment 16708, offset 0]
Data integrity error: Invalid segment magic [segment 16722, offset 0]
Data integrity error: Invalid segment magic [segment 16758, offset 0]
Data integrity error: Invalid segment magic [segment 16834, offset 0]
Data integrity error: Invalid segment magic [segment 16846, offset 0]
Data integrity error: Invalid segment magic [segment 16952, offset 0]
Data integrity error: Invalid segment magic [segment 16998, offset 0]
Data integrity error: Invalid segment magic [segment 17240, offset 0]
Data integrity error: Invalid segment magic [segment 18163, offset 0]
Data integrity error: Invalid segment magic [segment 18191, offset 0]
Data integrity error: Invalid segment magic [segment 18229, offset 0]
Data integrity error: Invalid segment magic [segment 19004, offset 0]
Data integrity error: Invalid segment magic [segment 19019, offset 0]
Data integrity error: Invalid segment magic [segment 19031, offset 0]
Data integrity error: Invalid segment magic [segment 19039, offset 0]
Data integrity error: Invalid segment magic [segment 19051, offset 0]
Data integrity error: Invalid segment magic [segment 19059, offset 0]
Data integrity error: Invalid segment magic [segment 19067, offset 0]
Data integrity error: Invalid segment magic [segment 19075, offset 0]
Data integrity error: Invalid segment magic [segment 19083, offset 0]
Data integrity error: Invalid segment magic [segment 19091, offset 0]
Data integrity error: Invalid segment magic [segment 19103, offset 0]
Data integrity error: Invalid segment magic [segment 19111, offset 0]
Data integrity error: Invalid segment magic [segment 19119, offset 0]
Data integrity error: Invalid segment magic [segment 19131, offset 0]
Data integrity error: Invalid segment magic [segment 19137, offset 0]
Data integrity error: Invalid segment magic [segment 19139, offset 0]
Data integrity error: Invalid segment magic [segment 19155, offset 0]
Data integrity error: Invalid segment magic [segment 19167, offset 0]
Data integrity error: Invalid segment magic [segment 19179, offset 0]
Data integrity error: Invalid segment magic [segment 19191, offset 0]
Data integrity error: Invalid segment magic [segment 19199, offset 0]
Data integrity error: Invalid segment magic [segment 19207, offset 0]
Data integrity error: Invalid segment magic [segment 19219, offset 0]
Data integrity error: Invalid segment magic [segment 19227, offset 0]
Data integrity error: Invalid segment magic [segment 19239, offset 0]
Data integrity error: Invalid segment magic [segment 19251, offset 0]
Data integrity error: Invalid segment magic [segment 19253, offset 0]
Data integrity error: Invalid segment magic [segment 19262, offset 0]
Data integrity error: Invalid segment magic [segment 19286, offset 0]
Data integrity error: Invalid segment magic [segment 19294, offset 0]
Data integrity error: Invalid segment magic [segment 19306, offset 0]
Data integrity error: Invalid segment magic [segment 19322, offset 0]
Data integrity error: Invalid segment magic [segment 19334, offset 0]
Data integrity error: Invalid segment magic [segment 19346, offset 0]
Data integrity error: Invalid segment magic [segment 19358, offset 0]
Data integrity error: Invalid segment magic [segment 19370, offset 0]
Data integrity error: Invalid segment magic [segment 19382, offset 0]
Data integrity error: Invalid segment magic [segment 19390, offset 0]
Data integrity error: Invalid segment magic [segment 19402, offset 0]
Data integrity error: Invalid segment magic [segment 19414, offset 0]
Data integrity error: Invalid segment magic [segment 19416, offset 0]
Data integrity error: Invalid segment magic [segment 19526, offset 0]
Data integrity error: Invalid segment magic [segment 19570, offset 0]
Data integrity error: Invalid segment magic [segment 19606, offset 0]
Data integrity error: Invalid segment magic [segment 19634, offset 0]
Data integrity error: Invalid segment magic [segment 19658, offset 0]
Data integrity error: Invalid segment magic [segment 19666, offset 0]
Data integrity error: Invalid segment magic [segment 19694, offset 0]
Data integrity error: Invalid segment magic [segment 19758, offset 0]
Data integrity error: Invalid segment magic [segment 19776, offset 0]
Data integrity error: Invalid segment magic [segment 19814, offset 0]
Data integrity error: Invalid segment magic [segment 19845, offset 0]
Data integrity error: Invalid segment magic [segment 19873, offset 0]
Data integrity error: Invalid segment magic [segment 19915, offset 0]
Data integrity error: Invalid segment magic [segment 19995, offset 0]
Data integrity error: Invalid segment magic [segment 19997, offset 0]
Data integrity error: Invalid segment magic [segment 19999, offset 0]
Data integrity error: Invalid segment magic [segment 26805, offset 0]
Data integrity error: Invalid segment magic [segment 28002, offset 0]
Data integrity error: Invalid segment magic [segment 28010, offset 0]
Data integrity error: Invalid segment magic [segment 28018, offset 0]
Data integrity error: Invalid segment magic [segment 28026, offset 0]
Data integrity error: Invalid segment magic [segment 28034, offset 0]
Data integrity error: Invalid segment magic [segment 28042, offset 0]
Data integrity error: Invalid segment magic [segment 28050, offset 0]
Data integrity error: Invalid segment magic [segment 28062, offset 0]
Data integrity error: Segment entry checksum mismatch [segment 32400, offset 134200747]
Completed repository check, errors found.
```
borg check --repair /media/data1/backup1
```
This is a potentially dangerous function.
check --repair might lead to data loss (for kinds of corruption it is not
capable of dealing with). BE VERY CAREFUL!
Type 'YES' if you understand this and want to continue: YES
Data integrity error: Invalid segment magic [segment 652, offset 0]
Data integrity error: Invalid segment magic [segment 998, offset 0]
Data integrity error: Invalid segment magic [segment 1136, offset 0]
Data integrity error: Invalid segment magic [segment 1240, offset 0]
Data integrity error: Invalid segment magic [segment 2272, offset 0]
Data integrity error: Invalid segment magic [segment 2531, offset 0]
Data integrity error: Invalid segment magic [segment 2715, offset 0]
Data integrity error: Invalid segment magic [segment 2779, offset 0]
Data integrity error: Invalid segment magic [segment 2823, offset 0]
Data integrity error: Invalid segment magic [segment 2835, offset 0]
Data integrity error: Invalid segment magic [segment 2985, offset 0]
Data integrity error: Invalid segment magic [segment 3467, offset 0]
Data integrity error: Invalid segment magic [segment 13052, offset 0]
Data integrity error: Invalid segment magic [segment 14109, offset 0]
Data integrity error: Invalid segment magic [segment 14357, offset 0]
Data integrity error: Invalid segment magic [segment 14419, offset 0]
Data integrity error: Invalid segment magic [segment 14585, offset 0]
Data integrity error: Invalid segment magic [segment 14851, offset 0]
Data integrity error: Invalid segment magic [segment 15564, offset 0]
Data integrity error: Invalid segment magic [segment 15994, offset 0]
Data integrity error: Invalid segment magic [segment 16010, offset 0]
Data integrity error: Invalid segment magic [segment 16050, offset 0]
Data integrity error: Invalid segment magic [segment 16086, offset 0]
Data integrity error: Invalid segment magic [segment 16154, offset 0]
Data integrity error: Invalid segment magic [segment 16230, offset 0]
Data integrity error: Invalid segment magic [segment 16286, offset 0]
Data integrity error: Invalid segment magic [segment 16362, offset 0]
Data integrity error: Invalid segment magic [segment 16410, offset 0]
Data integrity error: Invalid segment magic [segment 16454, offset 0]
Data integrity error: Invalid segment magic [segment 16506, offset 0]
Data integrity error: Invalid segment magic [segment 16566, offset 0]
Data integrity error: Invalid segment magic [segment 16678, offset 0]
Data integrity error: Invalid segment magic [segment 16708, offset 0]
Data integrity error: Invalid segment magic [segment 16722, offset 0]
Data integrity error: Invalid segment magic [segment 16758, offset 0]
Data integrity error: Invalid segment magic [segment 16834, offset 0]
Data integrity error: Invalid segment magic [segment 16846, offset 0]
Data integrity error: Invalid segment magic [segment 16952, offset 0]
Data integrity error: Invalid segment magic [segment 16998, offset 0]
Data integrity error: Invalid segment magic [segment 17240, offset 0]
Data integrity error: Invalid segment magic [segment 18163, offset 0]
Data integrity error: Invalid segment magic [segment 18191, offset 0]
Data integrity error: Invalid segment magic [segment 18229, offset 0]
Data integrity error: Invalid segment magic [segment 19004, offset 0]
Data integrity error: Invalid segment magic [segment 19019, offset 0]
Data integrity error: Invalid segment magic [segment 19031, offset 0]
Data integrity error: Invalid segment magic [segment 19039, offset 0]
Data integrity error: Invalid segment magic [segment 19051, offset 0]
Data integrity error: Invalid segment magic [segment 19059, offset 0]
Data integrity error: Invalid segment magic [segment 19067, offset 0]
Data integrity error: Invalid segment magic [segment 19075, offset 0]
Data integrity error: Invalid segment magic [segment 19083, offset 0]
Data integrity error: Invalid segment magic [segment 19091, offset 0]
Data integrity error: Invalid segment magic [segment 19103, offset 0]
Data integrity error: Invalid segment magic [segment 19111, offset 0]
Data integrity error: Invalid segment magic [segment 19119, offset 0]
Data integrity error: Invalid segment magic [segment 19131, offset 0]
Data integrity error: Invalid segment magic [segment 19137, offset 0]
Data integrity error: Invalid segment magic [segment 19139, offset 0]
Data integrity error: Invalid segment magic [segment 19155, offset 0]
Data integrity error: Invalid segment magic [segment 19167, offset 0]
Data integrity error: Invalid segment magic [segment 19179, offset 0]
Data integrity error: Invalid segment magic [segment 19191, offset 0]
Data integrity error: Invalid segment magic [segment 19199, offset 0]
Data integrity error: Invalid segment magic [segment 19207, offset 0]
Data integrity error: Invalid segment magic [segment 19219, offset 0]
Data integrity error: Invalid segment magic [segment 19227, offset 0]
Data integrity error: Invalid segment magic [segment 19239, offset 0]
Data integrity error: Invalid segment magic [segment 19251, offset 0]
Data integrity error: Invalid segment magic [segment 19253, offset 0]
Data integrity error: Invalid segment magic [segment 19262, offset 0]
Data integrity error: Invalid segment magic [segment 19286, offset 0]
Data integrity error: Invalid segment magic [segment 19294, offset 0]
Data integrity error: Invalid segment magic [segment 19306, offset 0]
Data integrity error: Invalid segment magic [segment 19322, offset 0]
Data integrity error: Invalid segment magic [segment 19334, offset 0]
Data integrity error: Invalid segment magic [segment 19346, offset 0]
Data integrity error: Invalid segment magic [segment 19358, offset 0]
Data integrity error: Invalid segment magic [segment 19370, offset 0]
Data integrity error: Invalid segment magic [segment 19382, offset 0]
Data integrity error: Invalid segment magic [segment 19390, offset 0]
Data integrity error: Invalid segment magic [segment 19402, offset 0]
Data integrity error: Invalid segment magic [segment 19414, offset 0]
Data integrity error: Invalid segment magic [segment 19416, offset 0]
Data integrity error: Invalid segment magic [segment 19526, offset 0]
Data integrity error: Invalid segment magic [segment 19570, offset 0]
Data integrity error: Invalid segment magic [segment 19606, offset 0]
Data integrity error: Invalid segment magic [segment 19634, offset 0]
Data integrity error: Invalid segment magic [segment 19658, offset 0]
Data integrity error: Invalid segment magic [segment 19666, offset 0]
Data integrity error: Invalid segment magic [segment 19694, offset 0]
Data integrity error: Invalid segment magic [segment 19758, offset 0]
Data integrity error: Invalid segment magic [segment 19776, offset 0]
Data integrity error: Invalid segment magic [segment 19814, offset 0]
Data integrity error: Invalid segment magic [segment 19845, offset 0]
Data integrity error: Invalid segment magic [segment 19873, offset 0]
Data integrity error: Invalid segment magic [segment 19915, offset 0]
Data integrity error: Invalid segment magic [segment 19995, offset 0]
Data integrity error: Invalid segment magic [segment 19997, offset 0]
Data integrity error: Invalid segment magic [segment 19999, offset 0]
Data integrity error: Invalid segment magic [segment 26805, offset 0]
Data integrity error: Invalid segment magic [segment 28002, offset 0]
Data integrity error: Invalid segment magic [segment 28010, offset 0]
Data integrity error: Invalid segment magic [segment 28018, offset 0]
Data integrity error: Invalid segment magic [segment 28026, offset 0]
```