
Migration of already existing repositories leads to 404 #235

Closed
maxkratz opened this issue May 24, 2023 · 7 comments · Fixed by restic/restic#4400


@maxkratz

Output of rest-server --version

f41f6db080c5

How did you run rest-server exactly?

Using docker-compose with the following compose file:

version: '2'

services:
  restserver:
    volumes:
      - fs-restic:/data
    image: restic/rest-server:latest
    environment:
      - VIRTUAL_HOST=$DOMAIN
      - VIRTUAL_PORT=8000
      - LETSENCRYPT_HOST=$DOMAIN
      - OPTIONS=--private-repos --debug
    networks:
      - web

networks:
  web:
    external:
      name: webshare

volumes:
  fs-restic:
    driver_opts:
      type: "nfs"
      o: "addr=10.0.0.3,nolock,soft,rw,intr,rsize=8192,wsize=8192,timeo=20,retrans=3,proto=tcp"
      device: ":/mnt/storage/dockerhost/rest-server"

Using nginx-proxy, the rest-server is available at https://$DOMAIN with a valid Let's Encrypt certificate.

What backend/server/service did you use to store the repository?

Docker volume that is mounted via NFS from another VM (as can be seen in the docker-compose.yml above).

Expected behavior

I used Minio S3 to serve the restic repository for one of my backed up systems before.
I expected to be able to migrate the folder containing the restic repository to the NFS volume and switch my backup script to use rest:https://... to continue using the already existing repository.

Actual behavior

However, if I do so and run restic cat config afterwards, the output is something like this:

repository 426ac560 opened (version 2, compression level auto)
List(lock) returned error, retrying after 552.330144ms: List failed, server response: 404 Not Found (404)
List(lock) returned error, retrying after 1.080381816s: List failed, server response: 404 Not Found (404)
List(lock) returned error, retrying after 1.31013006s: List failed, server response: 404 Not Found (404)
List(lock) returned error, retrying after 1.582392691s: List failed, server response: 404 Not Found (404)
List(lock) returned error, retrying after 2.340488664s: List failed, server response: 404 Not Found (404)
List(lock) returned error, retrying after 4.506218855s: List failed, server response: 404 Not Found (404)
List(lock) returned error, retrying after 3.221479586s: List failed, server response: 404 Not Found (404)
List(lock) returned error, retrying after 5.608623477s: List failed, server response: 404 Not Found (404)

Steps to reproduce the behavior

  • Create a restic repository and back up something to it (locally).
  • Set up rest-server with the docker-compose.yml above.
  • Create a set of credentials and input them to the htpasswd file.
  • Create the corresponding folder structure and migrate the already existing restic repo to the NFS volume of the rest-server instance.
  • Switch the backup script to use the new repository location.
  • Run restic cat config.
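The migration step above can be sketched as follows. All paths and the user name are hypothetical placeholders (here pointed at scratch directories so the sketch is self-contained); adjust them to your setup:

```shell
# Sketch: migrating an existing restic repo into rest-server's data directory.
# Paths and the user name are hypothetical placeholders.
set -e

DATA=/tmp/rest-server-data   # rest-server's data path (here: a scratch dir)
SRC=/tmp/old-minio-repo      # repo copied out of the old Minio backend
USER=forgejo-dev             # per-user subdir used with --private-repos
rm -rf "$DATA" "$SRC"

# Simulate the repo as exported from Minio: config file plus the non-empty
# folders, but no locks/ folder (S3 stores no object for an empty "folder").
mkdir -p "$SRC"/{data,index,keys,snapshots}
echo "fake-config" > "$SRC/config"

# Place the repo where rest-server expects it: <data>/<user>/<repo-name>
mkdir -p "$DATA/$USER"
cp -r "$SRC" "$DATA/$USER/forgejo-dev-backup"

# Note: after the copy, locks/ is missing from the destination.
ls "$DATA/$USER/forgejo-dev-backup"
```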

For reference, here is the snippet of my backup script:

#!/bin/bash

set -e

# Set crypto passphrase for encryption
export AWS_ACCESS_KEY_ID=forgejo-dev
export AWS_SECRET_ACCESS_KEY=123secret
export RESTIC_PASSWORD=456secret
export RESTIC_REPOSITORY=rest:https://$AWS_ACCESS_KEY_ID:$AWS_SECRET_ACCESS_KEY@$DOMAIN/forgejo-dev/forgejo-dev-backup
restic="/opt/restic/restic"

$restic cat config
exit 0

Interestingly, if I create a new repository with a different name using the backup script (on the rest-server location), everything works correctly and the files can be backed up. I conclude that my credentials and repo config within the script and the htpasswd file should be correct.

Do you have any idea what may have caused this?

  • Maybe I missed something.
  • Maybe file permissions? (Though, I checked these.)

Do you have an idea how to solve the issue?

  • No :(

Did rest-server help you today? Did it make you happy in any way?

rest-server is amazingly fast, especially in comparison with the Minio S3 backend :-).
Thank you for your help!

@maxkratz (Author)

Update:

The source repository was missing the locks folder (but was working just fine!). Maybe Minio S3 deletes empty folders? I don't know ...

However, reading this thread, I was tempted to create the locks folder manually on the NFS share by running this command: $ mkdir -p locks

This resolved my issue and the repository can now be used by restic accessing via rest-server.

My conclusion at this point is that my restic client refuses to work with rest-server if the locks folder is missing from the repo. This is not the case when using Minio S3 as the backend.
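The top level of a restic repository consists of a config file plus the directories data, index, keys, locks and snapshots. A defensive sketch that recreates any top-level directories lost in a copy (the repo path is a hypothetical placeholder):

```shell
# Recreate any standard top-level restic repo directories that were lost
# when copying from a flat-namespace backend. Repo path is a placeholder.
REPO=/tmp/migrated-repo
mkdir -p "$REPO"
for d in data index keys locks snapshots; do
    mkdir -p "$REPO/$d"   # no-op if the directory already exists
done
ls "$REPO"
```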

@rawtaz (Contributor) commented May 24, 2023

A short summary is that restic expects the repository to be what it should be. If you or something else deletes parts of the repository that init created, then you may run into issues. In this case the problem was that the copy of the repository wasn't complete.
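One way to sanity-check that a copy is complete, sketched with hypothetical scratch paths, is to compare the recursive file listings of source and destination:

```shell
# Sketch: compare source and destination repo listings after a migration.
# Paths are hypothetical; the destination here deliberately lost locks/.
SRC=/tmp/src-repo
DST=/tmp/dst-repo
rm -rf "$SRC" "$DST"
mkdir -p "$SRC"/{data,locks} "$DST"/data

( cd "$SRC" && find . | sort ) > /tmp/src.list
( cd "$DST" && find . | sort ) > /tmp/dst.list
diff /tmp/src.list /tmp/dst.list || echo "copy is incomplete"
```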

@rawtaz rawtaz closed this as completed May 24, 2023
@maxkratz (Author) commented May 24, 2023

In this case the problem was that the copy of the repository wasn't complete.

Then the question is: how is restic able to use the repository when accessing it via S3? All of my repositories were in use via S3 until yesterday, and restic never complained about a missing locks folder.
(I still have the S3 buckets on the original storage and can see that the folder is missing in every repository.)

@maxkratz (Author)

I've created a minimal working example to reproduce this: restic does not create a locks folder if it uses an S3 backend for the repository. I used the newest release from GitHub and ran restic init as well as restic --verbose backup ..

Should I open a new issue at the main repo?

@maxkratz (Author)

Update: I found a similar issue and posted a comment.

@MichaelEischer (Member) commented May 26, 2023

The source repository was missing the folder locks (but was working just fine!) Maybe Minio S3 deletes empty folders?

S3 uses a flat namespace, it has no concept of folders. Files in a "folder" are just files with a corresponding prefix. It is possible to simulate folders with empty placeholder files. But restic doesn't create those files. Thus, copying a repository from S3 won't include these folders.

restic ignores empty folders (see https://github.com/restic/restic/blob/bfc9c6c9712054366ae6cf1dd66839bb17359e44/internal/backend/local/local.go#L258 ), thus there's no reason why the rest-server shouldn't do the same.

@maxkratz the issue in restic is specific to the 'local' backend. It is unrelated to the 'rest' backend.
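The flat-namespace point can be illustrated with a plain key-by-key copy: an S3 listing contains only object keys, so an empty locks/ "folder" has no key at all and never materializes on the destination. The key names below are illustrative, and the listing is simulated locally so the sketch is self-contained:

```shell
# Simulate copying a repo out of a flat key namespace (as S3 exposes it):
# every object key becomes a file, parent dirs are created on demand, and
# an empty locks/ "folder" has no key, so it is never created.
DEST=/tmp/copied-from-s3
rm -rf "$DEST"

# Illustrative object listing: only real files appear, no "locks/" entry.
KEYS="config
data/00/0012abcd
index/5f3e
keys/9c1d
snapshots/77aa"

for key in $KEYS; do
    mkdir -p "$DEST/$(dirname "$key")"   # create parent dir on demand
    : > "$DEST/$key"                     # create an empty placeholder file
done

ls "$DEST"   # config data index keys snapshots -- but no locks/
```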

@maxkratz (Author)

S3 uses a flat namespace, it has no concept of folders. Files in a "folder" are just files with a corresponding prefix. It is possible to simulate folders with empty placeholder files. But restic doesn't create those files. Thus, copying a repository from S3 won't include these folders.

restic ignores empty folders (see https://github.com/restic/restic/blob/bfc9c6c9712054366ae6cf1dd66839bb17359e44/internal/backend/local/local.go#L258 ), thus there's no reason why the rest-server shouldn't do the same.

Thank you for your explanation. I used an older version of Minio as my restic backend; it supported a "local"/"filesystem" storage mode and wrote all files and folders directly to disk (plus additional metadata, of course).

@maxkratz the issue in restic is specific for the 'local' backend. It is unrelated to the 'rest' backend.

You are right. Does this mean I should open another issue in the restic repository about the rest backend, to report the 404 error when the locks folder is missing? If I understand the discussion in the linked issue correctly, this bug was already fixed for the local backend.
