Filer.backup deletes files in backup folder in incremental mode (sink.local) #2919

Closed

urkl opened this issue Apr 14, 2022 · 2 comments

urkl commented Apr 14, 2022

Sponsors SeaweedFS via Patreon https://www.patreon.com/seaweedfs

Describe the bug
The filer.backup service deletes files in the backup directory when using a local sink (sink.local) with is_incremental = true under docker compose.

System Setup

  • OS version: Ubuntu 20.04
  • output of weed version: version 30GB 2.97 77a7d72 linux amd64
  • docker compose

Expected behavior
As stated in the documentation, files should NOT be deleted on the target.

Screenshots
Log with verbose level 4 (screenshot not reproduced)

Additional context
I have been playing with SeaweedFS for 4 days now and have to say it's an awesome project. It is very possible that the problem is my poor knowledge of SeaweedFS.

My setup:

backup.toml file

[sink.local]
enabled = true
directory = "/backup"
# all replicated files are under modified time as yyyy-mm-dd directories
# so each date directory contains all new and updated files.
is_incremental = true
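
If I understand incremental mode correctly, each sync should only add files under the current date directory and never remove anything, so the sink should grow roughly like this over time (hypothetical paths; FOTKE88 is the bucket used below):

  /backup/2022-04-13/buckets/FOTKE88/<uuid>.jpg
  /backup/2022-04-14/buckets/FOTKE88/<uuid>.jpg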

My docker compose file

version: '2'

services:

  master:
    image: chrislusf/seaweedfs:latest
    ports:
      - 9333:9333
      - 19333:19333
    command: "master -ip=master -volumeSizeLimitMB=1024"
    # environment:
    #   WEED_MASTER_VOLUME_GROWTH_COPY_1: 1
    #   WEED_MASTER_VOLUME_GROWTH_COPY_OTHER: 1

    
  volume1:
    image: chrislusf/seaweedfs:latest
    ports:
      - 8081:8081
      - 18081:18080
    command: "volume -mserver=master:9333 -port=8081 -ip=volume1 -preStopSeconds=1"
    depends_on:
      - master
    volumes:
      - ./volume1:/data


  volume2:
    image: chrislusf/seaweedfs:latest
    ports:
      - 8082:8082
      - 18082:18080
    command: "volume -mserver=master:9333 -port=8082 -ip=volume2 -preStopSeconds=1"
    depends_on:
      - master
    volumes:
      - ./volume2:/data

  volume3:
    image: chrislusf/seaweedfs:latest
    ports:
      - 8083:8083
      - 18083:18080
    command: "volume -mserver=master:9333 -port=8083 -ip=volume3 -preStopSeconds=1"
    depends_on:
      - master
    volumes:
      - ./volume3:/data

  s3:
    image: chrislusf/seaweedfs:latest 
    ports:
      - 8888:8888
      - 18888:18888
      - 8000:8000
    command: 'filer -master="master:9333" -s3 -s3.config=/etc/seaweedfs/s3.json -s3.port=8000 -s3.allowEmptyFolder=false -s3.allowDeleteBucketNotEmpty=false'
    volumes:
      - ./s3.json:/etc/seaweedfs/s3.json
      - ./filer:/data/
    depends_on:
      - master
      - volume1
      - volume2
      - volume3
  
  replicate:
    image: chrislusf/seaweedfs:latest
    command: '-v 4 filer.backup -filer=s3:8888 -timeAgo=500h'
    restart: on-failure
    volumes:
      - ./backup.toml:/etc/seaweedfs/replication.toml
      - ./backup:/backup
    depends_on:
      - master
      - volume1
      - volume2
      - volume3

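(As far as I understand, filer.backup reads its sink configuration from replication.toml, which is why backup.toml is mounted as /etc/seaweedfs/replication.toml in the replicate service.)
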
Java code for uploading and deleting objects:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.PutObjectResult;

import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.UUID;

public class SeaweedS3Repro {

    // Local folder with the test images (placeholder path).
    static final String DIDSS_MASTER_PATH = "/path/to/images";

    public static void main(String[] args) throws Exception {
        // Credentials matching the identity configured in s3.json below.
        BasicAWSCredentials credentials =
                new BasicAWSCredentials("some_access_key1", "some_secret_key1");

        AwsClientBuilder.EndpointConfiguration endpoint =
                new AwsClientBuilder.EndpointConfiguration("http://localhost:8000", "Sss");
        AmazonS3 s3client = AmazonS3ClientBuilder
                .standard()
                .withCredentials(new AWSStaticCredentialsProvider(credentials))
                .withEndpointConfiguration(endpoint)
                .build();

        String BUCKET = "FOTKE88";

        // Delete every object currently in the bucket.
        if (s3client.doesBucketExistV2(BUCKET)) {
            s3client.listObjectsV2(BUCKET).getObjectSummaries().forEach(os -> {
                System.out.println(os);
                s3client.deleteObject(BUCKET, os.getKey());
            });

            //s3client.deleteBucket(BUCKET);
        }

        // Re-upload every regular file under a random key, storing the
        // original file name as user metadata on the object.
        Files.walk(Paths.get(DIDSS_MASTER_PATH))
                .filter(Files::isRegularFile)
                .forEach(f -> {
                    ObjectMetadata objectMetadata = new ObjectMetadata();
                    objectMetadata.addUserMetadata("file_name", f.getFileName().toString());

                    String uuid = UUID.randomUUID().toString();
                    PutObjectResult result = s3client.putObject(
                            new PutObjectRequest(BUCKET, uuid + ".jpg", f.toFile())
                                    .withMetadata(objectMetadata));

                    System.out.println(result.getETag());
                });
    }
}
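
With the stack above running, each run of this snippet wipes the bucket and then re-uploads the files. Watching the ./backup mount between runs shows the problem: the deletes propagate into the date directories, even though an incremental sink should only ever add files.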

S3 json file

{
  "identities": [

    {
      "name": "some_admin_user",
      "credentials": [
        {
          "accessKey": "some_access_key1",
          "secretKey": "some_secret_key1"
        }
      ],
      "actions": [
        "Admin",
        "Read",
        "List",
        "Tagging",
        "Write"
      ]
    }
  ]
}
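
The Java client above signs its requests with this identity, so the accessKey/secretKey pair in BasicAWSCredentials has to match this file.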
chrislusf changed the title from "Filer.backup deletes files in backup folder (sink.local)" to "Filer.backup deletes files in backup folder in incremental mode (sink.local)" on Apr 14, 2022
chrislusf (Collaborator) commented

Added a fix. You can test it with chrislusf/seaweedfs:dev when this job is done: https://github.com/chrislusf/seaweedfs/actions/runs/2169477873

urkl (Author) commented Apr 15, 2022

I can confirm this. It's working as expected with the dev version.
Thanks. This was fast.
