
Problem with clearing the ".uploads" metadata in the postgres database when using filer metadata replication #2006

Closed
aoberest opened this issue Apr 16, 2021 · 4 comments

@aoberest

I also noticed that the other filer does not delete data from the temporary ".uploads" directory. This directory is created when a file is uploaded in parts (multipart upload), and it is deleted once the upload completes.

Please see the screenshots from DBeaver below.
Filer 1:
[DBeaver screenshot: Filer1_meta_10.10.10.148]

Filer 2:
[DBeaver screenshot: Filer2_meta_10.11.10.76]

Is this an error or normal behavior?
I would like the metadata databases to be the same.

Originally posted by @AlekseyFicht in #1957 (comment)
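
For comparison, a query along the following lines can be run against each filer's postgres store to list the leftover ".uploads" entries. This is only a sketch: it assumes the rows live in the default "filemeta" table; with the postgres2 store, data under /buckets may land in per-bucket tables, in which case substitute the bucket's table name.

-- List multipart-upload metadata entries still present in this filer's store
SELECT directory, name
FROM "filemeta"
WHERE directory LIKE '%/.uploads%'
ORDER BY directory, name;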

@aoberest
Author

version 30GB 2.39 742ab1e linux amd64

####################################################
# Customizable filer server options
####################################################
[filer.options]
# with http DELETE, by default the filer would check whether a folder is empty.
# recursive_delete will delete all sub folders and files, similar to "rm -Rf"
recursive_delete = false
# each directory created under this folder automatically becomes a separate bucket
buckets_folder = "/buckets"

####################################################
# The following are filer store options
####################################################

[postgres2]
enabled = true
createTable = """
  CREATE TABLE IF NOT EXISTS "%s" (
    dirhash   BIGINT, 
    name      VARCHAR(65535), 
    directory VARCHAR(65535), 
    meta      bytea, 
    PRIMARY KEY (dirhash, name)
  );
"""
hostname = "10.10.86.105"
port = 5432
username = "seaweedfs"
password = "password"
database = "seaweedfs"          # create or use an existing database
schema = ""
sslmode = "disable"
connection_max_idle = 100
connection_max_open = 100
connection_max_lifetime_seconds = 0
# if insert/upsert is failing, you can disable upsert or adjust the upsert query syntax to match your RDBMS:
enableUpsert = true
upsertQuery = """INSERT INTO "%[1]s" (dirhash,name,directory,meta) VALUES($1,$2,$3,$4) ON CONFLICT (dirhash,name) DO UPDATE SET meta = EXCLUDED.meta WHERE "%[1]s".meta != EXCLUDED.meta"""
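
For clarity, once the filer substitutes the table name, the upsert template above executes as something like the following (assuming the default "filemeta" table; %[1]s is replaced with the actual table name and $1..$4 are the bound parameters):

-- The upsertQuery template expanded for the default table name
INSERT INTO "filemeta" (dirhash, name, directory, meta)
VALUES ($1, $2, $3, $4)
ON CONFLICT (dirhash, name)
DO UPDATE SET meta = EXCLUDED.meta
WHERE "filemeta".meta != EXCLUDED.meta;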

@aoberest
Author

Hi, @chrislusf

I tried the "s3.clean.uploads" command, but it didn't help.
In the logs, I saw that the cleanup is performed by https://github.com/chrislusf/seaweedfs/blob/master/weed/s3api/filer_util.go#L56-L74.
Could you add this step to the meta aggregator, or is there another way to clean up the stale data in the other filer's database? (A manual workaround is sketched below.)
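
Until the replication question is settled, one possible manual workaround is to remove the orphaned ".uploads" rows directly from the second filer's postgres store. This is only a sketch: it assumes the default "filemeta" table (with the postgres2 store, per-bucket tables may apply instead), the default "/buckets" folder from the config above, and that no multipart uploads are currently in progress for the affected buckets.

-- Inspect the orphaned multipart-upload entries first
SELECT directory, name FROM "filemeta" WHERE directory LIKE '/buckets/%/.uploads%';

-- Then remove the leftover entries and the ".uploads" directory entries themselves
BEGIN;
DELETE FROM "filemeta" WHERE directory LIKE '/buckets/%/.uploads%';
DELETE FROM "filemeta" WHERE directory LIKE '/buckets/%' AND name = '.uploads';
COMMIT;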

@chrislusf
Collaborator

same as #1957

@aoberest
Author

Thank you, Chris.
