New version of seaweedfs can't list directory using s3 api #4668
Comments
I think this problem is of the same type as #4645. For the correct display of directories via S3, the special metadata FolderMimeType = "httpd/unix-directory" was introduced, but it doesn't seem to be finished.
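A minimal sketch of the idea behind that metadata, assuming only what the comment states: directory entries tagged with FolderMimeType can be recognised and reported to S3 clients as folders. The entry type and asCommonPrefix helper below are illustrative, not the actual SeaweedFS types.

```go
package main

import (
	"fmt"
	"strings"
)

// FolderMimeType is the special mime type mentioned in the comment above;
// it tags directory entries so S3 listings can present them as folders.
const FolderMimeType = "httpd/unix-directory"

// entry is a simplified stand-in for a filer entry (illustrative, not the real struct).
type entry struct {
	FullPath string
	IsDir    bool
	Mime     string
}

// asCommonPrefix reports whether an entry should be returned to an S3 client
// as a "folder" (common prefix) rather than as an object key.
func asCommonPrefix(e entry) (string, bool) {
	if e.IsDir || e.Mime == FolderMimeType {
		return strings.TrimSuffix(e.FullPath, "/") + "/", true
	}
	return "", false
}

func main() {
	entries := []entry{
		{FullPath: "/buckets/b1/folder_inside_bucket", IsDir: true, Mime: FolderMimeType},
		{FullPath: "/buckets/b1/file.txt", Mime: "text/plain"},
	}
	for _, e := range entries {
		if p, ok := asCommonPrefix(e); ok {
			fmt.Println("folder:", p)
		} else {
			fmt.Println("object:", e.FullPath)
		}
	}
}
```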
In that case, can you test whether an older version of seaweedfs works? Maybe we could pinpoint the exact version where this issue first appeared.
Yes, it used to be fine in the old versions.
Cannot reproduce. Does this problem only happen to old directories?
Old directories work fine. The problem occurs when creating new directories in NC (Nextcloud). @kmlebedev should be aware of this issue.
I could not revert my production back to reproduce this, but here are my points based on my observations.
# Below are working
$ mc ls S3CON/ # listing bucket
$ mc ls S3CON/bucket/ # listing inside a bucket
# Below is not working
$ mc ls S3CON/bucket/folder_inside_bucket/ # listing folder inside a bucket
I have the same problem when connecting Filestash to my S3 and creating a folder using Filestash.
Some steps to reproduce this would be helpful.
Here are the steps to reproduce it:
@renweijun I followed the steps, and it seems to work well:
@chrislusf Sorry, I just tested OK with 3.53, and I used 3.46 before.
I've spun up a new server just for this; surprisingly, I could not reproduce it. The only differences were:
Can someone else confirm if they're using the same filer store? My production filer:
# A TOML config file for SeaweedFS filer store
####################################################
# Customizable filer server options
####################################################
[filer.options]
# with http DELETE, by default the filer would check whether a folder is empty.
# recursive_delete will delete all sub folders and files, similar to "rm -Rf"
recursive_delete = false
####################################################
# The following are filer store options
####################################################
# We're forced to use redis sentinel since redis cluster require at least 6 nodes (we have only 4)
[redis3_sentinel]
enabled = true
addresses = ["redis1.storage.example:26379","redis2.storage.example:26379","redis3.storage.example:26379","redis4.storage.example:26379"]
masterName = "mymaster"
username = "default"
password = "RETRACTED"
database = 0
My new filer:
# A TOML config file for SeaweedFS filer store
####################################################
# Customizable filer server options
####################################################
[filer.options]
# with http DELETE, by default the filer would check whether a folder is empty.
# recursive_delete will delete all sub folders and files, similar to "rm -Rf"
recursive_delete = false
####################################################
# The following are filer store options
####################################################
# We're forced to use redis sentinel since redis cluster require at least 6 nodes (we have only 4)
[leveldb2]
# local on disk, mostly for simple single-machine setup, fairly scalable
# faster than previous leveldb, recommended.
enabled = true
dir = "/etc/seaweedfs/filerldb2" # directory to store level db files
PS. If I switch back to using my production, yes, the problem still persists. I just don't know how I should reproduce it.
Using redis_cluster3 for the filer backend and having the same reported issue.
@agipson33 please share reproducible steps.
This judgment is correct: creating a directory with fs.mkdir works. My production uses "s5cmd cp", with:
weed version 3.45
weed version 3.46~3.54
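For context on the distinction above: fs.mkdir creates the folder entry explicitly, while an upload such as s5cmd cp only writes object keys containing slashes, leaving the folders implicit. The sketch below shows both cases with the AWS SDK for Go v2; the endpoint, bucket name, and keys are placeholder assumptions, not values from this issue.

```go
package main

import (
	"bytes"
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()

	// Credentials come from the environment; the endpoint below is an assumed
	// local SeaweedFS S3 gateway, not taken from this issue.
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("http://localhost:8333") // assumed endpoint
		o.UsePathStyle = true
	})

	bucket := aws.String("bucket") // hypothetical bucket name

	// Explicit folder: a zero-byte object whose key ends with "/", comparable
	// to what an fs.mkdir-style client produces.
	if _, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: bucket,
		Key:    aws.String("explicit_folder/"),
		Body:   bytes.NewReader(nil),
	}); err != nil {
		log.Fatal(err)
	}

	// Implicit folder: only the object key carries the path; the folder itself
	// is never created, which is how tools like s5cmd cp typically upload.
	if _, err := client.PutObject(ctx, &s3.PutObjectInput{
		Bucket: bucket,
		Key:    aws.String("implicit_folder/file.txt"),
		Body:   bytes.NewReader([]byte("hello")),
	}); err != nil {
		log.Fatal(err)
	}
}
```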
Hello @chrislusf, have you been able to reproduce it yet?
Hey, I was testing SeaweedFS with our cross-platform S3 GUI (https://s3drive.app/) and can confirm that the problem is present in 3.55 but not in 3.45.
This creates:
3.55 (INCORRECT)
3.45 (CORRECT)
The alternative way - using AWS CLI
Insert key
Query
Response 3.55 (INCORRECT)
Response 3.45 (CORRECT)
S3 Filer CMD
I wasn't touching the master or volume processes, I was only switching the version of the S3 filer.
CORS missing
You would be able to try it out using our S3 browser client (https://web.s3drive.app), but I wasn't able to set up CORS properly for SeaweedFS without running NginX in front.
@tomekit this seems to be the opposite of @renweijun's finding, where implicitly created folders are having issues.
I think we can only assume what their folder structure is; it looks like they may be implicit (no guarantees though).
Potentially connected to PR #4778. It seems that the functionality for listing entries with the method
Can someone confirm whether #4834 fixes this issue?
Using redis, I successfully listed the directories.
Listing subdirectories is done recursively; the ListDirectoryPrefixedEntries method is not required.
@zemul Thanks for the confirmation!
I am using this version:
Test script
Actual response (list-objects-v2)
Expected
This comes from SeaweedFS 3.45, as well as any S3 provider, including Backblaze.
Notes
This issue isn't present on:
What do you mean? The #4399 uses this cmd:
Please note what the prefix used above ends with. I am explicitly expecting the SeaweedFS S3 API to return some path prefixes (aka folders), but that doesn't seem to be the case with the current version.
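For reference, this is the kind of listing being discussed: ListObjectsV2 with a prefix and the / delimiter, where sub-folders are expected to come back as CommonPrefixes and files as Contents. A sketch using the AWS SDK for Go v2, with a placeholder endpoint, bucket, and prefix.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	ctx := context.Background()
	cfg, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Fatal(err)
	}
	client := s3.NewFromConfig(cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String("http://localhost:8333") // assumed SeaweedFS S3 endpoint
		o.UsePathStyle = true
	})

	// List one "directory level": a prefix ending in "/" plus the "/" delimiter.
	// Sub-folders should come back as CommonPrefixes, files as Contents.
	out, err := client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
		Bucket:    aws.String("bucket"),                // hypothetical bucket
		Prefix:    aws.String("folder_inside_bucket/"), // hypothetical prefix
		Delimiter: aws.String("/"),
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, cp := range out.CommonPrefixes {
		fmt.Println("PRE", aws.ToString(cp.Prefix))
	}
	for _, obj := range out.Contents {
		fmt.Println("OBJ", aws.ToString(obj.Key))
	}
}
```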
I found the same problem. s3api cannot list objects with a prefix when the filer store is Cassandra.
Finally, I reproduced the problem.
restic init
Everything is normal when I just run the weed server and just init the restic repo.
restic backup
Then I use restic backup.
It went well.
LOG -v 4
I set -v 4 and the log shows this:
And when I list the normal directory the log shows this:
What is the difference between the two directories? Changing back to 3.45 fixed it.
Which version did you use to reproduce the problem?
3.57 and 3.46 (large disk version, both amd64 and arm64) have this bug; 3.45 is all right.
I just rechecked the compatibility tests with the s3 interface and found no errors for version 3.57
Then that's proof that these tests need to be updated.
It seems like it's a 44ad072 thing.
This error is not shown before I run
Nextcloud v27 default setup, external storage via S3 on seaweedfs (3.57). The folder bug is present.
I reviewed the code and found that the ListDirectoryPrefixedEntries method is required by
-> then
...... -> at the end it calls (Lines 350 to 352 in f24c7e8),
which calls the ListDirectoryPrefixedEntries method.
@chrislusf ListDirectoryPrefixedEntries in weed/filer/filerstore_wrapper.go is called, and it tries to use the actualStore's ListDirectoryPrefixedEntries to get the file list. When ListDirectoryPrefixedEntries returns the ErrUnsupportedListDirectoryPrefixed error, it falls back to ListDirectoryEntries (seaweedfs/weed/filer/filerstore_wrapper.go, Lines 277 to 320 in 117fbba).
It used to work well in the past, but unfortunately, with #4391 it no longer does. restic init creates only one directory, named keys, and that worked well; after we run restic backup, the directory "data" and other directories are created by restic, and then s3api cannot list the keys directory and reports the key error. I am not familiar with golang, but I thought that changing the limit parameter in the fallback (seaweedfs/weed/filer/filerstore_wrapper.go, Lines 271 to 273 in 117fbba)
would fix this bug, without needing to fully implement ListDirectoryPrefixedEntries in every kind of filer store.
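A simplified sketch of the fallback this comment describes, under the assumption that its reading of filerstore_wrapper.go is accurate. The names below are illustrative, not the real SeaweedFS code; the point is that when the plain listing is capped at the limit before the prefix filter runs, a folder such as restic's keys can be missed.

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// Stand-in for ErrUnsupportedListDirectoryPrefixed: a store that cannot list
// by prefix returns it so the wrapper can fall back to a plain listing.
var errUnsupportedPrefixedListing = errors.New("unsupported prefixed listing")

// listPrefixed is the store's native prefixed listing; here it is always
// unsupported, like the redis/cassandra stores discussed in this issue.
func listPrefixed(dir, prefix string, limit int) ([]string, error) {
	return nil, errUnsupportedPrefixedListing
}

// listPlain is the plain directory listing every store supports; it honours
// `limit` on the raw, unfiltered entries.
func listPlain(dir string, limit int) ([]string, error) {
	all := []string{"config", "data", "index", "keys", "locks", "snapshots"}
	if limit < len(all) {
		all = all[:limit]
	}
	return all, nil
}

// listWithFallback mirrors the behaviour described above: try the prefixed
// listing, and on the "unsupported" error fall back to the plain listing
// filtered by prefix. Because the plain listing is capped before the filter
// runs, entries that do not match the prefix use up the budget.
func listWithFallback(dir, prefix string, limit int) ([]string, error) {
	if entries, err := listPrefixed(dir, prefix, limit); err == nil {
		return entries, nil
	} else if !errors.Is(err, errUnsupportedPrefixedListing) {
		return nil, err
	}
	raw, err := listPlain(dir, limit)
	if err != nil {
		return nil, err
	}
	var matched []string
	for _, name := range raw {
		if strings.HasPrefix(name, prefix) {
			matched = append(matched, name)
		}
	}
	return matched, nil
}

func main() {
	// A restic-style directory: with limit=3 only {config, data, index} are
	// fetched, so the "keys" folder is never returned, which is the symptom above.
	got, _ := listWithFallback("/buckets/restic", "keys", 3)
	fmt.Println(got) // []
}
```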
I haven't seen a quick solution yet; it's a very confusing case. It may be worth increasing the limit with a reserve for the service folders in the root. But in any case, first you need to run an integration test with the redis storage.
I found another strange piece of code that may be the real problem: the loop in seaweedfs/weed/filer/filerstore_wrapper.go, Lines 294 to 318 in 117fbba.
The program loops to check whether files in the parent directory have the prefix, and in seaweedfs/weed/filer/filerstore_wrapper.go, Lines 306 to 318 in 117fbba,
it fetches more objects after the last file. What is really strange is that in seaweedfs/weed/filer/filerstore_wrapper.go, Line 306 in 117fbba,
it checks len(notPrefixed) == int(limit), and I don't know why. I deleted this check and the s3 api works fine. I found it was introduced in commit 4ee0a6f. @chrislusf
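As a contrast to the early-exit condition questioned above, here is an illustrative paging loop (not the real filerstore_wrapper.go) that resumes after the last file name seen and only stops once enough prefixed entries have been collected or the listing is exhausted. The directory contents and helper names are made up for the example.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// toy in-memory "directory" to page over, sorted like a filer listing.
var names = []string{"config", "data", "index", "keys", "locks", "snapshots"}

// listAfter returns up to `limit` names strictly after `startFrom` ("" = from start).
func listAfter(startFrom string, limit int) []string {
	i := sort.SearchStrings(names, startFrom)
	if i < len(names) && names[i] == startFrom {
		i++
	}
	end := i + limit
	if end > len(names) {
		end = len(names)
	}
	return names[i:end]
}

// pagedPrefixList keeps fetching pages, filtering each by prefix, and stops
// only when enough matches are found or a short page shows the end of the
// directory, rather than stopping because a raw page happened to be full.
func pagedPrefixList(prefix string, limit int) []string {
	var matched []string
	startFrom := ""
	for len(matched) < limit {
		page := listAfter(startFrom, limit)
		if len(page) == 0 {
			break // store exhausted
		}
		for _, name := range page {
			if strings.HasPrefix(name, prefix) && len(matched) < limit {
				matched = append(matched, name)
			}
		}
		if len(page) < limit {
			break // short page: nothing left to fetch
		}
		startFrom = page[len(page)-1] // resume after the last file name seen
	}
	return matched
}

func main() {
	// With limit=3 the first page is {config, data, index}; the loop keeps
	// paging and still finds "keys" instead of giving up on the first page.
	fmt.Println(pagedPrefixList("keys", 3))
}
```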
Describe the bug
Let me show you the behaviour; the client I used was the minio client, with seaweed versions 3.53 and 3.43.
As you can see from the above output, the s3 file listing only works when I use the older version of seaweedfs.
My story
I was happily using seaweedfs for a long time without any problems, until yesterday when I tried updating my servers to the newest version and encountered this problem. After browsing through the release change logs, I suspect this release. My current solution is to revert the update and continue to use 3.43 for now.
System Setup
$ uname -a
Linux example-storage-2 5.15.0-76-generic #83-Ubuntu SMP Thu Jun 15 19:16:32 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
$ weed version
version 30GB 3.43 3227e4175e2bf8df2ac8aeeff8cf73a819abc5a7 linux amd64
$ /usr/bin/weed master -mdir=/etc/seaweedfs/weed-meta -ip=storage2.example.com -ip.bind=0.0.0.0 -port=9333 -defaultReplication=010 -peers=storage3.example.com:9333,storage4.example.com:9333
$ /usr/bin/weed volume -mserver=storage2.example.com:9333,storage3.example.com:9333,storage4.example.com:9333 -max=0 -ip.bind=0.0.0.0 -port=8081 -dataCenter=main-site -rack=example-storage-2 -dir=/mnt/nvme1/seaweedfs,/mnt/nvme2/seaweedfs,/mnt/nvme3/seaweedfs,/mnt/nvme4/seaweedfs
$ /usr/bin/weed filer -master=storage2.example.com:9333,storage3.example.com:9333,storage4.example.com:9333 -dataCenter=main-site -ip=storage2.example.com -ip.bind=0.0.0.0 -s3 -s3.dataCenter=main-site -s3.config=/etc/seaweedfs/s3-config.json -s3.cert.file=/data/letsencrypt/certificates/example-storage-2.example.com.crt -s3.key.file=/data/letsencrypt/certificates/example-storage-2.example.com.key
Expected behavior
I expect to be able to use the s3 api to list files.