s3fs hangs after successful connection to S3 bucket (ReturnHandler(110): Pool full) #1576
Comments
Hanging at the same place, it's been half an hour so far. Will it eventually work?
@johncthomas I'm sorry for the late reply. If you can, could you please try it (the hang) with the latest version v1.91 or the code in the master branch?
Thanks for getting back. It's hanging in the same place after I compiled from the latest git. I've realised that the S3 bucket and the EC2 instance are not in the same region, so maybe that's it? I've also figured out I don't actually need the fuse now, so don't worry about this on my behalf.
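A cross-region mismatch like the one mentioned above can often be ruled out by pointing s3fs at the bucket's region explicitly, via its documented endpoint and url options. A minimal sketch; the bucket name, mount point, and region below are placeholders, not values from this issue:

```shell
# mybucket, /mnt/s3, and eu-west-1 are placeholders; use the bucket's real region.
# passwd_file points at the credential file in ACCESS_KEY_ID:SECRET_ACCESS_KEY format.
s3fs mybucket /mnt/s3 \
    -o passwd_file=/etc/passwd-s3fs \
    -o endpoint=eu-west-1 \
    -o url=https://s3.eu-west-1.amazonaws.com
```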
@johncthomas Thank you for your quick response. If the
However, just before that, I should get an error saying that the endpoint is wrong, such as This log is very similar to your log. (However, it does not hang up and ends normally.) In your log,
Happening to me, latest master version as of now:
Relevant logs:
Note that this same command works on my local machine (Ubuntu 22.04), but not in my Docker container (Debian 10 slim). Am I missing some software, or is there some known compatibility issue here? I'm building from source and connecting to both AWS S3 and Scaleway Object Storage. Scaleway works and AWS S3 does not. Super strange.
EDIT: To clarify, it hangs here, after those last logs, and never progresses or exits.
EDIT2: Using similar settings on rclone mounts the bucket fine, so there's possibly something in my s3fs config or some bug that's causing it to hang. I'd guess the latter, as it doesn't seem to time out.
@perry-mitchell
I am having the same error here! I cannot see if the connection was successful after the command has completed, and the only way to check is by enabling the debug flag.
Having the same issue here. |
same ... |
@xplosionmind @zhao-ji @alter |
Additional Information
The following information is very important in order to help us help you. Omitting the following details may delay your support request, or it may receive no attention at all.
Keep in mind that the commands we provide to retrieve information are oriented toward GNU/Linux distributions, so you may need to use different commands if you run s3fs on macOS or BSD.
Version of s3fs being used (s3fs --version)
1.88
Version of fuse being used (pkg-config --modversion fuse, rpm -qi fuse, dpkg -s fuse)
2.9.4
Kernel information (uname -r)
4.14.193-113.317.amzn1.x86_64
GNU/Linux Distribution, if applicable (cat /etc/os-release)
command result:
NAME="Amazon Linux AMI"
VERSION="2018.03"
ID="amzn"
ID_LIKE="rhel fedora"
VERSION_ID="2018.03"
PRETTY_NAME="Amazon Linux AMI 2018.03"
ANSI_COLOR="0;33"
CPE_NAME="cpe:/o:amazon:linux:2018.03:ga"
HOME_URL="http://aws.amazon.com/amazon-linux-ami/"
s3fs command line used, if applicable
/etc/fstab entry, if applicable
s3fs syslog messages (grep s3fs /var/log/syslog, journalctl | grep s3fs, or s3fs outputs)
If you execute s3fs with the dbglevel and curldbg options, you can get detailed debug messages.
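For reference, a sketch of such a debug invocation using s3fs's documented dbglevel and curldbg options; the bucket name and mount point are placeholders. The -f flag keeps s3fs in the foreground so the debug output goes to the terminal instead of syslog:

```shell
# mybucket and /mnt/s3 are placeholders for the real bucket and mount point.
s3fs mybucket /mnt/s3 -f \
    -o dbglevel=info \
    -o curldbg
```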
Details about issue
I first tried mounting the S3 bucket using the IAM credentials that were used to create it. That was unsuccessful. I got further by placing the access key information in /etc/passwd-s3fs.
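For anyone reproducing this setup: the credential file is a single ACCESS_KEY_ID:SECRET_ACCESS_KEY line, and s3fs refuses files that are readable by group or others. A minimal sketch using a temporary path and AWS's documented example keys (placeholders, not real credentials):

```shell
# Placeholder keys taken from AWS documentation, not real credentials.
# Format: ACCESS_KEY_ID:SECRET_ACCESS_KEY
printf 'AKIAIOSFODNN7EXAMPLE:wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY\n' > /tmp/passwd-s3fs

# s3fs rejects credential files readable by group/others, so restrict permissions.
chmod 600 /tmp/passwd-s3fs
```

Move the file to /etc/passwd-s3fs (owned by root) for a system-wide mount, or to ~/.passwd-s3fs for a per-user mount.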
After issuing the command above, s3fs hangs at curl_handlerpool.cpp:ReturnHandler(110): Pool full: destroy the oldest handler. I do not see any errors up to this point.
My issue seems very similar to: #1518
Thanks for any help.