
no sftp prompt on client side despite successful connection #341

Closed
nitwhiz opened this issue Oct 10, 2022 · 8 comments


nitwhiz commented Oct 10, 2022

I'm trying to connect to the container via sftp, but the session just hangs and does nothing.

Directory structure:

upload/
keys/
  ssh_host_ed25519_key
  ssh_host_rsa_key
docker-compose.yml

docker-compose.yml:

services:

  sftp:
    image: atmoz/sftp
    volumes:
      - ./upload:/home/foo/upload
      - ./keys/ssh_host_ed25519_key:/etc/ssh/ssh_host_ed25519_key
      - ./keys/ssh_host_rsa_key:/etc/ssh/ssh_host_rsa_key
    ports:
      - "5022:22"
    command: foo:pass:1001

Server log:

➜ docker compose up            
[+] Running 1/0
 ⠿ Container sftp_testing-sftp-1  Created                                              0.0s
Attaching to sftp_testing-sftp-1
sftp_testing-sftp-1  | [/entrypoint] Executing sshd
sftp_testing-sftp-1  | Server listening on 0.0.0.0 port 22.
sftp_testing-sftp-1  | Server listening on :: port 22.
sftp_testing-sftp-1  | Accepted password for foo from 172.23.0.1 port 44642 ssh2

Client log (only the last few lines, after the password prompt):

➜ sftp -vvv -P 5022 foo@localhost
[... some more logs ...]
[password prompt]
[... some more logs ...]
debug2: channel 0: rcvd adjust 2097152
debug3: receive packet: type 99
debug2: channel_input_status_confirm: type 99 id 0
debug2: subsystem request accepted on channel 0

At this point the terminal just sits there and does nothing; input has no effect either.

Trying to connect with FileZilla with

  • server: sftp://localhost
  • username: foo
  • password: pass
  • port: 5022

FileZilla logs a successful authentication, but never shows a directory listing. After 20 seconds it aborts the connection and reconnects, with the same result as before.


nitwhiz commented Oct 17, 2022

It works if I mount the following config as /etc/ssh/sshd_config:

# Secure defaults
# See: https://stribika.github.io/2015/01/04/secure-secure-shell.html
Protocol 2
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key

# Faster connection
# See: https://github.com/atmoz/sftp/issues/11
UseDNS no

# Limited access
PermitRootLogin no
X11Forwarding no
AllowTcpForwarding no

# Force sftp and chroot jail
Subsystem sftp /usr/lib/openssh/sftp-server

# Enable this for more logs
# LogLevel VERBOSE

The main changes are Subsystem sftp /usr/lib/openssh/sftp-server and the removal of the chroot jail.
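For reference, a sketch of how that override could be mounted in the compose file from the original report (assuming the config above is saved as ./sshd_config next to docker-compose.yml; the :ro mount is my addition):

```yaml
services:
  sftp:
    image: atmoz/sftp
    volumes:
      - ./upload:/home/foo/upload
      # mount the custom config read-only over the image's default
      - ./sshd_config:/etc/ssh/sshd_config:ro
    ports:
      - "5022:22"
    command: foo:pass:1001
```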


anderso commented Dec 14, 2022

I suspect I have the same problem and removing

ChrootDirectory %h

from the configuration fixes it (but obviously disables chroot).

What happens is that, during login, the sshd process in the container consumes 100% CPU for about 150 seconds, after which the login actually succeeds. My guess is that the FileZilla client has a 20-second timeout, which means it can never succeed in that case.

I have reproduced this with a fresh install of Fedora 37 in a virtual machine (Gnome Boxes), following the official Docker CE installation instructions and then trying the "Simplest docker run example"

$ docker run -p 22:22 -d atmoz/sftp foo:pass:::upload

followed by

$ sftp foo@localhost

and then entering the password.

Interestingly, the exact same procedure on Debian 11 does not exhibit the problem, with the same Docker version:

$ docker --version
Docker version 20.10.21, build baeda1f

So there seems to be something Fedora-specific that triggers it in this case.

I have also tried atmoz/sftp:alpine and atmoz/sftp:alpine-3.7 with the same result.

@jnovak-netsystemcz

Try limiting the nofile ulimit, e.g.:

--ulimit nofile=16000:16000

I ran into the same issue; this removes the CPU load and the client receives its prompt.

Background: when a process forks, it tries to close all inherited file descriptors. There is a library function that handles this, but it loops from the maximum nofile value down to 1. In the past the maximum nofile value was at most 65535, but recent distributions/kernels use much higher values (try ulimit -H -n), and it takes time for the code to loop over the whole range.


anderso commented Feb 24, 2023

I can confirm that your suggestion works for me. Using this command

$ docker run --ulimit nofile=16000 -p 22:22 -d atmoz/sftp foo:pass:::upload

the 100% cpu delays are gone. On my system:

$ ulimit -H -n
524288

But I can increase nofile up to about 10,000,000 before I start to see a noticeable delay (that value gives a few seconds of delay), so I'm not sure what nofile value Docker uses by default on my system. In any case, while this workaround fixes the problem, it seems like there should be a proper fix. @jnovak-netsystemcz Do you know what the proper fix would be?

@jnovak-netsystemcz

I don't think there is a proper fix; it is just how the OS works. A process has many limits. Your OS has the max-open-files limit set to 524288 and Docker inherits it. I expect it is configured somewhere under /etc/security/... on your system. If you change the limits, you must restart Docker so it inherits the new values.
On the other hand, if you lower it too far (e.g. to 1000), no container will be able to open more than 1000 files in parallel, so you probably don't want to limit it that much.
In my case I just use --ulimit for specific containers when I notice the issue.
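As an alternative to per-container flags, the Docker daemon can apply a default ulimit to every container via the default-ulimits key in /etc/docker/daemon.json (a sketch; 64000 is an arbitrary choice, and dockerd must be restarted for it to take effect):

```json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Hard": 64000,
      "Soft": 64000
    }
  }
}
```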


anderso commented Feb 24, 2023

Actually, 524288 works alright, but the docker daemon process has a nofile limit of 1073741816 (just over a billion); I'm not sure where this is configured, but I found it by looking at /proc/PID/limits. So if application developers expect to be able to loop over this range, we have a problem, and I guess you are right: the limit must be lowered to fix the problem.

Did some googling and didn't find much, but this seems to be a similar issue:

netdata/netdata#14062 (comment)


Lustyn commented Apr 15, 2023

Can confirm that lowering the file descriptor limit fixed this for me with the latest Docker daemon and containerd versions on Arch Linux.

I retained the default subsystem and chroot sftp config:

Subsystem sftp internal-sftp
ForceCommand internal-sftp -d /stash
ChrootDirectory %h

and added ulimits to my docker compose:

services:
  sftp:
    image: atmoz/sftp:latest
    ports:
      - "2222:22"
    ulimits:
      nofile:
        soft: 65536
        hard: 65536


nitwhiz commented Apr 20, 2023

@jnovak-netsystemcz / @Lustyn's solutions work for me, thanks to all of you for your input on this!

Maybe this could be added to the README?
