
[Documentation] Running commands that do not terminate #137

Closed
jiashenC opened this issue Sep 4, 2018 · 10 comments
jiashenC commented Sep 4, 2018

I am using the run_command() function to run bash commands remotely. My current issue is that I want to start a server on a remote device; the server runs forever, which blocks the run_command() call.

Is there a way to run this method as a non-blocking function?

@estevopaz

Hi jiashenC,

My suggestion is to use screen for that:
command = 'screen -d -m ' + command

This creates a detached terminal session that you can re-attach to later if you wish.
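
A minimal sketch of this approach (the host name is illustrative, and screen is assumed to be installed on the remote machine):

from pssh.clients import ParallelSSHClient

client = ParallelSSHClient(['remote-host'])
command = 'python server.py'
# screen -d -m starts a detached session, so the remote shell exits
# immediately while the command keeps running inside screen
output = client.run_command('screen -d -m ' + command)
client.join(output)  # returns quickly as the remote shell has exited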

jiashenC commented Sep 5, 2018

I see. I am using multithreading in Python to work around the issue, but I was wondering whether parallel-ssh provides non-blocking functionality itself.

@estevopaz

The problem is managing the stdin/stdout/stderr pipes. At first I just appended " &" to the command, but execution crashed because the pipes were already closed.
So maybe support could be added to customise these pipes and allow redirecting to files on the server, e.g. stdout="/dev/null".
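
In the meantime, the redirection can be done inside the remote shell command itself. A minimal, untested sketch (host name and command are illustrative):

from pssh.clients import ParallelSSHClient

client = ParallelSSHClient(['remote-host'])
# All three standard streams are redirected by the remote shell itself,
# so no open pipes remain once the shell backgrounds the command and exits
output = client.run_command(
    'nohup python server.py > /dev/null 2>&1 < /dev/null &')
client.join(output)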

@pkittenis (Member)

Hi there,

Thanks for the interest and report. Can you clarify what you mean by blocks? Does run_command not return?

From what you are describing, there is nothing the library can do unless run_command is really blocking - leaving the issue open to confirm this. If the command never exits, join will block indefinitely or until its timeout - see the docs for more info.

There are two options:

  • Keep the client alive for a certain amount of time to read partial output, as described in the documentation - see the sketch at the end of this comment
  • Make the command a daemon so the shell can exit and the command remains alive

The latter can be done by running nohup <my command>, using screen, by daemonising it via OS services and so on. This is all dependent on the system, shell and command being run, and is out of scope for SSH client libraries. It can also be done manually - see double fork.

There is no capability in the SSH protocol to manage shell redirection, it all has to be done within the shell. As a rule of thumb, if something happens when using the ssh binary, it will also happen with any other ssh client.
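
A minimal sketch of the first option, assuming a parallel-ssh version whose run_command accepts a per-read timeout and raises pssh.exceptions.Timeout (host name is illustrative):

from pssh.clients import ParallelSSHClient
from pssh.exceptions import Timeout

client = ParallelSSHClient(['remote-host'])
# timeout applies to reads from the remote channel, not to the command
output = client.run_command('python -u server.py', timeout=5)
for host, host_out in output.items():
    try:
        for line in host_out.stdout:
            print(line)
    except Timeout:
        # No data within 5 seconds - keep the partial output and move on
        pass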

@pkittenis pkittenis changed the title how to run_command with non-blocking? Running commands that do not terminate Sep 5, 2018

jiashenC commented Sep 6, 2018

Hi,

I want to use parallel-ssh to start a server on a remote device from my local machine. The server will run forever and constantly print some stats. From what I observe, if I try to run some commands with client.run_command('...'), the client returns immediately without even running the commands, or it simply kills the program right after it starts.

When I switch to output = client.run_command('...'), the client waits for the remote server to stop, which blocks the Python program on my local machine.

For non-blocking behaviour, I wonder whether it is possible to send the command to the remote device but keep the client alive in the background, so that the local program can execute other code. I think your second option also satisfies my need.

@pkittenis (Member)

Can you show code that reproduces this behaviour? The call to run_command should not block indefinitely - yes, you do need to keep a reference to the output. The library is already non-blocking and works as expected with other commands that do not terminate, so code that reproduces the behaviour you are observing is needed for any further investigation.

Making the command a daemon is out of scope for an SSH library. Try the command with nohup or similar as above.


jiashenC commented Sep 6, 2018

Sure, I can post a simpler version of my code. server.py and run_system.sh sit on the remote device. ssh.py is the file I run on my local machine.

server.py

from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
from threading import Thread

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type','text/html')
        self.end_headers()
        self.wfile.write("Hello World !")
        return


def main():
    # allow_reuse_address must be set before the server binds its socket
    HTTPServer.allow_reuse_address = True
    server = HTTPServer(('0.0.0.0', 12345), Handler)
    Thread(target=server.serve_forever, args=()).start()

if __name__ == '__main__':
    main()

run_system.sh

python server.py

ssh.py

from pssh.pssh_client import ParallelSSHClient
from pssh.utils import load_private_key

pkey = load_private_key('...')
# hosts are passed as a list
server_hosts = ParallelSSHClient(['111.111.111.111'], user='test', pkey=pkey)
output = server_hosts.run_command('bash run_system.sh')

@pkittenis (Member)

Thanks @jiashenC, will try to replicate with the above and get back to you.

@pkittenis (Member)

Hi @jiashenC,

I do not see an issue with the library; the remote command is not a daemon and will therefore terminate when the client exits. Here is a test case with a simple run-forever remote process:

$ cat server.py

from __future__ import print_function

import time
from threading import Thread


def run_forever():
    try:
        while True:
            print("Running..")
            time.sleep(5)
    except Exception:
        return


if __name__ == "__main__":
    thread = Thread(target=run_forever)
    thread.start()

Client code:

from __future__ import print_function

from pssh.clients import ParallelSSHClient

client = ParallelSSHClient(['localhost'])

output = client.run_command('python -u server.py')

for host, host_out in output.items():
    for line in host_out.stdout:
        print(line)

Output:

Running..
Running..
Running..

Reading from output will run forever until interrupted - this is expected. When the client exits, the remote command will end - this is also expected. For it to keep running after the client exits, the command needs to be a daemon, e.g. with nohup:

output = client.run_command('nohup python -u server.py', use_pty=True)

for host, host_out in output.items():
    for line in host_out.stdout:
        print(line)

Output:

nohup: ignoring input and appending output to 'nohup.out'

Reading output returns immediately with the output of nohup, not the server. client.join also returns immediately.

Then, when logging in a second time to the remote host, there will be a server.py process still running, with its output redirected into nohup.out.
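
For example, a second connection can confirm the process survived; a hedged sketch (the host name is illustrative):

client = ParallelSSHClient(['remote-host'])
output = client.run_command('pgrep -f server.py && tail -n 3 nohup.out')
for host, host_out in output.items():
    for line in host_out.stdout:
        print(line)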

The library does not know whether a command is a daemon or not; it only interacts with a remote shell via SSH. It is up to the user to make those commands into daemons appropriately if the intention is for the client to terminate while the command keeps running. This is the case whether the command is run via the library or any other SSH client.

For things like nohup, this requires use_pty=True, as nohup detaches the controlling terminal from the command and redirects file descriptors into nohup.out. If there is no controlling terminal, it cannot do that.

Commands that are already daemons may not require use_pty=True, but it depends on the command. Generally, shell script wrappers will require it while actual daemon processes do not - e.g. running memcached directly, as opposed to a shell script that runs memcached.
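
As an illustrative sketch of that distinction, continuing the client from the example above (the commands here are assumptions, not from the original report):

# A process that daemonises itself generally needs no pty
output = client.run_command('memcached -d')

# A shell script wrapper around a foreground process generally does,
# e.g. combined with nohup as shown earlier
output = client.run_command('nohup bash run_system.sh', use_pty=True)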

Will leave this open to add an example to the documentation, as this has come up a couple of times now.

@pkittenis pkittenis changed the title Running commands that do not terminate [Documentation] Running commands that do not terminate Sep 25, 2018
@jiashenC (Author)

@pkittenis Thanks for your detailed explanation! That's very helpful!
