fix: Before killing a process under high concurrency, it is necessary … #2261

Open · wants to merge 5 commits into base: main
18 changes: 17 additions & 1 deletion paramiko/proxy.py
@@ -23,6 +23,7 @@
 from select import select
 import socket
 import time
+import logging

 # Try-and-ignore import so platforms w/o subprocess (eg Google App Engine) can
 # still import paramiko.
@@ -119,7 +120,22 @@ def recv(self, size):
             raise ProxyCommandFailure(" ".join(self.cmd), e.strerror)

     def close(self):
-        os.kill(self.process.pid, signal.SIGTERM)
+        # Before killing the process, check whether it still exists;
+        # otherwise an error can be raised during asynchronous calls.
+        if self.process.poll() is None:
+            try:
+                # close resources first
+                self.process.stdin.close()
+                self.process.stdout.close()
+                self.process.stderr.close()
Contributor
Are there other errors that could be raised during these stream closure actions? If anything other than a ProcessLookupError occurs, then the except won't catch correctly.

If I read the docs right, close() is idempotent and won't raise on an already-closed stream... so, there may not actually be any additional errors to worry about?

If so, augmenting your comment here to note that close() won't raise even if the streams are already closed would be good.
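The idempotency question above is easy to check empirically. A minimal sketch, using `io.BytesIO` as a stand-in for the `Popen` pipe objects (both are Python file objects, and `close()` on a file object is documented as a no-op when called more than once):

```python
import io

# Stand-in for a subprocess pipe: calling close() twice on a Python
# file object is a documented no-op and raises nothing.
buf = io.BytesIO(b"data")
buf.close()
buf.close()  # second close() does not raise

double_close_ok = buf.closed  # stream stays closed, no error raised
```

If the same holds for the buffered pipe objects on a `Popen`, the only error this `close()` needs to guard against is the `ProcessLookupError` from `os.kill` itself.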

Author
close() is redundant, so I have deleted it.
But it is still necessary to check whether the process is alive before killing it.

When I use the distributed task queue Celery, this process has usually already been killed by the system, so os.kill can raise a ProcessLookupError.

In short, it is necessary to check whether a process is still alive before killing it.
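The failure mode described here is easy to reproduce on a POSIX system: once the child has exited and been reaped, `os.kill` raises `ProcessLookupError`. A minimal sketch (the child command is an arbitrary no-op, not anything from this PR; barring PID reuse, the kill reliably fails):

```python
import os
import signal
import subprocess
import sys

# Spawn a child that exits immediately, then reap it with wait().
proc = subprocess.Popen([sys.executable, "-c", ""])
proc.wait()

# Killing the reaped PID now fails, exactly as described above.
already_gone = False
try:
    os.kill(proc.pid, signal.SIGTERM)
except ProcessLookupError:
    already_gone = True
```

Under Celery, an external supervisor plays the role of `proc.wait()` here: by the time paramiko's `close()` runs, the PID may no longer exist.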

Contributor
I'm sorry, I think I expressed myself poorly -- I wasn't requesting that you change your implementation, only that you add some comments &c. to document it more thoroughly.

Although, if that `if self.closed:` check will do just as well as the full set of stream `.close()` calls and the try/except, that's certainly much cleaner.

Author
There is no need to close the IO streams, since the process is killed directly.


+                os.kill(self.process.pid, signal.SIGTERM)
+            except ProcessLookupError:
+                # This can be triggered under high concurrency: the
+                # process has already been killed by other means, which
+                # usually has no bad impact.
+                logging.info("[ProxyCommand] The process has already been killed by other means.")

@property
def closed(self):
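Putting the pieces together, the poll-then-kill approach this PR settles on can be sketched as a stand-alone helper. This is a sketch under POSIX assumptions only; `close_process` is a hypothetical name for illustration, not the actual `ProxyCommand.close()` method:

```python
import os
import signal
import subprocess
import sys

def close_process(process: subprocess.Popen) -> None:
    # poll() returns None while the child is still running.
    if process.poll() is None:
        try:
            os.kill(process.pid, signal.SIGTERM)
        except ProcessLookupError:
            # Lost the race: the process died between poll() and kill().
            # Usually harmless, so just ignore it.
            pass

# Works whether the child is alive...
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
close_process(p)
p.wait()

# ...or already gone: calling it again is a no-op.
close_process(p)
```

The try/except is still needed even with the `poll()` check, because under high concurrency the process can exit between the two calls; `poll()` alone only narrows the window, it does not close it.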