
How to cancel a SPDYExecutor stream? #554

Closed
rberrelleza opened this issue Jan 31, 2019 · 18 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@rberrelleza

rberrelleza commented Jan 31, 2019

I'm executing a long-running command on a pod with the following code:

exec, err := remotecommand.NewSPDYExecutor(config, method, url)
if err != nil {
	return err
}

return exec.Stream(remotecommand.StreamOptions{
	Stdin:             stdin,
	Stdout:            stdout,
	Stderr:            stderr,
	Tty:               tty,
	TerminalSizeQueue: terminalSizeQueue,
})

In this implementation, exec.Stream won't return until the command finishes.

How can I cancel the call to Executor.Stream? (e.g. to react to a cancellation context, or to initiate a shutdown sequence). I searched in the docs and the code base but couldn't figure out a way to pass a context, a cancel function or something similar.

@rberrelleza changed the title from "How to cancel a NewSPDYExecutor stream?" to "How to cancel a SPDYExecutor stream?" on Jan 31, 2019
@djgilcrease

djgilcrease commented Apr 29, 2019

I have found that exec.Stream returns as soon as stdin is closed, though this was inconsistent for me. I ended up switching to GetLogs and updated my containers, which used to expect stdin, to read from an env var that I pass into the pod when starting it.

https://gitlab.com/f5-pwe/kog/blob/master/executor.k8s.go#L197
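
(For readers landing here: a rough sketch of that GetLogs alternative, assuming a typed clientset and a client-go version where rest.Request.Stream accepts a context; followLogs and the variable names are made up for illustration. Unlike exec.Stream, cancelling the context here does close the log stream.)

package podexec

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// followLogs streams a pod's logs to stdout until ctx is cancelled or the
// stream ends on its own.
func followLogs(ctx context.Context, cs kubernetes.Interface, namespace, pod string) error {
	req := cs.CoreV1().Pods(namespace).GetLogs(pod, &corev1.PodLogOptions{Follow: true})
	rc, err := req.Stream(ctx) // cancelling ctx closes the stream
	if err != nil {
		return err
	}
	defer rc.Close()
	_, err = io.Copy(os.Stdout, rc)
	return err
}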

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jul 28, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Aug 27, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@phanhuyn

@rberrelleza did you find a solution to this?
I've tried closing the stdin/stdout/stderr streams, but Executor.Stream() gets stuck here because of an empty error channel.

@rberrelleza
Author

@rberrelleza did you find a solution to this?
I've tried closing the stdin/stdout/stderr streams, but Executor.Stream() gets stuck here because of an empty error channel.

No, the proposed solution didn't work for me. I ended up launching the connection in a goroutine so it wouldn't block the rest, and just timing it out when needed.
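
(A minimal sketch of that workaround, assuming exec is the remotecommand.Executor from the snippet in the issue description; streamWithTimeout is a made-up name. Note the stream goroutine is abandoned rather than cancelled, so this only unblocks the caller.)

package podexec

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/tools/remotecommand"
)

// streamWithTimeout runs exec.Stream in a goroutine so the caller can stop
// waiting when ctx is cancelled or the timeout expires. The remote command
// itself keeps running; only the wait is abandoned.
func streamWithTimeout(ctx context.Context, exec remotecommand.Executor, opts remotecommand.StreamOptions, timeout time.Duration) error {
	errCh := make(chan error, 1)
	go func() {
		errCh <- exec.Stream(opts)
	}()

	select {
	case err := <-errCh:
		return err
	case <-ctx.Done():
		return ctx.Err()
	case <-time.After(timeout):
		return fmt.Errorf("exec stream timed out after %s", timeout)
	}
}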

@wangjia184

wangjia184 commented Sep 28, 2021

You can set TTY to true and send CTRL-C (0x03) to stdin.

@hainesc

hainesc commented Apr 2, 2022

I send CTRL-D (0x04) to close it.
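
(A rough sketch of what the last two comments describe, assuming Tty: true and that you control the io.Reader passed as StreamOptions.Stdin; ctrlReader and its fields are made-up names. Caveat: a Read that is already blocked on the inner reader won't notice the close until it returns.)

package podexec

import "io"

// ctrlReader wraps the real stdin. Once closeCh is closed it hands the remote
// shell a single control byte (0x03 for Ctrl-C, 0x04 for Ctrl-D) so the shell
// exits, then reports EOF.
type ctrlReader struct {
	in      io.Reader
	closeCh chan struct{}
	ctrl    byte
	sent    bool
}

func (r *ctrlReader) Read(p []byte) (int, error) {
	select {
	case <-r.closeCh:
		if r.sent {
			return 0, io.EOF
		}
		r.sent = true
		p[0] = r.ctrl
		return 1, nil
	default:
		return r.in.Read(p)
	}
}

You would pass something like &ctrlReader{in: os.Stdin, closeCh: done, ctrl: 0x04} as Stdin and close(done) when you want the session to end.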

@zhamdoctor

0x04

Do you write the message with ctrl-d or 0x04?

@liukelin

liukelin commented Mar 3, 2023

How do you do this with ctrl-d or 0x04?

@zhamdoctor

zhamdoctor commented Mar 4, 2023

How do you do this with ctrl-d or 0x04?

Send an exit command from the frontend, but that can't handle situations like an unexpected VPN disconnection. You can record the pid locally and send a kill command the next time the user logs in to the same pod.

@liukelin

liukelin commented Mar 4, 2023

A question: I'm using a newer version of client-go, which provides a StreamWithContext function. I watch for the frontend websocket closing and cancel the context at the same time. That does close the Stream, but I found the sh process inside the pod still exists. I don't know how to send ctrl-d or 0x04; is there an example?
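
(For reference, a minimal sketch of that StreamWithContext usage, assuming client-go v0.26 or newer where remotecommand.Executor has StreamWithContext; runExec is a made-up name. As noted, cancelling the context only tears down the client side of the stream, it does not by itself kill the sh process in the pod.)

package podexec

import (
	"context"

	"k8s.io/client-go/tools/remotecommand"
)

// runExec blocks until the command finishes or ctx is cancelled, e.g. by the
// handler that notices the frontend websocket closing.
func runExec(ctx context.Context, exec remotecommand.Executor, opts remotecommand.StreamOptions) error {
	ctx, cancel := context.WithCancel(ctx)
	defer cancel()
	return exec.StreamWithContext(ctx, opts)
}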

@CirillaQL

CirillaQL commented Mar 5, 2023

A question: I'm using a newer version of client-go, which provides a StreamWithContext function. I watch for the frontend websocket closing and cancel the context at the same time. That does close the Stream, but I found the sh process inside the pod still exists. I don't know how to send ctrl-d or 0x04; is there an example?

See this PR: kubesphere/kubesphere#5024. It writes 0x04 into the stream on close, but the problem mentioned above remains: if the network fails, the signal cannot be delivered to the container; and if sh inside the container has launched bash, the 0x04 only closes the outermost shell, so an sh process can still leak.

@liukelin

liukelin commented Mar 6, 2023

Is there another good implementation? The sh process does still leak; I'm not sure whether it's because the 0x04 wasn't sent successfully.

@CirillaQL

Is there another good implementation? The sh process does still leak; I'm not sure whether it's because the 0x04 wasn't sent successfully.

If there is only one layer of shell process and the network is healthy, 0x04 works well; I've already tested it in our environment.

@liukelin

liukelin commented Mar 6, 2023

Do you write it directly inside this function?
Read(p []byte) (size int, err error)

@CirillaQL

kubesphere/kubesphere#5024

Yes, see the changes to the terminal.go file in the PR; at the point where you need to exit, copy(p, '\u0004').
kubesphere/kubesphere#5024
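
(Not the PR's exact code, just a rough sketch of the pattern with made-up names: the stdin side of the exec stream feeds the shell from a channel and, on teardown, hands it one 0x04 and then reports EOF.)

package podexec

import "io"

const endOfTransmission = "\u0004" // 0x04, Ctrl-D

// terminalSession is the stdin side of the exec stream: input carries bytes
// from the frontend websocket, done is closed when the session is torn down.
type terminalSession struct {
	input   chan string
	done    chan struct{}
	eotSent bool
}

func (t *terminalSession) Read(p []byte) (int, error) {
	if t.eotSent {
		return 0, io.EOF
	}
	select {
	case msg := <-t.input:
		return copy(p, msg), nil
	case <-t.done:
		t.eotSent = true
		return copy(p, endOfTransmission), nil
	}
}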
