PoC DO NOT MERGE: Implementation of swarmctl service exec
#2359
Conversation
swarmctl service exec
I think this is a cool proof of concept. I'm impressed at how little code it takes, though I realize this isn't streaming yet and doesn't handle shared attachments.
We've talked about a few different ways of doing attach and exec. One approach, which is probably the simplest from a design perspective, is to make something similar to "logbroker" that multiplexes the streams over gRPC. It's relatively straightforward because the design and implementation would end up very similar to the logs functionality, and there's a lot of code for logging that could be used as an example or possibly reused directly. That's sort of the direction I see this PR going right now. It has a lot of things missing since it's at the proof-of-concept stage, but if all those things were added, I think it would (or should) end up looking quite similar to logbroker.
The other thing we've discussed is building this functionality on top of SSH, with the manager setting up SSH sessions between the worker and the client, but not actually sitting in between and seeing all the traffic. This has some significant performance benefits, partly because the traffic doesn't have to pass through a manager, and also because SSH is probably much better suited for interactive flows than gRPC. I believe @stevvooe did some experiments with gRPC and encountered some performance problems in the past.
If you want to keep prototyping this, a possible direction would be to implement streaming within your current approach (what I'm calling the logbroker method) and see whether the performance turns out to be acceptable. That might help inform the high-level design for this feature (i.e. SSH vs. gRPC).
ping @aluzzardi @stevvooe
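To make the "logbroker method" concrete, here is a minimal sketch (not the PR's actual code; `ExecMessage`, `MsgStream`, and `proxy` are hypothetical names) of the manager terminating both gRPC bidirectional streams and shuttling messages between the client and the worker:

```go
package main

import (
	"fmt"
	"sync"
)

// ExecMessage is a hypothetical stand-in for the TaskExecStream proto message.
type ExecMessage struct {
	ContainerID string
	Data        []byte
}

// MsgStream abstracts the Send/Recv half of a gRPC bidirectional stream.
type MsgStream interface {
	Send(*ExecMessage) error
	Recv() (*ExecMessage, error)
}

// proxy copies messages in both directions until either side fails.
// This is the shape of the "logbroker method": the manager terminates
// both streams and all traffic passes through it.
func proxy(client, worker MsgStream) {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() { // client -> worker (stdin)
		defer wg.Done()
		for {
			m, err := client.Recv()
			if err != nil {
				return
			}
			if worker.Send(m) != nil {
				return
			}
		}
	}()
	go func() { // worker -> client (stdout/stderr)
		defer wg.Done()
		for {
			m, err := worker.Recv()
			if err != nil {
				return
			}
			if client.Send(m) != nil {
				return
			}
		}
	}()
	wg.Wait()
}

// chanStream is an in-memory MsgStream used here only for demonstration.
type chanStream struct {
	in, out chan *ExecMessage
}

func (s *chanStream) Send(m *ExecMessage) error { s.out <- m; return nil }
func (s *chanStream) Recv() (*ExecMessage, error) {
	m, ok := <-s.in
	if !ok {
		return nil, fmt.Errorf("stream closed")
	}
	return m, nil
}

func main() {
	client := &chanStream{in: make(chan *ExecMessage, 1), out: make(chan *ExecMessage, 1)}
	worker := &chanStream{in: make(chan *ExecMessage, 1), out: make(chan *ExecMessage, 1)}
	client.in <- &ExecMessage{ContainerID: "c1", Data: []byte("ls\n")}
	close(client.in)
	worker.in <- &ExecMessage{ContainerID: "c1", Data: []byte("bin etc\n")}
	close(worker.in)
	proxy(client, worker)
	fmt.Printf("%s", (<-worker.out).Data)
	fmt.Printf("%s", (<-client.out).Data)
}
```

The SSH alternative discussed above would avoid exactly this: the manager would only broker the session setup, and the bytes themselves would flow directly between client and worker.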
@@ -29,6 +41,9 @@ service Control {
	rpc ListTasks(ListTasksRequest) returns (ListTasksResponse) {
		option (docker.protobuf.plugin.tls_authorization) = { roles: "swarm-manager" };
	};
	rpc Attach(stream TaskExecStream) returns (stream TaskExecStream) {
		option (docker.protobuf.plugin.tls_authorization) = { roles: "swarm-manager" };
	}
If this PR moves forward, this should be part of a separate service instead of being added to controlapi. You can look at logs as an example. Similarly, `TaskExecStream` and `TaskExec` above should move out of this file.
Thanks for pointing me to how the logbroker is implemented. Probably the whole "stdout" part can be taken from there.
agent/session.go
Outdated
AttachStdout: true,
AttachStderr: true,
AttachStdin:  true,
Cmd:          strings.Split(string(data.Message), " "),
I would look into making this an explicit protobuf field in the exec flow instead of reusing `Message`.
@@ -29,6 +41,9 @@ service Control {
	rpc ListTasks(ListTasksRequest) returns (ListTasksResponse) {
		option (docker.protobuf.plugin.tls_authorization) = { roles: "swarm-manager" };
	};
	rpc Attach(stream TaskExecStream) returns (stream TaskExecStream) {
		option (docker.protobuf.plugin.tls_authorization) = { roles: "swarm-manager" };
	}
Also, since this is executing a command in a task, not attaching to the existing task, you should call this `Exec` and formally pass the command and arguments to be exec'd.
manager/controlapi/server.go
Outdated
func (s *ServerTaskExecChannels) In(containerid string) chan []byte {
	s.registerExecChannels(containerid)
	return s.in[containerid]
This works for a proof of concept, but I think a full implementation would want to handle multiple attachments to the same stream, or at the very least keep track of which ones already have someone attached, and not return the same channel again. logbroker is a good example of how to do this.
Also, this will need locking or some other concurrency strategy.
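A rough sketch of what that could look like (hypothetical names, not the PR's code): a mutex-protected registry that tracks which containers already have an attached session and refuses to hand out the same channel twice.

```go
package main

import (
	"fmt"
	"sync"
)

// execChannels is a hypothetical sketch of ServerTaskExecChannels with
// locking and single-attachment tracking added.
type execChannels struct {
	mu       sync.Mutex
	in       map[string]chan []byte
	attached map[string]bool
}

func newExecChannels() *execChannels {
	return &execChannels{
		in:       make(map[string]chan []byte),
		attached: make(map[string]bool),
	}
}

// In returns the stdin channel for a container, refusing a second
// attachment instead of returning the same channel again.
func (e *execChannels) In(containerID string) (chan []byte, error) {
	e.mu.Lock()
	defer e.mu.Unlock()
	if e.attached[containerID] {
		return nil, fmt.Errorf("container %s already has an attached session", containerID)
	}
	ch, ok := e.in[containerID]
	if !ok {
		ch = make(chan []byte, 16)
		e.in[containerID] = ch
	}
	e.attached[containerID] = true
	return ch, nil
}

// Release detaches a session so the container can be attached to again.
func (e *execChannels) Release(containerID string) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.attached[containerID] = false
}

func main() {
	chans := newExecChannels()
	if _, err := chans.In("abc123"); err != nil {
		fmt.Println(err)
	}
	if _, err := chans.In("abc123"); err != nil {
		fmt.Println(err) // second attach is rejected
	}
}
```

Supporting multiple simultaneous attachments (fan-out of output, merged input) would be more involved; logbroker's subscription handling is the closest in-tree example.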
I see that logbroker uses some kind of filtering; if anyone can help with resources or anything else to understand how it's implemented, that would be useful.
I haven't (yet) dealt with closing contexts and locking resources.
I'm still in the phase where I need to understand step-by-step how things are 🔨
Also, I think there are multiple places where goroutines are just blocking in the wrong way.

@aaronlehmann Today I managed to get some kind of working terminal by sending the stream byte by byte. The current to-dos are:
cmd/swarmctl/service/exec.go
Outdated
attachCl.Send(&api.TaskExecStream{
	Containerid: containerId,
-	Message:     s.Bytes(),
+	Message:     []byte{curLine},
I know this isn't super neat, but I'm experimenting with handling the stream byte by byte.
swarmctl service exec
I want to address moby#1895 - please be aware that this is just an experiment to get my hands dirty with the codebase and to understand how things work.

At the current stage this is basically a gRPC service on the managers that connects the client to the container executing the task of a specific service. It does not support having a TTY yet.

Feedback and what I need to go on:
- I need some advice in designing this and, if possible, code review.
- Since this is just a PoC I'm not writing tests yet; this is just something I need in order to understand a codebase I'm not yet familiar enough with to do TDD.
- There are probably contexts that are not closed; I already see some of them.

Try it: swarmctl service exec <service-id> (yes, I know that one expects to have `sh` at the end, or something like that; it's too early).

Signed-off-by: Lorenzo Fontana <lo@linux.com>
Force-pushed from 77ceb6c to af3ede7
So why did you open the PR if you're still working on it?