This repository was archived by the owner on May 6, 2020. It is now read-only.

Conversation

@dlespiau
Contributor

I don't really want to have to queue stdout/err data in the proxy.

To solve that, let's introduce a new constraint. The data path:

  shim <-> proxy <-> hyperstart

has to be fully set up before we allow newcontainer/execcmd to execute a
new process. While we advise starting the shim as early as possible and
issuing the newcontainer/execcmd command after that, we still need to
synchronize somewhere. The easiest way to do that is to stall the
newcontainer/execcmd commands until we see the corresponding shim
register itself.

Fixes: https://github.com/clearcontainers/proxy/issues/21
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
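
The mechanism is easiest to see in code. Below is a minimal, self-contained sketch of the constraint described above; the ioSession, shimConnected and waitForShimTimeout names follow the diff excerpts quoted later in this conversation, while the timeout value, the registerShim helper and the main function are illustrative assumptions, not the merged implementation.

package main

import (
	"errors"
	"time"
)

// Assumed value for illustration; the real constant lives in the proxy.
const waitForShimTimeout = 5 * time.Second

type ioSession struct {
	// shimConnected is closed once the shim has registered itself
	// with the proxy for this session.
	shimConnected chan struct{}
}

// WaitForShim stalls newcontainer/execcmd handling until the
// corresponding shim registers, so the proxy never has to queue
// stdout/err data for a process with no shim attached.
func (session *ioSession) WaitForShim() error {
	select {
	case <-session.shimConnected:
		return nil
	case <-time.After(waitForShimTimeout):
		return errors.New("timed out waiting for shim to register")
	}
}

// registerShim is a hypothetical counterpart run when a shim connects:
// closing the channel releases any handler blocked in WaitForShim.
func (session *ioSession) registerShim() {
	close(session.shimConnected)
}

func main() {
	session := &ioSession{shimConnected: make(chan struct{})}
	go session.registerShim() // simulate the shim connecting
	if err := session.WaitForShim(); err != nil {
		panic(err)
	}
}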

@coveralls

coveralls commented Apr 10, 2017

Coverage Status

Coverage increased (+0.7%) to 70.567% when pulling ab74862 on dlespiau:20170410-exec-should-wait-for-shim into 653860d on clearcontainers:master.

func (session *ioSession) WaitForShim() error {
	select {
	case <-session.shimConnected:
	case <-time.After(waitForShimTimeout):

This seems like a reasonable timeout, but since it is a relatively long time, I think it could be useful to display a message every second or so that shows the status. Something like:

Waiting for shim to register itself with token %s (timeout in %d seconds)

That way, it'll be clear from the log what the proxy is doing and that it hasn't just "hung".
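
One way the suggestion could look in code (a sketch building on the WaitForShim excerpt above, not the code that was merged; session.token is an assumed field, and the message goes to stdout for simplicity rather than through the proxy's logger):

// Variant of WaitForShim that logs a status line every second while
// waiting, per the review comment above. Assumes the ioSession type
// from the excerpt plus a token string field; fmt, errors and time
// must be imported.
func (session *ioSession) WaitForShim() error {
	deadline := time.After(waitForShimTimeout)
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()

	remaining := waitForShimTimeout
	for {
		select {
		case <-session.shimConnected:
			return nil
		case <-ticker.C:
			remaining -= time.Second
			fmt.Printf("Waiting for shim to register itself with token %s (timeout in %d seconds)\n",
				session.token, int(remaining/time.Second))
		case <-deadline:
			return errors.New("timed out waiting for shim to register")
		}
	}
}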

proxy_test.go Outdated
var wg sync.WaitGroup
wg.Add(1)
go func() {
	time.Sleep(20 * time.Millisecond)

Second occurrence of 20 * time.Millisecond - might be worth a variable for this?
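
For reference, the kind of named constant being suggested might look like this (the name is illustrative, not necessarily what ended up in the tree):

// shimStartupDelay is the artificial delay the test's fake shim waits
// before registering itself with the proxy; the name is hypothetical.
const shimStartupDelay = 20 * time.Millisecond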

Damien Lespiau added 2 commits April 11, 2017 18:03
The two relocation handlers had some common code we could have factored
out. We're going to add more common code, so we may as well start with that.

Note that this won't work if we ever need to support the createPod
command with multiple containers, but that's not planned currently.

Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
I don't really want to have to queue stdout/err data in the proxy.

To solve that, let's introduce a new constraint. The data path:

  shim <-> proxy <-> hyperstart

has to be fully set up before we allow newcontainer/execcmd to execute a
new process. While we advise starting the shim as early as possible and
issuing the newcontainer/execcmd command after that, we still need to
synchronize somewhere. The easiest way to do that is to stall the
newcontainer/execcmd commands until we see the corresponding shim
register itself.

Fixes: clearcontainers#21
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
@dlespiau force-pushed the 20170410-exec-should-wait-for-shim branch from ab74862 to 20b364a on April 11, 2017 at 17:39
@dlespiau
Contributor Author

dlespiau commented Apr 11, 2017

PR updated:

  • Added a log message indicating we're waiting for the shim (didn't implement the "every 1s" part though, that's probably overdoing it)
  • Added a const for the 20ms. Actually not too convinced about that one, it's really two different constants that happen to have the same value, but ¯\_(ツ)_/¯ (sorry, I really wanted to use that emote!).

@coveralls
coveralls commented Apr 11, 2017

Coverage Status

Coverage increased (+0.4%) to 70.224% when pulling 20b364a on dlespiau:20170410-exec-should-wait-for-shim into d3321c9 on clearcontainers:master.


@jodh-intel

lgtm

@jodh-intel merged commit 8831778 into clearcontainers:master on Apr 12, 2017