consul lock executes new handler during shutdown #800

Closed
kazeburo opened this issue Mar 19, 2015 · 5 comments
Labels
type/bug Feature does not function as expected

Comments

@kazeburo

I found that consul lock executes a new handler after receiving SIGINT. Here is the log from consul lock:

[2015-03-19T11:03:39] Shutdown triggered, killing child
[2015-03-19T11:03:39] No child process to kill
[2015-03-19T11:03:39] Starting handler '/usr/local/bin/foo args'
[2015-03-19T11:03:39] Cleanup succeeded

The handler that was started during the shutdown phase keeps running after consul lock has exited.

@armon armon added the type/bug Feature does not function as expected label Mar 19, 2015
@armon
Member

armon commented Mar 19, 2015

Yikes! Tagged as bug!

@babbottscott

The issue appears to be https://github.com/hashicorp/consul/blob/master/command/lock.go#L141-L144

If c.ShutdownCh signals, lu.lockFn (either Lock or Acquire) will return nil, nil (https://github.com/hashicorp/consul/blob/master/api/lock.go#L156). This should instead be something like:

if lockCh == nil {
    if err != nil {
        c.Ui.Error(fmt.Sprintf("Lock acquisition failed: %s", err))
    } else {
        c.Ui.Info("Shutdown triggered before lock acquisition")
    }
    return 1
}
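
To make the distinction concrete, here is a minimal, self-contained Go sketch (my own illustration of the pattern described above, not the actual Consul code): an acquire function that returns a nil channel with a nil error when the shutdown channel fires before the lock is held, and a caller that treats that combination as "shutdown triggered before lock acquisition" instead of falling through and starting the handler.

package main

import (
    "fmt"
    "os"
    "time"
)

// acquire mimics the behaviour described above: it returns a nil channel
// and a nil error if shutdownCh closes before the lock is acquired.
func acquire(shutdownCh <-chan struct{}) (<-chan struct{}, error) {
    select {
    case <-shutdownCh:
        return nil, nil // shutdown triggered before acquisition
    case <-time.After(50 * time.Millisecond):
        // Pretend the lock was acquired; this channel would close if the lock were lost.
        return make(chan struct{}), nil
    }
}

func main() {
    shutdownCh := make(chan struct{})
    close(shutdownCh) // simulate SIGINT arriving before the lock is held

    lockCh, err := acquire(shutdownCh)
    if lockCh == nil {
        if err != nil {
            fmt.Fprintf(os.Stderr, "Lock acquisition failed: %s\n", err)
        } else {
            fmt.Println("Shutdown triggered before lock acquisition")
        }
        os.Exit(1)
    }
    fmt.Println("Lock held, safe to start the handler")
}

With the nil-channel check in place, the handler is never started once shutdown has been requested, which is the behaviour the original report expects.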

@babbottscott

Also, returning an exit code of 1 for all error states may be something that could use enhancement.
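
A purely illustrative sketch (these names and values are my own suggestion, not anything consul lock defines today): distinct exit codes would let a wrapping script tell an acquisition failure from a handler failure.

package lockexit

// Hypothetical exit codes for a lock wrapper; consul lock currently
// returns 1 for every error state.
const (
    ExitSuccess     = 0 // lock held and handler exited cleanly
    ExitLockFailure = 2 // lock could not be acquired or was lost
    ExitChildError  = 3 // the child handler exited non-zero
)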

@babbottscott

Bump: really needing this one squashed.

@armon armon closed this as completed in 981c62c Jul 22, 2015
@mfischer-zd
Contributor

Unfortunately, #1080 didn't fix this bug. However, it did improve the messaging, so it's still useful.

#1158 should fix it completely.

duckhan pushed a commit to duckhan/consul that referenced this issue Oct 24, 2021
… 15min (hashicorp#800)

On Azure, volumes can sometimes take quite a long time to provision.
Since we were previously waiting only 5 minutes, that sometimes wasn't
enough time for pods to come up and become healthy.