Make NetworkRelation resilient when related to multiple services #16
Conversation

chuckbutler commented on May 14, 2015

I'd like a review on this w/ a possible change in direction on how we can better sniff whether we're attached to another host. This might mean implementing a new host-type juju-info relation, but this was the simplest path to completion for the PoC work.

DO NOT MERGE

This will have a side effect: if we relate to a docker host and we encounter a transient failure, we have to dig deep in the logs to find out why. And I don't like silently failing on a core component of the relationship model that we established to have docker manage its own config.
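To make the "sniffing" idea concrete, here is a minimal sketch of what the receiving hook could look like. This is an assumption-laden illustration, not the charm's actual code: `host-type` is a hypothetical relation key, while `relation-get` and `juju-log` are the standard Juju hook tools.

```python
#!/usr/bin/env python
# Hypothetical sketch: bail out quietly when the remote unit is not a
# docker host, instead of erroring the whole relation.
# Assumes a Juju relation hook environment (hook tools on PATH).
import subprocess
import sys


def relation_get(key):
    """Read a key from the remote unit's relation data ('' if unset)."""
    try:
        return subprocess.check_output(['relation-get', key]).decode().strip()
    except subprocess.CalledProcessError:
        return ''


if relation_get('host-type') != 'docker':
    subprocess.call(['juju-log',
                     'remote unit is not a docker host; '
                     'ignoring network relation data'])
    sys.exit(0)

# ... otherwise, proceed to let docker manage its own network config ...
```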
chuckbutler added the enhancement, help wanted, and question labels on May 14, 2015
chuckbutler self-assigned this on May 14, 2015
It was suggested to implement a variable sent from the kubes_master unit to denote that we can safely ignore the error. This seems like a reasonable path forward.
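A minimal sketch of that suggestion, assuming a hook running on the kubes_master unit; `ignore-errors` is a hypothetical key name, while `relation-ids` and `relation-set` are the standard Juju hook tools:

```python
import subprocess

# Advertise the flag on every established "network" relation so peers
# know they can safely ignore transient errors from this unit.
rel_ids = subprocess.check_output(['relation-ids', 'network']).decode().split()
for rel_id in rel_ids:
    subprocess.check_call(['relation-set', '-r', rel_id, 'ignore-errors=true'])
```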
chuckbutler locked and limited conversation to collaborators on May 15, 2015
chuckbutler unlocked this conversation on May 15, 2015
chuckbutler changed the title from "Ignore network relation errors, this is wrt Kubernetes network relation" to "Ignore network relation data when not on a docker host" on May 15, 2015
chuckbutler removed the question label on May 15, 2015
chuckbutler changed the title from "Ignore network relation data when not on a docker host" to "Make NetworkRelation resilient when related to multiple services" on May 18, 2015
@chuckbutler explained the problem to me. My understanding is that this fix is necessary because, when there are two "network" relations to this charm, it cannot tell which one to call the relation-set command with. I was concerned about the array reference of zero, but I see the command is short-circuited with a '| default("")'. This code change looks good to me!
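For other reviewers: assuming the template engine is Jinja2, which that filter syntax suggests, the short circuit works because Jinja2 turns an out-of-range index into Undefined rather than raising, and `default` then substitutes the fallback. A quick standalone demonstration (the `relations` name is illustrative, not the charm's actual template context):

```python
from jinja2 import Template

tmpl = Template('{{ relations[0] | default("") }}')
print(tmpl.render(relations=['network:0']))  # -> network:0
print(tmpl.render(relations=[]))             # -> empty string, no error
```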
LGTM
added a commit that referenced this pull request on May 18, 2015
chuckbutler merged commit 79c9342 into master on May 18, 2015
added a commit to whitmo/kubernetes that referenced this pull request on May 18, 2015