Add labels/annotations to DeploymentStrategy #5149
Conversation
The use case makes sense, but I'd like to get some wider discussion on the implementation. Today, the deployer/hook pods inherit the node selector of the pod template. My concern is confusion arising from inconsistent rules: for nodeSelector, the value is inherited implicitly, while labels would be configured through some new mechanism. I think at the time we decided on the current behavior, we explicitly avoided adding new fields to configure the deployer pods, opting instead for conventions. Does the label requirement warrant a new API field to provide the user with a precise way to express intent, or can we get away with more conventions?
We have to be careful about sharing all labels, because pod autoscalers select pods by their labels.
Yeah, no auto-propagation of deploymentConfig labels down to pods. This has been attempted enough times for build pods and deployer pods that we should add tests to make sure it doesn't happen.
Having some way to let a user explicitly set labels is probably ok.
@liggitt @smarterclayton Would you consider propagating labels that do not have a prefix of "kubernetes.io" or "openshift.io"? These are user-defined labels, and in my opinion it would be reasonable to pass them from the deployment config to the pods... or do you propose introducing a different field in the deployment config?
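[A minimal sketch of the filtering this comment suggests — a hypothetical helper, not part of the PR or the origin codebase, and the idea was ultimately rejected below in favor of explicit user intent:]

```go
package main

import "strings"

// userLabels is a hypothetical helper illustrating the suggestion above:
// keep only labels that are not in the kubernetes.io / openshift.io
// namespaces, so only user-defined labels would propagate to pods.
func userLabels(in map[string]string) map[string]string {
	out := make(map[string]string, len(in))
	for k, v := range in {
		if strings.HasPrefix(k, "kubernetes.io") || strings.HasPrefix(k, "openshift.io") {
			continue // system-namespaced label: do not propagate
		}
		out[k] = v
	}
	return out
}
```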
Not automatic propagation. Propagation would have to be explicitly indicated by the user, since labels on a pod can have unintentional side effects.
@liggitt I've changed the PR so that the deployer pod and hooks use the labels specified in the RC template. This gives the deployer and deployed pods a consistent set of user-specified labels, which I believe is simpler for the application developer/operator.
Unfortunately, that will also result in unexpected and incorrect behavior. Those labels are used for selection: the replication controller's selector (and any service or autoscaler selecting on them) would match the deployer pod as well.
@liggitt Good point. |
Before you do, I'd like @smarterclayton and @ironcladlou to give feedback on that approach. It is a little odd to let the user specify a template from which we would only honor the annotations and labels.
A proposal upstream for similar problems is to have specific fields.
@smarterclayton A simpler solution may be to just add labels/annotations to ExecNewPodHook. Re-reading the code, it seems to me that the deployer pod itself is a controller that communicates with the kube/openshift API rather than executing code; if my assumption is correct, it is the hook pod that I need to influence via labels. Still investigating...
That's true for some deployer pods, but not custom deployers, which run user-supplied code.
@smarterclayton CustomDeploymentStrategyParams also has image/env/command. It is very similar to ExecNewPodHook (a suggestion for future versions of the API would be to make them the same type, e.g. PodTemplate). It would make sense to add labels/annotations to both of these types (imho). It seems to me that it is a non-intrusive addition to the API... both of these types specify a pod spec, so adding labels/annotations to the pod metadata seems a good fit.
Can we think of a scenario under which we want the deployer pod and the hook pods to carry different labels/annotations?
@smarterclayton My understanding is that the deployer is typically a state machine that talks to the openshift-master in order to control the settings of the RC that controls the application pods. If that is the case, then it wouldn't need tags (labels/annotations) that have semantics... The hook pods typically need characteristics similar to the application's (e.g. network/storage).
Unless we have a very good reason to have different labels/annotations per strategy, I would prefer them on spec.strategy.
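[A rough sketch of that shape — field names assumed for illustration, not the exact origin API — putting the metadata on spec.strategy so every strategy type shares one rule:]

```go
// DeploymentStrategyType is the existing strategy discriminator
// (Recreate, Rolling, or Custom) referenced elsewhere in this thread.
type DeploymentStrategyType string

// DeploymentStrategy sketch: the strategy gains user-supplied metadata.
type DeploymentStrategy struct {
	Type DeploymentStrategyType

	// Labels and Annotations are applied to the deployer pod and any
	// lifecycle hook pods, without overwriting controller-assigned keys.
	Labels      map[string]string
	Annotations map[string]string
}
```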
@@ -289,3 +289,15 @@ func (d ByLatestVersionDesc) Swap(i, j int) { d[i], d[j] = d[j], d[i] }
func (d ByLatestVersionDesc) Less(i, j int) bool {
	return DeploymentVersionFor(&d[j]) < DeploymentVersionFor(&d[i])
}

// MergeKeyValueMap adds the k,v pairs of the source map to the destination map for all keys that are not present in the destination map.
func MergeKeyValueMap(dst, src map[string]string) {
I think we have a method for this in pkg/util
Thanks! Replaced with util.MergeInto.
Just a few comments [test]. You may have to regenerate some of your items.
[test]
[test]
func (factory *DeploymentControllerFactory) setPodAttributes(strategy *deployapi.DeploymentStrategy, pod *kapi.Pod) {
	switch strategy.Type {
	case deployapi.DeploymentStrategyTypeCustom:
		util.MergeInto(pod.Labels, strategy.Labels, 0)
In case of conflict, we expect the strategy to override? Doesn't this make it possible to break the deployer pod label lookup the controller uses to reap the pods?
Good catch; this should use the constant and ignore src if dst already exists.
@liggitt It is not overwriting existing keys (on purpose). So the answer is no; it is not possible to use these labels to break the labels assigned by the deployer controller.
Got it, hadn't checked the MergeInto impl. Add a comment here noting that we're not overwriting existing labels on the pod?
Is there a constant for 0 in this case? If so, we should use it; if not, we should add it (so the behavior is obvious).
@smarterclayton MergeInto has several different flags; 0 is the absence of all of them. I'll add the comment.
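[A minimal sketch of the merge semantics being relied on here — not the actual pkg/util implementation, and the flag name is an assumption for illustration:]

```go
// OverwriteExistingDstKey is an assumed flag name; passing 0 (no flags)
// means keys already present in dst are left untouched.
const OverwriteExistingDstKey = 1

// mergeInto sketches the behavior discussed above: with no overwrite flag
// set, user-supplied labels cannot clobber controller-assigned keys such
// as the deployer pod lookup label.
func mergeInto(dst, src map[string]string, flags int) {
	for k, v := range src {
		if _, exists := dst[k]; exists && flags&OverwriteExistingDstKey == 0 {
			continue // keep the existing (e.g. controller-assigned) value
		}
		dst[k] = v
	}
}
```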
@ironcladlou The custom strategy is where the user is supplying the deployer; openshift supplies the deployer in the other strategies and thus (to my knowledge) they do not need custom attributes. I'm happy to change it, but I do think it makes sense that, as you put it, "custom labels/annotations apply to the custom deployer pod".
In all cases, openshift makes the pod hosting the deployer logic and provides some manner of pod customization (whether it's using our prefab image or yours). We already support a bit of (indirect) control over the deployer pod generally by allowing the node selector of the pod template to be inherited by the deployer pod... If user-added labels/annotations can't break the deployer itself (e.g. by overwriting system keys), then I'm not sure why we wouldn't just make the new configuration apply to deployers generally. It would be easier to explain and implement.
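[A sketch of that generalized behavior — an assumption based on this thread, not the exact PR diff: strategy labels/annotations apply to every deployer pod, with no strategy-type switch, and the 0 flag keeps controller-assigned keys intact:]

```go
// setPodAttributes applies strategy labels/annotations to the deployer pod
// regardless of strategy type. MergeInto with no overwrite flag (0) keeps
// any keys the deployment controller already set on the pod.
func setPodAttributes(strategy *deployapi.DeploymentStrategy, pod *kapi.Pod) {
	util.MergeInto(pod.Labels, strategy.Labels, 0)
	util.MergeInto(pod.Annotations, strategy.Annotations, 0)
}
```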
@ironcladlou Updated diff applies labels to all deployers and adds a comment. Can you please ask the test bot to take another go at it?
I'd like at least one other person (@smarterclayton @liggitt @Kargakis) to validate my premise (that we can apply labels/annotations to all deployer pods). If all agree, you can remove the setPodAttributes abstraction.
I think if we are going to set labels and annotations on spec.strategy, they should apply to pods created by the strategy (hooks + process) equally.
Where did all our recent conversation go?
@ironcladlou you have to expand "commented on an outdated diff"
[test]
func assert_MapContains(t *testing.T, super, elements map[string]string) {
	for k, v := range elements {
		if value, ok := super[k]; ok {
			if value != v {
You can get rid of the inner if:
if value, ok := super[k]; ok && value != v {
}
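[For illustration, the collapsed form in context — the error messages are assumptions, and this drops into the existing test file (which already imports testing); note the init-scoped value/ok remain visible in the else branch:]

```go
func assert_MapContains(t *testing.T, super, elements map[string]string) {
	for k, v := range elements {
		if value, ok := super[k]; ok && value != v {
			t.Errorf("key %q: expected %q, got %q", k, v, value)
		} else if !ok {
			t.Errorf("expected key %q to be present", k)
		}
	}
}
```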
	annotations map[string]string
	verifyLabels bool
}{
	{deploytest.OkStrategy(), map[string]string{"label1": "value1"}, map[string]string{"annotation1": "value1"}, true},
Sorry, just one more thing here... we're trying to clean up struct initializers for go vet. Can you please use explicit field names, like:
{
	strategy: deploytest.OkStrategy(),
	labels: map[string]string{"label1": "value1"},
	annotations: map[string]string{"annotation1": "value1"},
	verifyLabels: true,
},
done
{name: "no overrride", strategy: deploytest.OkStrategy(), labels: map[string]string{deployapi.DeployerPodForDeploymentLabel: "ignored"}, verifyLabels: false}, | ||
} | ||
|
||
for _, test := range testCases { |
Not required, but for future reference it's simpler to just do something like t.Logf("evaluating test %q", test.name) instead of passing the name around everywhere, since the tests are sequential (errors for a test will appear right under that test's name).
done
LGTM, thanks for all the work on this!
@ironcladlou would you mind asking the bot to retest? Thanks.
[test]
Labels and annotations may have semantics that are useful when executing a custom deployer pod and/or hooks. For instance, the hooks often require access to the services used by the application (example: database schema migration). This commit allows the user to specify the set of labels/annotations to be used on pods used to deploy the application.
The Jenkins failure was due to the introduction of hook-image-pull secrets. The test was updated with a secret for the expected pod. Please ask the Jenkins bot to rerun the test.
[test]
Evaluated for origin test up to f2ba544
continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/6910/)
@smarterclayton Can this PR be merged now that the release is done? Thanks.
Yes, thanks [merge]
continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/4023/) (Image: devenv-rhel7_2720)
Looks like a weird connectivity issue flaked it out, [merge] again
Evaluated for origin merge up to f2ba544
The Jenkins log seems to indicate a script issue.
Merged by openshift-bot
This PR allows for the deployer to work in a scenario where an application is using per-tier network segmentation.
For instance, the rails-postgresql-example app sets the label "name": rails-postgresql-example. OpenContrail creates a network segment for this project/tier. The postgresql database sits in the network "default" (because no label is specified).
The deploymentConfig uses the following labels:
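[The original label block was not preserved in this thread; as a purely hypothetical illustration, using the example label above and the strategy-level fields this PR adds, it might look like:]

```go
// Hypothetical illustration only — the actual label block from this comment
// was lost. strategy here refers to the DeploymentConfig's spec.strategy.
strategy.Labels = map[string]string{
	// Matches the app's tier label so OpenContrail places the deployer and
	// hook pods in the same network segment as the front-end.
	"name": "rails-postgresql-example",
}
```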
This allows the deployer to be in the same network segment as the application front-end, and the deployment hooks to access the postgresql db.