Add labels/annotations to DeploymentStrategy #5149

Merged
merged 1 commit into openshift:master on Nov 17, 2015

Conversation

pedro-r-marques

This PR allows the deployer to work in a scenario where an application uses per-tier network segmentation.

For instance, the rails-postgresql-example app sets the label "name": "rails-postgresql-example". OpenContrail creates a network segment for this project/tier. The postgresql database sits in the network "default" (because no label is specified).

The deploymentConfig uses the following labels:

+        "labels": {
+          "name": "rails-postgresql-example",
+          "uses": "default"
+        },

This allows the deployer to be in the same network segment as the application front-end, and the deployer hook to access the postgresql db.

@ironcladlou
Contributor

cc @Kargakis @smarterclayton

The use case makes sense, but I'd like to get some wider discussion on the implementation. Today, the deployer/hook pods will inherit deploymentConfig.spec.template.spec.nodeSelector to allow the deployment logic to run in a pod scheduled similarly to the application they're deploying. The label scenario seems similar conceptually: you want the deployer/hook pods to share properties of the application pods to facilitate interoperation.
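(For context, a minimal illustration of that convention, with a made-up selector value: whatever nodeSelector the pod template declares is reused for the deployer and hook pods.)

{
  "kind": "DeploymentConfig",
  "spec": {
    "template": {
      "spec": {
        "nodeSelector": { "region": "primary" }
      }
    }
  }
}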

My concern is confusion arising from inconsistent rules: for nodeSelector, the pod template is used, but for labels, the deploymentConfig itself is used.

I think at the time we decided on the current behavior, we explicitly avoided adding new fields to configure the deployer pods, opting instead for conventions. Does the label requirement warrant a new API field to provide the user with a precise way to express intent, or can we get away with more conventions?

@smarterclayton
Contributor

We have to be careful about sharing all labels, because pod autoscalers must not control the hook or process pods.


@liggitt
Contributor

liggitt commented Oct 15, 2015

Yeah, no auto-propagation of deploymentConfig labels down to pods. This has been attempted enough times for build pods and deployer pods that we should add tests to make sure it doesn't happen.

@smarterclayton
Contributor

Having some way to let a user explicitly set labels is probably OK. It would be an explicit set vs. getting no labels (or just an OpenShift-scoped one).


@pedro-r-marques
Author

@liggitt @smarterclayton Would you consider propagating labels that do not have a prefix of "kubernetes.io" or "openshift.io"? These are user-defined labels, and in my opinion it would be reasonable to pass them from the deployment config to the pods... or do you propose introducing a different field in the deployment config?
I'm happy to restructure the PR in any way you suggest.

@liggitt
Contributor

liggitt commented Oct 15, 2015

Not automatic propagation. Propagation would have to be explicitly indicated by the user, since labels on a pod can have unintentional side effects.

@pedro-r-marques changed the title from "Propagate deploymentConfig labels to deployer and hooks" to "Use ControllerTemplate labels for deployer and hook pods" on Oct 21, 2015
@pedro-r-marques
Author

@liggitt I've changed the PR so that the deployer pod and hooks use the labels specified in the RC template. This gives the deployer and deployed pods a consistent set of user-specified labels, which I believe is simpler for the application developer/operator.

@liggitt
Contributor

liggitt commented Oct 21, 2015

Unfortunately, that will also result in unexpected and incorrect behavior. Those labels are used for two things:

  1. Ensuring the pods created by the RC fall under the RC's selector and are managed by the RC. Putting them on the deployment pod could make the deployment pod itself be "managed" by an RC and killed because it exceeds the desired replica count
  2. Including the pods created by the RC in one or more services. Putting them on the deployment pod could make the deployment pod be included in these services, which would result in broken requests to those services, since the deployment pod is not running the images and exposing the ports the RC's pods are (see the sketch after this list)
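
A minimal sketch of point 2, with made-up names: if the deployer pod carried the RC template's labels (e.g. "name": "rails-postgresql-example"), the service below would also select the deployer pod, and traffic could be routed to a pod that is not serving the application's ports.

{
  "kind": "Service",
  "metadata": { "name": "frontend" },
  "spec": {
    "selector": { "name": "rails-postgresql-example" },
    "ports": [ { "port": 8080 } ]
  }
}

The same labels would also bring the deployer pod under the RC's selector, which is point 1.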

@pedro-r-marques
Author

@liggitt Good point.
I'll rework the patch to introduce a "DeployerTemplate" field to DeploymentConfig and support control over both Labels and Annotations.

@liggitt
Contributor

liggitt commented Oct 21, 2015

Before you do, I'd like @smarterclayton and @ironcladlou to give feedback on that approach. It is a little odd to let the user specify a template from which we would only honor the annotations and labels.

@smarterclayton
Contributor

A proposal upstream for similar problems is to have specific fields deployerLabels and deployerAnnotations which have no default values and are optional. However, the more metadata we add the more likely we'll want a grouped field. Both hooks and the deployer pod are owned by the strategy, so I would expect them to be as spec.strategy.labels or spec.strategy.deployerLabels or spec.strategy.podLabels. I'm uncertain on the last two vs the first - my worry on the first is it is too generic, but on the other hand it's a more general statement "apply these labels to the strategy".
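
As a rough illustration of the spec.strategy.labels option from the list above (field placement and values here are only a sketch, not a final API; the annotation key is made up):

{
  "kind": "DeploymentConfig",
  "spec": {
    "strategy": {
      "type": "Rolling",
      "labels": { "name": "rails-postgresql-example", "uses": "default" },
      "annotations": { "example.org/segment": "default" }
    }
  }
}

The appeal of hanging these off spec.strategy is that both the deployer pod and the hook pods are owned by the strategy, so one set of metadata covers them.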

@pedro-r-marques
Author

@smarterclayton A simpler solution may be to just add labels/annotations to ExecNewPodHook; re-reading the code, it seems to me that the deployer pod itself is a controller that communicates with the kube/openshift API rather than executing code; if my assumption is correct, it is the hook pod that I need to influence via labels. Still investigating...

@smarterclayton
Contributor

That's true for some deployer pods, but not custom deployers, which might need network connectivity to the pods.


@pedro-r-marques
Author

@smarterclayton CustomDeploymentStrategyParams also has image/env/command. It is very similar to ExecNewPodHook (a suggestion for future versions of the API would be to make them the same type, e.g. PodTemplate). It would make sense to add labels/annotations to both of these types (imho). It seems to me that it is a non-intrusive addition to the API... both of these types are specifying a pod spec, so adding labels/annotations to the pod metadata seems a good fit.

@smarterclayton
Contributor

Can we think of a scenario under which we want the deployer pod and the hook pods to have different labels or annotations?


@openshift-bot removed the needs-rebase label Oct 22, 2015
@pedro-r-marques
Author

@smarterclayton My understanding is that the deployer is typically a state machine that talks to the openshift-master in order to control the settings of the RC that controls the application pods. If that is the case, then it wouldn't need tags (labels/annotations) that have semantics... The hook pods typically tend to need similar characteristics (e.g. network/storage) as the application.
For instance, if the application pod is using secrets to authenticate itself with the database, the hook pod (e.g. "database schema migration") would probably need the same secret...

@smarterclayton
Contributor

Unless we have a very good reason to have different labels/annotations per strategy, I would prefer them on spec.strategy rather than at a deeper level.

@pedro-r-marques changed the title from "Use ControllerTemplate labels for deployer and hook pods" to "Add labels/annotations to DeploymentStrategy" on Oct 28, 2015
@@ -289,3 +289,15 @@ func (d ByLatestVersionDesc) Swap(i, j int) { d[i], d[j] = d[j], d[i] }
func (d ByLatestVersionDesc) Less(i, j int) bool {
return DeploymentVersionFor(&d[j]) < DeploymentVersionFor(&d[i])
}

// MergeKeyValueMap adds the k,v pairs of the source map to the destination map for all keys that are not present in the destination map.
func MergeKeyValueMap(dst, src map[string]string) {
Contributor

I think we have a method for this in pkg/util

Author

Thanks! Replaced with util.MergeInto.

@smarterclayton
Contributor

Just a few comments [test]. You may have to regenerate some of your items.

@smarterclayton
Contributor

[test]

@smarterclayton
Contributor

[test]

On Oct 30, 2015, OpenShift Bot wrote: continuous-integration/openshift-jenkins/test FAILURE (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/6506/)

func (factory *DeploymentControllerFactory) setPodAttributes(strategy *deployapi.DeploymentStrategy, pod *kapi.Pod) {
switch strategy.Type {
case deployapi.DeploymentStrategyTypeCustom:
util.MergeInto(pod.Labels, strategy.Labels, 0)
Contributor

In case of conflict, we expect the strategy to override? Doesn't this make it possible to break the deployer pod label lookup the controller uses to reap the pods?

Contributor

Good catch, this should use the constant and ignore SRC if dest already exists

Author

@liggitt It is not overwriting existing keys (on purpose). So the answer is no; it is not possible to use these labels to break the labels assigned by the deployer controller.

Contributor

Got it, hadn't checked the MergeInto impl. Add a comment here noting that we're not overwriting existing labels on the pod?

Contributor

Is there a constant for 0 in this case? If so, we should use it; if not, we should add it (so the behavior is obvious).


Author

@smarterclayton MergeInto has several different flags; 0 is the absence of all of them; I'll add the comment.
@ironcladlou The custom strategy is where the user is supplying the deployer; OpenShift supplies the deployer in the other strategies and thus (to my knowledge) they do not need custom attributes. I'm happy to change it, but I do think it makes sense that, as you put it, "custom labels/annotations apply to the custom deployer pod".
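
For readers following the flag discussion, here is a minimal, self-contained Go sketch of the non-overwriting behavior being described; mergeNoOverwrite is an illustrative stand-in, not the actual util.MergeInto implementation, and the label keys are made up.

package main

import "fmt"

// mergeNoOverwrite copies entries from src into dst only when the key is not
// already present in dst, mirroring the "flags == 0" behavior discussed above:
// user-supplied labels can never clobber keys the controller has already set.
func mergeNoOverwrite(dst, src map[string]string) {
    for k, v := range src {
        if _, exists := dst[k]; !exists {
            dst[k] = v
        }
    }
}

func main() {
    // "openshift.io/some-system-label" stands in for a controller-owned key,
    // e.g. the label the controller uses to find its deployer pods.
    podLabels := map[string]string{"openshift.io/some-system-label": "frontend-1"}
    userLabels := map[string]string{
        "uses": "default",
        "openshift.io/some-system-label": "attempted-override", // ignored: key already set
    }
    mergeNoOverwrite(podLabels, userLabels)
    fmt.Println(podLabels) // the system label keeps "frontend-1"; "uses" is added
}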

Contributor

@pedro-r-marques

The custom strategy is where the user is supplying the deployer; openshift supplies the deployer in the other strategies and thus (to my knowledge) they do not need custom attributes. I'm happy to change it but i do think it does make sense that as you put it "custom labels/annotations apply to the custom deployer pod".

In all cases, openshift creates the pod hosting the deployer logic and provides some manner of pod customization (whether it's using our prefab image or yours). We already support a bit of (indirect) control over the deployer pod generally by allowing the node selector of the pod template to be inherited by the deployer pod... If user-added labels/annotations can't break the deployer itself (e.g. by overwriting system keys), then I'm not sure why we wouldn't just make the new configuration apply to deployers generally. It would be easier to explain and implement.

Author

@ironcladlou The updated diff applies labels to all deployers and adds a comment. Can you please ask the test bot to take another go at it?

Contributor

I'd like at least one other person (@smarterclayton @liggitt @Kargakis) to validate my premise (that we can apply labels/annotations to all deployer pods). If all agree, you can remove the setPodAttributes abstraction.

Contributor

I think if we are going to set labels and annotations on spec.strategy, they should apply to pods created by the strategy (hooks + process) equally.
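
A rough sketch of that idea (type and function names here are assumed for illustration, not the final diff): the same strategy metadata is merged onto every pod the strategy creates, deployer and hooks alike, without overwriting keys already present on the pod.

package main

import "fmt"

// podMeta stands in for the pod ObjectMeta fields relevant to this sketch.
type podMeta struct {
    Labels      map[string]string
    Annotations map[string]string
}

// applyStrategyMetadata copies the strategy's user-supplied labels and
// annotations onto a pod created by the strategy (deployer or hook),
// skipping any key that is already set on the pod.
func applyStrategyMetadata(pod *podMeta, labels, annotations map[string]string) {
    if pod.Labels == nil {
        pod.Labels = map[string]string{}
    }
    if pod.Annotations == nil {
        pod.Annotations = map[string]string{}
    }
    for k, v := range labels {
        if _, ok := pod.Labels[k]; !ok {
            pod.Labels[k] = v
        }
    }
    for k, v := range annotations {
        if _, ok := pod.Annotations[k]; !ok {
            pod.Annotations[k] = v
        }
    }
}

func main() {
    strategyLabels := map[string]string{"uses": "default"}
    strategyAnnotations := map[string]string{"example.org/segment": "default"}

    deployerPod := &podMeta{Labels: map[string]string{"deployer": "frontend-1"}}
    hookPod := &podMeta{}

    // Both pods end up with the same user-supplied metadata.
    applyStrategyMetadata(deployerPod, strategyLabels, strategyAnnotations)
    applyStrategyMetadata(hookPod, strategyLabels, strategyAnnotations)

    fmt.Println(deployerPod.Labels, hookPod.Labels)
}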


@ironcladlou
Contributor

Where did all our recent conversation go?

@pedro-r-marques
Author

@ironcladlou you have to expand "commented on an outdated diff"

@0xmichalis
Contributor

[test]

func assert_MapContains(t *testing.T, super, elements map[string]string) {
for k, v := range elements {
if value, ok := super[k]; ok {
if value != v {
Member

You can get rid of the inner if:

if value, ok := super[k]; ok && value != v {
}

@openshift-bot added the needs-rebase label Nov 4, 2015
annotations map[string]string
verifyLabels bool
}{
{deploytest.OkStrategy(), map[string]string{"label1": "value1"}, map[string]string{"annotation1": "value1"}, true},
Contributor

Sorry, just one more thing here... we're trying to clean up struct initializers for go vet. Can you please use explicit field names, like:

{
  strategy: deploytest.OkStrategy(),
  labels: map[string]string{"label1": "value1"},
  annotations: map[string]string{"annotation1": "value1"},
  verifyLabels: true,
},

Author

done

@openshift-bot removed the needs-rebase label Nov 4, 2015
{name: "no overrride", strategy: deploytest.OkStrategy(), labels: map[string]string{deployapi.DeployerPodForDeploymentLabel: "ignored"}, verifyLabels: false},
}

for _, test := range testCases {
Contributor

Not required, but for future reference it's simpler to just do something like t.Logf("evaluating test %q", test.name) instead of passing the name around everywhere since the tests are sequential (errors for a test will appear right under that test's name)

Author

done

@ironcladlou
Contributor

LGTM, thanks for all the work on this!

@pedro-r-marques
Author

@ironcladlou would you mind asking the bot to retest? Thanks.

@ironcladlou
Contributor

[test]

Labels and annotations may have semantics that are useful when executing a custom
deployer pod and/or hooks. For instance the hooks often require access to the
services used by the application (example: database schema migration).
This commit allows the user to specify the set of labels/annotations to be
used on pods used to deploy the application.
@pedro-r-marques
Author

The Jenkins failure was due to the introduction of hook-image-pull secrets. The test was updated with a secret for the expected pod. Please ask the Jenkins bot to rerun the test.

@sdodson
Member

sdodson commented Nov 5, 2015

[test]

@openshift-bot
Contributor

Evaluated for origin test up to f2ba544

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/test SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test_pull_requests_origin/6910/)

@pedro-r-marques
Author

@smarterclayton Can this PR be merged now that the release is done? Thanks.

@smarterclayton
Contributor

Yes, thanks [merge]

@openshift-bot
Contributor

continuous-integration/openshift-jenkins/merge SUCCESS (https://ci.openshift.redhat.com/jenkins/job/merge_pull_requests_origin/4023/) (Image: devenv-rhel7_2720)

@ironcladlou
Contributor

Looks like a weird connectivity issue flaked it out, [merge] again

@openshift-bot
Contributor

Evaluated for origin merge up to f2ba544

@pedro-r-marques
Author

The jenkins log seems to indicate a script issue.

origin_merge_pull_requests_origin_4021_terminate -s
There was an error talking to AWS. The error message is shown
below:

Error: EC2 Machine is not available
Build step 'Execute a set of scripts' marked build as failure

openshift-bot pushed a commit that referenced this pull request Nov 17, 2015
@openshift-bot openshift-bot merged commit 08471b1 into openshift:master Nov 17, 2015