
Use Manifest Labeller for helm. #2105

Closed
tejal29 wants to merge 7 commits into master from fix-helm

Conversation

@tejal29 (Member) commented May 10, 2019

fixes #2074

Description of Changes:

  1. Use the pkg/skaffold/deploy/kubectl manifest labeller to label deployed objects.
  2. Use helm get manifest to retrieve all of the objects deployed by helm and re-use the manifest labeller on them.
  3. Add a kubectl apply command after setting the labels (a rough sketch of this flow follows below).
  4. Integration tests for helm are back.
    a. helm integration tests flake #1823: use an env template in the skaffold release name.
  5. During helm deploy, a build is assumed, so if no build artifacts
    are supplied it fails with the error
    "no build present for gcr.io/k8s-skaffold/skaffold-helm".
    Since build and deploy are now separate (see "skaffold deploy" is not useful, #922, for notes), use the
    image value as is if it is not present in the build artifacts.
  6. Adding labels to Services is back.
    In "Remove service labeling temporarily" (#959), we removed adding labels to services for the helm deployer only. The upstream issue "Let the service controller retry when persistUpdate returns a conflict error" (kubernetes/kubernetes#68087) is now fixed. The integration test I have added includes a service, and it works fine.
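Put together, the flow in points 1-3 looks roughly like the sketch below. This is only an illustration under assumed names: labelRelease is hypothetical and shells out to the helm 2 and kubectl CLIs directly, whereas the actual change goes through skaffold's existing kubectl and manifest helpers.

package helmlabels

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"

	"sigs.k8s.io/yaml"
)

// labelRelease is a rough sketch of the flow described above, not the PR's
// actual code: fetch the rendered manifests of a release with
// `helm get manifest`, inject the given labels into each object's metadata,
// and re-apply everything with kubectl.
func labelRelease(release string, labels map[string]string) error {
	out, err := exec.Command("helm", "get", "manifest", release).Output()
	if err != nil {
		return fmt.Errorf("getting manifests for %s: %w", release, err)
	}

	var labeled []string
	for _, doc := range strings.Split(string(out), "\n---") {
		if strings.TrimSpace(doc) == "" {
			continue
		}
		var obj map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &obj); err != nil {
			return err
		}
		meta, _ := obj["metadata"].(map[string]interface{})
		if meta == nil {
			meta = map[string]interface{}{}
			obj["metadata"] = meta
		}
		lbls, _ := meta["labels"].(map[string]interface{})
		if lbls == nil {
			lbls = map[string]interface{}{}
			meta["labels"] = lbls
		}
		for k, v := range labels {
			lbls[k] = v
		}
		b, err := yaml.Marshal(obj)
		if err != nil {
			return err
		}
		labeled = append(labeled, string(b))
	}

	// `kubectl apply -f -` performs the usual three-way merge against the live objects.
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = bytes.NewBufferString(strings.Join(labeled, "\n---\n"))
	return cmd.Run()
}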

@codecov-io commented May 10, 2019

Codecov Report

Merging #2105 into master will increase coverage by 0.35%.
The diff coverage is 72%.


@@            Coverage Diff             @@
##           master    #2105      +/-   ##
==========================================
+ Coverage   56.28%   56.63%   +0.35%     
==========================================
  Files         180      180              
  Lines        7796     7728      -68     
==========================================
- Hits         4388     4377      -11     
+ Misses       2991     2931      -60     
- Partials      417      420       +3
Impacted Files Coverage Δ
pkg/skaffold/deploy/labels.go 100% <ø> (+83.95%) ⬆️
pkg/skaffold/deploy/kubectl/manifests.go 92.59% <ø> (ø) ⬆️
pkg/skaffold/deploy/helm.go 62.69% <69.76%> (+0.14%) ⬆️
pkg/skaffold/deploy/util.go 76.92% <85.71%> (+8.92%) ⬆️
pkg/skaffold/kubernetes/client.go 0% <0%> (-58.83%) ⬇️

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update aa059bd...153c217.

@corneliusweig (Contributor) left a comment

In general, I like the idea of retrieving the yamls via helm; it looks elegant. However, this changeset has substantial problems, the most important being that the labels are not actually applied yet (only the yaml resources are patched, and the patched resources are never applied).
Before this change, the patched resources were applied via a three-way json patch. I think this is still the best way to go, but it makes it questionable whether retrieving the yamls helps at all in the first place.
To be honest, I'm not sure what the best way to continue is.
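For context, "applying via patch" with a typed client-go clientset could look roughly like the sketch below. This is an illustration only: the function name is made up, a recent client-go with context-aware calls is assumed, and skaffold's actual labeller handles arbitrary kinds rather than just Deployments.

package helmlabels

import (
	"context"
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// patchDeploymentLabels adds labels to a live Deployment with a strategic
// merge patch, so the API server merges them into whatever labels are
// already set. Illustrative only, not skaffold's labels.go code.
func patchDeploymentLabels(ctx context.Context, client kubernetes.Interface, ns, name string, labels map[string]string) error {
	patch := map[string]interface{}{
		"metadata": map[string]interface{}{"labels": labels},
	}
	data, err := json.Marshal(patch)
	if err != nil {
		return err
	}
	if _, err := client.AppsV1().Deployments(ns).Patch(ctx, name, types.StrategicMergePatchType, data, metav1.PatchOptions{}); err != nil {
		return fmt.Errorf("patching deployment %s/%s: %w", ns, name, err)
	}
	return nil
}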

@corneliusweig (Contributor) commented

@tejal29 Given that kokoro did not fail, we probably need a test which checks the labeller for all deployers? WDYT?

@tejal29 (Member, Author) commented May 10, 2019

@tejal29 Given that kokoro did not fail, we probably need a test which checks the labeller for all deployers? WDYT?

Thanks @corneliusweig. Yes, I was going to add an integration test and also verify it manually for the examples. I will do it today or Monday :).

@tejal29 (Member, Author) commented May 13, 2019

@corneliusweig and @balopat this is ready for final review.

1. Add a kubectl apply command after setting the labels.
2. Integration tests for helm are back.
   a. GoogleContainerTools#1823: use an env template in the skaffold release name.
3. During helm deploy, a build is assumed, so if no build artifacts
   are supplied it fails with the error
   "no build present for gcr.io/k8s-skaffold/skaffold-helm".

   Since build and deploy are now separate (see GoogleContainerTools#922 for notes), use the
   image value as is if it is not present in the build artifacts (a rough sketch of this fallback follows below).
   Fix the tests for this.

TODO:
1. Add PR description.
2. Figure out why we get 2 pods: one with valid labels which is running,
   and another one with different labels.
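For concreteness, the fallback in point 3 could be sketched as below. Artifact and resolveImage are illustrative stand-ins for skaffold's build artifact type and value lookup, not the PR's actual code.

package helmlabels

// Artifact is a stand-in for skaffold's build artifact shape.
type Artifact struct {
	ImageName string
	Tag       string
}

// resolveImage sketches the fallback described above: prefer the tag produced
// by the build for a given image name, and otherwise pass the configured image
// value through unchanged, so a deploy without build artifacts works instead
// of failing with "no build present for <image>".
func resolveImage(image string, builds []Artifact) string {
	for _, b := range builds {
		if b.ImageName == image {
			return b.Tag
		}
	}
	return image
}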
@corneliusweig (Contributor) left a comment

Just some small nits, otherwise the code looks good to me.

However, I am a bit worried about your finding that re-labeling the pods in the deployment's podTemplateSpec will redeploy the pods once more. Is this acceptable? After all, this means that every helm deployment will trigger two rollouts. And this issue is new, because the previous labeller did not update the podTemplateSpec.

Have you also considered patching the chart templates directly?
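To make the concern concrete, the behaviour hinges on where the label lands. A minimal illustration with a typed Deployment (not the PR's code; it only assumes the k8s.io/api types):

package helmlabels

import appsv1 "k8s.io/api/apps/v1"

// labelDeployment illustrates the difference discussed above: a label on the
// Deployment's own metadata is a plain metadata update, while a label on
// spec.template changes the pod template, so the Deployment controller creates
// a new ReplicaSet and rolls the pods again.
func labelDeployment(dep *appsv1.Deployment, key, value string, includePodTemplate bool) {
	if dep.Labels == nil {
		dep.Labels = map[string]string{}
	}
	dep.Labels[key] = value // object-level label: no rollout

	if includePodTemplate {
		if dep.Spec.Template.Labels == nil {
			dep.Spec.Template.Labels = map[string]string{}
		}
		dep.Spec.Template.Labels[key] = value // pod template label: triggers a rollout
	}
}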

Review threads on integration/helm_test.go, pkg/skaffold/deploy/helm_test.go, and pkg/skaffold/deploy/helm.go.
@googlebot commented

So there's good news and bad news.

👍 The good news is that everyone that needs to sign a CLA (the pull request submitter and all commit authors) have done so. Everything is all good there.

😕 The bad news is that it appears that one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that here in the pull request.

Note to project maintainer: This is a terminal state, meaning the cla/google commit status will not change from this state. It's up to you to confirm consent of all the commit author(s), set the cla label to yes (if enabled on your project), and then merge this pull request when appropriate.

ℹ️ Googlers: Go here for more info.

@tejal29 (Member, Author) commented May 14, 2019

Thanks @corneliusweig
Re:

However, I am a bit worried about your findings that re-labeling the pods in the deployment's podTemplateSpec will redeploy the pods once more. Is this acceptable?

This is true. I also discovered this when testing.

Have you also considered patching the chart templates directly?

Yes, I thought about that. helm has a command, helm template --output-dir <dir>, and we could then update all of the generated manifests.
But I could not find a way to pass these back to helm install/upgrade.

@googlebot commented

CLAs look good, thanks!

ℹ️ Googlers: Go here for more info.

@corneliusweig (Contributor) commented

@tejal29 The helm docs suggest that the template command is meant more for development than for production (https://helm.sh/docs/helm/#helm-template):

(...) any values that would normally be looked up or retrieved in-cluster will be faked locally.

Other options that come to my mind:

  • Instead of patching the podTemplateSpec, we could directly patch the pods (a rough sketch follows below). Of course, this has the disadvantage of being transient, because the labels will be lost when a pod is evicted and recreated from the podTemplate. Nevertheless, this could be viable for dev mode.
  • Wait for helm 3 😄
  • Patch the chart templates. This will be a mess though..
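A hedged sketch of the first option, patching the running pods directly with a merge patch via client-go (illustrative names, recent client-go signatures assumed; as noted, the labels vanish when pods are recreated from the template):

package helmlabels

import (
	"context"
	"encoding/json"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// labelPodsDirectly patches the label onto each running pod in the namespace.
// The change is transient: pods recreated from an unlabelled podTemplate come
// back without the label, which is the trade-off noted above.
func labelPodsDirectly(ctx context.Context, client kubernetes.Interface, ns, key, value string) error {
	pods, err := client.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{"labels": map[string]string{key: value}},
	})
	if err != nil {
		return err
	}
	for _, pod := range pods.Items {
		if _, err := client.CoreV1().Pods(ns).Patch(ctx, pod.Name, types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
			return err
		}
	}
	return nil
}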

@tejal29 (Member, Author) commented May 16, 2019

Thanks @corneliusweig. I dug into the helm code: helm uses its KubeClient to deploy the generated manifests, and it is all done via the tiller service. So we should probably wait until helm 3 for tillerless helm!

@tejal29 tejal29 closed this May 16, 2019
@tejal29 (Member, Author) commented May 16, 2019

Re-open #2075 and get that in!

Review thread on the following snippet:

func (k *NSKubernetesClient) GetPods() *v1.PodList {
	pods, err := k.client.CoreV1().Pods(k.ns).List(meta_v1.ListOptions{})
	if err != nil {
		k.t.Fatalf("Could not find pods for in namespace %s", k.ns)

A contributor left a comment:

nit: "Could not find pods for in namespace"

Review thread on the following diff:

@@ -57,6 +59,10 @@ func NewHelmDeployer(runCtx *runcontext.RunContext) *HelmDeployer {
	return &HelmDeployer{
		HelmDeploy:  runCtx.Cfg.Deploy.HelmDeploy,
		kubeContext: runCtx.KubeContext,
		kubectl: kubectl.CLI{

A contributor left a comment:

I'm a little confused by the alias you have above. you do

kubectl    kubectl.CLI

but then here you have

kubectl: kubectl.CLI{

so is it actually kubectl.CLI.CLI? maybe we can just get rid of the alias?
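For what it's worth, the field simply reuses the package name; a stripped-down illustration follows, assuming the import path of the deploy/kubectl package shown in the coverage table (this is not the PR's code):

package deploy

import "github.com/GoogleContainerTools/skaffold/pkg/skaffold/deploy/kubectl"

// HelmDeployer is trimmed down to the field in question. `kubectl` here is the
// name of a struct field whose type happens to come from the kubectl package,
// so there is no kubectl.CLI.CLI: the field is reached through the deployer value.
type HelmDeployer struct {
	kubectl kubectl.CLI
}

// cli shows the access pattern: h.kubectl is the CLI value, not the package.
func (h *HelmDeployer) cli() kubectl.CLI {
	return h.kubectl
}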

Review thread on the following diff:

		return nil, fmt.Errorf("error retrieving helm deployment info: %s", releaseInfo.String())
	}
	return bufio.NewReader(&releaseInfo), nil
}

// Retrieve info about all releases using helm get
// Skaffold labels will be applied to each deployed k8s object
// Skaffold labels will be applied to each deployed k8s object including k8object from

A contributor left a comment:

nit: Skaffold labels will be applied to each deployed k8s object, including those created from remote charts

@tejal29 (Member, Author) commented Jul 31, 2019

With #2568, this is no longer needed.

@tejal29 tejal29 closed this Jul 31, 2019
@tejal29 tejal29 deleted the fix-helm branch April 15, 2021 07:33

Linked issue: Helm deployments do not add labels to PodTemplates