
Deploy Cilium using official helm chart #9887

Closed
prashantchitta opened this issue Mar 13, 2023 · 7 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@prashantchitta
Contributor

What would you like to be added:
Cilium has its own official Helm charts. Currently Kubespray doesn't use those charts; instead we copy them into this repo and maintain them here.

Why is this needed:
Whenever we want to upgrade Cilium, we have to update the Kubespray codebase to pull in the changes from the upstream Cilium Helm charts. We noticed that when we upgraded Cilium to 1.12, the Cilium deployment inside Kubespray was broken.
Look at the PRs to fix it #9856, #9735, #9876, #9880

So instead of porting all the charts and templating them in Kubespray, I am wondering whether there has been any consideration of pointing Kubespray at the upstream Helm charts to deploy Cilium directly?

cc: @oomichi

@prashantchitta prashantchitta added the kind/feature Categorizes issue or PR as related to a new feature. label Mar 13, 2023
@prashantchitta prashantchitta changed the title Deploy Cilium using upstream helm chart Deploy Cilium using official helm chart Mar 13, 2023
@MrFreezeex
Member

MrFreezeex commented Mar 14, 2023

Hi @prashantchitta, the documentation is still in a PR, but does the custom CNI mechanism help: https://github.com/kubernetes-sigs/kubespray/pull/9878/files ? It takes arbitrary manifests to install a CNI, so you could generate them via the Cilium Helm chart (as we actually do in the Kubespray tests). In the future there could also be an opportunity to leverage the helm-apps role to extend custom CNI to use arbitrary Helm charts directly.

I have wanted to switch to deploying Cilium in Kubespray via its Helm chart directly for a long time, but since there are a lot of variables for configuring Cilium in Kubespray, it might be tricky to do this in a non-breaking way (but maybe we should change the variables anyway?).
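To illustrate the workaround mentioned above, rendering the upstream chart into plain manifests that the custom CNI mechanism can then apply might look like this. This is a minimal sketch, not Kubespray's actual test setup; the chart version and `--set` values are illustrative assumptions, and you would pick values matching your cluster:

```shell
# Add the official Cilium chart repository (real URL from the Cilium docs)
helm repo add cilium https://helm.cilium.io/
helm repo update

# Render the chart offline into static manifests instead of installing it.
# Version and values below are examples only; adjust for your environment.
helm template cilium cilium/cilium \
  --version 1.13.1 \
  --namespace kube-system \
  --set operator.replicas=1 \
  > cilium-custom-cni.yaml
```

The resulting `cilium-custom-cni.yaml` could then be fed to the custom CNI feature from PR #9878 as the manifest to deploy.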

@oomichi
Contributor

oomichi commented Mar 16, 2023

This is a good point.
We have had some discussions about whether we should continue maintaining deployment manifests in the Kubespray repo, or just remove those manifests and use the official Helm charts.
For example, #3181

TBH I don't have a strong opinion on this topic.
Actually, I used to deploy some apps from Helm charts instead of using Kubespray's own manifests, even when the environment was deployed by Kubespray.
I am fine with accepting pull requests from contributors to update Kubespray's own manifests, as an open-source project.
But if some manifests rot over a long period of time, I think we should remove them from the Kubespray repo.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 14, 2023
@titansmc
Contributor

Could we leverage custom_cni to support a migration from Calico to Cilium, @oomichi? Let's say I manually deploy Cilium and migrate to it, and later on I add the Cilium charts to custom_cni.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Jan 20, 2024
6 participants