This repository has been archived by the owner on Dec 8, 2023. It is now read-only.

The project is dead but ready to get into contributor mode #846

Open
mazzy89 opened this issue Feb 11, 2022 · 47 comments

Comments

@mazzy89

mazzy89 commented Feb 11, 2022

The project is officially dead after SUSE acquired Rancher.

However, I would not like to see it thrown in the trash. I'm willing to take on the role of contributor or maintainer if the official maintainers would allow that.

I've already forked it and am continuing to upgrade k3s. I'm limiting myself to upgrading only k3s and not making any changes to the kernel, because for my clusters I use a custom kernel forked from the Raspberry Pi one.

Let me know if this would be possible.

@psviderski

psviderski commented Feb 15, 2022

For context, it seems Rancher's focus has shifted from k3os to rancher/os2, which is built using the cOS-toolkit and based on openSUSE.
@ibuildthecloud recently announced on Twitter that he has left SUSE/Rancher: https://twitter.com/ibuildthecloud/status/1492175776057217026. So I doubt that this experimental os2 project has a bright future (same as k3os) without him pushing it forward. The harvester project has been migrated off k3os to os2 though: harvester/harvester#581

@srgvg

srgvg commented Feb 15, 2022

fyi -
there's a discussion going on here: #838

@alexdrl

alexdrl commented Feb 15, 2022

Sorry for probably being a bit off-topic, but since this seems like bad news for the future, do you know if there is some way to migrate an existing single-node k3os installation to another OS that is not so experimental and has better support?

I have no local storage, just fiddled with some Longhorn volumes and some NFS provisioner PVs...

I could try to set up a multi-node cluster on an Ubuntu machine with just k3s and later keep only one master, but I want to avoid cluster-init if possible. I also quickly tested a k3s agent and it was not connecting to the master :(

@mazzy89
Author

mazzy89 commented Feb 17, 2022

All the discussions in #838 are interesting but at this moment the direction is not very clear.

Sure, I believe SUSE will push its own offering, but at the moment I'm fine with pushing out new releases of k3os. In my fork, I've made available a pre-release that can be used to feed the upgrade controller and hence upgrade clusters quickly and automatically.

@andrewwebber

I find this interesting as this reminds me of when CoreOS inc got acquired back in the day.

CoreOS got acquired by Red Hat and now produces Red Hat CoreOS and Fedora CoreOS (a cloud-optimized Kubernetes distribution).

My point being that the CoreOS community maintainers successfully forked to Flatcar Linux https://www.flatcar.org/ (which ultimately got acquired by Microsoft, but Microsoft seems to be encouraging them so far). This seems to be successful, looking from the outside in.

I moved to K3OS, as opposed to migrating to Flatcar Linux, due to the k3s developer story, for example the development parity between local development with k3s and k3d and production on k3os.

I would encourage a fork if k3s/k3d integration and maintenance is also being continued.
Taking Flatcar Linux and Red Hat CoreOS as a reference, I think the maintenance of dependencies would need to be considered and not underestimated (e.g. the Rancher system-upgrade-controller).

@themgt

themgt commented Mar 18, 2022

Yeah, MicroOS etc. are interesting for sure, but they're not drop-in replacements for k3os and seem intentionally more generic/DIY. Combined with SUSE just abandoning k3os without (afaict) any official warning, notice, or migration path, while leaving the website up, etc., it leaves a bad taste in my mouth and doesn't incline me to spend time switching to a new experimental SUSE OS project.

It also seems like (for now at least) k3os can be rebuilt for an updated k3s with a few lines of code changed (again raising the question of why SUSE refuses to continue support for an EOL grace period), so I'd be very much in favor of a community fork and continuation of k3os.

@alborotogarcia
Contributor

@mazzy89 You'd still need to replace the upstream repo in the system-upgrade-controller k3os upgrade Plan at

https://github.com/mazzy89/k3os/blob/a9c866997db4f7fcf004e0dedf0aa5cc0dd37d80/overlay/share/rancher/k3s/server/manifests/system-upgrade-plans/k3os-latest.yaml#L15
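
For anyone wanting to try the same thing, the change is essentially repointing the Plan's release channel (and the upgrade image) at the fork. A minimal sketch of the relevant fields, assuming the stock Plan layout from the system-upgrade-controller CRD (the image name below is a placeholder and the selector is only illustrative):

  apiVersion: upgrade.cattle.io/v1
  kind: Plan
  metadata:
    name: k3os-latest
    namespace: k3os-system
  spec:
    concurrency: 1
    # point the release channel at the fork instead of rancher/k3os
    channel: https://github.com/mazzy89/k3os/releases/latest
    nodeSelector:
      matchExpressions:
        # illustrative selector; keep whatever the stock plan ships with
        - {key: k3os.io/upgrade, operator: NotIn, values: ["disabled"]}
    serviceAccountName: k3os-upgrade
    upgrade:
      # the upgrade image must likewise come from a registry the fork publishes to (placeholder)
      image: ghcr.io/example/k3os
      # command/args stay as in the stock k3os-latest plan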

Also, all image builds would need to run make e2e, which creates a docker-compose environment that tests upgrades. The only problem I see concerns the OS packages that other Rancher products, such as Longhorn, may need. The good news is that the repo itself is so well done that, apart from that, the Dapper and k3s requirements as-is are solid enough.

Hopefully @dweomer can give some advice on it.

@mazzy89
Author

mazzy89 commented Mar 31, 2022

@alborotogarcia Yes, I know that, but at the moment I patch it on my side and it's working fine. If my fork is going to be adopted by a wider audience, then it makes sense to create a git patch and adjust those parts.

@mazzy89
Author

mazzy89 commented Mar 31, 2022

In the upcoming weeks, once I have some more free time, I'm going to wire up a release of k3os with k3s 1.23 to avoid falling behind the upstream Kubernetes releases.

@alborotogarcia
Contributor

@mazzy89 I'd be willing to contribute, so far k3os runs flawlessly with k3s v1.23.5 at least on arm64 👍

@alexdrl

alexdrl commented Apr 1, 2022

@mazzy89 @alborotogarcia hey guys, I'm not much of an expert, but I want to help with getting k3os going, at least by testing updates on my cluster. Updating my master node to MicroOS was a failure 😁

@mazzy89
Author

mazzy89 commented Apr 1, 2022

@alborotogarcia, that's great. At the moment, I'm running my fork with the release that I've cut here https://github.com/mazzy89/k3os/releases/tag/v0.22.2-k3s2r0. Then, I patched the controller so that updates could be pulled from my fork rather than the official one. It's working without issues. Now you have confirmed that the newer k3s release works too. Then I think it is time to upgrade.

My opinion is that I do not want to let the project die. The project is pretty robust, and the repository is well done. I can't see a valid reason to throw everything away and leave many users uncertain.

Many words have already been spent on possible replacements, but I could not find anything quite like k3os.

That being said, I can see some attraction and interest from the community in continuing to use the project.

I want to move the project into a separate GitHub organization and start with a few main things:

  • cut a new release with the newer k3s version
  • patch the controller pointing to the new Git ref
  • spin up the existing automation on the new org (Github workflows, e2e tests, etc...)

@bkraegelin

@mazzy89 could you please document your patches to the system upgrade controller? Or even better, how do I modify it using the Rancher UI? I'm in the middle of a large project which should be based on K3OS/K3S and the Rancher UI. The situation seems catastrophic.

The actual OS, kernel, and other binaries are not essential. But we need support for the system upgrade controller.

Thanks,
Birger

@mazzy89
Author

mazzy89 commented Apr 4, 2022

As pointed out above, @bkraegelin, what you need to modify is the Plan at this line https://github.com/mazzy89/k3os/blob/a9c866997db4f7fcf004e0dedf0aa5cc0dd37d80/overlay/share/rancher/k3s/server/manifests/system-upgrade-plans/k3os-latest.yaml#L15

However, since the Plan YAML manifest is baked into the k3os image, the change will be lost when the machine reboots, due to the image's immutable nature.

To make it persistent, I would need to publish a new release with the patched Plan manifest and publish new k3os images.

About your question

how do I modify it using Rancher UI

I do not have much experience with the Rancher UI. I've seen it just a couple of times in my life, but I imagine it comes down to modifying the same line of the YAML manifest via the UI.

@mazzy89
Author

mazzy89 commented Apr 4, 2022

Also another point

publish the new k3os images.

Since I'm not affiliated with Rancher and/or SUSE, I do not have access to their Docker registry. This means I need to find somewhere to host the new artifacts for my fork. It's still a work in progress.

@mazzy89
Author

mazzy89 commented Apr 4, 2022

I'm starting to migrate TO-DOs over here https://github.com/users/mazzy89/projects/1

@alborotogarcia
Contributor

To make it persistent, I would need to make public a new release with the patched Plan manifest and publish the new k3os images.

@mazzy89 I am not 100% sure regarding this, but IMO if you edit the current Plan on Kubernetes, after a reboot it should be synced from the other nodes, since the states wouldn't match and the latest state should remain, shouldn't it?

I guess we could start with a free org container registry on GitHub Packages at ghcr.io.

Recently the maintainer of this project, @dweomer, and a few other folks made k3d a community project at github.com/k3d-io.

So shall we name the project org k3os-io?

Wrt drone.io, I don't know if GitHub Actions could fit as a replacement.


@mazzy89
Author

mazzy89 commented Apr 4, 2022

@mazzy89 I am not 100% sure regarding this, but IMO if you edit the current Plan on Kubernetes, after a reboot it should be synced from the other nodes, since the states wouldn't match and the latest state should remain, shouldn't it?

Not from my tests. I've changed the Plan, rebooted the master, and the changes were reverted.

I do not have anything against k3os-io. We could pick this organization's name and move the project there. I'd vote for that.

Wrt drone.io, I would pick GitHub Actions, so we would not need to host Drone.io runners. I have plenty of computing power on my premises, but I'd like to avoid having a high bus factor. I'd need to check whether we could have a 1:1 migration to GitHub Actions, though.

@mazzy89
Author

mazzy89 commented Apr 4, 2022

Also I've opened a topic here in the fork to discuss where we would like to move the next discussions.

@agracey

agracey commented Apr 10, 2022

Hey everyone!

I'm on the product management team at SUSE Rancher and wanted to let everyone here know that we are working on a similar project that will fill the same needs as k3os, but in a way that we can maintain long term and that gives additional value to our users. (As many of you are aware, SUSE has expertise in building operating systems and can help fill this gap.)

I'll be back with more details on timing, feature set, and potential migration paths, but for now check out the work being done in these repos for a peek at what's being built:

I do have some instructions on usage of these components for downstream clusters but need to document more generally: https://gist.github.com/agracey/0f8301e5b01076f53831bd860873de92

@mazzy89
Author

mazzy89 commented Apr 11, 2022

Hi @agracey

Thanks for breaking the radio silence on your side and clarifying the future of this project. It was essential to have a direction from you to avoid unnecessary work.

I was trying to pick up the legacy of k3os and bring the project back to life. However, I believe now, with your announcement, it is not needed anymore.

At this point, it's interesting to follow the development of the new OS (and the whole ecosystem around it) and hope that we'll get not only feature parity with k3os but also new cutting-edge features.

I'll look at the gist and try to put the pieces together.

Is there any official channel where discussions and communications take place?

@bkraegelin

That's very good news. We all need support and at least migration paths.

From my point of view K3OS has two interesting features:

  • very easy installation of K3S clusters
  • self-contained automatic updates (using the system upgrade controller)

That's nothing that couldn't be implemented in other ways. What's essential is some kind of migration path that doesn't disrupt running systems. If you change the base OS, that's OK. If you replace the system upgrade controller, that's OK too.

But please implement a way to automagically migrate a running K3OS/K3S cluster with system upgrade controller to the new base technology.

We are in the middle of rolling out more than 100 single-node K3S clusters all around Germany, centrally managed using the Rancher UI; I don't want to lose them all.

Thanks,
Birger

@zewelor

zewelor commented Apr 24, 2022

@alborotogarcia I saw you forked the repo and made a new release; are you also planning an amd64 version?

@alborotogarcia
Contributor

@alborotogarcia I saw you forked the repo and made a new release; are you also planning an amd64 version?

The problem I face, @zewelor, is that it takes ages to compile k3os-kernel (more than ~6h) on bare metal, and neither k3os-kernel nor Dapper is getting updates from Rancher, so the whole project is becoming obsolete.
The good news is that Go still provides full backward compatibility across versions.

I upgraded k3os, as neither an os2 release nor the quay cos-toolkit images are available for arm64 yet.
So I still need to decide whether or not to move away from k3os, as at least in the mid-to-long run that's the way to go.

@mazzy89
Author

mazzy89 commented Apr 24, 2022

I'm preparing images of the costoolkit for arm64. The biggest problem that I'm encountering is that RPI does not support TPM, so it will be fun to find a proper solution.

@zewelor

zewelor commented Apr 24, 2022

@alborotogarcia thanks for the reply. It's sad that k3os seems to be a dead end. I was thinking that just getting an updated k3os with a newer k3s version would be fine until a migration path to a newer solution becomes available.

@alborotogarcia
Contributor

I'm preparing images of the costoolkit for arm64. The biggest problem that I'm encountering is that RPI does not support TPM, so it will be fun to find a proper solution.

@mazzy89 My use case is rather different from flashing k3os onto a Raspberry Pi; I run uVirt in place of QEMU or libvirt and need initrd and cloud-init configs.

@mudler

mudler commented Apr 25, 2022

I'm preparing images of the costoolkit for arm64. The biggest problem that I'm encountering is that RPI does not support TPM, so it will be fun to find a proper solution.

Note that the cOS toolkit doesn't require TPM by itself; os2 does. cOS also has vanilla arm64 images released (https://github.com/rancher-sandbox/cOS-toolkit/releases/tag/v0.8.2) which can be used as a starting point and as a base for other derivatives.

If you need a full example of how to bake arm64 images against different distros, without vanilla images, you could check out c3os (which is a cOS derivative too); it has both Alpine and openSUSE arm64 releases.

@vdboor

vdboor commented May 3, 2022

@mazzy89 How can we upgrade to use your fork? I'd like to upgrade as the current v1.21.5+k3s2 release runs on an old kernel with security issues.

@mazzy89
Author

mazzy89 commented May 3, 2022

@alexdrl

alexdrl commented May 3, 2022

Hey guys, I just saw this change here: #845. Does that mean we're getting some kind of k3s version bump at least?

@andrewwebber

@alborotogarcia, that's great. At the moment, I'm running my fork with the release that I've cut here https://github.com/mazzy89/k3os/releases/tag/v0.22.2-k3s2r0. Then, I patched the controller so that updates could be pulled from my fork rather than the official one. It's working without issues. Now you have confirmed that the newer k3s release works too. Then I think it is time to upgrade.

My opinion is that I do not want to let the project die. The project is pretty robust, and the repository is well done. I can't see a valid reason to throw everything away and leave many users uncertain.

Many words have already been spent on possible replacements, but I could not find anything quite like k3os.

That being said, I can see some attraction and interest from the community in continuing to use the project.

I want to move the project into a separate GitHub organization and start with a few main things:

  • cut a new release with the newer k3s version
  • patch the controller pointing to the new Git ref
  • spin up the existing automation on the new org (Github workflows, e2e tests, etc...)

@mazzy89 I would just like to compliment you on how beautifully this works; thank you so much.
I was able to simply patch the plan and point it to your fork. The upgrade worked perfectly, without issue - very exciting.
This really does extend a lifeline to my project.

@mazzy89
Author

mazzy89 commented Jun 15, 2022

Have fun, guys, with the latest release: https://github.com/mazzy89/k3os/releases/tag/v0.23.3-k3s2r0

At the moment it's only for arm64, though. I hope there are more arm64-interested users than amd64 ones.

@bkraegelin

bkraegelin commented Jun 18, 2022 via email

@mazzy89
Author

mazzy89 commented Jun 18, 2022

Yes, that is correct.

@bkraegelin

Trying to upgrade amd64, I'm still having problems getting from v1.21.5+k3s2 to v1.22.2+k3s2. Now I get an Init:ImagePullBackOff error.
Is this a consequence of your changes?

My cluster is stuck at:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k3s01 Ready control-plane,etcd,master 12d v1.22.2+k3s2
k3s02 Ready control-plane,etcd,master 12d v1.21.5+k3s2
k3s03 Ready control-plane,etcd,master 12d v1.21.5+k3s2
k3s04 Ready <none> 12d v1.22.2+k3s2
k3s05 Ready <none> 12d v1.22.2+k3s2
k3s06 Ready <none> 12d v1.22.2+k3s2
k3s07 Ready <none> 12d v1.22.2+k3s2

Any idea?

@mazzy89
Author

mazzy89 commented Jun 20, 2022

I've never wired up a release for v1.22.2+k3s2. What I did was just promote it from a pre-release to a release. The v1.22.2+k3s2 release is still a Rancher release, hence all the references have to point to Rancher. Starting from the next one (the one wired up 7 days ago), all the references have to be updated to point to mine.

@BlueKrypto

BlueKrypto commented Aug 30, 2022

For anyone looking for an updated version of K3OS, I wanted to let everyone know that I have mirrored this project, brought all the dependencies up to date, put together a new kernel from the Jammy LTS (the one used in this project is several years old), and built it for all three architectures.

If you would like to use it, please take a look at it:

@elsbrock

elsbrock commented Oct 31, 2022

@BlueKrypto great work! Are you willing to maintain this going forward?

Any hints on how to migrate?

@Ender-events

Ender-events commented Nov 1, 2022

For my part, I migrated my single-node, non-prod Kubernetes cluster by running the scripts from https://github.com/BlueKrypto/k3os/tree/v0.24.4-k3s-r0/overlay/share/rancher/k3os/scripts (k3os-upgrade-rootfs and k3os-upgrade-kernel).

Traefik v1 was installed by k3s 1.20 (https://docs.k3s.io/networking#traefik-ingress-controller), and k3s 1.22 removed the networking.k8s.io/v1beta1 Ingress resource.
So I removed Traefik (cp /k3os/system/config.yaml /var/lib/rancher/k3os/config.yaml and added the following lines):

  k3s_args:
    - --disable
    - traefik
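
(For anyone copying this: in a full k3os config.yaml these args sit under the top-level k3os key, and, as far as I understand it, k3s_args replaces the entire k3s command line, so the role has to stay in the list too. Roughly:)

  k3os:
    k3s_args:
      - server      # k3s_args replaces the whole k3s command line, so keep the role
      - --disable
      - traefik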

After a reboot, I installed the Project Contour ingress controller without a Kubernetes LoadBalancer.

@vfiftyfive

Please take a look at Kairos: https://github.com/kairos-io/kairos. It goes beyond k3os and is a living project. Efforts would be much better redirected there.

@BlueKrypto

@elsbrock

@BlueKrypto great work! Are you willing to maintain this going forward?

Any hints on how to migrate?

Yes, I am actively maintaining this. I have multiple production environments running k3OS that require updates.

The easiest way to upgrade is to apply the k3os-upgrade plan from my project:
https://github.com/BlueKrypto/k3os/blob/master/overlay/share/rancher/k3s/server/manifests/system-upgrade-plans/k3os-latest.yaml

If you have more than one master you will want to update or remove the current plan in manifests/system-upgrade-plans/k3os-latest.yaml. Updating this file will cause it to automatically re-apply.

Once the new plan is applied, you will need to label the nodes you want to upgrade with k3os.io/upgrade. I typically use k3os.io/upgrade=latest.
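
For reference, that's just kubectl label node <node-name> k3os.io/upgrade=latest; on the Node object the result looks roughly like this (node name is only an example):

  apiVersion: v1
  kind: Node
  metadata:
    name: k3s01
    labels:
      k3os.io/upgrade: latest   # matched by the upgrade plan's node selector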

@arbourd

arbourd commented Nov 1, 2022

In my case I had to delete the old Plan and ensure that the pods from old upgrades were gone before applying @BlueKrypto's.

@bkraegelin

For anyone else looking for a solution:

I myself left K3OS and went to SUSE Linux Enterprise Micro (SLE Micro); see https://documentation.suse.com/trd/kubernetes/single-html/kubernetes_ri_k3s-slemicro/index.html

When using K3S you have free access to the supported enterprise version of SLE Micro, which is an immutable system with transactional updates. Together with kured (https://github.com/kubereboot/kured) and the correct system-upgrade plan, you get a secure, reliable, and always up-to-date platform.
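
A rough sketch of how those pieces can fit together (not my exact config; the image, namespace, and selector below are illustrative placeholders): a system-upgrade-controller Plan chroots into the host, runs transactional-update, and drops the sentinel file that kured watches, so kured can then drain and reboot the node into the new snapshot.

  apiVersion: upgrade.cattle.io/v1
  kind: Plan
  metadata:
    name: slemicro-transactional-update
    namespace: system-upgrade
  spec:
    concurrency: 1
    version: latest
    nodeSelector:
      matchExpressions:
        - {key: kubernetes.io/os, operator: In, values: ["linux"]}
    serviceAccountName: system-upgrade
    upgrade:
      image: registry.suse.com/bci/bci-base    # placeholder; any image providing a shell works
      command: ["chroot", "/host"]             # the controller mounts the node's root fs at /host
      args: ["sh", "-c", "transactional-update up && touch /var/run/reboot-required"]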

I want to say thanks to all involved in building a replacement for K3OS.
Birger

@tgolsson

tgolsson commented Dec 7, 2022

@bkraegelin Can you please share some more details of your configuration, especially the kured/system-upgrade-plan parts? That'd be immensely helpful for others trying the same migration.

@sj14

sj14 commented Dec 8, 2022

As another alternative, I've switched to https://www.talos.dev/
