Migrate K3S infrastructure to CNCF #148

Open
davidnuzik opened this issue Oct 23, 2020 · 22 comments

davidnuzik commented Oct 23, 2020

First and Last Name

Dave Nuzik

Email

david.nuzik@rancher.com

Company/Organization

Rancher Labs

Job Title

Project Manager

Project Title (i.e., a summary of what you want to do, not the name of the open source project you're working with)

Migrate K3S infrastructure to CNCF

Briefly describe the project (i.e., what in detail are you planning to do with these servers?)

We want to migrate the infrastructure that currently supports K3s from Rancher Labs to CNCF.
The process involves moving the CI for K3s from Rancher-owned infrastructure to CNCF.

Is the code that you’re going to run 100% open source? If so, what is the URL or URLs where it is located? What is your association with that project?

Yes. The code is currently located at https://github.com/rancher/k3s; however, once we get infrastructure for Drone, we will move to https://github.com/k3s-io/k3s, as this is a neutrally owned GitHub organization. Additional projects that support K3s will also be transitioned to https://github.com/k3s-io; you can find a list of these in k3s-io/k3s#2189. These are all 100% open source as well.

K3s is a CNCF sandbox project.

I am the project manager at Rancher for K3s and I work closely with all the Rancher contributors.

What kind of machines and how many do you expect to use (see: https://www.packet.com/bare-metal/)?

  • Three c3.small.x86 machines are needed for a 3-node Kubernetes cluster for CI.
  • I do not see ARMv8 server options on the Equinix website, but we will also need one ARM server.
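The request doesn't spell out how the 3-node cluster would be assembled, but a typical k3s CI cluster on machines like these is one server plus two agents sharing a join token. A minimal sketch using k3s's documented `/etc/rancher/k3s/config.yaml`; the server address and token below are placeholders, not values from this migration:

```yaml
# /etc/rancher/k3s/config.yaml on node 1 (server); install with:
#   curl -sfL https://get.k3s.io | sh -
token: "EXAMPLE-SHARED-TOKEN"          # placeholder; use your own secret
---
# /etc/rancher/k3s/config.yaml on nodes 2 and 3 (agents); install with:
#   curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=agent sh -
server: "https://203.0.113.10:6443"    # placeholder address of node 1
token: "EXAMPLE-SHARED-TOKEN"
```

Once the agents join, `kubectl get nodes` on the server should report three Ready nodes.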

What OS and networking are you planning to use (see: https://support.packet.com/kb/articles/supported-operating-systems)?

Ubuntu 20.04

Any other relevant details we should know about?

No, but we would like to thank CNCF for all the support :)

@caniszczyk (Contributor)

+1

@caniszczyk caniszczyk self-assigned this Oct 27, 2020
@taylorwaggoner (Contributor)

@davidnuzik I've added you to a K3s project in Equinix Metal (formerly Packet). Let me know if you want to add anyone else to the project. Thanks!

@davidnuzik (Author)

Thanks! We're all set

@vielmetti vielmetti reopened this May 17, 2021
@vielmetti (Collaborator)

@davidnuzik -

We are taking one of the system types you are using (c1.large.arm.xda) out of service. Can you contact me (evielmetti@equinix.com by email, or here on GitHub) so we can make alternative arrangements? Everything you need should be possible with the c2.large.arm system type.

@davidnuzik (Author)

@vielmetti thanks, I got your email; we can discuss there. I think migrating to the system type you propose should work. I will check with my team first and follow up with you via email.


github-actions bot commented Sep 9, 2021

Stale issue message

@davidnuzik (Author)

@cjellick and @cwayne18 for awareness - did we complete transitioning Equinix infra to the new ARM systems?
cc: @brandond


brandond commented Sep 9, 2021

Yeah, I believe we're all done with that.


cwayne18 commented Sep 9, 2021

Yes, we're all set

@davidnuzik (Author)

I'm going to close this issue out. Thanks all :)

@vielmetti vielmetti reopened this Oct 15, 2021
@vielmetti (Collaborator)

Reopening as there has been a request to add an individual to the project.


caniszczyk commented Oct 15, 2021 via email

@vielmetti vielmetti added this to the K3S milestone Oct 20, 2021
@github-actions

Checking if there are any updates on this issue

@vielmetti (Collaborator)

Reopening as there is a new team on the project.

@gunamata

Hi, due to some organization changes, we would like to get access to the k3s infrastructure for a couple of people on the SUSE k3s team. Can you please advise us on the process? Thank you in advance.

@vielmetti (Collaborator)

^ attn @jeefy


ml8mr commented Oct 12, 2022

This is starting to get urgent, so any advice would be appreciated.

@vielmetti (Collaborator)

Status update: the new team has access to the project, and there's work underway (but not yet complete) to migrate services out of old Packet data centers into new Equinix data centers.


cwayne18 commented Dec 6, 2022

@ml8mr is that correct? My understanding from Luis is that we're done with the migration


ml8mr commented Dec 6, 2022

> @ml8mr is that correct? My understanding from Luis is that we're done with the migration

Yes, I think the migration has completed. Will double-check ASAP.

@vielmetti (Collaborator)

Confirming that the migration is complete. As of 2023-01-11, 9 systems are deployed: a 3-node k3s cluster and 6 Drone runners, 4 of which are arm64.

(There may be a future conversation about efficiencies of the arm64 configuration since that represents 320 cores, but for now the task at hand is complete! thanks @ml8mr @cwayne18 for all your help.)
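The runner fleet itself isn't described here beyond the count, but a Drone Docker runner is typically configured entirely through environment variables. A hypothetical single-runner sketch (host, secret, and runner name are placeholders, not this project's values), assuming the documented `drone-runner-docker` settings:

```yaml
# docker-compose.yml for one Drone Docker runner (all values are placeholders)
services:
  runner:
    image: drone/drone-runner-docker:1
    restart: always
    environment:
      DRONE_RPC_PROTO: https
      DRONE_RPC_HOST: drone.example.org   # placeholder Drone server hostname
      DRONE_RPC_SECRET: EXAMPLE-SECRET    # placeholder shared RPC secret
      DRONE_RUNNER_CAPACITY: "2"          # concurrent pipelines on this runner
      DRONE_RUNNER_NAME: runner-arm64-1   # placeholder name
    volumes:
      # the runner launches build containers via the host Docker daemon
      - /var/run/docker.sock:/var/run/docker.sock
```

Scaling to a fleet of 6 is then a matter of repeating this service per host with distinct `DRONE_RUNNER_NAME` values.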


jeefy commented Nov 22, 2023

> (There may be a future conversation about efficiencies of the arm64 configuration since that represents 320 cores, but for now the task at hand is complete! thanks @ml8mr @cwayne18 for all your help.)

Hey all! Re-opening this as I think that time might be now. :) @ml8mr @cwayne18 could @vielmetti and I work with you (or some designates) to try to minimize your Equinix footprint? I think we might be able to reduce your resource consumption without any functional impact. :) Thanks!
