proposals: Proposal to add KubeVirt to the CNCF Sandbox #265
Conversation
Signed-off-by: Fabian Deutsch <fabiand@fedoraproject.org>
Q&A from Presentation to TOC, July 9, 2019: Arun Gupta and Joe Beda.
We at NTT Open Source Center have been evaluating KubeVirt for a couple of months now, in the context of a potential solution for migrating legacy applications running in VMs to Kubernetes. We found KubeVirt a very promising solution for this use case, though many features are still required (in both Kubernetes and KubeVirt) to make it a complete solution for the migration use case. Getting KubeVirt into the CNCF Sandbox will surely help the project and build the community around it, which will help in shaping a project that can solve a big pain point, especially for big enterprises. Some references to my work: https://static.sched.com/hosted_files/ossalsjp19/64/Running%20Legacy%20VMs%20with%20Kubernetes.pdf
I'll be out for PTO but will follow up on this afterwards.
There has been some interest expressed by the Kubernetes multitenancy workgroup in the idea of using KubeVirt + cluster-api-kubevirt to do hard multitenancy, as mentioned during the TOC presentation (https://docs.google.com/presentation/d/1jhzJlSAAJNNil1nIYp60eSMH3LPd6AwqHLt3vEAzMSg/edit#slide=id.g5be2516312_0_32). Is anyone from the KubeVirt project interested in joining the workgroup to see if we can flesh the idea out more?
@kfox1111 Yes, I am interested; can you provide more details about the workgroup?
Speaking as a general member of the user community, I'm interested in seeing KubeVirt in the Sandbox. At first glance it feels a little weird to run VMs like this in a cloud native world, but there are some apps that need particular kernels/OSes to work properly and are otherwise developed cloud native. This kind of tool enables following cloud native best practices while still allowing things like custom kernels to be used.
At Loodse, we use KubeVirt to host Kubernetes worker nodes. This allows us to easily partition physical servers, as they tend to have much more capacity than what people need from their Kubernetes cluster. The whole thing is the quickest and easiest way I've personally come across to set up virtual machines as a service, yet it is extremely powerful as it inherits all of Kubernetes' scheduling capabilities.
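For readers unfamiliar with how "VMs as a service" looks in practice, here is a minimal sketch of a KubeVirt VirtualMachine manifest. This is illustrative only: the names, memory request, and demo container-disk image are placeholders, and the exact `apiVersion` depends on the KubeVirt release you run.

```yaml
# Illustrative sketch of a minimal KubeVirt VirtualMachine.
# Names, image, and memory request are placeholder values.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true          # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 128Mi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
```

Applying a manifest like this with `kubectl apply -f vm.yaml` creates a VirtualMachine object that the KubeVirt controllers reconcile into a guest running inside a pod, which is why the VM inherits Kubernetes scheduling.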
We leverage KubeVirt at One Source Integrations (OSI) to orchestrate network vendor appliances for network testing. The standardized API helps us adopt network platforms very quickly into our orchestration.
KubeVirt will allow using one and the same orchestration layer for all workloads: both the 'cloud' ones and the legacy ones that are still around until they can be migrated.
At SAP we use KubeVirt to run K8s worker nodes. Together with https://github.com/kubevirt/cloud-provider-kubevirt we use KubeVirt as a "cloud provider" to run K8s clusters. By putting KubeVirt on a bare-metal cluster and running additional K8s clusters on KubeVirt VMs, we give our developers many of the features they know from cloud providers (e.g. LoadBalancers, cluster autoscaling) in an on-prem environment.
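As a hedged illustration of the cloud-provider pattern described above: from the developer's point of view, the tenant cluster exposes ordinary Kubernetes primitives, and the cloud provider fulfils them from the underlying KubeVirt cluster. The Service below is plain Kubernetes YAML with placeholder names; nothing KubeVirt-specific appears in it, which is the point of the pattern.

```yaml
# An ordinary Service of type LoadBalancer in the tenant (guest) cluster.
# With a cloud provider such as cloud-provider-kubevirt configured, the
# external endpoint is provisioned by the infrastructure cluster that
# hosts the KubeVirt VMs. "web" is a placeholder app name.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```

The design benefit is that workloads and manifests stay portable: the same Service definition works on a public cloud or on the on-prem KubeVirt-backed setup.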
I strongly recommend adding KubeVirt to the CNCF Sandbox. At Cloudflare, we use KubeVirt VMs for test environments that are closer to production parity than containers would be, and because these VMs can run on Kubernetes, engineers are able to utilize unused capacity to maximize test efficiency rather than being restricted to costly, dedicated hardware. We also use VMs to run the majority of our production CI/CD build workloads, and KubeVirt has been essential for moving our build workloads from traditional dedicated hardware to auto-scaled Kubernetes VMs. It has been exciting to work with the project. Along the way we have contributed to it, and the KubeVirt maintainers have been really cooperative and have helped us a lot with feature requests and fixes.
I would sponsor this for Sandbox.
We at Nvidia are using KubeVirt to develop a new VM management platform. We have some VM workloads which cannot be containerized. KubeVirt's ability to orchestrate VMs within the Kubernetes framework helps us leverage the same stack for VMs and containers. This makes KubeVirt a perfect solution for our use case.
We at IBM, in the next-generation cloud networking group, are using Kubernetes to run periodic Kubernetes CronJobs that deploy to Kubernetes clusters in production using a "Kuul Periodic System" (a k8s cluster that runs k8s CronJobs). That is, our Continuous Delivery (CD) system for production is made of Pods/CronJobs on a k8s cluster. We also deploy Kubernetes (using kubespray running on Pods) and run periodic health checks and periodic canary tests, all running in Pods. We wanted Pods, and not full VMs, because these jobs run 3 per hour, every hour, 24x7, for many DCs. Eventually, we wanted to enhance our CD system so that we can run more complex deployment code that can only be done using VMs, all on the same k8s cluster. For example, we want to build Kubernetes clusters or run minikube for other tests. Essentially, our use case is to have Pods and VMs running side by side to allow more interesting/complex test scenarios and/or test environments. Kubernetes is not just a place to run applications but also a place to run tests and host test environments made up of Pods and VMs running side by side; KubeVirt gives us this ability. We know there are other ways to run VMs, but the ability to run the VMs alongside our Pods and test with them in the same networking space allows for some isolation from the rest of the lab network space. This frees us from having to reserve IP addresses, keep track of them, and give them back as our test environments get created and destroyed. We want test environments to be ephemeral in nature (like for running basic unit-level or CI tests), and having to allocate/free IPs would be cumbersome. Also, using Kubernetes to run tests, host test environments, and deploy code allows us to keep our skills focused on Kubernetes.
Any operational knowledge, troubleshooting experience, and training we use to operate Kubernetes and KubeVirt is all Kubernetes knowledge, which can be transferred and re-used as operational knowledge to troubleshoot our production Kubernetes clusters. This reduces training costs for our SRE and CICD/infra teams. We are doing a lot of work lately to secure our production Kubernetes clusters with things like Docker upgrades, Kubernetes version upgrades, Calico upgrades, etc. KubeVirt has given us a great way to create test environments for what I call "micro kubes" (via "kube in kube") to help us test the upgrade processes, the Kubernetes clusters after they've been upgraded, and our production workloads running on the "micro kubes", using the same processes (running infra/deployment code on Pods). Essentially, we get to test CICD processes in a way identical to how they are actually run for production (infra is code, and it too needs testing). KubeVirt has made this all possible in a convenient way. I think KubeVirt gives Kubernetes the possibility to be used for more than just a place to run application workloads alongside legacy application workloads that need to remain on VMs. KubeVirt also allows us to create interesting test environments (due to VMs) that can test those application workloads.
@lizrice Thank you for volunteering to sponsor. Can we get a status on another sponsor?
I am happy to cosponsor KubeVirt.
Architecture: https://github.com/kubevirt/kubevirt/blob/master/docs/architecture.md
Thank you @lizrice and @bgrant0607!
Indeed, @lizrice @bgrant0607 thanks for the sponsorship!
@caniszczyk I've updated the proposal; a few content changes, mainly formatting. Should be good now.
Welcome to the Sandbox :)
yay! Thanks @caniszczyk 😃
DevStats pages will be ready tomorrow. For now, only the test server page is almost finished: https://kubevirt.teststats.cncf.io.
Thank you 😊
This is a proposal to add KubeVirt to the CNCF Sandbox.
Signed-off-by: Fabian Deutsch fabiand@fedoraproject.org