Baremetal Worker Nodes #18
This issue is linked to #42
The link for the beta survey seems to be broken :/
Hello @ZuSe, sorry for this. Yes, I removed the form last Friday because it concerned GPUs, which we now offer, and baremetal, which our public cloud team is already working on. I can't yet share any ETA because we want to make sure we offer a great experience, but I confirm work has already started on this. Subscribe to this issue to be notified when we open the beta.
Any update on this?
Hi @OzySky I do not yet have an exact ETA to share, but we should be able to propose this a couple of months after GA, so early 2023. I will update this as soon as we have an ETA. Do not hesitate to share details about your requirements and use case.
Hi there. We really need this feature! The processors used by the C2-X family instances (nor any of the other families) are not powerful enough to run our new workload on MKS, while it works perfectly on Advance-1 Gen 2 or ADVANCE-2 baremetal machines.
@mhurtrel How close is this to becoming a reality? |
Hi @guillaume1987 Can you share more details about the specifics you are waiting for here? Is it pure CPU performance per core? @MaxHayman and all, sorry for the delay around this. We have spent the last few weeks assessing the challenges brought by baremetal instances as they are currently made available within OVHcloud Public Cloud. We have found some blocking limitations for multiple use cases we discussed with prospects, namely:
These limitations, alongside the current pricing compared to VMs with comparable compute power, make it difficult for us to be sure we address a wide enough number of use cases at this stage. We do however have multiple options, including offering those machines in a "one VM per host" model, where we would not be impacted by the above-mentioned limitations. We are also wondering whether we should instead prioritize a "bring your own nodes" approach, where you would be in charge of the full worker node lifecycle while we would keep managing the control plane, leaving you the freedom (and responsibility) of choosing the worker nodes you want. A survey is about to be released (I hope within a week) here and to prospects who already showed interest in baremetal worker nodes, to quickly validate the best path to satisfy you. Thanks again for your patience, and in advance for your participation in this survey, which will help us address your challenges with the best offering as soon as possible.
Hi @mhurtrel. Since the app we are running uses large neural matrices, the bottleneck is actually related more to the size of the L1, L2 and L3 caches than to the processor type or frequency.
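For context, a rough way to check whether a workload is cache-bound rather than frequency-bound is to time strided reads over working sets that straddle the cache levels: time per access jumps when the buffer outgrows a cache. This is only an illustrative sketch (the sizes are assumed typical values, not OVHcloud hardware specs), and in CPython the interpreter overhead can mask the effect that a compiled language would show clearly:

```python
import time
from array import array

def seconds_per_access(working_set_kib, stride=8):
    """Time strided reads over a contiguous buffer of doubles.
    On a cache-bound workload, this figure rises noticeably once
    the working set exceeds a cache level's capacity."""
    n = working_set_kib * 1024 // 8          # number of 8-byte doubles
    data = array("d", bytes(8 * n))          # contiguous, zero-filled buffer
    accesses = 0
    total = 0.0
    t0 = time.perf_counter()
    for i in range(0, n, stride):
        total += data[i]
        accesses += 1
    elapsed = time.perf_counter() - t0
    return elapsed / accesses

# Sizes chosen to straddle typical L1 / L2 / L3 capacities (assumptions)
for kib in (32, 256, 8192):
    print(f"{kib:>5} KiB: {seconds_per_access(kib):.2e} s/access")
```

If the per-access time is flat across sizes, the workload is likely compute-bound; a sharp knee suggests cache capacity is the limiting factor, as described above.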
Hello everyone, |
Did you agree on a solution after the survey? |
Hello everyone. Thanks to all participants in the survey. We heard you loud and clear and concluded that integrating metal instances into managed Kubernetes, with their current limitations and pricing, would not fit the needs of the vast majority of you. You did however show a lot of interest in getting the isolation and the better network and I/O performance through a "1 VM/host" design that we could build on the current metal instance hosts, while keeping compatibility with Cinder, private networking and the lifecycle management of the nodes. Your feedback on pricing, and on better bandwidth at a given compute/price point, was also duly noted. My compute colleague @JacquesMrz will evaluate the effort required to deliver these 1 VM/host machines, and we will get back to you asap.
Hi @mhurtrel @JacquesMrz, right now our only options are the R2-240 and B2-120, since they're the smallest 10Gbit nodes. But both still have way too much CPU and RAM for our use case and are double the price of the BM-S1, which isn't really optimal. The BM-S1 would suit us so much better.
Hello @ALL Considering your requirements, the short and mid term limitations linked to metal instances (limited support for machine lifecycle, inability to use those machines in OpenStack private networks except vlan0, Block Storage incompatibility), and the effort and limitations linked to the 1 VM/host alternative that we just finished exploring, we decided to stop this experiment. Instead, we will introduce new C3 flavors. These new flavors will offer:
C3 instances will offer full support for Cinder, advanced private networking and full lifecycle management, like any VM in the Public Cloud portfolio. Performance will be guaranteed, with no overcommit. Here is a sneak peek at the C3 flavors:
I am closing this roadmap issue as won't do. Thanks again to all those who shared details about their use cases and requirements. Though I know this metal integration will be missed by a small minority of you, I really look forward to the incredible perf/price that we will be able to offer the vast majority of you with these new models, and I invite you to follow #352 for the new flavor release.
This is disappointing. I doubt the C3 nodes will offer the price/performance that people here are after. I also don't really see the appeal of the bare metal public cloud instances if you can't use them with Kubernetes, when the bare metal dedicated servers are much, much better value for money.
@MaxHayman I confirm that the C3 instances will be extremely interesting compared to metal instances for compute- and network-intensive use cases. My colleague Jacques will share pricing details as soon as possible. When compared to "classic" dedicated servers that you rent monthly, there is of course a premium to be paid for public cloud automation, hourly pricing, fast delivery, and catalogue stability and availability. Shortly after the C3 release we will also offer savings plans for public cloud instances that will significantly reduce the gap between "classic dedicated server" monthly pricing and comparable instance pricing, while offering a more flexible way to commit #67 .
Hi Everyone ! |
As an MKS user
I want to use baremetal worker nodes and not VMs
so that I can benefit from specific hardware features/performance and/or ensure total hardware isolation for my workloads
Note:
We will open specific baremetal profiles in selected regions at first.
Some features may be limited to virtualised worker nodes.