vRack (Private network) integration #15
Private beta has started for a selection of customers.
We can now welcome private beta testers.
The beta is now open to anyone and the feature is self-service. Please note that there are still known limits at this point.
The feature (with public-to-private LB integration) will be available in early March.
The current ETA for the feature with full LB public-to-private support is now the 24th of March.
🎉 Thanks for the update on the ETA!
We gave it a go as part of our evaluation, but we hit the (expected) wall where the LB can't use the internal CIDR yet.
What would be awesome is if we could define internal load balancers with annotations (https://kubernetes.io/docs/concepts/services-networking/service/#service-tabs-5), either using the "official" OpenStack annotation:
Or sticking with the OVH branding:
We currently have valid reasons for some internal LBs (e.g. clients that only communicate over VPN access, security for internal apps, customer data flows, ...) where we neither want nor need an external load balancer.
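This is not something the OVHcloud controller supports at this point (that is exactly the ask here), but as a minimal sketch of what the first option could look like, with a made-up Service name, selector, and ports:

```sh
# Hypothetical internal LoadBalancer Service using the upstream OpenStack
# annotation; name, selector, and ports are placeholders for this example.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
    - port: 80
      targetPort: 8080
EOF
```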
We have an additional slight delay, due to our Network colleagues helping customers affected by the SBG incident a few days ago. We now expect to ship the feature next week.
My Network colleagues are informing me of a new ETA which would make the feature available in the week starting on the 5th of April... really sorry about these multiple delays.
My Network colleagues are unfortunately informing me of an additional delay, and will communicate an updated ETA soon.
We should be able to activate the LB compatibility next week. The vRack feature will then be feature-complete for GA, and we will release the control panel integration in the following weeks.
The feature is now compatible with external load balancers orchestrated with Kubernetes. It is currently available through the OVH HTTP API and will soon be available from the control panel (a.k.a. Manager). Here is the full updated documentation: https://dl.plik.ovh/file/lc9J4dqIITkRBVhL/mhppqw02G1gENR7d/vrackbetafinalstage-allexceptmanager.pdf
Any chance to have this in GRA11?
The feature will be available in the control panel (a.k.a. Manager) in about 2 days.
When I create a cluster with vRack support following the guide, Kubernetes can't provision an IP for the LB; it is stuck in the "Pending" state. I've waited for over an hour. Any ideas?
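For anyone hitting the same symptom, a generic (not OVH-specific) first step is to inspect the Service and its events to see why no IP is being provisioned; the namespace and Service names below are placeholders:

```sh
# List LoadBalancer Services and check the EXTERNAL-IP column.
kubectl get svc -n my-namespace

# Inspect the events of the stuck Service for provisioning errors.
kubectl describe svc my-loadbalancer -n my-namespace
```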
@ryam4u this looks like a specific incident, I confirm you should get an IP assigned. Please open a support ticket (note that the load balancer team is also available, on a best-effort basis, on https://gitter.im/ovh/cloud-loadbalancer). @matmicro you can find the documentation as a PDF link in the issue above. The feature should be available in the control panel in about 2 days.
@mhurtrel cool; tell me as soon as you have an ETA... I really want to avoid having to set up networking in a new region :)
@mhurtrel
@ZuSe We really have to improve the quality of our error messages, our apologies for this. However, if this change doesn't solve the issue, please open a ticket or exchange with other users on https://gitter.im/ovh/kubernetes, who I think will figure out what's wrong.
Hi @mhurtrel, I set it up with b2-15 now. However, also here, you need to make sure that you write the flavor in lowercase characters (e.g. B2-15, copy-pasted from the web, won't work). Is there an ETA for when the Discovery nodes will be available? And a second question: how can I migrate existing clusters to my private network/vRack?
Concerning the D2, it is a matter of weeks. You can subscribe to #19 to be notified. If you have paid for monthly instances, you can enable the private network and keep those instances by resetting the cluster, but you can't add/change the network used for an existing cluster without resetting it. You may want to use tools like Velero to move workloads from one cluster to another.
Hi @mhurtrel, thanks for the clarification. I will give Velero a try. Do you know if it works with the OVH Object Storage S3 API?
I just gave it a try. It works ;) For those who are interested, this is my install command:

```sh
velero install \
  --provider aws \
  --plugins velero/velero-plugin-for-aws:v1.2.0 \
  --bucket backups \
  --secret-file ./credentials-velero \
  --backup-location-config s3Url=https://storage.gra.cloud.ovh.net,s3ForcePathStyle=true,region=GRA \
  --snapshot-location-config region=GRA,s3Url=https://storage.gra.cloud.ovh.net \
  --features=EnableCSI
```
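As a usage sketch for the cluster-migration case discussed above, assuming both clusters have Velero installed against the same bucket (the backup and namespace names are made up):

```sh
# On the source cluster: back up the namespaces you want to move.
velero backup create pre-vrack-migration --include-namespaces my-app

# On the target cluster (pointing at the same bucket): restore them.
velero restore create --from-backup pre-vrack-migration
```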
The feature is now available in the control panel (a.k.a. Manager).
When can we expect support for this feature in the Terraform provider?
@pzalews The private network feature can now be used within the OVHcloud Terraform provider: ovh/terraform-provider-ovh#189
Hi @mhurtrel,
You cannot; you will have to reset your K8S cluster to be able to attach a vRack to it.
Hi @mhurtrel, I'm currently testing the Managed K8S inside a vRack. Here is my setup:
To fully test this setup, I've deployed a D2-2 (Ubuntu 20.04) instance in my Public Cloud, here is the result:
After validating my setup, I've configured a Managed K8S inside the appropriate Private Network/vRack, here is the result:
In conclusion, it seems that the OpenStack subnet configuration (gateway, routes, DNS servers) is NOT honored by the provisioned Managed K8S worker nodes. How can we enforce the DNS + routes + gateway on the worker nodes?
@fkalinowski we hit the same issue, our setup is similar... Our workaround is to use a DaemonSet that modifies the worker routing table with k8-route (a rough sketch of the idea is below) and to configure the Corefile accordingly.
There are considerable drawbacks when doing it this way, mainly if you want to use an on-prem registry and pull your images from there (since CoreDNS isn't involved in that case).
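Not the k8-route setup itself, but a rough sketch of the same idea (a privileged, host-network DaemonSet injecting a static route on every worker); the destination CIDR and gateway IP are placeholder values to adapt to your own vRack subnets:

```sh
# Hypothetical DaemonSet adding a static route on each worker node.
# 192.168.2.0/24 and 192.168.1.1 are example values, not OVHcloud defaults.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: add-vrack-route
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: add-vrack-route
  template:
    metadata:
      labels:
        app: add-vrack-route
    spec:
      hostNetwork: true
      containers:
        - name: add-route
          image: alpine:3.18
          securityContext:
            privileged: true
          command:
            - /bin/sh
            - -c
            - |
              # Add the route in the host network namespace, then stay alive.
              ip route add 192.168.2.0/24 via 192.168.1.1 || true
              while true; do sleep 3600; done
EOF
```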
I can read in the documentation that we should not use the subnet 10.2.x.x, yet I can see that my Kubernetes cluster uses 10.3.x.x. Is there a mistake here?
@Escaflow We worked our way around this by also forcing our own nameserver in the /etc/resolv.conf of the host (mounting the host filesystem in a privileged container also deployed as a DaemonSet). This also avoids the need to edit the CoreDNS configuration, but requires a restart of the CoreDNS pods (DNS configuration is injected by kubelet only when the pod is started). Seems to be working fine for now...
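A rough sketch of that workaround, assuming a placeholder internal resolver IP (192.168.1.53) and keeping in mind that this overwrites whatever the node previously had in /etc/resolv.conf:

```sh
# Hypothetical DaemonSet forcing a custom nameserver in the host's
# /etc/resolv.conf; 192.168.1.53 stands in for the internal resolver.
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: force-node-resolver
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: force-node-resolver
  template:
    metadata:
      labels:
        app: force-node-resolver
    spec:
      containers:
        - name: write-resolv-conf
          image: alpine:3.18
          securityContext:
            privileged: true
          volumeMounts:
            - name: host-etc
              mountPath: /host/etc
          command:
            - /bin/sh
            - -c
            - |
              # Overwrite the host resolver configuration, then stay alive.
              printf 'nameserver 192.168.1.53\n' > /host/etc/resolv.conf
              while true; do sleep 3600; done
      volumes:
        - name: host-etc
          hostPath:
            path: /etc
EOF
```

As noted above, the CoreDNS pods then need to be restarted so that kubelet injects the new DNS configuration into them.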
Thanks a lot for your feedback! I want to stress that we are aware of those limitations and we are checking with the team.
Hi @mhurtrel, just to be sure we are all aligned, there are two distinct problems:
In summary, any configuration that we are able to provide through the DHCP configuration at the OpenStack level.
Our colleague @LostInBrittany wrote a guide describing how to manually define routes to manage this advanced use case (a Kubernetes cluster in a vRack private network needing to access other private networks in the same vRack): https://docs.ovh.com/gb/en/kubernetes/vrack-example-between-private-networks/ . This will be useful until the default gateway feature is developed (#116).
Thanks for the guide! Unfortunately it does not cover the second problem, the "ability to provide custom (internal and/or external) DNS servers (additional or as a replacement of the default ones)".
@fkalinowski you can follow the advancement on #116. I will give an ETA there as soon as possible (my current guess would be October or November). Concerning your DNS, you can change those default DNS servers by deploying a DaemonSet pod. We don't have specific documentation for this use case, but this one should help: https://kubernetes.io/docs/tasks/administer-cluster/nodelocaldns/
@mhurtrel, thanks for your reply, but the NodeLocal DNS approach only solves the problem of DNS queries within Pods. It does NOT solve the problem of DNS queries from the node itself. That second use case is the one we need to address, because we fetch some Docker images from a Docker registry which is only reachable inside the vRack. This is required to avoid exposing our Docker registry on the Internet (and thus increases our security). At the moment the only workaround we have identified is to override the
@fkalinowski just confirming that we have supported CoreDNS configuration since last October: #184
Hi @mhurtrel, thanks for notifying me, but as far as I know CoreDNS solves private DNS resolution at the Pod level but still not at the node level.
Indeed, for your use case, the DaemonSet forcing a custom DNS resolver is the best solution. Please note that it is essential that your custom node DNS resolver resolves public FQDNs to ensure the normal functioning of our systems. We may improve that when DNSaaS becomes part of our public cloud portfolio.
So what about isolating an OVH managed Docker registry inside a vRack environment?
Hello @clement-igonet, this feature is on our roadmap for 2024 and you can track it using this issue: #541
As a Managed Kubernetes Service user
I want my worker nodes to be deployed in one of my private networks
so that I can expose and access other IaaS deployed in my vRack
Note:
The choice of the private network used for a given cluster will be made at cluster creation.
Beta documentation: https://dl.plik.ovh/file/lc9J4dqIITkRBVhL/mhppqw02G1gENR7d/vrackbetafinalstage-allexceptmanager.pdf