Description of problem:
Has anyone looked at deploying a private LB for the ingress router yet? https://docs.openshift.com/container-platform/4.2/release_notes/ocp-4-2-release-notes.html#ocp-4-2-enable-ingress-controllers. It appears to work fine on AWS, but we are hitting issues on Azure.
It fails because the subnet it looks up does not exist:
cluster63-99qm4-vnet/cluster63-99qm4-node-subnet
The subnet that actually gets created in Azure is the worker-subnet, not a node-subnet; possibly a bug in the naming standards?
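One place to check where the node-subnet name comes from may be the cluster's cloud-provider config. On 4.2 Azure installs this is a configmap in the openshift-config namespace; the configmap and key names below are assumptions, so verify them on your cluster:

```shell
# Inspect the Azure cloud-provider config for the subnet name it will use.
# Configmap/key names are assumed from a 4.2 Azure install; adjust if they differ.
oc get configmap cloud-provider-config -n openshift-config \
  -o jsonpath='{.data.config}' | grep -i subnet
```

If this prints a subnetName ending in node-subnet, that would explain the lookup failure above.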
Version-Release number of selected component (if applicable):
4.2.0 on Azure
How reproducible:
every time
Steps to Reproduce:
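A sketch of the reproduction, assuming the 4.2 IngressController API from the release notes linked above (the LB scope has to be set when the IngressController is created, so the default one is replaced here):

```shell
# Recreate the default IngressController with an internal load balancer.
# This recreates the openshift-ingress/router-default service as an internal LB.
oc delete ingresscontroller default -n openshift-ingress-operator
cat <<'EOF' | oc create -f -
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF

# Then watch the service events for the failure:
oc describe svc router-default -n openshift-ingress
```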
Actual results:
The Service for the internal load balancer sits pending with this error:
```
Events:
  Type     Reason                      Age                  From                Message
  Normal   EnsuringLoadBalancer        2m45s (x9 over 18m)  service-controller  Ensuring load balancer
  Warning  CreatingLoadBalancerFailed  2m45s (x9 over 18m)  service-controller  Error creating load balancer (will retry): failed to ensure load balancer for service openshift-ingress/router-default: ensure(openshift-ingress/router-default): lb(cluster63-99qm4-internal) - failed to get subnet: cluster63-99qm4-vnet/cluster63-99qm4-node-subnet
```
The subnet cluster63-99qm4-vnet/cluster63-99qm4-node-subnet does not exist. The subnets that actually get created are:
clustername-UID-worker-subnet
clustername-UID-master-subnet
Interestingly, the NSG for clustername-UID-worker-subnet is called clustername-UID-node-nsg.
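The created subnet names can be confirmed with the Azure CLI. The resource-group name below is a guess based on the cluster name; substitute your own:

```shell
# List subnets in the cluster VNet; expect worker-subnet and master-subnet,
# but no node-subnet. The resource group name is hypothetical.
az network vnet subnet list \
  --resource-group cluster63-99qm4-rg \
  --vnet-name cluster63-99qm4-vnet \
  --query "[].name" --output tsv
```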
Expected results:
The service starts correctly with an internal LB IP on Azure. My guess is that the lookup should be applied against clustername-UID-worker-subnet instead.
Additional info: This works OK on AWS; we are only having issues on Azure. Also, if you rename the subnet clustername-UID-worker-subnet to clustername-UID-node-subnet, the issue is resolved.
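Rather than renaming the subnet, it may also be possible to point the service at the existing worker subnet via the Azure cloud-provider subnet annotation. This is a sketch only, assuming the provider honors the annotation for this service:

```shell
# Hypothetical workaround: tell the Azure cloud provider which subnet to use
# for the internal LB, instead of the name it derives from its config.
oc annotate service router-default -n openshift-ingress \
  service.beta.kubernetes.io/azure-load-balancer-internal-subnet=cluster63-99qm4-worker-subnet
```

Note that the ingress operator may reconcile away manual annotations on router-default, so this is diagnostic rather than a durable fix.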