Add support for standard LB to Azure vmss #62707

Merged (4 commits into kubernetes:master, Apr 20, 2018)
feiskyer (Member) commented Apr 17, 2018

What this PR does / why we need it:

Add support for the standard LB to Azure vmss.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #60485

Special notes for your reviewer:

Release note:

Add support for standard LB to Azure vmss

/sig azure

if az.useStandardLoadBalancer() {
return ""
}

karataliu (Contributor) commented Apr 17, 2018

Will this affect EnsureHostsInPool for availabilitySet when the standard load balancer is on?

feiskyer (Member) commented Apr 18, 2018

@karataliu Yep, so the availabilitySet logic is also updated here.

feiskyer (Member) commented Apr 18, 2018

/retest


standardNodes := []*v1.Node{}
for _, curNode := range nodes {
	curScaleSetName, err := extractScaleSetNameByExternalID(curNode.Spec.ExternalID)

karataliu (Contributor) commented Apr 18, 2018

ExternalID is deprecated and being removed (#61877); can we switch to providerID?
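
A minimal sketch of the providerID-based alternative, assuming the usual Azure VMSS provider ID layout; the function name and regex here are illustrative, not the PR's actual code:

```go
package main

import (
	"fmt"
	"regexp"
)

// Azure VMSS provider IDs look roughly like (illustrative):
//   azure:///subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachineScaleSets/<name>/virtualMachines/<instance>
// so the scale set name can be parsed out without touching the
// deprecated Spec.ExternalID.
var vmssProviderIDRE = regexp.MustCompile(`(?i)/virtualMachineScaleSets/([^/]+)/virtualMachines/`)

// extractScaleSetNameFromProviderID returns the scale set name embedded in
// a node's Spec.ProviderID, or an error when the node is not a VMSS instance.
func extractScaleSetNameFromProviderID(providerID string) (string, error) {
	matches := vmssProviderIDRE.FindStringSubmatch(providerID)
	if len(matches) != 2 {
		return "", fmt.Errorf("not a vmss instance: %q", providerID)
	}
	return matches[1], nil
}

func main() {
	id := "azure:///subscriptions/s/resourceGroups/rg/providers/Microsoft.Compute/virtualMachineScaleSets/agentpool1/virtualMachines/3"
	name, err := extractScaleSetNameFromProviderID(id)
	fmt.Println(name, err) // agentpool1 <nil>
}
```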

if instanceIDs.Len() == 0 {
	// This may happen when scaling a vmss capacity to 0.
	return fmt.Errorf("no running nodes belong to vmss %q", ssName)

karataliu (Contributor) commented Apr 18, 2018

If there are two scale sets, do we allow one of them to have 0 instances? What about just skipping it in this case?

feiskyer (Member) commented Apr 19, 2018

Hmm, makes sense; let's just allow 0 instances.
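
A minimal sketch of the agreed behavior, with a plain map standing in for the real scalesets map (name to instance IDs); names are illustrative:

```go
package main

import "fmt"

// scaleSetsToUpdate skips a scale set that has been scaled down to 0
// instances instead of failing the whole call, mirroring the behavior
// agreed on in the thread.
func scaleSetsToUpdate(scalesets map[string][]string) []string {
	updated := []string{}
	for ssName, instanceIDs := range scalesets {
		if len(instanceIDs) == 0 {
			// Scaling a vmss capacity to 0 is legal; there is nothing
			// to add to the backend pool, so skip rather than error out.
			fmt.Printf("scale set %q has no instances, skipping\n", ssName)
			continue
		}
		updated = append(updated, ssName)
	}
	return updated
}

func main() {
	fmt.Println(scaleSetsToUpdate(map[string][]string{
		"vmss-a": {"0", "1"},
		"vmss-b": {}, // scaled to zero: skipped, not an error
	}))
}
```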

curScaleSetName, err := extractScaleSetNameByExternalID(curNode.Spec.ExternalID)
if err != nil {
	glog.V(4).Infof("Node %q does not belong to any scale set; assuming it belongs to an availability set", curNode.Name)
	standardNodes = append(standardNodes, curNode)

karataliu (Contributor) commented Apr 18, 2018

Should we call excludeMasterNodesFromStandardLB and verify master nodes?
Also, what do we do if the master node is in a scale set?

feiskyer (Member) commented Apr 19, 2018

Will add a check before extractScaleSetNameByExternalID, which will work for both cases.
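
A sketch of such a check, run before resolving the scale set name so masters are excluded whether they live in an availability set or a scale set. The label keys are an assumption based on the labels Kubernetes used at the time, and `labels` stands in for `node.Labels`:

```go
package main

import "fmt"

// isMasterNode identifies a master node from its labels so it can be
// excluded from the standard LB backend pool before any scale-set lookup.
// Label keys here are assumptions, not necessarily the PR's exact ones.
func isMasterNode(labels map[string]string) bool {
	if _, ok := labels["node-role.kubernetes.io/master"]; ok {
		return true
	}
	return labels["kubernetes.io/role"] == "master"
}

func main() {
	fmt.Println(isMasterNode(map[string]string{"kubernetes.io/role": "master"})) // true
	fmt.Println(isMasterNode(map[string]string{"kubernetes.io/role": "agent"}))  // false
}
```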

// The same network interface can't be added to more than one load balancer
// of the same type. Omit those nodes (e.g. masters) so Azure ARM won't
// complain about this.
backendPool := *newBackendPools[0].ID

karataliu (Contributor) commented Apr 18, 2018

Why not go through all newBackendPools here?

feiskyer (Member) commented Apr 19, 2018

We only need to check one because they are in the same LB.

karataliu (Contributor) commented Apr 19, 2018

The pools are on the path
VMSS -> Properties/NetworkProfile/networkInterfaceConfigurations/IPConfigurations/loadBalancerBackendAddressPools

so they can belong to different load balancers, right?

If the master VMSS node is there and already attached to two LBs, the pools could be:

[
{ "id": "/*/Microsoft.Network/loadBalancers/master-internal/backendAddressPools/default" },
{ "id": "/*/Microsoft.Network/loadBalancers/master/backendAddressPools/default"  }
]

If we don't check the second one, this node may later be added to another public LB and the call would fail. Please correct me if I missed something.

feiskyer (Member) commented Apr 20, 2018

Good catch. There is also the same issue in the standard VM code. Will fix both together in a new commit.
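
A sketch of the point being agreed on here: inspect every backend pool attached to the ipConfiguration, not just newBackendPools[0], because the pools can belong to different load balancers (master-internal and master in the example above). Helper names and the ID parsing are illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// loadBalancerOfPool extracts the LB name from a backend pool ID shaped
// like .../Microsoft.Network/loadBalancers/<lbName>/backendAddressPools/<pool>.
func loadBalancerOfPool(poolID string) string {
	parts := strings.Split(poolID, "/")
	for i, p := range parts {
		if strings.EqualFold(p, "loadBalancers") && i+1 < len(parts) {
			return parts[i+1]
		}
	}
	return ""
}

// attachedToOtherLB reports whether any existing pool puts the interface on
// a load balancer other than the one owning wantedPoolID; checking only the
// first pool would miss the second LB in the example above.
func attachedToOtherLB(existingPoolIDs []string, wantedPoolID string) bool {
	wantedLB := loadBalancerOfPool(wantedPoolID)
	for _, id := range existingPoolIDs {
		if !strings.EqualFold(loadBalancerOfPool(id), wantedLB) {
			return true
		}
	}
	return false
}

func main() {
	existing := []string{
		"/x/Microsoft.Network/loadBalancers/master-internal/backendAddressPools/default",
		"/x/Microsoft.Network/loadBalancers/master/backendAddressPools/default",
	}
	wanted := "/x/Microsoft.Network/loadBalancers/master/backendAddressPools/default"
	fmt.Println(attachedToOtherLB(existing, wanted)) // true: master-internal is another LB
}
```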

for ssName, instanceIDs := range scalesets {
	// Only add nodes belonging to the specified vmSet to the basic LB backends.
	if !ss.useStandardLoadBalancer() && ssName != vmSetName {

karataliu (Contributor) commented Apr 18, 2018

Do we need to check vmSetName != "" like the others? We'd better add a note on what vmSetName == "" means.

feiskyer (Member) commented Apr 19, 2018

vmSetName won't be "" now; see the updates here.

karataliu (Contributor) commented Apr 19, 2018

Also, use !strings.EqualFold to align with line 438?
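
The rationale, as a tiny sketch: Azure resource names are case-insensitive, and strings.EqualFold compares under Unicode case folding without allocating lowered copies, so the ssName != vmSetName check above is safer written this way (the helper name is illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// sameVMSet compares scale-set names case-insensitively, since Azure
// resource names are not case-sensitive.
func sameVMSet(ssName, vmSetName string) bool {
	return strings.EqualFold(ssName, vmSetName)
}

func main() {
	fmt.Println(sameVMSet("agentPool1", "agentpool1")) // true
	fmt.Println(sameVMSet("agentPool1", "agentpool2")) // false
}
```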

feiskyer (Member) commented Apr 19, 2018

@karataliu Rebased and addressed comments. PTAL

karataliu (Contributor) commented Apr 20, 2018

/lgtm

k8s-ci-robot (Contributor) commented Apr 20, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: feiskyer, karataliu

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

k8s-merge-robot (Contributor) commented Apr 20, 2018

Automatic merge from submit-queue (batch tested with PRs 62857, 62707). If you want to cherry-pick this change to another branch, please follow the instructions here.

k8s-merge-robot merged commit 2bfe40a into kubernetes:master on Apr 20, 2018

15 checks passed:

Submit Queue: Queued to run github e2e tests a second time.
cla/linuxfoundation: feiskyer authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-gke: Skipped
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.

feiskyer deleted the feiskyer:vmss-standard-lb branch Apr 20, 2018
