
OSASINFRA-902: Update OpenStack Custom External LB and DNS Docs #4389

Merged

Conversation

iamemilio

The underlying network architecture has changed a lot since these docs
were initially written. We want to make sure that these docs are accurate
and up to date so that users with complex networking use cases like workers
on a custom subnet and baremetal workers are able to manage their ingress/egress
traffic as needed. More up-to-date examples and reference information have been added.

We have chosen to omit sections outlining how to replace the internal LB and DNS
services since they were inaccurate. We are targeting an upcoming release to handle
these features better given the complexity of our current networking architecture.

@iamemilio
Author

/label platform/openstack
/assign mandre

@iamemilio
Author

/cc @maxwelldb

@maxwelldb
Contributor

@iamemilio Would you like suggestions on this, or is the review request more of an FYI only thing?

@iamemilio
Author

iamemilio commented Nov 19, 2020

Suggestions sound great, but up to you.

@EmilienM
Member

/lgtm
/approve

@openshift-ci-robot openshift-ci-robot added lgtm Indicates that a PR is ready to be merged. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Nov 23, 2020
@iamemilio
Author

/hold let's give people some time to put reviews in

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 23, 2020
Contributor

@maxwelldb maxwelldb left a comment


Made a number of comments. Feel free to accept/trash/interrogate however you like. @iamemilio


This documents how to shift from the internal load balancer, which is intended for internal networking needs, to an external load balancer.
This documents how to shift external traffic from the internal load balancer that comes stock with OpenShift on OpenStack to an external load balancer.

Suggested change
This documents how to shift external traffic from the internal load balancer that comes stock with OpenShift on OpenStack to an external load balancer.
You can shift network traffic from the default OpenShift on OpenStack load balancer to a load balancer that you provide.

(Or something like this. What's the importance of external and internal in an instructional sense? Could preserve those terms if it's important.)


The load balancer must serve ports 6443, 443, and 80 to any users of the system. Port 22623 is for serving ignition start-up configurations to the OpenShift nodes and should not be reachable outside of the cluster.
It is essential that the instance your load balancing service is running from can reach all of the nodes in the cluster. One easy way to do this is to create the instance in a subnet that is within the OpenShift network created by your installer, and making sure a router interface is attached to that subnet from the OpenShift-external-router. Another solution is to attach a floating IP to the nodes that you want to add to your external load balancer.

Suggested change
It is essential that the instance your load balancing service is running from can reach all of the nodes in the cluster. One easy way to do this is to create the instance in a subnet that is within the OpenShift network created by your installer, and making sure a router interface is attached to that subnet from the OpenShift-external-router. Another solution is to attach a floating IP to the nodes that you want to add to your external load balancer.
To use your own load balancer, the instance that it runs from must be able to access every machine in your cluster. You might ensure this access by creating the instance on a subnet that is within your cluster's network, and then attaching a router interface to that subnet from the `OpenShift-external-router` [object/instance/whatever]. You can also attach a floating IP address to the machines that you want to add to your load balancer.
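As an illustrative sketch of the port layout described above (6443, 443, and 80 served externally; 22623 kept cluster-internal), an HAProxy configuration for the external load balancer might look like the following. This is not from the PR itself; the backend IP addresses are placeholders:

```
# Minimal HAProxy sketch for an external OpenShift load balancer.
# All backend addresses are placeholders.
frontend openshift-api
    bind *:6443
    mode tcp
    default_backend masters-api

backend masters-api
    mode tcp
    balance roundrobin
    server master-0 192.0.2.10:6443 check
    server master-1 192.0.2.11:6443 check
    server master-2 192.0.2.12:6443 check

frontend ingress-https
    bind *:443
    mode tcp
    default_backend workers-https

backend workers-https
    mode tcp
    balance roundrobin
    server worker-0 192.0.2.20:443 check
    server worker-1 192.0.2.21:443 check

# Port 22623 (ignition) is deliberately NOT exposed here:
# it should only be reachable from inside the cluster.
```

A similar `frontend`/`backend` pair on port 80 would handle plain HTTP ingress.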

openstack floating ip create --port master-port-1 <public network>
openstack floating ip create --port master-port-2 <public network>
```
The following external facing services should be added to your new load balancer:

Suggested change
The following external facing services should be added to your new load balancer:
Add the following external facing services to your new load balancer:


The first step is to add floating IPs to all the master nodes:
#### Key OpenShift Services

"Key" == "Required?" "Important?"


Once complete you can see your floating IPs using:
- The master nodes serve the OpenShift API over port 6443 via tcp.

Suggested change
- The master nodes serve the OpenShift API over port 6443 via tcp.
- The master nodes serve the OpenShift API on port 6443 by using TCP.

```

The external load balancer should now be operational along with your own DNS solution. The following curl command is an example of how to check functionality:
#### Verifying API Reachable

Suggested change
#### Verifying API Reachable
#### Verifying that the API is Reachable

Capitalized however you like.


One good way to test that you can reach the API is to try executing an `oc` command. If you can't do that easily, you can use this curl command:

Suggested change
One good way to test that you can reach the API is to try executing an `oc` command. If you can't do that easily, you can use this curl command:
One way to test whether or not you can reach the API is to run the `oc` command.
If you can't use `oc`, use the following `curl` command:

Could include sample output for good/bad cases while using oc.

"compiler": "gc",
"platform": "linux/amd64"
}
```

Another useful thing to check is that the ignition configurations are only available from within the deployment. The following command should only succeed from a node in the OpenShift cluster:
Note: the versions may be different, but as long as you get a json payload response, it worked correctly.

Suggested change
Note: the versions may be different, but as long as you get a json payload response, it worked correctly.
Note: The versions in the sample output may differ from your own. As long as you get a JSON payload response, the API is accessible.

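As a concrete sketch of the two checks discussed above, the commands below assume `/version` (the standard Kubernetes API version endpoint) and `/config/worker` (the standard Machine Config Server ignition path); the hostname is a placeholder for your cluster's API DNS name:

```sh
# Should return a JSON version payload from anywhere the API is exposed.
curl -k https://api.<cluster name>.<base domain>:6443/version

# Should return an ignition config only from inside the cluster;
# from outside, this should fail or time out.
curl -k https://api.<cluster name>.<base domain>:22623/config/worker
```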

#### Verifying Apps Reachable

Suggested change
#### Verifying Apps Reachable
#### Verifying that Apps Are Reachable

Or something to that effect.



An easy way to check that the workers are load balanced correctly is to try to access the OpenShift console. This can be done from your web browser. If you don't have access to a web browser, then you can try to query it with the following curl command:

Suggested change
An easy way to check that the workers are load balanced correctly is to try to access the OpenShift console. This can be done from your web browser. If you don't have access to a web browser, then you can try to query it with the following curl command:
The simplest way to verify that apps are reachable is to open the OpenShift console in a web browser.
If you don't have access to a web browser, query the console by using the following `curl` command:
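A hedged sketch of that `curl` check, assuming the default console route name (`console-openshift-console` under the `apps` subdomain):

```sh
# Fetch the console route through the ingress load balancer; any HTTP
# response, including a redirect, indicates ingress traffic is being
# load balanced to the workers.
curl -k -I https://console-openshift-console.apps.<cluster name>.<base domain>
```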

@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: EmilienM, maxwelldb

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Nov 24, 2020
@openshift-merge-robot

@iamemilio: The following test failed, say /retest to rerun all failed tests:

Test name Commit Details Rerun command
ci/prow/e2e-crc 2e9b816 link /test e2e-crc

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

You can verify the operation of the load balancer now if you wish, using the curl commands given below.

Now the DNS entry for `api.<cluster name>.<base domain>` needs to be updated to point to the new load balancer:
To ensure that your API and apps are accessible through your load balancer, [create or update DNS entries](#Create API and Ingress DNS Records) for them. To use your new load balancing service for external traffic, make sure the IP address for these DNS entries is the IP address your load balancer is reachable at.

The anchor should be #create-api-and-ingress-dns-records, otherwise the link doesn't render. The sentence ends bizarrely too.

Suggested change
To ensure that your API and apps are accessible through your load balancer, [create or update DNS entries](#Create API and Ingress DNS Records) for them. To use your new load balancing service for external traffic, make sure the IP address for these DNS entries is the IP address your load balancer is reachable at.
To ensure that your API and apps are accessible through your load balancer, [create or update DNS entries](#create-api-and-ingress-dns-records) for them. To use your new load balancing service for external traffic, make sure the IP address for these DNS entries is the IP address at which your load balancer is reachable.
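For example, BIND-style records for the two entries might look like this, where `203.0.113.10` is a placeholder for the address at which the load balancer is reachable:

```
api.<cluster name>.<base domain>.     IN  A  203.0.113.10
*.apps.<cluster name>.<base domain>.  IN  A  203.0.113.10
```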

@iamemilio
Author

removing the hold, reviewers can lgtm again when ready

/hold cancel

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 30, 2020
@EmilienM
Member

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Nov 30, 2020
@openshift-merge-robot openshift-merge-robot merged commit 36cf196 into openshift:master Nov 30, 2020