docs: update README #155

Merged 1 commit on Mar 12, 2019
127 changes: 118 additions & 9 deletions README.md
@@ -1,17 +1,126 @@
# OpenShift Ingress Operator

Ingress Operator is an [OpenShift](https://www.openshift.com) component which enables external access to cluster services by configuring Ingress Controllers, which route traffic as specified by OpenShift [Route](https://docs.openshift.com/container-platform/3.11/architecture/networking/routes.html) and Kubernetes [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) resources.
To provide this functionality, Ingress Operator deploys and manages an
[OpenShift router](https://github.com/openshift/router) — a
[HAProxy-based](https://www.haproxy.com) Kubernetes [ingress
controller](https://kubernetes.io/docs/concepts/services-networking/ingress-controllers).

> **Contributor:** "a"→"an"
>
> **Author:** Fixed

> **Contributor:** The name of the operator repo is "cluster-ingress-operator". The title of the doc is "OpenShift Ingress Operator". ClusterIngress has now transitioned to IngressController. OpenShift includes CRs for cluster/ingresses.config.openshift.io and supports the Kubernetes Ingress resource. I can see how a user would get confused. Should the name of the repo and the README title align more closely with IngressController? For example, repo name "ingress-controller-operator" and title "OpenShift Ingress Controller Operator"?
>
> **Contributor:** Changing the name of the repository is a more onerous endeavor and goes beyond the scope of this PR, IMO. Otherwise, the naming makes sense to me: the ingress operator manages the ingress controller, which works with the ingress (sort of) and (really) route resources.
>
> **Author:** Ingress Controller Operator vs. Ingress Operator sounds like a good list discussion.
>
> Changing the labels, annotations, and resource names would have a few considerations, mostly self-contained to this project, although we have to be careful about resource names (which I believe are currently being used as an informal contract in some cases and which could potentially be replaced by stable label selectors).
>
> Changing the repo name would have enough ripple effects to warrant its own design document (this isn't meant to be snarky — just a statement of fact for consideration).

Ingress Operator implements the OpenShift [ingresscontroller API](https://github.com/openshift/api/blob/master/operator/v1/types_ingress.go).

## Installing

Ingress Operator is a core feature of OpenShift and is enabled out of the box.

Every new [OpenShift installation](https://github.com/openshift/installer)
has an `ingresscontroller` named `default` which can be customized,
replaced, or supplemented with additional ingress controllers. To view the
default ingress controller, use the `oc` command:

```shell
$ oc describe --namespace=openshift-ingress-operator ingresscontroller/default
```

## Managing

Create and edit `ingresscontroller.operator.openshift.io` resources to manage
ingress controllers.

Interact with ingress controllers using the `oc` command. Every ingress
controller lives in the `openshift-ingress-operator` namespace.
> **Contributor:** "Every ingress controller lives in the `openshift-ingress-operator` namespace." Should we say "by default"?
>
> **Contributor:** For now, we do not handle an ingresscontroller if it is not in `openshift-ingress-operator`, so I think this statement is fine for now... although there is some ambiguity between the ingresscontroller resource and the ingress controller (née router) itself.
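As a starting point for managing, the controllers in that namespace can be enumerated with a plain `oc get` (a minimal sketch; this requires a running cluster, and the output columns depend on the cluster and client version):

```shell
# List every ingresscontroller resource the operator manages
$ oc get --namespace=openshift-ingress-operator ingresscontrollers
```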


To scale an ingress controller:

```shell
$ oc scale \
--namespace=openshift-ingress-operator \
--replicas=1 \
ingresscontroller/<name>

$ oc patch \
--namespace=openshift-ingress-operator \
--patch='{"spec": {"replicas": 2}}' \
--type=merge \
ingresscontroller/<name>
```

**Note:** Using `oc scale` on an `ingresscontroller` where `.spec.replicas` is unset will currently return an error ([Kubernetes #75210](https://github.com/kubernetes/kubernetes/pull/75210)).

## Customizing

Create new `ingresscontroller` resources in the `openshift-ingress-operator`
namespace.

To edit an existing ingress controller:

```shell
$ oc edit --namespace=openshift-ingress-operator ingresscontroller/<name>
```

**Important:** Updating an ingress controller may lead to disruption for public
facing network connections as a new ingress controller revision may be rolled
out.

Refer to the [ingresscontroller API](https://github.com/openshift/api/blob/master/operator/v1/types_ingress.go) for full details on defaults and
customizing an ingress controller. The most important initial customizations are
domain and endpoint publishing strategy, as they *cannot currently be changed
after the ingress controller is created*.
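As a concrete illustration of those initial choices, here is a hedged sketch of creating a custom ingress controller. The field names (`domain`, `endpointPublishingStrategy`, `replicas`) come from the ingresscontroller API linked above; the controller name `internal-apps` and the domain are hypothetical values for illustration, not defaults.

```shell
# Hypothetical example: name and domain are illustrative only.
# spec.domain and spec.endpointPublishingStrategy cannot be
# changed after the ingress controller is created.
$ oc apply -f - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal-apps
  namespace: openshift-ingress-operator
spec:
  domain: internal.example.com
  endpointPublishingStrategy:
    type: Private
  replicas: 2
EOF
```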

### Endpoint publishing

The `.spec.endpointPublishingStrategy` field is used to publish the ingress
controller endpoints to other networks, enable load balancer integrations, etc.

> **Contributor:** I'm not sure what "publish the ingress controller endpoints to other networks" means; is that referring to DNS?
>
> **Author:** Basically lifted from our new public API docs; do you feel the API docs are similarly confusing? What would you propose here as an alternative, and would you suggest we upstream a change as well?

Every strategy is described in detail in the [ingresscontroller API](https://github.com/openshift/api/blob/master/operator/v1/types_ingress.go). A brief
design diagram for each is shown below.

#### LoadBalancerService

The `LoadBalancerService` strategy publishes an ingress controller using a
Kubernetes [LoadBalancer
Service](https://kubernetes.io/docs/concepts/services-networking/#loadbalancer)
and on some platforms offers managed wildcard DNS.

![Image of LoadBalancerService](docs/images/endpoint-publishing-loadbalancerservice.png)

#### HostNetwork

The `HostNetwork` strategy uses host networking to publish the ingress
controller directly on the node host where the ingress controller is deployed.

![Image of HostNetwork](docs/images/endpoint-publishing-hostnetwork.png)

#### Private

The `Private` strategy does not publish the ingress controller.

![Image of Private](docs/images/endpoint-publishing-private.png)
> **Contributor:** I do not see any service in docs/images/endpoint-publishing-private.png. I thought a service of type ClusterIP is still created for internal traffic?
>
> **Contributor:** There is the "internal" service, which we create irrespective of the endpoint publishing strategy; might be worth adding to the diagrams.
>
> **Author:** Do admins care about the internal service? Is the internal service useful for anything but communications between the ingress controller and other OpenShift components (e.g. prometheus)? Would the internal service be a candidate for a separate internal architecture design?
>
> **Contributor:** The internal service is the only predictable endpoint by which pods can connect to a private ingresscontroller, without additional configuration on the part of the cluster administrator.
>
> **Author:** What pods?
>
> **Contributor:** Any pods on the cluster, infrastructure or end-user, that need to connect to any routes that are served by the private ingresscontroller.
>
> **Contributor:** It may be that we want to recommend a different approach (other than using the internal service that the operator creates) for this use-case.
>
> **Author:** Is it fair to characterize the internal service as an implementation detail at this time?


## Troubleshooting

Use the `oc` command to troubleshoot operator issues.

To inspect the operator's status:

```shell
$ oc describe clusteroperators/ingress
```

To view the operator's logs:

```shell
$ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator
```

To inspect the status of a particular ingress controller:

```shell
$ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>
```
> **Contributor:** How about logs for the ingress controller (i.e., router)?
>
>     $ oc logs --namespace=openshift-ingress deployments/router-<name>
>
> (Are we going to rename "router" to "ingress-controller"?)
>
> **Author:** Waffled on this one... is exposing a potentially volatile naming convention problematic? Is there some other way we could direct users to the logs via the ingresscontroller? Random ideas:
>
> * Implement `oc logs` for an ingresscontroller to abstract discovery of the deployment
> * Report the deployment name on status
> * Document a stable selector for the deployment to pass to `oc logs --selector`


## Contributing

Report issues in [Bugzilla](https://bugzilla.redhat.com/enter_bug.cgi?product=OpenShift%20Container%20Platform&version=4.0.0&component=Routing).

See [HACKING.md](HACKING.md) for development topics.
Binary file added docs/images/endpoint-publishing-hostnetwork.png
Binary file added docs/images/endpoint-publishing-private.png