Revise high availability discussion #3051
This is confusing and needs major re-work. Your explanation muddles the kube service proxy and the ipfailover stuff.
We need to clearly document a few use cases:
- When the failover capability is for a service using hostPorts or hostNetworking so that the ports are bound directly on the nodes
- When you use externalIP to get traffic into a service
Today, our documentation covers the router case and a service using hostPort. In my mind, those are both case 1 above. I think we should simplify and unify the documentation for those two cases.
The other case is where a service has an externalIP, either assigned manually, or by @marun's recent addition. That would be case 2.
The difference between case 1 and 2 is that with case 1 it matters where the pods land because the kubernetes service proxy is not being used. So when you configure your rc or dc, you need to provide a selector, and the same selector must be used when you configure the ipfailover pods.
For case 2, the configuration is far simpler. It does not matter where the pods run in the cluster. And it does not matter how many there are. The kube service proxy handles that. All you have to do is pick an external IP for your replication set and then set up an ipfailover instance (which will need a selector to restrict it to the edge nodes) for that same IP address. When traffic lands on one of the nodes on the right IP address, the kube proxy will kick in and get the traffic to the right pod(s) inside the cluster.
The advantages of 1 are that you can have multiple ingress IP addresses, so you can load-balance if you need more bandwidth than a single node can support.
With 2, you can only have a single IP address, so a single node to handle it.
The downside to 1 is more complex set up, and the pods must run on the ingress nodes, so you get more load on those nodes.
With 2, the edge node just forwards traffic by nat rules and the compute pods can be anywhere in the cluster.
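To make case 2 concrete, a hedged sketch (the service name, node label, and VIP are invented for illustration; the flags are the ones discussed elsewhere in this PR):

```shell
# Case 2 sketch: a service exposed on an external VIP, with ipfailover
# floating that VIP across labelled edge nodes. All names and addresses
# here are illustrative, not from the docs under review.

# Give the service an externalIP; the kube proxy on every node then
# accepts traffic for 10.1.2.3 on the service port and forwards it to
# the pods, wherever they run in the cluster.
oc patch svc myservice -p '{"spec": {"externalIPs": ["10.1.2.3"]}}'

# Run ipfailover only on the edge nodes (restricted by a selector),
# floating the same VIP. Unlike case 1, no selector coupling with the
# backing pods is needed.
oadm ipfailover --selector="zone=ipf" --virtual-ips="10.1.2.3" \
    --watch-port=80 --create
```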
IMPORTANT: Even though
a service is highly available, performance can still be affected.
IP Failover (IPF) provides high availibilty for a service in the cluster.
This whole section is wrong. The original was explaining the Kubernetes service proxy that seamlessly load-balances and provides IPF throughout the cluster.
* I want external users to have a single IP address to access my service which may be running on multiple nodes.
* I care that when the node that is servicing the IP address goes down another node is automatically chosen to service the IP address.
* I don't want to manually update the upstream network configuration (e.g., DNS) when nodes change.
* I want the service to be available as long as at least one node is available.
This feels wrong. The service proxy already provides that. Failover is entirely about external IP addresses.
|--watch-port
|`*OPENSHIFT_HA_MONITOR_PORT*`
|80
|This port must be open for the service to be running.
Can we define this better?
"The ipfailover pod will test that it can open a TCP connection to this port. If it can, then the service is considered to be running."
And, can we set this to "" to turn off the monitoring?
Changed the wording. Don't think we can turn off monitoring with current code.
|--interface
|`*OPENSHIFT_HA_NETWORK_INTERFACE*`
|
|By default all NICs are watched.
"The interface name for the ip failover to use to send VRRP traffic. By default, all interfaces are used."
|--virtual-ips
|`*OPENSHIFT_HA_VIRTUAL_IPS*`
|
|E.g., 1.2.3.4-6,1.2.3.9
The list of IP address ranges to replicate. This must be provided.
Changed wording.
|--vrrp-id-offset
|`*OPENSHIFT_HA_VRRP_ID_OFFSET*`
|0
|When multiple *ipfailover* deployments are used each must have a unique offset.
Worse than that, it's used as a base, but if you look at config_generators.sh it is incremented once per IP passed... so if your first failover specifies 4 IPs, you need to set the offset to 4 in the next failover config. SO... I'd recommend incrementing by 10s or more so that people can add IPs to existing configs without messing with the world. This needs to be explained clearly.
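A sketch of that spacing recommendation (deployment names and IP ranges are invented; per the comment above, each VIP consumes one VRRP id starting at the offset):

```shell
# First ipfailover config: 4 VIPs at offset 0 consume VRRP ids 0-3.
oadm ipfailover ipf-a --virtual-ips="10.1.2.1-4" --vrrp-id-offset=0 --create

# Second config: offset 10 rather than 4, so VIPs can later be added to
# ipf-a (ids 4-9 stay free) without renumbering this deployment.
oadm ipfailover ipf-b --virtual-ips="10.1.3.1-2" --vrrp-id-offset=10 --create
```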
I added a discussion of VRRP-ID-OFFSET
|--iptables-chain
|`*OPENSHIFT_HA_IPTABLES_CHAIN*`
|INPUT
|If the chain does not exist it is not created.
"The name of the iptables chain to automatically add an iptables rule to allow the VRRP traffic on. If the value is not set, an iptables rule will not be added."
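For concreteness, the rule being described is roughly the following (a sketch; VRRP is IP protocol 112 and negotiates over multicast 224.0.0.18, but the exact rule the image emits may differ):

```shell
# Allow inbound VRRP negotiation traffic on the configured chain
# (INPUT here). Protocol 112 is VRRP; 224.0.0.18 is its multicast group.
iptables -I INPUT -p 112 -d 224.0.0.18/32 -j ACCEPT
```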
Changed wording.
@knobunc @JacobTanenbaum
a697a57 to 65aaf5c
Virtual IP (VIP) addresses on a set of hosts. Each VIP is serviced by, at most, one of the hosts at
any point in time. *Keepalived* uses the VRRP protocol to determine which host in the set of hosts
will service each VIP address. If a host goes down or the service that it is watching goes down the
VIP is serviced from another host in the set. As long as a single host is availble the VIP is serviced.
s/it/keepalived/
And a typo available
Changed.
For some IP-based traffic services, virtual IP addresses (VIPs) should always be
serviced for as long as a single instance is available. This simplifies the
operational overhead and handles failure cases gracefully.
{product-title} supports an ipfailover DC created by the `oadm ipfailover` command. The ipfailover DC
maybe call out deployment config (DC) in this section?
Changed. I confused deployment config with deployment controller. I fixed it throughout the doc.
a service is highly available, performance can still be affected.
When using VIPs to access a service, such as a router, the service should be running on all of the
nodes that are running the ipfailover pods. This is done by using the same selector and replication count
that is used by the ipfailover DC.
Well, you could have a pool of 4 hosts labelled (zone=ipf) - run ipfailover on those 4 hosts (--replicas=4 and selector zone=ipf). But you do not need the service replication count to match - as an example have only say 3 routers (replicas=3 and selector zone=ipf). That would still work and have a node available/standby in case a node fails.
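ramr's layout, sketched as commands (the node names and VIP are hypothetical; the counts and the zone=ipf label come from the comment):

```shell
# Label a pool of 4 edge nodes.
oc label node node1 node2 node3 node4 zone=ipf

# ipfailover pods on all 4 labelled nodes.
oadm ipfailover --selector="zone=ipf" --replicas=4 \
    --virtual-ips="10.1.2.3" --create

# Only 3 routers on the same selector; the 4th node acts as a standby
# that can take over the VIP if one of the router nodes fails.
oadm router --selector="zone=ipf" --replicas=3
```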
@ramr in this case ipfailover will notice that one node doesn't have port 80 open and it won't participate in negotiation. So, yes it will work, but it seems like a mismatch in the configuration. I also used should, not must. What do you think? I will change the wording if you feel that this is important.
Think we should reword it - to me, it read that I need
to run the service on all the nodes running ipfailover pods.
Maybe When using VIPs to access a service, such as a router, the service needs to be running on one or more nodes that run the ipfailover pods
or something to that effect.
@ramr I reworded it. PTAL
Using IP failover involves switching IP addresses to a redundant or stand-by
set of nodes on failure conditions.
IP Failover is configured using the `oadm ipfailover` command with suitable options.
`oadm ipfailover` creates a DC (Deployment Controller) based on provided options.
DC (Deployment Config)
Maybe "An IP Failover configuration is created using the oadm ipfailover command." and add a link to the options, and then maybe say "Internally this command creates a deployment config using the provided options"?
Changed
VIP in the VIP list is assigned a value of 0. The next gets the value 1, and so forth. When there are
multiple ipfailover DCs the `--vrrp-id-offset=<n>` option is used to specify the value of the first VRRP
Id Offset of the first VIP in the list of configured VIPs. It is up to the admin to make sure the lists
do not collide with each other. VRRP Id is in the range 0..255.
Aah, this makes it sound like the vrrp-id offset is on the DC. It's really the id of the keepalived peers.
As an example: if you create an ipfailover config at an offset of 0 and have a replica count of 4 (--replicas=4) and there are 4 nodes/pods that run keepalived, then the ids 0-3 would be used up.
The next/second ipfailover configuration should use a vrrp-id offset of at least 4. The id offsets exist to ensure that there is no collision across groups (so replica count is important as well here). Should mention that - this would need some rework.
@ramr I may have this wrong. I think that there is a unique vrrp-id-offset for each VIP that keepalived is servicing. For a given DC the numbers are sequential from the --vrrp-id-offset for the number of defined VIPs. E.g., if --vrrp-id-offset=7 and there are 3 VIPs in the DC, the vrrp-ids will be 7, 8 and 9. This forms a range of ids for the DC. When there are multiple DCs the ranges must not overlap. The available numbers range from 1..255. The --vrrp-id-offset
I will reword this to be clearer.
----
====

. Next is is the service account. You can use *ipfailover* or when using
Next is is sounds a bit weird - maybe Next create/specify the service account?
Changed.
I'll take another pass at this sometime today.
@ramr I made the changes you suggested. I really appreciate the time you take reviewing this and making comments. PTAL
@knobunc Are there more changes you would like in this?
At this time of writing, ipfailover is not compatible with cloud
Make sure that the service that relies on a VIP is running on every node in the
set of IP Failover nodes. You never know which of the nodes will be servicing the
VIP at any point in time.
Yeah, this recommendation does have ramifications - see above.
There are times when you want some hot standby nodes to scale up/down based on load/failure.
to automatically add it to the specified chain on each host or you can manually add it. If you manually add it,
please also ensure that the rules persist after a system restart.

Be careful since every *keepalived* daemon uses the VRRP protocol over multicast 224.0.0.18 to negoiate witih its
typo: witih → with
fixed
|--interface
|`*OPENSHIFT_HA_NETWORK_INTERFACE*`
|
|The interface name for the ip failover to use to send VRRP traffic. By default, all interfaces are used.
It uses eth0 by default.
Fixed
--service-account=ipfailover --create
----
====
endif::[]
Any reason why we removed the example here?
I covered that in the above examples. I can add it back in if you like.
I also went into some detail on how the set of vips and vrrp-id-offset relate.
edc6d93 to efb9fad
Getting better.
We need to document the Service ExternalIP or Service NodePort cases too.
will service each VIP address. If a host goes down or the service that *Keepalived* is watching goes down the
VIP is serviced from another host in the set. As long as a single host is available the VIP is serviced.

{product-title} supports an ipfailover Deployment Config (DC) created by the `oadm ipfailover` command. The ipfailover DC
They prefer us to say Deployment Config rather than DC.
Changed throughout the file.
VIP will be serviced as long as at least one node with a running service is available. As part of normal
operation the VIP may be assigned to any of the nodes at any point in time.

IMPORTANT: Even though a service VIP is highly available, performance can still be affected.
affected by what? We should say by load or something. And that HA does not imply load balancing.
OK
@knobunc I made the changes you suggested. The ExternalIP and NodePort part is rather thin. Is there a line of discussion you are thinking about for this? PTAL
@knobunc I added the prose on removing the iptables rule. PTAL
When using VIPs to access a service, such as a router, the service should be running on all of the
nodes that are running the ipfailover pods. This permits all ipfailover nodes to possibly service the VIPs and
all service nodes to possibly be serviced. A miss match will result in either some ipfailover nodes never servicing
typo: mismatch
But the rest of this paragraph doesn't really make sense unless you are talking about a pod.
@knobunc rewrote PP
nodes that are running the ipfailover pods. This permits all ipfailover nodes to possibly service the VIPs and
all service nodes to possibly be serviced. A miss match will result in either some ipfailover nodes never servicing
the VIPs or some services to never receive traffic. Using the same selector and replication count for both
ipfailover and the service will eliminate the missmatch.
mismatch
Fixed
the VIPs or some services to never receive traffic. Using the same selector and replication count for both
ipfailover and the service will eliminate the missmatch.

The nodes in the high availibility set can come and go. The service on the node can start and stop.
pod rather than service
Dropped PP
ipfailover and the service will eliminate the missmatch.

The nodes in the high availibility set can come and go. The service on the node can start and stop.
When a node or sevice becomes unavailable *keepalived* will select another node to service the VIP. The
"node or pod"
keepalived will float the VIP to another node that has a running instance of the pod.
@knobunc The keepalived docs talk about servicing a VIP on, at most, one host and floating it on the rest.
The nodes in the high availibility set can come and go. The service on the node can start and stop.
When a node or sevice becomes unavailable *keepalived* will select another node to service the VIP. The
VIP will be serviced as long as at least one node with a running service is available. As part of normal
running pod
@knobunc reworded this
I really want this to clearly document the three use cases:

- You are running pods with hostNetworking = true (e.g. the usual configuration of the router). Here you want to make sure that the pods land on the same nodes as the ones with keepalived, and that the number of replicas is the same. You run the check against the port that the pod has opened on the host.
- Expose a single service. Here it doesn't matter where the pods land. You can have as many keepalive instances as you want and float one or more VIPs for the service (e.g. if you want to run two VIPs for double the bandwidth and have an external load balancer or advertise multiple addresses in DNS). You must set a NodePort or (one or more) ExternalIPs in the service. The ExternalIPs must match the VIPs being used. You can set up the check to go against the service port.
- Expose the allocated LoadBalancer services. Here you want to roll out a bunch of VIPs across your edge nodes automatically. You don't really care about the check, but since (I think, let's check) there is no way to disable it, we need a canary service. So document that they should set up a DC running hello-openshift with a few replicas, and then have a service with a nodeport set. Then use that nodeport as the check port. Or find a TCP port that OpenShift is always listening on on each node and just use that as the check port.
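The canary idea in the third case might look like this (a sketch; the image, names, VIP range, and the NodePort value are illustrative, and 30080 must fall inside the cluster's configured nodePort range):

```shell
# A trivial always-up backend to give the ipfailover check something
# to probe.
oc run hello --image=openshift/hello-openshift --replicas=3

# Expose it on a fixed NodePort so every node answers on 30080.
oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-canary
spec:
  type: NodePort
  selector:
    run: hello
  ports:
  - port: 8080
    nodePort: 30080
EOF

# Point the ipfailover deployment's check at the canary's NodePort.
oadm ipfailover --selector="zone=ipf" --watch-port=30080 \
    --virtual-ips="10.1.2.1-4" --create
```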
92b701f to f43d8bf
@knobunc There are 4 files here. I tried to pull together the various snippets of the discussion and link them together. PTAL
For some IP-based traffic services, virtual IP addresses (VIPs) should always be
serviced for as long as a single instance is available. This simplifies the
operational overhead and handles failure cases gracefully.
The VIPs must be routable from outside the cluster and not visible from inside the cluster.
"and not visible from inside the cluster"
really? What's the reason for that?
We talked about this and I forgot to delete it...
IP Failover uses link:http://www.keepalived.org/[*Keepalived*] to host a set of externally accessible
Virtual IP (VIP) addresses on a set of hosts. Each VIP is serviced by, at most, one of the hosts at
any point in time. *Keepalived* uses the VRRP protocol to determine which host in the set of hosts
will service each VIP address. If a host goes down or the service that *Keepalived* is watching goes down the
down, or
down, the
or the service => or the port
goes down => becomes unreachable
fixed
* I want my cluster to be assigned a set of VIPs that the cluster manages and migrates (with zero or minimal downtime) on failure conditions, and I should not be required to perform any manual interactions to update the upstream "discovery" sources (e.g., DNS). The cluster should service all the assigned VIPs when at least a single node is available, despite the current available resources not being sufficient to reach the desired state.
{product-title} supports an ipfailover Deployment Config created by the `oadm ipfailover` command. The ipfailover
Deployment Config specifies the set of VIP addresses and the set of nodes on which to service them. The cluster
can have multiple ipfailover Deployment Configs, each servicing its own set of unique VIP addresses. Each pod in the
"each servicing" => "each with" perhaps?
"Each pod created by the ..." perhaps?
Went with "managing". Rewrote last sentence in PP
When using VIPs to access a service any of the nodes can be in the ipfailover set of nodes, since the service is
reachable on all nodes (no matter where the application pod really is running). Any of the ipfailover nodes can
be master at any time. The service must configure the externalIPs to be the set of VIPs or the service can use a
NodePort and use any VIPs that route to the nodes. The ipfailover watch port can be set to the service port.
The service must configure the externalIPs to be the set of VIPs or the service can use a NodePort and use any VIPs that route to the nodes. The ipfailover watch port can be set to the service port.
=>
The service must be exposed using either externalIPs in the service definition, in which case the set of VIPs used must match the externalIPs; or the service can be exposed using a NodePort, and since those are exposed on all nodes the administrator can use any VIPs that route to the nodes. The ipfailover watch port must be set to the service port for externalIP or the NodePort (or the check can be disabled since all nodes in the cluster will work).
Reworded it.
IMPORTANT: Even though a service VIP is highly available, performance can still be affected. *keepalived*
makes sure that each of the VIPs is served by some node in the configuration, and several VIPs may end up
on the same node even when other nodes have none. Also, high availability does not do load balancing.
Also, high availability does not do load balancing.
High availability does not inherently do load balancing. But you can expose multiple IPs for the same resource and load balance them outside the cluster, or serve all of the IPs in DNS.
@knobunc ... and ipfailover will happily master them all on the same node.
So? They may spread too... the default config seems to want to spread them out. Or am I mistaken?
address.

The default service IP addresses are from the {product-title} internal network and are used to permit pods to
access each other. The service can be assigned IP addresses that are
The service can be assigned additional IP addresses
OK
The default service IP addresses are from the {product-title} internal network and are used to permit pods to
access each other. The service can be assigned IP addresses that are
xref:../../dev_guide/getting_traffic_into_cluster.adoc#using-externalIP[external]
to cluster which permit external access to the cluster.
external access to the service.
OK
the master service.

The `ingressIPNetworkCIDR` is set by default to `172.46.0.0/16`. You could use the default if
your cluster environment is not already using this private range. However, if you want to use a different range.
"However, if you want to use a different range. "
What? Sentence fragment.
Should also mention that if you make an ipfailover for the range, it needs to be smaller.
OK
The selected port will be reported in the service's spec.ports[*].nodePort.

If you want to specify a port just place the value in the nodePort field. The value you specify must be in the configured
range for nodePorts.
Doc where that is set please.
OK
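On "where that is set": the nodePort range comes from the master configuration. A hedged sketch, assuming the usual *master-config.yaml* layout and the common default range:

```yaml
# master-config.yaml fragment (sketch); 30000-32767 is a common default.
kubernetesMasterConfig:
  servicesNodePortRange: "30000-32767"
```

A service can then pin a specific port inside that range via `spec.ports[*].nodePort`.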
ExternalIPs require elevated permissions to assign, and manual tracking of the
The user can assign a list of xref:../architecture/core_concepts/pods_and_services.adoc#service-externalip[externalIPs]
for which nodes in the cluster will also accept traffic for the service.
These IPs are not managed by {product-title}. The user is responsible for ensuring that traffic arrives at a node with this IP.
user => administrator
OK
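A minimal sketch of a service carrying administrator-assigned externalIPs (the name, selector, and address are illustrative, not taken from the docs):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-ext            # illustrative name
spec:
  selector:
    name: mysql              # illustrative pod label
  ports:
  - port: 3306
  externalIPs:
  - 192.168.120.10           # administrator-chosen IP; not managed by the platform
```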
f43d8bf to 55946d5
@knobunc I made the changes, PTAL
On 11/11/2016 04:53 PM, Ben Bennett wrote:
The pod configuration (at pod start) uses the environment variables to decide
which IP:ports are in use.
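For reference, the ipfailover pod reads its settings from environment variables that `oadm ipfailover` places on the generated deployment configuration. A sketch of that fragment (all values illustrative):

```yaml
# Fragment of the generated deployment configuration (sketch).
env:
- name: OPENSHIFT_HA_VIRTUAL_IPS
  value: "10.1.1.100-104"    # illustrative VIP range
- name: OPENSHIFT_HA_MONITOR_PORT
  value: "80"                # port whose liveness gates failover
- name: OPENSHIFT_HA_NETWORK_INTERFACE
  value: "eth0"              # interface carrying VRRP traffic
```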
An externally visible IP can be configured in one of several ways:

- Manually configure the ExternalIP with a known external IP address.
- Configure ExternalIP to a
- Manually configure the service's externalIPs with a ilist of known external IP addresses.
typo for ilist
Fixed
xref:../admin_guide/high_availability.adoc#admin-guide-high-availability[VIPs].
The VIPs are generated from the VRRP protocol.
xref:../admin_guide/high_availability.adoc#[High availability] improves the chances that an IP address
will remain active by assigning the virtual IP address it a host in a pool of suitable hosts. If the host goes
s/it/to ?
Changed wording
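To tie the wording to a concrete invocation, a hedged sketch of creating the failover set (label, VIP range, port, and replica count are illustrative):

```shell
# Run keepalived-based failover pods on the nodes labeled ha=edge;
# the VIPs float between those nodes via VRRP.
oadm ipfailover ipf-ha \
    --virtual-ips="10.1.1.100-104" \
    --selector="ha=edge" \
    --watch-port=80 \
    --replicas=2 \
    --create
```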
    --credentials="$KUBECONFIG" \
--service-account=ipfailover | ||
---- | ||
==== | ||
endif::[] | ||
+ | ||
[NOTE] | ||
Run the *geo-cache* service with a replica on each of the nodes. An example configuration |
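A sketch of the kind of configuration that note points at: a replication controller whose node selector matches the one given to the ipfailover configuration, so a replica lands on each failover node (labels and image are placeholders, not from the docs):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: geo-cache
spec:
  replicas: 2                # one per labeled node
  selector:
    name: geo-cache
  template:
    metadata:
      labels:
        name: geo-cache
    spec:
      nodeSelector:
        ha: geo-cache        # same label the ipfailover --selector uses
      containers:
      - name: geo-cache
        image: example/geo-cache   # placeholder image
```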
There are some format problems from this line to line 445.
I ran asciibinder and it looks OK to me. What problem are you seeing?
I just viewed the file on GitHub directly at https://github.com/pecameron/openshift-docs/blob/c973e88256f7e58ed00915893a771c94946b8379/admin_guide/high_availability.adoc
There are some format issues in the lines I commented on above; not sure if it is a GitHub rendering problem or the doc itself.
@bmeng I cloned openshift-docs and used asciibinder to format the text. The result looks correct to me.
I did not see the changes on the section
Openshift 3.4 feature
Generally revise the IP failover/High Availability discussion.
Include discussion of 224.0.0.18 multicast address.
Add information about oadm ipfailover options and environment variables.
See bugzilla 1381632
Code changes in openshift/origin PR 11327
Signed-off-by: Phil Cameron <pcameron@redhat.com>
55946d5 to c973e88
@knobunc This version is minor cleanups, typos, etc.
@knobunc PTAL, this is the doc for the 3.4 fix of bz1381632; what more do we need to do here?
LGTM. @openshift/team-documentation PTAL.
@pecameron can you say "bug 1381632 https://bugzilla.redhat.com/show_bug.cgi?id=1381632" in the commit message? The "bug 1381632" is what the bot wants, and the link makes it easier for humans.
@gaurav-nelson - PTAL.
@knobunc you have "Changes requested" set, can this go away?
|
|The interface name for the ip failover to use to send VRRP traffic. By default, eth0 is used.
Missing variable! Is this '--replicas'? @pecameron
@pecameron Closing this, all changes are included in PR #3281
Openshift 3.4 feature
Generally revise the IP failover/ High Availability discussion.
Add information about oadm ipfailover options and environment
variables.
See bugzilla 1381632
Code changes in openshift/origin PR 11327