
Revise high availability discussion #3051

Closed
wants to merge 1 commit into from

Conversation


@pecameron pecameron commented Oct 14, 2016

Openshift 3.4 feature

Generally revise the IP failover/High Availability discussion.
Add information about oadm ipfailover options and environment
variables.

See bugzilla 1381632
Code changes in openshift/origin PR 11327

@pecameron (Author)

@ramr @knobunc First draft of High Availability re-write PTAL

@knobunc (Contributor) left a comment:

This is confusing and needs major re-work. Your explanation muddles the kube service proxy and the ipfailover stuff.

We need to clearly document a few use cases:

  1. When the failover capability is for a service using hostPorts or hostNetworking so that the ports are bound directly on the nodes
  2. When you use externalIP to get traffic into a service

Today, our documentation covers the router case and a service using hostPort. In my mind, those are both case 1 above. I think we should simplify and unify the documentation for those two cases.

The other case is where a service has an externalIP, either assigned manually, or by @marun's recent addition. That would be case 2.

The difference between case 1 and 2 is that with case 1 it matters where the pods land because the kubernetes service proxy is not being used. So when you configure your rc or dc, you need to provide a selector, and the same selector must be used when you configure the ipfailover pods.

For case 2, the configuration is far simpler. It does not matter where the pods run in the cluster. And it does not matter how many there are. The kube service proxy handles that. All you have to do is pick an external IP for your replication set and then set up an ipfailover instance (which will need a selector to restrict it to the edge nodes) for that same IP address. When traffic lands on one of the nodes on the right IP address, the kube proxy will kick in and get the traffic to the right pod(s) inside the cluster.

The advantages of 1 are that you can have multiple ingress IP addresses, so you can load-balance if you need more bandwidth than a single node can support.

With 2, you can only have a single IP address, so a single node to handle it.

The downside to 1 is more complex set up, and the pods must run on the ingress nodes, so you get more load on those nodes.

With 2, the edge node just forwards traffic by nat rules and the compute pods can be anywhere in the cluster.
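
For illustration, a minimal sketch of the two cases, assuming the oadm flags quoted elsewhere in this PR; the labels, service names, and addresses are invented:

    # Case 1: ports bound directly on the host -- ipfailover and the router pods
    # share the same node selector and replica count ("ha-router=primary" is hypothetical).
    oadm router ha-router --replicas=2 --selector="ha-router=primary" --service-account=router
    oadm ipfailover --replicas=2 --selector="ha-router=primary" \
        --virtual-ips="10.1.1.100-101" --watch-port=80 --create

    # Case 2: traffic enters through an externalIP on a service; the backing pods can
    # run anywhere, and the kube service proxy forwards traffic from whichever node holds the VIP.
    oadm ipfailover --replicas=2 --selector="zone=edge" \
        --virtual-ips="10.1.1.102" --watch-port=80 --create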


IMPORTANT: Even though
a service is highly available, performance can still be affected.
IP Failover (IPF) provides high availability for a service in the cluster.
Contributor:

This whole section is wrong. The original was explaining the Kubernetes service proxy that seamlessly load-balances and provides IPF throughout the cluster.

* I want external users to have a single IP address to access my service which may be running on multiple nodes.
* I care that when the node that is servicing the IP address goes down another node is automatically chosen to service the IP address.
* I don't want to manually update the upstream network configuration (e.g., DNS) when nodes change.
* I want the service to be available as long as at least one node is available.
Contributor:

This feels wrong. The service proxy already provides that. Failover is entirely about external IP addresses.

|--watch-port
|`*OPENSHIFT_HA_MONITOR_PORT*`
|80
|This port must be open for the service to be running.
Contributor:

Can we define this better?

"The ipfailover pod will test that it can open a TCP connection to this port. If it can, then the service is considered to be running."

And, can we set this to "" to turn off the monitoring?
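
(Editorial sketch of the check being described, with a made-up node address; the flag itself is the one in the table above.)

    # keepalived in the ipfailover pod periodically tries a TCP connect to the watch
    # port; roughly equivalent to this manual probe against a node holding the VIP:
    timeout 2 bash -c 'cat < /dev/null > /dev/tcp/10.1.1.10/80' && echo "service considered running"

    # the port is chosen when the deployment config is created:
    oadm ipfailover --virtual-ips="10.1.1.100" --watch-port=80 --create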

Author:

Changed the wording. Don't think we can turn off monitoring with current code.

|--interface
|`*OPENSHIFT_HA_NETWORK_INTERFACE*`
|
|By default all NICs are watched.
Contributor:

"The interface name for the ip failover to use to send VRRP traffic. By default, all interfaces are used."

|--virtual-ips
|`*OPENSHIFT_HA_VIRTUAL_IPS*`
|
|E.g., 1.2.3.4-6,1.2.3.9
Contributor:

The list of IP address ranges to replicate. This must be provided.

Author:

Changed wording.

|--vrrp-id-offset
|`*OPENSHIFT_HA_VRRP_ID_OFFSET*`
|0
|When multiple *ipfailover* deployments are used each must have a unique offset.
Contributor:

Worse than that, it's used as a base, but if you look at config_generators.sh it is incremented once per IP passed... so if your first failover specifies 4 IPs, you need to set the offset to 4 in the next failover config. SO... I'd recommend incrementing by 10s or more so that people can add IPs to existing configs without messing with the world. This needs to be explained clearly.
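
A sketch of the spacing recommendation above (config names, VIPs, and offsets are illustrative only):

    # first ipfailover config: 4 VIPs starting at offset 0, so ids 0-3 are consumed
    oadm ipfailover ipf-one --virtual-ips="10.1.1.1-4" --vrrp-id-offset=0 --create

    # second config: start at 10 rather than 4, leaving room to add VIPs to the
    # first config later without renumbering anything
    oadm ipfailover ipf-two --virtual-ips="10.1.2.1-2" --vrrp-id-offset=10 --create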

Author:

I added a discussion of VRRP-ID-OFFSET

|--iptables-chain
|`*OPENSHIFT_HA_IPTABLES_CHAIN*`
|INPUT
|If the chain does not exist it is not created.
Contributor:

"The name of the iptables chain to automatically add an iptables rule to allow the VRRP traffic on. If the value is not set, an iptables rule will not be added."

Author:

Changed wording.
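
(For context on the chain discussed above: a manually added rule would look roughly like the following; the exact rule the image generates may differ, and INPUT is just the default chain named in the table.)

    # allow VRRP (IP protocol 112, multicast 224.0.0.18) in on the chosen chain
    iptables -A INPUT -p 112 -d 224.0.0.18/32 -j ACCEPT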

@pecameron (Author)

@knobunc @JacobTanenbaum
Ben, I took another pass through this. Hopefully this is moving in a good direction. I don't understand your case 2 so there is nothing about it yet. We need to talk this over. PTAL

@pecameron pecameron force-pushed the ipfailover branch 3 times, most recently from a697a57 to 65aaf5c Compare October 19, 2016 13:03
Virtual IP (VIP) addresses on a set of hosts. Each VIP is serviced by, at most, one of the hosts at
any point in time. *Keepalived* uses the VRRP protocol to determine which host in the set of hosts
will service each VIP address. If a host goes down or the service that it is watching goes down the
VIP is serviced from another host in the set. As long as a single host is availble the VIP is serviced.
Comment:

s/it/keepalived/

Comment:

And a typo available

Author:

Changed.

For some IP-based traffic services, virtual IP addresses (VIPs) should always be
serviced for as long as a single instance is available. This simplifies the
operational overhead and handles failure cases gracefully.
{product-title} supports an ipfailover DC created by the `oadm ipfailover` command. The ipfailover DC
Comment:

maybe call out deployment config (DC) in this section?

Author:

Changed. I confused deployment config with deployment controller. I fixed it throughout the doc.

a service is highly available, performance can still be affected.
When using VIPs to access a service, such as a router, the service should be running on all of the
nodes that are running the ipfailover pods. This is done by using the same selector and replication count
that is used by the ipfailover DC.
Comment:

Well, you could have a pool of 4 hosts labelled (zone=ipf) - run ipfailover on those 4 hosts (--replicas=4 and selector zone=ipf). But you do not need the service replication count to match - as an example have only say 3 routers (replicas=3 and selector zone=ipf). That would still work and have a node available/standby in case a node fails.
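
In command form, that example would be roughly (node names and VIPs are hypothetical; the counts and the zone=ipf label are from the comment):

    # label the four-node failover pool
    for node in node1 node2 node3 node4; do oc label node "$node" zone=ipf; done

    # ipfailover on all four labelled nodes
    oadm ipfailover --replicas=4 --selector="zone=ipf" \
        --virtual-ips="10.1.1.100-102" --create

    # only three routers on the same pool; the fourth node is effectively a standby
    oadm router --replicas=3 --selector="zone=ipf" --service-account=router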

Author:

@ramr in this case ipfailover will notice that one node doesn't have port 80 open and it won't participate in negotiation. So, yes it will work, but it seems like a mismatch in the configuration. I also used should, not must. What do you think? I will change the wording if you feel that this is important.

Comment:

Think we should reword it - to me, it read that I need to run the service on all the nodes running ipfailover pods.
Maybe When using VIPs to access a service, such as a router, the service needs to be running on one or more nodes that run the ipfailover pods or something to that effect.

Author:

@ramr I reworded it. PTAL

Using IP failover involves switching IP addresses to a redundant or stand-by
set of nodes on failure conditions.
IP Failover is configured using the `oadm ipfailover` command with suitable options.
`oadm ipfailover` creates a DC (Deployment Controller) based on provided options.
Comment:

DC (Deployment Config)

Comment:

Maybe An IP Failover configuration is created using the oadm ipfailover command. and add a link to the options and then maybe say Internally this command creates a deployment config using the provided options?

Author:

Changed

VIP in the VIP list is assigned a value of 0. The next gets the value 1, and so forth. When there are
multiple ipfailover DCs the `--vrrp-id-offset=<n>` option is used to specify the value of the first VRRP
Id Offset of the first VIP in the list of configured VIPs. It is up to the admin to make sure the lists
do not collide with each other. VRRP Id is in the range 0..255.
Comment:

Aah, this makes it sound like the vrrp-id offset is on the DC. It's really the id of the keepalived peers.

As an example: if you create an ipfailover config at an offset of 0 and have a replica count of 4 (--replicas=4) and there are 4 nodes/pods that run keepalived, then the ids 0-3 would be used up.
The next/second ipfailover configuration should use a vrrp-id offset of at least 4. The id offsets exist to ensure that there is no collision across groups (so replica count is important as well here). Should mention that - this would need some rework.

Author:

@ramr I may have this wrong. I think that there is a unique vrrp-id-offset for each VIP that keepalived is servicing. For a given DC the numbers are sequential from the --vrrp-id-offset for the number of defined VIPs. E.g., if --vrrp-id-offset=7 and there are 3 VIPs in the DC, the vrrp-ids will be 7, 8 and 9. This forms a range of ids for the DC. When there are multiple DCs the ranges must not overlap. The available numbers range from 1..255. The --vrrp-id-offset

I will reword this to be clearer.

----
====

. Next is is the service account. You can use *ipfailover* or when using
Comment:

Next is is sounds a bit weird - maybe Next create/specify the service account?

Author:

Changed.

@ramr

ramr commented Oct 19, 2016

I'll take another pass at this sometime today.

@pecameron (Author)

@ramr I made the changes you suggested. I really appreciate the time you take reviewing this and making comments. PTAL

@pecameron (Author)

@knobunc Are there more changes you would like in this?

At this time of writing, ipfailover is not compatible with cloud
Make sure that the service that relies on a VIP is running on every node in the
set of IP Failover nodes. You never know which of the nodes will be servicing the
VIP at any point in time.
Comment:

Yeah, this recommendation does have ramifications - see above.
There are times when you want some hot standby nodes to scale up/down based on load/failure.

to automatically add it to the specified chain on each host or you can manually add it. If you manually add it,
please also ensure that the rules persist after a system restart.

Be careful since every *keepalived* daemon uses the VRRP protocol over multicast 224.0.0.18 to negoiate witih its
Comment:

typo witih - with

Author:

fixed

|--interface
|`*OPENSHIFT_HA_NETWORK_INTERFACE*`
|
|The interface name for the ip failover to use to send VRRP traffic. By default, all interfaces are used.
Comment:

It uses eth0 by default.

Author:

Fixed

--service-account=ipfailover --create
----
====
endif::[]
Comment:

Any reason why we removed the example here?

Author:

I covered that in the above examples. I can add it back in if you like.

Author:

I also went into some detail on how the set of vips and vrrp-id-offset relate.

@pecameron pecameron force-pushed the ipfailover branch 3 times, most recently from edc6d93 to efb9fad Compare October 28, 2016 14:51
@knobunc (Contributor) left a comment:

Getting better.

We need to document the Service ExternalIP or Service NodePort cases too.

will service each VIP address. If a host goes down or the service that *Keepalived* is watching goes down the
VIP is serviced from another host in the set. As long as a single host is available the VIP is serviced.

{product-title} supports an ipfailover Deployment Config (DC) created by the `oadm ipfailover` command. The ipfailover DC
Contributor:

They prefer us to say Deployment Config rather than DC.

Author:

Changed throughout the file.

VIP will be serviced as long as at least one node with a running service is available. As part of normal
operation the VIP may be assigned to any of the nodes at any point in time.

IMPORTANT: Even though a service VIP is highly available, performance can still be affected.
Contributor:

affected by what? We should say by load or something. And that HA does not imply load balancing.

Author:

OK

@pecameron (Author)

@knobunc I made the changes you suggested. The ExternalIP and NodePort part is rather thin. Is there a line of discussion you are thinking about for this? PTAL

@pecameron (Author)

@knobunc I added the prose on removing the iptables rule. PTAL


When using VIPs to access a service, such as a router, the service should be running on all of the
nodes that are running the ipfailover pods. This permits all ipfailover nodes to possibly service the VIPs and
all service nodes to possibly be serviced. A miss match will result in either some ipfailover nodes never servicing
Contributor:

typo: mismatch

But the rest of this paragraph doesn't really make sense unless you are talking about a pod.

Author:

@knobunc rewrote PP

nodes that are running the ipfailover pods. This permits all ipfailover nodes to possibly service the VIPs and
all service nodes to possibly be serviced. A miss match will result in either some ipfailover nodes never servicing
the VIPs or some services to never receive traffic. Using the same selector and replication count for both
ipfailover and the service will eliminate the missmatch.
Contributor:

mismatch

Author:

Fixed

the VIPs or some services to never receive traffic. Using the same selector and replication count for both
ipfailover and the service will eliminate the missmatch.

The nodes in the high availibility set can come and go. The service on the node can start and stop.
Contributor:

pod rather than service

Author:

Dropped PP

ipfailover and the service will eliminate the missmatch.

The nodes in the high availibility set can come and go. The service on the node can start and stop.
When a node or sevice becomes unavailable *keepalived* will select another node to service the VIP. The
Contributor:

"node or pod"

keepalived will float the VIP to another node that has a running instance of the pod.

Author:

@knobunc The keepalived docs talk about servicing a VIP on, at most, one host and floating it on the rest.


The nodes in the high availibility set can come and go. The service on the node can start and stop.
When a node or sevice becomes unavailable *keepalived* will select another node to service the VIP. The
VIP will be serviced as long as at least one node with a running service is available. As part of normal
Contributor:

running pod

Author:

@knobunc reworded this

@knobunc (Contributor) left a comment:

I really want this to clearly document the three use cases:

  1. You are running pods with hostNetworking = true (e.g. the usual configuration of the router). Here you want to make sure that the pods land on the same nodes as the ones with keepalived, and that the number of replicas is the same. You run the check against the port that the pod has opened on the host.

  2. Expose a single service. Here it doesn't matter where the pods land. You can have as many keepalive instances as you want and float one or more VIPs for the service (e.g. if you want to run two VIPs for double the bandwidth and have an external load balancer or advertise multiple addresses in DNS). You must set a NodePort or (one or more) ExternalIPs in the service. The ExternalIPs must match the VIPs being used. You can set up the check to go against the service port.

  3. Expose the allocated LoadBalancer services. Here you want to roll out a bunch of VIPs across your edge nodes automatically. You don't really care about the check, but since (I think, let's check) there is no way to disable it, we need a canary service. So document that they should set up a DC running hello-openshift with a few replicas, and then have a service with a nodeport set. Then use that nodeport as the check port. Or find a tcp port that openshift is always listening on on a node and just use that as the check port. (A sketch of the canary variant follows below.)
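
A sketch of the case 3 canary, under the assumption that the check cannot be disabled; the image, service name, and addresses are illustrative:

    # small hello-openshift canary; its pods can land anywhere in the cluster
    oc new-app openshift/hello-openshift
    oc scale dc/hello-openshift --replicas=2

    # give it a NodePort so every node listens on the same allocated port
    oc patch svc/hello-openshift -p '{"spec":{"type":"NodePort"}}'
    oc get svc hello-openshift -o yaml | grep nodePort

    # use that nodePort as the ipfailover check port for the rolled-out VIPs
    oadm ipfailover --selector="zone=edge" --replicas=3 \
        --virtual-ips="10.1.1.200-210" --watch-port=32100 --create   # 32100 = the nodePort reported above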

@pecameron (Author)

@knobunc There are 4 files here. I tried to pull together the various snippets of the discussion and link them together. PTAL

For some IP-based traffic services, virtual IP addresses (VIPs) should always be
serviced for as long as a single instance is available. This simplifies the
operational overhead and handles failure cases gracefully.
The VIPs must be routable from outside the cluster and not visible from inside the cluster.
Contributor:

"and not visible from inside the cluster"

really? What's the reason for that?

Author:

We talked about this and I forgot to delete it...

IP Failover uses link:http://www.keepalived.org/[*Keepalived*] to host a set of externally accessible
Virtual IP (VIP) addresses on a set of hosts. Each VIP is serviced by, at most, one of the hosts at
any point in time. *Keepalived* uses the VRRP protocol to determine which host in the set of hosts
will service each VIP address. If a host goes down or the service that *Keepalived* is watching goes down the
Contributor:

down, or

down, the

or the service => or the port

goes down => becomes unreachable

Author:

fixed

* I want my cluster to be assigned a set of VIPs that the cluster manages and migrates (with zero or minimal downtime) on failure conditions, and I should not be required to perform any manual interactions to update the upstream "discovery" sources (e.g., DNS). The cluster should service all the assigned VIPs when at least a single node is available, despite the current available resources not being sufficient to reach the desired state.
{product-title} supports an ipfailover Deployment Config created by the `oadm ipfailover` command. The ipfailover
Deployment Config specifies the set of VIP addresses and the set of nodes on which to service them. The cluster
can have multiple ipfailover Deployment Configs, each servicing its own set of unique VIP addresses. Each pod in the
Contributor:

"each servicing" => "each with" perhaps?

"Each pod created by the ..." perhaps?

Author:

Went with "managing". Rewrote last sentence in PP

When using VIPs to access a service any of the nodes can be in the ipfailover set of nodes, since the service is
reachable on all nodes (no matter where the application pod really is running). Any of the ipfailover nodes can
be master at any time. The service must configure the externalIPs to be the set of VIPs or the service can use a
NodePort and use any VIPs that route to the nodes. The ipfailover watch port can be set to the service port.
Contributor:

The service must configure the externalIPs to be the set of VIPs or the service can use a NodePort and use any VIPs that route to the nodes. The ipfailover watch port can be set to the service port.

=>

The service must be exposed using either externalIPs in the service definition, in which case the set of VIPs used must match the externalIPs; or the service can be exposed using a NodePort, and since those are exposed on all nodes, the administrator can use any VIPs that route to the nodes. The ipfailover watch port must be set to the service port for externalIP or the NodePort (or the check can be disabled since all nodes in the cluster will work).
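
Concretely (a sketch; the service name and addresses are invented, and the externalIPs must match the VIPs handed to oadm ipfailover):

    # externalIP variant: put the VIPs on the service and float the same addresses
    oc patch svc/myservice -p '{"spec":{"externalIPs":["10.1.1.100","10.1.1.101"]}}'
    oadm ipfailover --selector="zone=edge" --replicas=2 \
        --virtual-ips="10.1.1.100-101" --watch-port=80 --create

    # NodePort variant: the service answers on the allocated port on every node,
    # so any VIPs that route to the nodes will do; check the nodePort instead
    oc patch svc/myservice -p '{"spec":{"type":"NodePort"}}'
    oc get svc myservice -o yaml | grep nodePort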

Author:

Reworded it.


IMPORTANT: Even though a service VIP is highly available, performance can still be affected. *keepalived*
makes sure that each of the VIPs is served by some node in the configuration, and several VIPs may end up
on the same node even when other nodes have none. Also, high availability does not do load balancing.
Contributor:

Also, high availability does not do load balancing.

High availability does not inherently do load balancing. But you can expose multiple IPs for the same resource and load balance them outside the cluster, or serve all of the IPs in DNS.

Author:

@knobunc ... and ipfailover will happily master them all on the same node.

Contributor:

So? They may spread too... the default config seems to want to spread them out. Or am I mistaken?

address.

The default service IP addresses are from the {product-title} internal network and are used to permit pods to
access each other. The service can be assigned IP addresses that are
Contributor:

The service can be assigned additional IP addresses

Author:

OK

The default service IP addresses are from the {product-title} internal network and are used to permit pods to
access each other. The service can be assigned IP addresses that are
xref:../../dev_guide/getting_traffic_into_cluster.adoc#using-externalIP[external]
to cluster which permit external access to the cluster.
Contributor:

external access to the service.

Author:

OK

the master service.

The `ingressIPNetworkCIDR` is set by default to `172.46.0.0/16`. You could use the default if
your cluster environment is not already using this private range. However, if you want to use a different range.
Contributor:

"However, if you want to use a different range. "

What? Sentence fragment.

Contributor:

Should also mention that if you make an ipfailover for the range, it needs to be smaller.
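
(For reference, a sketch of where the range lives; the file path is the usual one and the value shown is the default quoted above. An ipfailover configuration would then cover a smaller slice of that range.)

    # /etc/origin/master/master-config.yaml (assumed path):
    #   networkConfig:
    #     ingressIPNetworkCIDR: 172.46.0.0/16
    #
    # a failover config for part of that range:
    oadm ipfailover --virtual-ips="172.46.0.1-10" --create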

Author:

OK

The selected port will be reported in the service's spec.ports[*].nodePort.

If you want to specify a port just place the value in the nodePort field. The value you specify must be in the configured
range for nodePorts.
Contributor:

Doc where that is set please.
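
(Editorial sketch of where the range is configured, assuming the usual master-config field and default range.)

    # /etc/origin/master/master-config.yaml (assumed path):
    #   kubernetesMasterConfig:
    #     servicesNodePortRange: "30000-32767"
    #
    # a service then either gets a port allocated from that range automatically,
    # or pins one explicitly in spec.ports[*].nodePort; inspect it with:
    oc get svc myservice -o yaml | grep nodePort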

Author:

OK

ExternalIPs require elevated permissions to assign, and manual tracking of the
The user can assign a list of xref:../architecture/core_concepts/pods_and_services.adoc#service-externalip[externalIPs]
for which nodes in the cluster will also accept traffic for the service.
These IPs are not managed by {product-title}. The user is responsible for ensuring that traffic arrives at a node with this IP.
Contributor:

user => administrator

Author:

OK

@pecameron (Author)

@knobunc I made the changes, PTAL

@pecameron (Author)

On 11/11/2016 04:53 PM, Ben Bennett wrote:

@knobunc commented on this pull request.


In admin_guide/high_availability.adoc
#3051:

EOF
done;


====

-. Depending on your environment policies, you can either reuse the router
-service account created previously or create a new ipfailover service account.
+[[options-environment-variables]]
+=== Command Line Options and Environment Variables
+
+.Command Line Options and Environment Variables
+[cols="1a,3a,1a,4a",options="header"]
+|===
+
+| Option | Variable Name | Default | Notes

Isn't the iptables chain set by oadm ipfailover to a default of INPUT?
But the pod does not set a default, and if there is no value sent, does
not set the iptables rule?



oadm ipfailover creates an environment variable in the DC. The default
when --iptables-chain is missing is INPUT. If --iptables-chain is
present, whatever is there is set in the variable (including "").

The pod configuration (at pod start) uses the environment variable to decide
what to do.
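
In other words (a sketch; only the chain value varies):

    # default: the deployment config gets OPENSHIFT_HA_IPTABLES_CHAIN=INPUT
    oadm ipfailover --virtual-ips="10.1.1.100" --create

    # explicit empty value: the variable is set to "" and no iptables rule is added
    oadm ipfailover --virtual-ips="10.1.1.100" --iptables-chain="" --create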

IP:ports that are in use.

An externally visible IP can be configured in one of several ways:

- Manually configure the ExternalIP with a known external IP address.
- Configure ExternalIP to a
- Manually configure the service's externalIPs with a ilist of known external IP addresses.
Comment:

typo for ilist

Author:

Fixed

xref:../admin_guide/high_availability.adoc#admin-guide-high-availability[VIPs].
The VIPs are generated from the VRRP protocol.
xref:../admin_guide/high_availability.adoc#[High availability] improves the chances that an IP address
will remain active by assigning the virtual IP address it a host in a pool of suitable hosts. If the host goes
Comment:

s/it/to ?

Author:

Changed wording

--credentials="$KUBECONFIG" \
--service-account=ipfailover
----
====
endif::[]
+
[NOTE]
Run the *geo-cache* service with a replica on each of the nodes. An example configuration
Comment:

There are some format problems from this line to line 445.

Author:

I ran asciibinder and it looks OK to me. What problem are you seeing?

Comment:

I just viewed the file in GitHub directly at this link: https://github.com/pecameron/openshift-docs/blob/c973e88256f7e58ed00915893a771c94946b8379/admin_guide/high_availability.adoc
There are some format issues in the lines I commented on above; not sure if it is a GitHub problem or the doc itself.

Author:

@bmeng I clone openshift-docs and use asciibinder to format the text. The result looks correct to me.

@bmeng

bmeng commented Nov 14, 2016

I did not see any changes in the section Configuring a Highly-available Service to guide the user through setting up an HA service. Should we describe it clearly under a single section, just like what we did before here: https://docs.openshift.org/latest/admin_guide/high_availability.html#ip-failover?

@pecameron (Author)

@bmeng @knobunc asked me to combine the two descriptions since most was in common.

Openshift 3.4 feature

Generally revise the IP failover/High Availability discussion.
Include discussion of 224.0.0.18 multicast address.
Add information about oadm ipfailover options and environment
variables.

See bugzilla 1381632
Code changes in openshift/origin PR 11327

Signed-off-by: Phil Cameron <pcameron@redhat.com>
@pecameron (Author)

@knobunc This version is minor cleanups, typos, etc.

@pecameron (Author)

@knobunc PTAL, this is the doc for 3.4 fix bz1381632, what more do we need to do here?

@knobunc (Contributor)

knobunc commented Nov 17, 2016

LGTM @openshift/team-documentation PTAL

@pecameron can you say "bug 1381632 https://bugzilla.redhat.com/show_bug.cgi?id=1381632" in the commit message. The "bug 1381632" is what the bot wants, and the link makes it easier for humans.

@vikram-redhat (Contributor)

@gaurav-nelson - PTAL.

@pecameron (Author)

@knobunc you have "Changes requested" set, can this go away?

|
|The interface name for the ip failover to use to send VRRP traffic. By default, eth0 is used.

|
Contributor:

Missing variable! Is this '--replicas'? @pecameron

@gaurav-nelson (Contributor)

@pecameron Closing this, all changes are included in PR #3281
