
Commit 0d18acb
Merge pull request #1153 from kgaillot/doc
Minor updates, mostly to documentation
kgaillot committed Oct 3, 2016
2 parents 348a191 + 920fdcb commit 0d18acb
Showing 13 changed files with 94 additions and 51 deletions.
2 changes: 1 addition & 1 deletion GNUmakefile
@@ -70,7 +70,7 @@ SPECVERSION ?= $(COUNT)

# rpmbuild wrapper that translates "--with[out] FEATURE" into RPM macros
#
# Unfortunately, at least recent versions of rpm do no support mentioned
# Unfortunately, at least recent versions of rpm do not support mentioned
# switch. To work this around, we can emulate mechanism that rpm uses
# internally: unfold the flags into respective macro definitions:
#
11 changes: 7 additions & 4 deletions crmd/utils.c
@@ -1086,10 +1086,13 @@ update_attrd_helper(const char *host, const char *name, const char *value, const
{
gboolean rc;
int max = 5;
int attrd_opts = attrd_opt_none;

#if !HAVE_ATOMIC_ATTRD
/* Talk directly to cib for remote nodes if it's legacy attrd */
if (is_remote_node) {
#if HAVE_ATOMIC_ATTRD
attrd_opts |= attrd_opt_remote;
#else
/* Talk directly to cib for remote nodes if it's legacy attrd */
int rc;

/* host is required for updating a remote node */
@@ -1101,8 +1104,8 @@ update_attrd_helper(const char *host, const char *name, const char *value, const
log_attrd_error(host, name, value, is_remote_node, command, rc);
}
return;
}
#endif
}

if (attrd_ipc == NULL) {
attrd_ipc = crm_ipc_new(T_ATTRD, 0);
@@ -1118,7 +1121,7 @@ update_attrd_helper(const char *host, const char *name, const char *value, const
}

rc = attrd_update_delegate(attrd_ipc, command, host, name, value, XML_CIB_TAG_STATUS, NULL,
NULL, user_name, is_remote_node?attrd_opt_remote:attrd_opt_none);
NULL, user_name, attrd_opts);
if (rc == pcmk_ok) {
break;

28 changes: 16 additions & 12 deletions doc/Clusters_from_Scratch/en-US/Ch-Apache.txt
@@ -1,9 +1,11 @@
= Add Apache as a Cluster Service =
= Add Apache HTTP Server as a Cluster Service =

indexterm:[Apache HTTP Server]

Now that we have a basic but functional active/passive two-node cluster,
we're ready to add some real services. We're going to start with Apache
because it is a feature of many clusters and relatively simple to
configure.
we're ready to add some real services. We're going to start with
Apache HTTP Server because it is a feature of many clusters and relatively
simple to configure.

== Install Apache ==

@@ -46,6 +48,8 @@ END

== Enable the Apache status URL ==

indexterm:[Apache HTTP Server,/server-status]

In order to monitor the health of your Apache instance, and recover it if
it fails, the resource agent used by Pacemaker assumes the server-status
URL is available. On both nodes, enable the URL with:
@@ -54,21 +58,22 @@ URL is available. On both nodes, enable the URL with:
# cat <<-END >/etc/httpd/conf.d/status.conf
<Location /server-status>
SetHandler server-status
Order deny,allow
Deny from all
Allow from 127.0.0.1
Require local
</Location>
END
----

[NOTE]
======
If you are using a different operating system, server-status may already be
enabled or may be configurable in a different location.
enabled or may be configurable in a different location. If you are using
a version of Apache HTTP Server less than 2.4, the syntax will be different.
======

== Configure the Cluster ==

indexterm:[Apache HTTP Server,Apache resource configuration]

At this point, Apache is ready to go, and all that needs to be done is to
add it to the cluster. Let's call the resource WebSite. We need to use
an OCF resource script called apache in the heartbeat namespace.
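
For illustration (a sketch, not part of this commit's diff), the resulting
resource might look like this in raw CIB XML; the +configfile+ and +statusurl+
values are assumptions consistent with this chapter:

[source,XML]
-------
<primitive id="WebSite" class="ocf" provider="heartbeat" type="apache">
  <instance_attributes id="WebSite-params">
    <!-- assumed default httpd configuration file location -->
    <nvpair id="WebSite-configfile" name="configfile"
            value="/etc/httpd/conf/httpd.conf"/>
    <!-- the status URL enabled earlier in this chapter -->
    <nvpair id="WebSite-statusurl" name="statusurl"
            value="http://localhost/server-status"/>
  </instance_attributes>
  <operations>
    <op id="WebSite-monitor" name="monitor" interval="1min"/>
  </operations>
</primitive>
-------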
@@ -145,12 +150,11 @@ failed to start, then you've likely not enabled the status URL correctly.
You can check whether this is the problem by running:

....
wget -O - http://127.0.0.1/server-status
wget -O - http://localhost/server-status
....

If you see *Connection refused* in the output, then this is likely the
problem. Ensure that *Allow from 127.0.0.1* is present for
the *<Location /server-status>* block.
If you see *Not Found* or *Forbidden* in the output, then this is likely the
problem. Ensure that the *<Location /server-status>* block is correct.

======

1 change: 1 addition & 0 deletions doc/Pacemaker_Explained/en-US/Ch-Advanced-Options.txt
@@ -480,6 +480,7 @@ three (again assuming that +multiplier+ is set to 1000).
-------
=====

[[s-migrating-resources]]
=== Migrating Resources ===

Normally, when the cluster needs to move a resource, it fully restarts
45 changes: 28 additions & 17 deletions doc/Pacemaker_Explained/en-US/Ch-Constraints.txt
@@ -588,11 +588,11 @@ functionality. Depending on the tool, creating a set +A B+ may be equivalent to

=== Ordering Multiple Sets ===

The syntax can be expanded to allow
ordered sets of (un)ordered resources. In the example below, +A+
and +B+ can both start in parallel, as can +C+ and +D+,
however +C+ and +D+ can only start once _both_ +A+ _and_
+B+ are active.
The syntax can be expanded to allow sets of resources to be ordered relative to
each other, where the members of each individual set may be ordered or
unordered (controlled by the +sequential+ property). In the example below, +A+
and +B+ can both start in parallel, as can +C+ and +D+, however +C+ and +D+ can
only start once _both_ +A+ _and_ +B+ are active.
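
As a hedged sketch (not taken from this diff; ids are illustrative), such a
constraint could be written as two sets within one +rsc_order+:

[source,XML]
-------
<rsc_order id="order-two-sets">
  <!-- A and B are unordered relative to each other -->
  <resource_set id="ordered-set-AB" sequential="false">
    <resource_ref id="A"/>
    <resource_ref id="B"/>
  </resource_set>
  <!-- C and D start only after both A and B are active -->
  <resource_set id="ordered-set-CD" sequential="false">
    <resource_ref id="C"/>
    <resource_ref id="D"/>
  </resource_set>
</rsc_order>
-------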

.Ordered sets of unordered resources
======
@@ -753,11 +753,14 @@ functionality. Depending on the tool, creating a set +A B+ may be equivalent to
+A with B+, or +B with A+.
=========

This notation can also be used to tell the cluster
that a set of resources must all be located with a common peer, but
have no dependencies on each other. In this scenario, unlike the
previous, +B+ 'would' be allowed to remain active even if +A+ or +C+ (or
both) were inactive.
This notation can also be used to tell the cluster that sets of resources must
be colocated relative to each other, where the individual members of each set
may or may not depend on each other being active (controlled by the
+sequential+ property).

In this example, +A+, +B+, and +C+ will each be colocated with +D+.
+D+ must be active, but any of +A+, +B+, or +C+ may be inactive without
affecting any other resources.
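
A hedged sketch of that arrangement (ids illustrative; the chapter's own
example follows below). Because colocation between sets runs first-to-last,
the set containing +A+, +B+, and +C+ is listed first and the set containing
+D+ second:

[source,XML]
-------
<rsc_colocation id="coloc-common-peer" score="INFINITY">
  <!-- A, B, and C have no interdependencies among themselves -->
  <resource_set id="colocated-set-ABC" sequential="false">
    <resource_ref id="A"/>
    <resource_ref id="B"/>
    <resource_ref id="C"/>
  </resource_set>
  <!-- each member of the previous set is placed with D -->
  <resource_set id="colocated-set-D">
    <resource_ref id="D"/>
  </resource_set>
</rsc_colocation>
-------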

.Using colocated sets to specify a common peer
======
@@ -788,38 +791,46 @@ There is no inherent limit to the number and size of the sets used.
The only thing that matters is that in order for any member of one set
in the constraint to be active, all members of sets listed after it must also
be active (and naturally on the same node); and if a set has +sequential="true"+,
then in order for one member of that set to be active, all members listed after it
must also be active. You can even specify the role in which the members of a set
must be in using the set's +role+ attribute.
then in order for one member of that set to be active, all members listed
before it must also be active.

If desired, you can restrict the dependency to instances of multistate
resources that are in a specific role, using the set's +role+ property.

.A colocation chain where the members of the middle set have no interdependencies and the last has master status.
.Colocation chain in which the members of the middle set have no interdependencies, and the last listed set (which the cluster places first) is restricted to instances in master status.
======
[source,XML]
-------
<constraints>
<rsc_colocation id="coloc-1" score="INFINITY" >
<resource_set id="colocated-set-1" sequential="true">
<resource_ref id="A"/>
<resource_ref id="B"/>
<resource_ref id="A"/>
</resource_set>
<resource_set id="colocated-set-2" sequential="false">
<resource_ref id="C"/>
<resource_ref id="D"/>
<resource_ref id="E"/>
</resource_set>
<resource_set id="colocated-set-3" sequential="true" role="Master">
<resource_ref id="F"/>
<resource_ref id="G"/>
<resource_ref id="F"/>
</resource_set>
</rsc_colocation>
</constraints>
-------
======

.Visual representation of a colocation chain where the members of the middle set have no inter-dependencies
.Visual representation of the above example (resources to the left are placed first)
image::images/three-sets-complex.png["Colocation chain",width="16cm",height="9cm",align="center"]

[NOTE]
====
Pay close attention to the order in which resources and sets are listed.
While the colocation dependency for members of any one set is last-to-first,
the colocation dependency for multiple sets is first-to-last. In the above
example, +B+ is colocated with +A+, but +colocated-set-1+ is
colocated with +colocated-set-2+.

Unlike ordered sets, colocated sets do not use the +require-all+ option.
====
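
For contrast, a hedged sketch of an ordered set that does use +require-all+
(illustrative only, not from this diff): here +C+ may start as soon as either
+A+ or +B+ is active.

[source,XML]
-------
<rsc_order id="order-require-all">
  <!-- only one member of this set needs to be active... -->
  <resource_set id="set-A-or-B" sequential="false" require-all="false">
    <resource_ref id="A"/>
    <resource_ref id="B"/>
  </resource_set>
  <!-- ...before C may start -->
  <resource_set id="set-C">
    <resource_ref id="C"/>
  </resource_set>
</rsc_order>
-------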
7 changes: 6 additions & 1 deletion doc/Pacemaker_Explained/en-US/Ch-Resources.txt
@@ -394,6 +394,11 @@ indexterm:[Resource,Option,requires]
indexterm:[multiple-active,Resource Option]
indexterm:[Resource,Option,multiple-active]

|allow-migrate
|TRUE for ocf:pacemaker:remote resources, FALSE otherwise
|Whether the cluster should try to "live migrate" this resource when it needs
to be moved (see <<s-migrating-resources>>)

|remote-node
|
|The name of the remote-node this resource defines. This both enables the
@@ -660,7 +665,7 @@ indexterm:[Action,Property,on-fail]
indexterm:[Action,Property,enabled]

|record-pending
|
|FALSE
|If +true+, the intention to perform the operation is recorded so that
GUIs and CLI tools can indicate that an operation is in progress.
This is best set as an 'operation default' (see next section).
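
As a hedged illustration of the +allow-migrate+ meta-attribute from the table
above (resource name and values hypothetical, not part of this diff):

[source,XML]
-------
<primitive id="MyVM" class="ocf" provider="heartbeat" type="VirtualDomain">
  <instance_attributes id="MyVM-params">
    <nvpair id="MyVM-config" name="config"
            value="/etc/libvirt/qemu/MyVM.xml"/>
  </instance_attributes>
  <meta_attributes id="MyVM-meta">
    <!-- move this resource via live migration instead of a full restart -->
    <nvpair id="MyVM-allow-migrate" name="allow-migrate" value="true"/>
  </meta_attributes>
</primitive>
-------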
19 changes: 16 additions & 3 deletions doc/Pacemaker_Remote/en-US/Ch-Options.txt
@@ -7,9 +7,22 @@ the options available for configuring pacemaker_remote-based nodes.

== Resource Meta-Attributes for Guest Nodes ==

When configuring a virtual machine to use as a guest node, these are the
metadata options available to enable the resource as a guest node and
define its connection parameters.
When configuring a virtual machine as a guest node, the virtual machine is
created using one of the usual resource agents for that purpose (for example,
ocf:heartbeat:VirtualDomain or ocf:heartbeat:Xen), with additional metadata
parameters.

No restrictions are enforced on what agents may be used to create a guest node,
but obviously the agent must create a distinct environment capable of running
the pacemaker_remote daemon and cluster resources. An additional requirement is
that fencing the host running the guest node resource must be sufficient for
ensuring the guest node is stopped. This means, for example, that not all
hypervisors supported by VirtualDomain may be used to create guest nodes; if
the guest can survive the hypervisor being fenced, it may not be used as a
guest node.

Below are the metadata options available to enable a resource as a guest node
and define its connection parameters.

.Meta-attributes for configuring VM resources as guest nodes
[width="95%",cols="2m,1,4<",options="header",align="center"]
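
A hedged sketch of the overall pattern (names and paths hypothetical, not from
this diff): a VirtualDomain resource whose +remote-node+ meta-attribute turns
the VM into a guest node.

[source,XML]
-------
<primitive id="vm-guest1" class="ocf" provider="heartbeat" type="VirtualDomain">
  <instance_attributes id="vm-guest1-params">
    <nvpair id="vm-guest1-config" name="config"
            value="/etc/libvirt/qemu/guest1.xml"/>
  </instance_attributes>
  <meta_attributes id="vm-guest1-meta">
    <!-- the cluster manages "guest1" as a node running pacemaker_remote -->
    <nvpair id="vm-guest1-remote-node" name="remote-node" value="guest1"/>
  </meta_attributes>
</primitive>
-------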
2 changes: 1 addition & 1 deletion extra/rgmanager/README
@@ -1,6 +1,6 @@
# Legacy: Linux-cluster cluster.conf to Pacemaker CIB translation utility

This directory used contain several parts related to the procedure of
This directory used to contain several parts related to the procedure of
cluster stacks/configuration migration, in particular and as the directory
name suggests: from (CMAN+RGManager)-based stack of HA components to
the (Corosync+Pacemaker)-based one.
7 changes: 3 additions & 4 deletions include/crm/lrmd.h
@@ -228,10 +228,9 @@ typedef struct lrmd_event_data_s {
* parameters given to the operation */
void *params;

/* client node name associated with this conneciton.
* This is useful if multiple clients are being utilized by
* a single process. This name allows the actions to be matched
* to the proper client. */
/*! client node name associated with this connection
* (used to match actions to the proper client when there are multiple)
*/
const char *remote_nodename;

/*! exit failure reason string from resource agent operation */
4 changes: 2 additions & 2 deletions include/crm/services.h
@@ -104,8 +104,8 @@ enum ocf_exitcode {
/* 150-199 reserved for application use */
PCMK_OCF_CONNECTION_DIED = 189, /* Operation failure implied by disconnection of the LRM API to a local or remote node */

PCMK_OCF_DEGRADED = 190, /* Active reasource that is no longer 100% functional */
PCMK_OCF_DEGRADED_MASTER = 191, /* Promoted reasource that is no longer 100% functional */
PCMK_OCF_DEGRADED = 190, /* Active resource that is no longer 100% functional */
PCMK_OCF_DEGRADED_MASTER = 191, /* Promoted resource that is no longer 100% functional */

PCMK_OCF_EXEC_ERROR = 192, /* Generic problem invoking the agent */
PCMK_OCF_UNKNOWN = 193, /* State of the service is unknown - used for recording in-flight operations */
2 changes: 1 addition & 1 deletion lib/pengine/clone.c
@@ -318,7 +318,7 @@ configured_role_str(resource_t * rsc)
const char *target_role = g_hash_table_lookup(rsc->meta,
XML_RSC_ATTR_TARGET_ROLE);

if (target_role == NULL) {
if ((target_role == NULL) && rsc->children && rsc->children->data) {
target_role = g_hash_table_lookup(((resource_t*)rsc->children->data)->meta,
XML_RSC_ATTR_TARGET_ROLE);
}
15 changes: 11 additions & 4 deletions lib/pengine/unpack.c
@@ -500,19 +500,26 @@ handle_startup_fencing(pe_working_set_t *data_set, node_t *new_node)
{
static const char *blind_faith = NULL;
static gboolean unseen_are_unclean = TRUE;
static gboolean need_warning = TRUE;

if ((new_node->details->type == node_remote) && (new_node->details->remote_rsc == NULL)) {
/* ignore fencing remote-nodes that don't have a conneciton resource associated
* with them. This happens when remote-node entries get left in the nodes section
* after the connection resource is removed */
/* Ignore fencing for remote nodes that don't have a connection resource
* associated with them. This happens when remote node entries get left
* in the nodes section after the connection resource is removed.
*/
return;
}

blind_faith = pe_pref(data_set->config_hash, "startup-fencing");

if (crm_is_true(blind_faith) == FALSE) {
unseen_are_unclean = FALSE;
crm_warn("Blind faith: not fencing unseen nodes");
if (need_warning) {
crm_warn("Blind faith: not fencing unseen nodes");

/* Warn once per run, not per node and transition */
need_warning = FALSE;
}
}

if (is_set(data_set->flags, pe_flag_stonith_enabled) == FALSE
2 changes: 1 addition & 1 deletion lrmd/lrmd.c
@@ -1681,7 +1681,7 @@ cancel_op(const char *rsc_id, const char *action, int interval)

if (safe_str_eq(rsc->class, "stonith")) {
/* The service library does not handle stonith operations.
* We have to handle recurring stonith opereations ourselves. */
* We have to handle recurring stonith operations ourselves. */
for (gIter = rsc->recurring_ops; gIter != NULL; gIter = gIter->next) {
lrmd_cmd_t *cmd = gIter->data;

