dlm_controld - daemon that configures dlm according to cluster events
The dlm lives in the kernel, and the cluster infrastructure (corosync
membership and group management) lives in user space. The dlm in the
kernel needs to adjust/recover for certain cluster events. It's the job
of dlm_controld to receive these events and reconfigure the kernel dlm as
needed. dlm_controld controls and configures the dlm through sysfs and
configfs files that are considered dlm-internal interfaces.
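These dlm-internal files can be inspected directly. A minimal sketch, assuming configfs is mounted at /sys/kernel/config and the dlm kernel module is loaded (the directories do not exist otherwise):

```shell
# List the dlm-internal interfaces that dlm_controld writes to.
# Directories that are absent (module not loaded) are skipped
# rather than treated as errors.
for d in /sys/kernel/config/dlm/cluster/comms \
         /sys/kernel/config/dlm/cluster/spaces \
         /sys/kernel/dlm
do
    [ -d "$d" ] && ls "$d"
done
exit 0
```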
The cman init script usually starts the dlm_controld daemon.
Command line options override a corresponding setting in cluster.conf.
Enable debugging to stderr and don't fork.
Enable debugging to log file.
Enable kernel dlm debugging messages.
groupd compatibility mode, 0 off, 1 on.
Enable (1) or disable (0) fencing recovery dependency.
Enable (1) or disable (0) quorum recovery dependency.
Enable (1) or disable (0) deadlock detection code.
Enable (1) or disable (0) plock code for cluster fs.
Limit the rate of plock operations, 0 for no limit.
Enable (1) or disable (0) plock ownership.
Plock ownership drop resources time (milliseconds).
Plock ownership drop resources count.
Plock ownership drop resources age (milliseconds).
Enable plock debugging messages (can produce excessive output).
Print a help message describing available options, then exit.
Print program version information, then exit.
The cluster.conf file is usually located at /etc/cluster/cluster.conf. It is
not read directly.
Other cluster components load the contents into memory, and the values are
accessed through the libccs library.
Configuration options for dlm (kernel) and dlm_controld are added to the
<dlm /> section of cluster.conf, within the top level <cluster> section.
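For example, the dlm section sits directly inside the cluster section (the cluster name, version, and attribute shown are illustrative):

```xml
<?xml version="1.0"?>
<cluster name="alpha" config_version="1">
  <!-- dlm options go in their own section at the top cluster level -->
  <dlm log_debug="0"/>
</cluster>
```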
The dlm network protocol can be set to tcp, sctp or detect, which selects
tcp or sctp based on the corosync rrp_mode configuration (redundant ring
protocol). An rrp_mode of "none" results in tcp. Default detect.
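A sketch of the setting, assuming the attribute is named protocol:

```xml
<dlm protocol="detect"/>
```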
After a lock waits longer than the configured number of centiseconds, the
dlm will emit a warning via netlink. This only applies to lockspaces
created with the DLM_LSFL_TIMEWARN flag, and is used for deadlock
detection. Default 500 (5 seconds).
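A sketch of the setting, assuming the attribute is named timewarn and takes centiseconds:

```xml
<dlm timewarn="500"/>
```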
DLM kernel debug messages can be enabled by setting the corresponding dlm
option to 1. Default 0.
The lock directory weight can be specified on the clusternode lines.
Weights would usually be used in the lock server configurations shown
below instead.
Enable (1) or disable (0) plock debugging messages (can produce excessive
output). Default 0.
Disabling resource directory
Lockspaces usually use a resource directory to keep track of which node is
the master of each resource. The dlm can operate without the resource
directory, though, by statically assigning the master of a resource using
a hash of the resource name. To enable, set the per-lockspace nodir
option to 1:
<lockspace name="foo" nodir="1">
The nodir setting can be combined with node weights to create a
configuration where select node(s) are the master of all resources/locks.
These master nodes can be viewed as "lock servers" for the other nodes.
Lock management will be partitioned among the available masters. There
can be any number of masters defined. The designated master nodes will
master all resources/locks (according to the resource name hash). When no
masters are members of the lockspace, the nodes revert to the common
fully-distributed configuration. Recovery is faster, with little
disruption, when a non-master node joins/leaves.
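A sketch of such a lock server configuration, assuming master nodes are named by per-lockspace master entries (node names are hypothetical):

```xml
<dlm>
  <lockspace name="foo" nodir="1">
    <master name="node01"/>
    <master name="node02"/>
  </lockspace>
</dlm>
```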
There is no special mode in the dlm for this lock server configuration;
it's just a natural consequence of combining the "nodir" option with node
weights. When a lockspace has master nodes defined, the masters have a
default weight of 1 and all non-master nodes have weight 0. An explicit
weight can also be assigned to master nodes, e.g.
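a configuration along these lines (again assuming per-lockspace master entries; node names and weights are illustrative):

```xml
<dlm>
  <lockspace name="foo" nodir="1">
    <master name="node01" weight="2"/>
    <master name="node02" weight="1"/>
  </lockspace>
</dlm>
```

With these weights, node01 would master roughly twice as many resources as node02, per the resource name hash.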