configuring a Heartbeat service

In my last post about Heartbeat I gave an example of a script to start and stop a cluster service. In that post I omitted to mention that the script goes in the directory /usr/lib/ocf/resource.d/heartbeat.
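
For example, assuming the agent script from that post has been saved locally in a file called web (the name matters, since it becomes the resource type), installing it is just a matter of copying it into place and making it executable:

cp web /usr/lib/ocf/resource.d/heartbeat/web
chmod 755 /usr/lib/ocf/resource.d/heartbeat/web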

To actually use the script you need to write some XML configuration to tell Heartbeat which parameters should be passed to it via environment variables and which nodes may be candidates to run it.

In the example below the type of web means that the script /usr/lib/ocf/resource.d/heartbeat/web will be called to do the work. The id attributes are all arbitrary, but you want to decide on some sort of consistent naming scheme; I have decided to name web server instances web-X, where X is the IP address used to provide the service.

The nvpair element contains a configuration option that will be passed to the script as an environment variable. A name of ip means that the environment variable will be named OCF_RESKEY_ip. The naming of such variables is arbitrary and a script may take many of them. A well-written script (which, incidentally, does not describe the one in my previous blog post) will support a meta-data action that produces XML output describing all the variables it accepts. An example of this can be seen by running the command /usr/lib/ocf/resource.d/heartbeat/IPaddr2 meta-data.
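
As a rough sketch of what that entails (this is not the real IPaddr2 output, and the parameter descriptions are invented for illustration), the meta-data handler in the web script might look something like this:

# a minimal, illustrative meta-data action (not from the original post)
if [ "$1" = "meta-data" ]; then
  cat << END
<?xml version="1.0"?>
<resource-agent name="web">
  <version>1.0</version>
  <shortdesc lang="en">Simple web server instance</shortdesc>
  <parameters>
    <parameter name="ip" required="1">
      <shortdesc lang="en">IP address to serve on</shortdesc>
      <content type="string"/>
    </parameter>
  </parameters>
  <actions>
    <action name="start" timeout="20s"/>
    <action name="stop" timeout="10s"/>
    <action name="monitor" timeout="5s" interval="10s"/>
    <action name="meta-data" timeout="5s"/>
  </actions>
</resource-agent>
END
  exit 0
fi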

In the XML the resources section (as specified by --obj_type resources on the cibadmin command line) describes resources that the Heartbeat system will run, and the constraints section specifies a set of rules that determine where they will run. If the symmetric-cluster attribute in the cluster_property_set is set to true then resources are permitted to run anywhere; if it is set to false then a resource will not run anywhere unless there is a constraint specifying that it should, which means that there must be at least one constraint rule for every resource that is permitted to run.
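
As an aside, symmetric-cluster is itself just an nvpair in the crm_config section of the CIB; a fragment along these lines (the id values are made up for illustration) is what selects the stricter opt-in behaviour:

<cluster_property_set id="cib-bootstrap-options">
  <attributes>
    <!-- illustrative only: with this set, resources run nowhere unless a constraint allows them -->
    <nvpair id="cib-bootstrap-options-symmetric-cluster"
      name="symmetric-cluster" value="false"/>
  </attributes>
</cluster_property_set>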

In the script below I have constraint rules for the service giving both node-0 and node-1 a score of 9000 for running it.

In a future post I will describe the cluster_property_set and how it affects calculations of where resources should run.

#!/bin/bash

if [ "$1" = "start" ]; then
  cibadmin --obj_type resources --cib_create -p << END
  <primitive id="web-10.1.0.99" class="ocf" type="web" provider="heartbeat">
    <instance_attributes>
      <attributes>
        <nvpair name="ip" value="10.1.0.99"/>
      </attributes>
    </instance_attributes>
    <operations>
      <op id="web-10.1.0.99-resource-operation-stop" name="stop" timeout="10s"/>
      <op id="web-10.1.0.99-resource-operation-start" name="start" timeout="20s"/>
      <op id="web-10.1.0.99-resource-operation-monitor" name="monitor" timeout="5s"/>
    </operations>
  </primitive>
END
  sleep 1
  cibadmin --obj_type constraints --cib_create -p << END
  <rsc_location id="web-10.1.0.99-constraint" rsc="web1">
      <rule id="web-10.1.0.99-rule-node-0" score="9000">
        <expression id="web-10.1.0.99-rule-expression-node-0"
          attribute="#uname" operation="eq" value="node-0"/>
      </rule>
      <rule id="web-10.1.0.99-rule-node-1" score="9000">
        <expression id="web-10.1.0.99-rule-expression-node-1"
          attribute="#uname" operation="eq" value="node-1"/>
      </rule>
  </rsc_location>
END
else
  cibadmin -D --obj_type resources -X '<primitive id="web-10.1.0.99">'
  cibadmin -D --obj_type constraints -X '<rsc_location id="web-10.1.0.99-constraint">'
fi
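
The script above is an administrative helper rather than something Heartbeat runs itself; assuming it is saved as web-cib.sh (a name I have made up for illustration), adding and later removing the resource would look like this:

./web-cib.sh start   # create the primitive and its location constraint
./web-cib.sh stop    # delete them again (any argument other than start will do)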

Update: If you want to read more about Heartbeat then see the HA category of my blog.

1 comment to configuring a Heartbeat service

  • Tim Serong

    If you pass the --sync-call flag to cibadmin, it obviates the need for the sleep. This guarantees that the CIB update is actually complete prior to cibadmin exiting.

    Also, the rsc attribute in the rsc_location element is incorrect: it should be “web-10.1.0.99”, not “web1”.