A service script for Heartbeat needs to support at least three operations: start, stop, and status. The operations return 0 on success, 7 on failure (which in the case of the monitor script means that the service is not running), and any other value to indicate that something has gone wrong.
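Those return codes can be sketched as a minimal shell script. This is an illustrative skeleton, not a script from the post: the daemon name, pidfile path, and start/stop commands are hypothetical placeholders.

```shell
#!/bin/sh
# Minimal sketch of a Heartbeat-style service script.
# "mydaemon" and its pidfile are illustrative placeholders.
PIDFILE="${PIDFILE:-/tmp/mydaemon.pid}"
DAEMON="${DAEMON:-/usr/sbin/mydaemon}"

do_start() {
    "$DAEMON"                 # returns 0 if the daemon started successfully
}

do_stop() {
    [ -f "$PIDFILE" ] && kill "$(cat "$PIDFILE")" 2>/dev/null
    rm -f "$PIDFILE"
    return 0                  # stopping an already-stopped service is success
}

do_status() {
    # 0 = running; 7 = not running (the value a monitor check reports
    # for a cleanly stopped service)
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        return 0
    fi
    return 7
}

case "$1" in
    start)  do_start ;;
    stop)   do_stop ;;
    status) do_status ;;
    "")     : ;;              # no arguments: just define the functions
    *)      echo "Usage: $0 {start|stop|status}" >&2; exit 1 ;;
esac
```

Note that 7 is returned only from the status check, never from start or stop, matching the convention described above.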
In the second […]
In a Heartbeat cluster installation it may not be possible to have one STONITH device reboot all nodes. To support this, multiple STONITH devices can be configured, each used to reboot a different set of nodes in the cluster. In the following code section there is an example of […]
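The code section referenced above is not included in this excerpt. As a rough sketch of the idea only (assuming Heartbeat's `stonith_host` directive in `/etc/ha.d/ha.cf`; the hostnames are illustrative), per-node STONITH configuration might look like:

```
# node1 can reset node2 via the ssh STONITH device, and vice versa
stonith_host node1 ssh node2
stonith_host node2 ssh node1
```

The first field names the node allowed to use the device, so each node only gets the power to reset its designated peers.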
Below is a sample script to configure the ssh STONITH agent for the Heartbeat system. When things go wrong, STONITH reboots nodes to restore the integrity of the cluster.
The STONITH test program supports the -n option to list parameters and the -l option to list nodes. The following is an example of using […]
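The example itself is truncated above. As a hedged sketch of what such invocations look like (assuming the `stonith(8)` utility shipped with Heartbeat and its ssh agent, whose one parameter is a host list):

```
# list the parameters the ssh agent requires
stonith -t ssh -n
# given that parameter, list the nodes this device could reset
stonith -t ssh hostlist="node1 node2" -l
```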
Currently I am considering which priority scheme to use for some highly available services running on Linux with Heartbeat.
The Heartbeat system has a number of factors that can be used to determine the weight for running a particular service on a given node. One is the connectivity to other systems determined by ping (every […]
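As a hedged sketch of how the ping-connectivity factor is usually wired up (the `ping` and `respawn` directives and the `pingd` helper are Heartbeat's; the address and numbers are illustrative):

```
# /etc/ha.d/ha.cf fragment
ping 10.0.0.254                                    # router used as a connectivity test
respawn hacluster /usr/lib/heartbeat/pingd -m 100 -d 5s
# pingd publishes a node attribute scaled by -m (here, 100 per reachable
# ping node) which CRM placement rules can add to a resource's score
```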
In a comment on my blog post “a Heartbeat developer comments on my blog post” Alan Robertson writes: I got in a hurry on my math because of the emergency. So, there are even more assumptions (errors?) than I documented. In particular, the probability model I gave was for a particular node to fail. So […]
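The distinction matters. As a hedged restatement (not necessarily Alan's exact model): if each of \(n\) nodes independently fails with probability \(p\) over some period, then

```latex
P(\text{a particular node fails}) = p,
\qquad
P(\text{at least one of } n \text{ nodes fails}) = 1 - (1 - p)^n
```

For example, with \(p = 0.01\) and \(n = 6\) the second figure is \(1 - 0.99^6 \approx 0.059\), nearly six times the single-node figure.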
Alan Robertson (a major contributor to the Heartbeat project) commented on my post “failure probability and clusters”. His comment deserves wider readership than a comment generally gets, so I’m making a post out of it. Here it is:
One of my favorite phrases is “complexity is the enemy of reliability”. This is absolutely true, […]
A comment on my post about the failure probability of clusters suggested that a six node cluster that has one node fail should become a five node cluster.
The problem with this is what to do when nodes recover from a failure. For example, if a six node cluster had a node fail and became […]
When running a high-availability cluster of two nodes, it will generally be configured such that if one node fails then the other runs. Some common operation (such as accessing a shared storage device or pinging a router) will be used by the surviving node to determine that the other node is dead and that it’s […]
In Debian bug 418210 there is discussion of what constitutes a cluster.
I believe that the node configuration lines in the config file /etc/ha.d/ha.cf should authoritatively define what is in the cluster and any broadcast packets from other nodes should be ignored.
Currently if you have two clusters sharing the same VLAN and they both […]
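A hedged sketch of what such an authoritative membership configuration looks like in /etc/ha.d/ha.cf (the `node` and `autojoin` directives are Heartbeat's; the hostnames are illustrative):

```
node node1 node2    # authoritative list of cluster members
autojoin none       # never add a node merely because it broadcasts to us
```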