Cluster Architecture and Prerequisites
To establish a robust High Availability (HA) environment using Heartbeat, a two-node architecture is configured on CentOS 6.8 (x86_64). The setup requires distinct network interfaces for management traffic and heartbeat signals to ensure reliable failover mechanisms.
- Primary Node (ha-node-primary):
  - Management IP (eth0): 192.168.50.10
  - Heartbeat IP (eth1): 10.50.50.1
- Secondary Node (ha-node-backup):
  - Management IP (eth0): 192.168.50.11
  - Heartbeat IP (eth1): 10.50.50.2
- Virtual IP (VIP): 192.168.50.100 (floating address managed by the cluster)
Network and System Preparation
Begin by cloning virtual machines to ensure identical base environments. Configure the hostnames permanently by editing /etc/sysconfig/network on each node. Set HOSTNAME=ha-node-primary on the first machine and HOSTNAME=ha-node-backup on the second. Apply changes immediately using the hostname command.
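As a sketch of that edit (assuming the stock sysconfig layout), the change can be wrapped in a small helper; `set_hostname_line` is a hypothetical name, not a system command:

```shell
#!/bin/sh
# Hypothetical helper: rewrite the HOSTNAME= line in a sysconfig-style file.
set_hostname_line() {  # usage: set_hostname_line <name> <file>
    sed -i "s/^HOSTNAME=.*/HOSTNAME=$1/" "$2"
}

# On the first node (as root), then analogously with ha-node-backup:
#   set_hostname_line ha-node-primary /etc/sysconfig/network
#   hostname ha-node-primary   # apply immediately without a reboot
```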
Ensure name resolution works locally by updating the /etc/hosts file on both servers. This step is critical for Heartbeat node identification.
echo -e "192.168.50.10\tha-node-primary\n192.168.50.11\tha-node-backup" >> /etc/hosts
Verify the configuration by ensuring the output of uname -n on each node matches the corresponding entry in /etc/hosts.
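A quick pre-flight check can catch a mismatch before Heartbeat is even installed; `node_in_hosts` below is a hypothetical helper, not part of Heartbeat:

```shell
#!/bin/sh
# Hypothetical check: confirm a node name appears in a hosts file.
node_in_hosts() {  # usage: node_in_hosts <name> <hosts-file>
    grep -qw "$1" "$2"
}

# Run on each node:
#   node_in_hosts "$(uname -n)" /etc/hosts && echo "name resolution OK"
```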
Configuring the Heartbeat Link
A dedicated network link prevents management traffic from interfering with cluster health checks. Connect the eth1 interfaces of both nodes directly via Ethernet cable or through an isolated switch VLAN.
To enforce traffic segregation, add static host routes so communication between heartbeat IPs utilizes the dedicated interface.
On Primary Node:
route add -host 10.50.50.2 dev eth1
echo "route add -host 10.50.50.2 dev eth1" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
On Secondary Node:
route add -host 10.50.50.1 dev eth1
echo "route add -host 10.50.50.1 dev eth1" >> /etc/rc.d/rc.local
chmod +x /etc/rc.d/rc.local
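To confirm the pinning took effect, the kernel routing table can be inspected; `has_host_route` below is a hypothetical helper built on `route -n` output (destination in the first column, interface in the last):

```shell
#!/bin/sh
# Hypothetical helper: scan a `route -n` dump (read from stdin) for a host
# route to <dest-ip> whose interface column matches <iface>.
has_host_route() {  # usage: route -n | has_host_route <dest-ip> <iface>
    awk -v dest="$1" -v dev="$2" '$1 == dest && $NF == dev {found=1} END {exit !found}'
}

# On the primary node:
#   route -n | has_host_route 10.50.50.2 eth1 && echo "heartbeat route OK"
```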
On the primary node, bring up the Virtual IP as an eth0 alias so the address is available before cluster takeover, and persist it in the startup script:
echo "ifconfig eth0:1 192.168.50.100 netmask 255.255.255.0 up" >> /etc/rc.d/rc.local
Security and Dependency Installation
During setup, disable security layers that could block cluster communication: stop iptables, switch SELinux to permissive mode for the running session, and disable it persistently in the configuration file.
service iptables stop
chkconfig iptables off
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
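The persistent state can be double-checked after the edit; `selinux_config_mode` is a hypothetical helper that just extracts the SELINUX= value:

```shell
#!/bin/sh
# Hypothetical check: report the persistent SELinux mode from a config file.
selinux_config_mode() {  # usage: selinux_config_mode <config-file>
    sed -n 's/^SELINUX=//p' "$1"
}

# After editing, verify both states:
#   selinux_config_mode /etc/selinux/config   # expect: disabled
#   getenforce                                # expect: Permissive
```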
Heartbeat packages are not included in the standard CentOS repositories. Install the EPEL repository to access the required binaries.
cd /tmp
wget http://mirrors.kernel.org/fedora-epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm -Uvh epel-release-6-8.noarch.rpm
yum install heartbeat heartbeat-libs -y
Heartbeat Configuration Files
The core configuration resides in /etc/ha.d. Three primary files control behavior: ha.cf (cluster settings), authkeys (security), and haresources (resource management).
1. Cluster Communication (ha.cf)
Copy the default template to the configuration directory and modify parameters to match the network setup.
cp /usr/share/doc/heartbeat-3.0.4/{ha.cf,authkeys,haresources} /etc/ha.d/
Edit /etc/ha.d/ha.cf with the following directives:
debugfile /var/log/ha-debug
logfile /var/log/ha-log
keepalive 2
deadtime 30
warntime 10
initdead 60
mcast eth1 225.0.0.50 694 1 0
auto_failback on
node ha-node-primary
node ha-node-backup
respawn hacluster /usr/lib64/heartbeat/ipfail
The mcast directive defines multicast communication on the heartbeat interface. Ensure the multicast IP is unique within the local network segment.
2. Authentication (authkeys)
Define a secure authentication method to prevent unauthorized nodes from joining the cluster. MD5 or SHA1 is recommended over CRC.
cat > /etc/ha.d/authkeys << EOF
auth 3
3 md5 SecureClusterPass2023
EOF
chmod 0600 /etc/ha.d/authkeys
Permissions must be restricted to root read/write only, or the daemon will refuse to start.
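A small guard can verify the mode before the daemon is started; `check_authkeys_mode` is a hypothetical helper name:

```shell
#!/bin/sh
# Hypothetical check: authkeys must be mode 600 (root read/write only).
check_authkeys_mode() {  # usage: check_authkeys_mode <file>
    [ "$(stat -c '%a' "$1")" = "600" ]
}

# Before starting Heartbeat:
#   check_authkeys_mode /etc/ha.d/authkeys || chmod 600 /etc/ha.d/authkeys
```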
3. Resource Definition (haresources)
Specify which node owns resources by default and define the VIP script parameters.
echo "ha-node-primary IPaddr::192.168.50.100/24/eth0:1" > /etc/ha.d/haresources
This configuration instructs the cluster to assign the VIP to the primary node using the IPaddr resource agent.
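Broken out field by field, the line reads as follows (annotated sketch; the agent path reflects the usual heartbeat package layout):

```
# <preferred-owner>  <agent>::<parameters>
ha-node-primary IPaddr::192.168.50.100/24/eth0:1
#   ha-node-primary   -> node that owns the resource by default
#   IPaddr            -> resource agent (typically in /etc/ha.d/resource.d/)
#   192.168.50.100/24 -> VIP and netmask prefix
#   eth0:1            -> alias interface used to hold the address
```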
Deploying Configuration to Secondary Node
Synchronize the configuration files from the primary to the secondary node. In production, configuration management tools are preferred, but scp suffices for this setup.
scp /etc/ha.d/{authkeys,ha.cf,haresources} ha-node-backup:/etc/ha.d/
Ensure the node names in ha.cf match the actual hostnames of both machines. If you prefer unicast over multicast, replace the mcast directive with a ucast directive pointing at the peer's heartbeat IP.
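For reference, a unicast setup would look like this (sketch; each node names its peer's heartbeat address):

```
# In /etc/ha.d/ha.cf on ha-node-primary:
ucast eth1 10.50.50.2
# In /etc/ha.d/ha.cf on ha-node-backup:
ucast eth1 10.50.50.1
```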
Service Activation and Failover Testing
Start the Heartbeat daemon on the primary node first, followed by the secondary node.
service heartbeat start
Verify the VIP is active on the primary node using ifconfig. To test high availability, simulate a failure by stopping the Heartbeat service or downing an interface on the primary. Be aware that downing only the dedicated heartbeat link (eth1) can trigger split-brain, since with a single heartbeat path each node will assume the other has failed.
service heartbeat stop
# OR
ifdown eth1
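Whether the address actually moved can be checked on either node; `holds_vip` is a hypothetical helper that scans ifconfig-style output for the floating address:

```shell
#!/bin/sh
# Hypothetical helper: does an ifconfig dump (stdin) show the given address?
holds_vip() {  # usage: ifconfig | holds_vip <vip>
    grep -qE "inet addr:$1( |\$)"
}

# After a failover, on the backup node:
#   ifconfig | holds_vip 192.168.50.100 && echo "VIP active on this node"
```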
Monitor the secondary node; it should detect the failure and acquire the VIP automatically. Check cluster status and transition logs to confirm successful failover.
tail -f /var/log/ha-log
tail -f /var/log/messages
Observe the logs for resource takeover messages indicating the backup node has assumed control of the virtual IP address.