Red Hat Cluster Suite Study Note 2
Cluster Administration GUI (system-config-cluster). The Cluster Configuration Tool creates and edits "/etc/cluster/cluster.conf". The Cluster Status Tool manages high-availability services.
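A minimal cluster.conf skeleton looks roughly like this (a sketch only; the cluster name and node names are placeholders, and the exact attributes vary by release):

    <?xml version="1.0"?>
    <cluster name="mycluster" config_version="1">
      <clusternodes>
        <clusternode name="node1" votes="1"/>
        <clusternode name="node2" votes="1"/>
      </clusternodes>
      <fencedevices/>
      <rm/>   <!-- resource manager section: failover domains, services -->
    </cluster>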
Command-line tools (example invocations follow the list):
1. ccs_tool: the Cluster Configuration System tool; makes online updates to the cluster configuration file. It provides the capability to create and modify cluster infrastructure components (create a cluster, add or remove nodes).
2. cman_tool: the cluster management tool; manages the CMAN cluster manager. It provides the capability to join a cluster, leave a cluster, kill a node, or change the expected quorum votes of a node in a cluster.
3. fence_tool: used to join or leave the default fence domain; specifically, it starts fenced to join the domain and kills fenced to leave the domain.
4. clustat: the cluster status utility; displays the status of the cluster.
5. clusvcadm: the cluster user service administration utility; allows you to enable, disable, relocate, and restart high-availability services in a cluster.
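Example invocations of each tool (the service and node names are placeholders; options vary by release, so check the man pages):

    ccs_tool update /etc/cluster/cluster.conf  # push an updated config to all members
    cman_tool join                             # join the cluster
    cman_tool leave                            # leave the cluster
    cman_tool expected -e 3                    # change the expected quorum votes
    fence_tool join                            # start fenced and join the fence domain
    fence_tool leave                           # kill fenced and leave the fence domain
    clustat                                    # display cluster and service status
    clusvcadm -e webby                         # enable (start) the service "webby"
    clusvcadm -d webby                         # disable (stop) it
    clusvcadm -r webby -m node2                # relocate it to node2
    clusvcadm -R webby                         # restart it in place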
cman.ko: the kernel module for CMAN
libcman.so.1.0.0: Library for programs that need to interact with cman.ko
DLM (Distributed Lock Manager): dlm.ko (kernel module) and libdlm.so.1.0.0 (library for programs that need to interact with dlm.ko)
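A quick way to confirm these pieces are present on a node (a sketch; library paths may differ by install):

    lsmod | grep -E 'cman|dlm'               # are the kernel modules loaded?
    ls -l /lib/libcman.so* /lib/libdlm.so*   # are the libraries installed?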
gfs.ko: the kernel module that implements the GFS file system (a usage sketch follows this list)
gfs_fsck: repairs an unmounted GFS file system
gfs_grow: grows a mounted GFS file system
gfs_jadd: adds journals to a mounted GFS file system
gfs_mkfs: creates a GFS file system on a storage device
gfs_quota: manages quotas on a mounted GFS file system
gfs_tool: configures or tunes a GFS file system
lock_harness.ko: a pluggable interface that allows a variety of locking mechanisms
lock_dlm.ko: A lock module that implements DLM locking for GFS
lock_nolock.ko: A lock module for use when GFS is used as a local file system only.
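A typical GFS life cycle tying these commands together (the device, cluster, and mount names are placeholders):

    gfs_mkfs -p lock_dlm -t mycluster:gfs1 -j 2 /dev/vg0/lv0  # 2 journals = 2 nodes
    mount -t gfs /dev/vg0/lv0 /mnt/gfs1
    gfs_jadd -j 1 /mnt/gfs1      # add a journal before adding a third node
    gfs_grow /mnt/gfs1           # grow into space added to the underlying volume
    gfs_quota list -f /mnt/gfs1  # show current quotas
    gfs_tool df /mnt/gfs1        # file system details
    umount /mnt/gfs1
    gfs_fsck /dev/vg0/lv0        # repair only while UNMOUNTED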
The foundation of a cluster is an advanced host-membership algorithm. This algorithm ensures that the cluster maintains complete data integrity by using the following methods of inter-node communication (a config-update sketch follows the list):
1. Network connections between the cluster systems
2. A Cluster Configuration System daemon (ccsd) that synchronizes the configuration between cluster nodes
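In practice, ccsd is what distributes a changed configuration; an update roughly looks like this (assuming the new config_version is 2):

    vi /etc/cluster/cluster.conf                # edit, and raise config_version by 1
    ccs_tool update /etc/cluster/cluster.conf   # ccsd pushes it to all members
    cman_tool version -r 2                      # tell CMAN the new version number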
No-single-point-of-failure hardware configuration:
A cluster can include a dual-controller RAID array, multiple bonded network channels, multiple paths between cluster members and storage, and redundant uninterruptible power supply (UPS) systems to ensure that no single failure results in application downtime or loss of data. A bonding sketch follows this paragraph.
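Ethernet channel bonding, one of the measures above, is configured roughly like this on RHEL 4 (the interface names and IP address are placeholders):

    # /etc/modprobe.conf
    alias bond0 bonding
    options bonding miimon=100 mode=1   # mode=1: active-backup failover

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    IPADDR=10.0.0.11
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none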
For Red Hat Cluster Suite 4, node health is monitored through a cluster network heartbeat. In previous versions of Red Hat Cluster Suite, node health was monitored on shared disk; shared disk is not required for node-health monitoring in RHCS 4.
To improve availability, protect against component failure, and ensure data integrity under all failure conditions, additional hardware is required (a fencing sketch follows this list):
Disk failure: Hardware RAID to replicate data across multiple disks.
RAID controller failure: Dual RAID controllers to provide redundant access to disk data.
Network Interface failure: Ethernet channel bonding and failover.
Power source failure: redundant uninterruptible power supply (UPS) systems
Machine failure: power switch (fencing)
APC: a network-attached power switch
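A power switch is wired into cluster.conf as a fence device; a sketch using the fence_apc agent (the IP address, credentials, and port number are placeholders):

    <clusternode name="node1" votes="1">
      <fence>
        <method name="1">
          <device name="apc1" port="1"/>
        </method>
      </fence>
    </clusternode>
    <!-- and in the fencedevices section: -->
    <fencedevice agent="fence_apc" name="apc1" ipaddr="10.0.0.50" login="apc" passwd="apc"/>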
Because attached storage devices must have the same device special file on each node, it is recommended that the nodes have symmetric I/O subsystems; that is, the machines should have the same configuration.
Do not include any file system used as a cluster service resource in a node’s local /etc/fstab, because the cluster software must control the mounting and unmounting of service file systems. For optimal performance of shared file systems, specify a 4 KB block size with the mke2fs -b command; a smaller block size can cause long fsck times.
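For example (the device name is a placeholder):

    mke2fs -j -b 4096 /dev/sdb1   # ext3 with 4 KB blocks for a service file system
    # do NOT list /dev/sdb1 in /etc/fstab; define it as an fs resource in
    # cluster.conf so the cluster mounts and unmounts it with the service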