Setting Up SQL Server 2019 Always On Availability Groups on Linux with Pacemaker

Environment Preparation

Infrastructure Requirements

Three servers minimum for a proper cluster setup:

Node   IP Address      Role
m191   192.168.1.191   Primary replica
m192   192.168.1.192   Secondary replica
m193   192.168.1.193   Secondary replica

Operating system: CentOS 7.6. Each node has 2 CPU cores, 4 GB RAM, and 20 GB of storage.

Pre-Installation Configuration

Either disable the firewall across all nodes (simplest for a lab) or keep firewalld running and open the required ports as shown in the later steps. Note that the firewall-cmd commands used later only work while firewalld is running. To disable it:

iptables -F
systemctl stop firewalld
systemctl disable firewalld

Update /etc/hosts on every server:

vim /etc/hosts
192.168.1.191 m191
192.168.1.192 m192
192.168.1.193 m193
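
Name resolution mistakes in /etc/hosts are a common source of endpoint and cluster-join failures, so it is worth sanity-checking the file. A small illustrative sketch (the node/IP pairs are the ones from the table above; this is a helper of my own, not part of any official tooling):

```python
# Sanity-check that hosts-file text contains the expected node entries.
EXPECTED = {
    "m191": "192.168.1.191",
    "m192": "192.168.1.192",
    "m193": "192.168.1.193",
}

def parse_hosts(text):
    """Map hostname -> IP from hosts-file text, ignoring comments and blanks."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if not line:
            continue
        parts = line.split()
        ip, names = parts[0], parts[1:]
        for name in names:
            mapping[name] = ip
    return mapping

def missing_entries(hosts_text, expected=EXPECTED):
    """Return the expected entries that are absent or point at the wrong IP."""
    found = parse_hosts(hosts_text)
    return {h: ip for h, ip in expected.items() if found.get(h) != ip}

# Usage on a node: missing_entries(open("/etc/hosts").read()) -> {} when correct
```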

Set appropriate hostnames:

hostnamectl set-hostname m191

Repeat this command on each node with the corresponding hostname.

Installing SQL Server 2019

Step 1: Install SQL Server Package

Download the Microsoft repository configuration:

sudo curl -o /etc/yum.repos.d/mssql-server.repo \
  https://packages.microsoft.com/config/rhel/7/mssql-server-2019.repo

Install the server package:

sudo yum install -y mssql-server

Initialize the instance by running the setup utility and providing a strong SA password when prompted:

sudo /opt/mssql/bin/mssql-conf setup

Verify the service is running:

systemctl status mssql-server

If firewalld is kept running, open the default SQL Server port for external access:

sudo firewall-cmd --zone=public --add-port=1433/tcp --permanent
sudo firewall-cmd --reload

Enable the SQL Server Agent (for SQL Server 2019 the agent ships inside the mssql-server package, so no separate agent package is needed) and restart the service:

sudo /opt/mssql/bin/mssql-conf set sqlagent.enabled true
sudo systemctl restart mssql-server

Step 2: Install Command-Line Tools

Add the Microsoft product repository:

sudo curl -o /etc/yum.repos.d/msprod.repo \
  https://packages.microsoft.com/config/rhel/7/prod.repo

Remove any previous ODBC packages:

sudo yum remove unixODBC-utf16 unixODBC-utf16-devel

Install sqlcmd and bcp utilities:

sudo yum install -y mssql-tools unixODBC-devel

Add the tools to your PATH:

echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
source ~/.bashrc

Step 3: Initial Connection and Database Creation

Connect to the local instance:

sqlcmd -S localhost -U SA -P '<YourPassword>'

Create a test database:

CREATE DATABASE TestDB
GO

SELECT name FROM sys.databases
GO

Create a table and insert sample data:

USE TestDB
GO

CREATE TABLE Inventory (
    id INT,
    name NVARCHAR(50),
    quantity INT
)
GO

INSERT INTO Inventory VALUES (1, 'banana', 150);
INSERT INTO Inventory VALUES (2, 'orange', 154);
GO

SELECT * FROM Inventory WHERE quantity > 152
GO

QUIT

Configuring Certificates and Endpoints

SQL Server version used here: Microsoft SQL Server 2019 (CTP2.4) - 15.0.1400.75 on CentOS Linux 7, a pre-release build.

Enable AlwaysOn Availability Groups

On every node, activate the HADR feature:

sudo /opt/mssql/bin/mssql-conf set hadr.hadrenabled 1
sudo systemctl restart mssql-server

Install the Pacemaker resource agent for SQL Server:

yum install -y mssql-server-ha

Enable the health event session on all replicas:

ALTER EVENT SESSION AlwaysOn_health ON SERVER WITH (STARTUP_STATE=ON);
GO

Create Endpoint Authentication Users

On all nodes, create a dedicated login for database mirroring:

CREATE LOGIN dbm_login WITH PASSWORD = 'StrongPass123!';
CREATE USER dbm_user FOR LOGIN dbm_login;

Generate Certificates on Primary Node

Create the master key and certificate on the primary replica:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPass123!';

CREATE CERTIFICATE dbm_certificate
WITH SUBJECT = 'Database Mirroring Certificate';

BACKUP CERTIFICATE dbm_certificate
TO FILE = '/var/opt/mssql/data/dbm_certificate.cer'
WITH PRIVATE KEY (
    FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
    ENCRYPTION BY PASSWORD = 'StrongPass123!'
);

Distribute Certificates to Secondary Nodes

Copy the certificate files to all secondary replicas:

cd /var/opt/mssql/data/
scp dbm_certificate.* 192.168.1.192:/var/opt/mssql/data/
scp dbm_certificate.* 192.168.1.193:/var/opt/mssql/data/

Set proper ownership on each secondary:

cd /var/opt/mssql/data/
chown mssql.mssql dbm_certificate.*

Import Certificates on Secondary Nodes

On each secondary replica, restore the certificate:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPass123!';

CREATE CERTIFICATE dbm_certificate
AUTHORIZATION dbm_user
FROM FILE = '/var/opt/mssql/data/dbm_certificate.cer'
WITH PRIVATE KEY (
    FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
    DECRYPTION BY PASSWORD = 'StrongPass123!'
);

Create Database Mirroring Endpoints

On every node, create the endpoint using port 5022:

CREATE ENDPOINT [Hadr_endpoint]
AS TCP (LISTENER_PORT = 5022)
FOR DATABASE_MIRRORING (
    ROLE = ALL,
    AUTHENTICATION = CERTIFICATE dbm_certificate,
    ENCRYPTION = REQUIRED ALGORITHM AES
);

ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;

GRANT CONNECT ON ENDPOINT::[Hadr_endpoint] TO [dbm_login];
GO

Open the endpoint port on all servers:

sudo firewall-cmd --zone=public --add-port=5022/tcp --permanent
sudo firewall-cmd --reload
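
After opening the port, it helps to verify from the other nodes that the mirroring endpoint is actually reachable before creating the availability group. A minimal TCP probe sketch (the host names and port 5022 are the ones used above):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe the Hadr_endpoint port on each replica.
# for node in ("m191", "m192", "m193"):
#     print(node, port_open(node, 5022))
```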

Creating the Availability Group

Verify and Configure Server Names

Check the current server name configuration:

SELECT @@SERVERNAME;

If the names don't match the configured hostnames, correct them:

SELECT * FROM sys.sysservers;

EXEC sp_dropserver 'localhost';
EXEC sp_addserver 'm191', 'LOCAL';

Restart the SQL Server service after making changes.

Create the Availability Group

On the primary node, create the AG with external cluster type:

CREATE AVAILABILITY GROUP [AG_Primary]
WITH (DB_FAILOVER = ON, CLUSTER_TYPE = EXTERNAL)
FOR REPLICA ON
N'm191' WITH (
    ENDPOINT_URL = N'tcp://m191:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = EXTERNAL,
    SEEDING_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
),
N'm192' WITH (
    ENDPOINT_URL = N'tcp://m192:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = EXTERNAL,
    SEEDING_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
),
N'm193' WITH (
    ENDPOINT_URL = N'tcp://m193:5022',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = EXTERNAL,
    SEEDING_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
);

ALTER AVAILABILITY GROUP [AG_Primary] GRANT CREATE ANY DATABASE;
GO
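
The three replica definitions above differ only in the server name, so for larger topologies the DDL can be generated rather than hand-edited. A hedged sketch of that templating (names and options mirror the statement above; this is an illustration, not an official tool):

```python
# Template for one replica clause of CREATE AVAILABILITY GROUP.
REPLICA_TEMPLATE = """N'{name}' WITH (
    ENDPOINT_URL = N'tcp://{name}:{port}',
    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
    FAILOVER_MODE = EXTERNAL,
    SEEDING_MODE = AUTOMATIC,
    SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL)
)"""

def build_ag_ddl(ag_name, nodes, port=5022):
    """Assemble the CREATE AVAILABILITY GROUP statement for a list of nodes."""
    replicas = ",\n".join(REPLICA_TEMPLATE.format(name=n, port=port) for n in nodes)
    return (
        f"CREATE AVAILABILITY GROUP [{ag_name}]\n"
        "WITH (DB_FAILOVER = ON, CLUSTER_TYPE = EXTERNAL)\n"
        "FOR REPLICA ON\n" + replicas + ";"
    )

# print(build_ag_ddl("AG_Primary", ["m191", "m192", "m193"]))
```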

Join Secondary Replicas to the AG

On each secondary node:

ALTER AVAILABILITY GROUP [AG_Primary] JOIN WITH (CLUSTER_TYPE = EXTERNAL);
ALTER AVAILABILITY GROUP [AG_Primary] GRANT CREATE ANY DATABASE;
GO

Add a Database to the Availability Group

On the primary:

CREATE DATABASE [appdb];
ALTER DATABASE [appdb] SET RECOVERY FULL;

BACKUP DATABASE [appdb]
TO DISK = N'/var/opt/mssql/data/appdb.bak';

ALTER AVAILABILITY GROUP [AG_Primary] ADD DATABASE [appdb];
GO

Verify the database appears on secondary replicas.

Configuring Pacemaker Cluster

Install Cluster Packages

On RHEL, register the system and enable the high-availability repository. (On CentOS, which this guide uses, the cluster packages are available from the base repositories and this step can be skipped.)

sudo subscription-manager register
sudo subscription-manager list --available
sudo subscription-manager attach --pool=<PoolID>
sudo subscription-manager repos --enable=rhel-ha-for-rhel-7-server-rpms

Install the cluster software:

yum install pacemaker pcs resource-agents corosync fence-agents-all -y

Configure Cluster Authentication

Set an identical password for the hacluster user on all nodes:

passwd hacluster

Enable and start the pcsd service:

sudo systemctl enable pcsd
sudo systemctl start pcsd
sudo systemctl enable pacemaker

Open cluster-related firewall ports:

firewall-cmd --add-service=high-availability --zone=public --permanent
firewall-cmd --zone=public --add-port=2224/tcp --permanent
firewall-cmd --zone=public --add-port=3121/tcp --permanent
firewall-cmd --zone=public --add-port=5405/udp --permanent
firewall-cmd --reload

Create the Pacemaker Cluster

Destroy any existing cluster configuration (pcs cluster destroy disables the pacemaker service, which is why it is re-enabled immediately afterward):

sudo pcs cluster destroy
sudo systemctl enable pacemaker

Authenticate all nodes and create the cluster from the primary:

sudo pcs cluster auth m191 m192 m193 -u hacluster -p 'ClusterPass1!'
sudo pcs cluster setup --name SQLCluster m191 m192 m193
chown -R hacluster.haclient /var/log/cluster
pcs cluster start --all
pcs cluster enable --all

Verify Cluster Status

Check the cluster state:

pcs cluster status
ps aux | grep pacemaker
corosync-cfgtool -s
corosync-cmapctl | grep members
pcs status corosync
crm_verify -L -V

Disable STONITH and Configure Quorum

For a test environment, disable fencing and quorum enforcement. Do not do this in production: STONITH/fencing is required for a supported cluster, and ignoring quorum risks split-brain.

sudo pcs property set stonith-enabled=false
sudo pcs property set no-quorum-policy=ignore
sudo pcs property set cluster-recheck-interval=2min
sudo pcs property set start-failure-is-fatal=true
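
Quorum is the reason three nodes is the practical minimum: a partition must hold a strict majority of votes to keep running, which a two-node cluster can never guarantee after a failure. Setting no-quorum-policy=ignore disables that protection, which is why it is test-only. A quick sketch of the arithmetic:

```python
def quorum_votes(node_count):
    """Minimum votes a partition needs for quorum: a strict majority."""
    return node_count // 2 + 1

def survives_node_loss(node_count, failed):
    """Does the surviving partition still hold quorum after `failed` nodes drop?"""
    return node_count - failed >= quorum_votes(node_count)

# A 3-node cluster survives one failure; a 2-node cluster survives none.
```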

Install SQL Server Resource Agent

On all nodes (skip the install if mssql-server-ha was already installed earlier):

yum install mssql-server-ha -y
sudo systemctl restart mssql-server

Create Pacemaker Login in SQL Server

On every node:

USE [master]
GO
CREATE LOGIN [pacemakerLogin] WITH PASSWORD = N'StrongPass123!';
ALTER SERVER ROLE [sysadmin] ADD MEMBER [pacemakerLogin];
GO

Store Credentials Securely

Create and secure the credential file:

sudo bash -c 'echo -e "pacemakerLogin\nStrongPass123!" > ~/pacemaker-passwd'
sudo mv ~/pacemaker-passwd /var/opt/mssql/secrets/passwd
sudo chown root:root /var/opt/mssql/secrets/passwd
sudo chmod 400 /var/opt/mssql/secrets/passwd
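
The same two-line credential file (login on the first line, password on the second) can be created with restrictive permissions from the start instead of chmod-ing afterwards. A minimal sketch, assuming that file layout; the helper name is my own:

```python
import os

def write_passwd_file(path, login, password):
    """Write the two-line credential file with owner-only read permission."""
    # O_CREAT with mode 0o400 applies the restrictive mode at creation time.
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o400)
    with os.fdopen(fd, "w") as f:
        f.write(f"{login}\n{password}\n")

# write_passwd_file("/var/opt/mssql/secrets/passwd", "pacemakerLogin", "...")
```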

Create Cluster Resources

Create a master/slave resource for the availability group and a virtual IP:

sudo pcs resource create ag_cluster ocf:mssql:ag \
    ag_name=AG_Primary meta failure-timeout=60s master notify=true

sudo pcs resource create virtualip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.220

sudo pcs resource update ag_cluster meta failure-timeout=60s

Verify the VIP is bound:

ip addr show

Configure Colocation and Ordering Constraints

Set up the relationship between the AG master and virtual IP:

sudo pcs constraint colocation add virtualip ag_cluster-master INFINITY \
    with-rsc-role=Master

sudo pcs constraint order promote ag_cluster-master then start virtualip

Monitor Cluster Status

sudo pcs status

Access SQL Server via Virtual IP

Connect to the cluster using the virtual IP address:

sqlcmd -S 192.168.1.220 -U SA -P 'StrongPass123!'

Query Availability Group Information

View AG-level health status:

SELECT
    g.name AS ag_name,
    rgs.primary_replica,
    rgs.primary_recovery_health_desc AS recovery_health,
    rgs.synchronization_health_desc AS sync_health
FROM sys.dm_hadr_availability_group_states AS rgs
JOIN sys.availability_groups AS g ON rgs.group_id = g.group_id;

View replica-level information:

SELECT
    g.name AS ag_name,
    r.replica_server_name,
    rs.is_local,
    rs.role_desc AS role,
    rs.operational_state_desc AS op_state,
    rs.connected_state_desc AS connect_state,
    rs.synchronization_health_desc AS sync_state
FROM sys.dm_hadr_availability_replica_states AS rs
JOIN sys.availability_replicas AS r ON rs.replica_id = r.replica_id
JOIN sys.availability_groups AS g ON g.group_id = r.group_id;

View database-level synchronization:

SELECT
    g.name AS ag_name,
    r.replica_server_name,
    DB_NAME(drs.database_id) AS database_name,
    drs.is_local,
    drs.is_primary_replica,
    synchronization_state_desc AS sync_state,
    synchronization_health_desc AS sync_health,
    database_state_desc AS db_state
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS r ON r.replica_id = drs.replica_id
JOIN sys.availability_groups AS g ON g.group_id = drs.group_id
ORDER BY g.name, drs.is_primary_replica DESC;
GO

Perform Manual Failover

Simulate a failure by moving the primary to a different node:

sudo pcs resource move ag_cluster-master m192 --master

The virtual IP will migrate to the new primary, and SQL Server will continue accepting connections at the same VIP address. Verify the failover by checking which node owns the VIP and monitoring the AG state through the diagnostic queries above.
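
During a failover there is a brief window while the VIP moves between nodes, so clients benefit from simple retry logic. A hedged sketch of a poll-until-reachable loop (the VIP 192.168.1.220 and port 1433 are the values used above):

```python
import socket
import time

def wait_for_endpoint(host, port, timeout=30.0, interval=1.0):
    """Poll host:port until a TCP connect succeeds or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# wait_for_endpoint("192.168.1.220", 1433)
```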

Cleanup Commands

To remove an availability group:

DROP AVAILABILITY GROUP group_name;

Tags: SQL Server 2019, Always On AG, Pacemaker, Linux, High Availability

Posted on Thu, 14 May 2026 05:18:21 +0000 by frikikip