Redis Cluster Architectures: Replication, Sentinel, and Distribution

Master-Slave Replication

The master-slave replication mechanism in Redis automatically synchronizes data from the master node to one or more slave nodes based on configuration settings. The master node primarily handles write operations, while slave nodes are optimized for read operations. The general principle is to configure multiple slaves rather than multiple masters.

Setting Up Master-Slave Replication

  • Create three directories: redis_node1, redis_node2, redis_node3
  • Copy redis.conf to redis_node1
  • Configure redis.conf:
bind 192.168.50.20   # Specify server IP
port 7001            # Port number
daemonize yes
pidfile /opt/redis/redis_node1/redis_7001.pid
dir /opt/redis/redis_node1
requirepass mySecretPass
  • Copy the configured redis.conf to redis_node2 and redis_node3
# For node2 and node3, replace port numbers
# Use :%s/7001/7002/g for batch replacement in node2
# Use :%s/7001/7003/g for batch replacement in node3

# Critical configurations for slave nodes to connect to master
replicaof 192.168.50.20 7001
masterauth mySecretPass
  • Start the 7001, 7002, and 7003 instances:
./redis-server ../redis_node1/redis.conf
./redis-server ../redis_node2/redis.conf
./redis-server ../redis_node3/redis.conf
  • Connect using the Redis client:
./redis-cli -h 192.168.50.20 -p 7001 -a mySecretPass
./redis-cli -h 192.168.50.20 -p 7002 -a mySecretPass
./redis-cli -h 192.168.50.20 -p 7003 -a mySecretPass
  • Check replication status with: info replication
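On the master, info replication should report something along these lines (the slave count, offsets, and IDs below are illustrative for this three-node setup):

```
role:master
connected_slaves:2
slave0:ip=192.168.50.20,port=7002,state=online,offset=...,lag=0
slave1:ip=192.168.50.20,port=7003,state=online,offset=...,lag=0
```

On each slave, the same command reports role:slave together with master_host, master_port, and master_link_status:up.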

Advantages and Disadvantages of Master-Slave Replication

Advantages:

  • A single Master can synchronize with multiple Slaves.
  • Slaves can accept connections from other Slaves for replication, distributing the Master's synchronization load.
  • The Master node serves clients without blocking during synchronization.
  • Slave nodes complete data synchronization without blocking, returning the most recent data available before synchronization.
  • Slave nodes can handle read requests, reducing the Master's read load while writes remain Master-exclusive.
  • The Master can delegate persistence operations to Slaves.
  • Automatic data synchronization from Master to Slaves enables read-write separation.

Disadvantages:

  • Write operations occur first on the Master before synchronizing to Slaves, introducing latency that worsens under high load or with numerous Slaves.
  • Master failure prevents write operations until a Slave is manually promoted to Master:
slaveof no one   # Promote slave to master
slaveof 192.168.50.20 7002   # Redirect slave to new master

Summary of Master-Slave Replication

  • Whether or not a Slave is connecting for the first time, it sends a PSYNC command to the Master to request data replication. The Master responds by performing background persistence with bgsave to generate a new RDB snapshot. During persistence, the Master continues serving client requests (apart from the brief pause for the fork itself); data-modifying commands received in this window are cached in memory. After persistence completes, the Master sends the RDB file to the Slave, which loads it into memory. Finally, the Master sends the cached commands to the Slave.
  • A single Master can have multiple Slaves, but each Slave can have only one Master. Data flows unidirectionally from Master to Slave.
  • Plain master-slave replication lacks automatic fault tolerance and recovery. Both Master and Slave failures cause frontend read/write request failures, requiring machine restarts or manual IP switching.
  • Master failure can lead to data inconsistency if not all data was synchronized to Slaves before the failure, reducing system availability.
  • Redis struggles with online scaling, making capacity expansion complex when limits are reached, often requiring over-provisioning of resources.

Sentinel Mechanism

Introduced in Redis 2.8, the Sentinel mechanism provides automatic failover for master nodes.

Core Functions:

  • Monitoring: Sentinels continuously check the health of master and slave nodes.
  • Automatic Failover: When a master node fails, Sentinels automatically promote a slave to master and reconfigure other slaves to follow the new master.
  • Configuration Provider: Clients connect to Sentinels to discover the current master node address.
  • Notification: Sentinels inform clients about failover results.
  • Sentinel Nodes: Special Redis nodes that don't store data but monitor the cluster.
  • Data Nodes: Master and slave nodes that actually store Redis data.

Setting Up Sentinel Mode

  • Build on the master-slave replication setup above, with consistent Redis passwords across nodes:
1. Copy sentinel.conf
   cp sentinel.conf ../sentinel/sentinel_26379.conf
2. Modify configuration:
   bind 0.0.0.0
   port 26379
   daemonize yes
   pidfile /var/run/redis/sentinel_26379.pid
   logfile "sentinel_26379.log"
   dir /var/lib/redis/sentinel
   
   # Monitor configuration:
   # sentinel monitor <master-name> <master-ip> <master-port> <quorum>
   sentinel monitor mymaster 192.168.50.20 7001 2
   
   # Authentication for the master
   sentinel auth-pass mymaster mySecretPass
   
   Copy the modified sentinel_26379.conf twice
   Use :%s/26379/26380/g for batch replacement in the second copy
   Use :%s/26379/26381/g for batch replacement in the third copy
3. Start Sentinels:
   ./redis-sentinel ../sentinel/sentinel_26379.conf 
   ./redis-sentinel ../sentinel/sentinel_26380.conf 
   ./redis-sentinel ../sentinel/sentinel_26381.conf 

Monitoring Failover with Jedis

import java.util.HashSet;
import java.util.Set;
import java.util.UUID;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelMonitor {

    public static void main(String[] args) {
        Logger logger = LoggerFactory.getLogger(SentinelMonitor.class);
        Set<String> sentinelHosts = new HashSet<>();
        sentinelHosts.add("192.168.50.20:26379");
        sentinelHosts.add("192.168.50.20:26380");
        sentinelHosts.add("192.168.50.20:26381");

        // The pool asks the Sentinels for the current master address,
        // so writes keep working across a failover.
        JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinelHosts, "mySecretPass");

        while (true) {
            // try-with-resources returns the connection to the pool on close
            try (Jedis jedis = pool.getResource()) {
                String key = "key:" + UUID.randomUUID();
                String value = "value:" + System.currentTimeMillis();
                jedis.set(key, value);
                System.out.println("Stored: " + jedis.get(key));
                Thread.sleep(2000);
            } catch (Exception e) {
                logger.error("Redis operation failed: " + e.getMessage());
            }
        }
    }
}

Sentinel Mode Principles

After startup, a Sentinel establishes two connections with the monitored master database: one for sending commands and one subscribed to its __sentinel__:hello channel. Over the command connection, the Sentinel periodically sends:

  • Every 10 seconds, Sentinels send INFO commands to master and slave databases to obtain current database information, such as discovering new slave nodes or updating role changes.
  • Every 2 seconds, Sentinels publish messages to the __sentinel__:hello channel of master and slave databases. Each Sentinel subscribes to this channel to share monitoring data with other Sentinels.
  • Every 1 second, Sentinels send PING commands to master databases, slave databases, and other Sentinel nodes.

The INFO command returns database information (running ID, slave information, etc.), enabling automatic discovery of new nodes. Sentinels only need to monitor the Redis master in configuration, as they can use INFO commands to discover all slaves.
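As a rough sketch of that discovery step, the slave addresses can be pulled out of an INFO replication reply by matching its slaveN lines (the SlaveDiscovery class and sample reply below are illustrative, not part of Sentinel itself):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SlaveDiscovery {

    // slaveN lines in an INFO replication reply look like:
    // slave0:ip=192.168.50.20,port=7002,state=online,offset=123,lag=0
    private static final Pattern SLAVE_LINE =
            Pattern.compile("slave\\d+:ip=([^,]+),port=(\\d+)");

    // Returns the ip:port of every slave advertised by the master.
    static List<String> discoverSlaves(String infoReplication) {
        List<String> slaves = new ArrayList<>();
        Matcher m = SLAVE_LINE.matcher(infoReplication);
        while (m.find()) {
            slaves.add(m.group(1) + ":" + m.group(2));
        }
        return slaves;
    }

    public static void main(String[] args) {
        String reply = "role:master\n"
                + "connected_slaves:2\n"
                + "slave0:ip=192.168.50.20,port=7002,state=online,offset=123,lag=0\n"
                + "slave1:ip=192.168.50.20,port=7003,state=online,offset=123,lag=0\n";
        System.out.println(discoverSlaves(reply));
    }
}
```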

Sentinels publish information to the __sentinel__:hello channel to share their monitoring data with other Sentinels monitoring the same database. The published message contains:

<sentinel-ip>, <sentinel-port>, <sentinel-runid>, 
<sentinel-config-version>, <master-name>, 
<master-ip>, <master-port>, <master-config-version>

When Sentinels receive messages from other Sentinels via the __sentinel__:hello channel, they determine if it's a new Sentinel and add it to the discovered list if so.

After automatically discovering slaves and other Sentinels, Sentinels monitor these nodes with periodic PING commands. The down-after-milliseconds option sets how long a node may stay unresponsive before it is considered down; a value below one second also shortens the PING interval to match:

# PING every 1 second (the default); mark mymaster down after 60s without a valid reply
sentinel down-after-milliseconds mymaster 60000
# A value under 1 second also raises the PING frequency: here, every 600ms
sentinel down-after-milliseconds othermaster 600

Subjective Down vs Objective Down

  • Subjective Down (SDOWN): When a node doesn't respond within the down-after-milliseconds period, the Sentinel considers it subjectively down from its perspective.
  • Objective Down (ODOWN): If the unresponsive node is a master, the Sentinel checks with other Sentinels to determine if they also consider it SDOWN. If enough Sentinels (specified by the quorum parameter) agree, the master is objectively down.
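The SDOWN-to-ODOWN step is essentially a vote count against the configured quorum. A minimal sketch (the DownCheck class and method name are invented for illustration):

```java
import java.util.Map;

public class DownCheck {

    // A master is objectively down when the number of sentinels reporting
    // SDOWN (including the local one) reaches the configured quorum.
    static boolean isObjectivelyDown(Map<String, Boolean> sdownReports, int quorum) {
        long agreeing = sdownReports.values().stream().filter(Boolean::booleanValue).count();
        return agreeing >= quorum;
    }

    public static void main(String[] args) {
        Map<String, Boolean> reports = Map.of(
                "sentinel-26379", true,
                "sentinel-26380", true,
                "sentinel-26381", false);
        System.out.println(isObjectivelyDown(reports, 2)); // two of three agree
    }
}
```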

Leading Sentinel Election

When a master is objectively down, a leading Sentinel is elected using a Raft-style algorithm:

  • A Sentinel (A) discovering the objective down state requests other Sentinels to elect it as leader.
  • If a target Sentinel hasn't voted yet, it votes for A.
  • If more than half of Sentinels (and exceeding quorum) vote for A, A becomes the leading Sentinel.
  • If multiple Sentinels compete simultaneously, none may be elected. Each waits a random time before retrying.
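The majority check in that vote can be sketched as below (names invented for illustration; real Sentinels additionally scope each vote to a failover epoch):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class LeaderElection {

    // votes maps each sentinel to the candidate it voted for (one vote each).
    // A candidate wins only with more than half of all sentinels behind it.
    static Optional<String> electLeader(Map<String, String> votes, int totalSentinels) {
        Map<String, Integer> tally = new HashMap<>();
        for (String candidate : votes.values()) {
            tally.merge(candidate, 1, Integer::sum);
        }
        return tally.entrySet().stream()
                .filter(e -> e.getValue() > totalSentinels / 2)
                .map(Map.Entry::getKey)
                .findFirst();
    }

    public static void main(String[] args) {
        Map<String, String> votes = new HashMap<>();
        votes.put("s1", "A");
        votes.put("s2", "A");
        votes.put("s3", "B");
        System.out.println(electLeader(votes, 3)); // A has 2 of 3 votes
    }
}
```

With an even split (e.g. two candidates at two votes each among four Sentinels), no candidate exceeds half and the election is retried, mirroring the random-wait-and-retry behavior described above.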

Failover Process

  • The leading Sentinel selects a new master from the available slaves based on:
      • Highest priority (lowest replica-priority value)
      • Greatest replication offset (most complete replication)
      • Smallest run ID (if priority and offset are equal)
  • The selected slave is promoted to master with SLAVEOF NO ONE.
  • Other slaves are reconfigured with SLAVEOF to follow the new master.
  • The old master is reconfigured as a slave of the new master for when it recovers.
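The selection order can be expressed as a comparator. The Replica record and pickNewMaster below are invented for illustration; in real Sentinel a replica-priority of 0 additionally excludes a node from promotion:

```java
import java.util.Comparator;
import java.util.List;

public class FailoverSelection {

    // Candidate state as seen by the leading Sentinel.
    record Replica(String runId, int priority, long replOffset) {}

    // Lowest replica-priority value is most preferred; ties go to the
    // largest replication offset, then the lexicographically smallest run ID.
    static Replica pickNewMaster(List<Replica> candidates) {
        return candidates.stream()
                .min(Comparator.comparingInt(Replica::priority)
                        .thenComparing(Comparator.comparingLong(Replica::replOffset).reversed())
                        .thenComparing(Replica::runId))
                .orElseThrow();
    }

    public static void main(String[] args) {
        List<Replica> candidates = List.of(
                new Replica("ccc", 100, 900),
                new Replica("aaa", 100, 500),
                new Replica("bbb", 100, 900));
        // Equal priorities: offset 900 wins, then run ID breaks the tie.
        System.out.println(pickNewMaster(candidates).runId());
    }
}
```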

Sentinel Mode Advantages and Disadvantages

Advantages:

  • Inherits all master-slave advantages.
  • Automatic master-slave failover improves availability.
  • More robust and highly available than master-slave alone.

Disadvantages:

  • Complex online scaling when cluster capacity is reached.
  • Requires significant over-provisioning of resources.
  • More complex configuration than master-slave.

Redis Cluster Mode

Redis Cluster is a distributed implementation that shards data across multiple Redis nodes. Unlike a single instance, Redis Cluster doesn't support multi-key operations whose keys map to different hash slots, since serving them would require moving data between nodes and degrade performance.

Redis Cluster Advantages:

  • High availability: The cluster remains partially functional even with node failures.
  • Automatic failover: Slaves can quickly become masters when masters fail.
  • Data persistence: Rapid recovery from data loss after failures.
  • Memory scaling: Utilizes all available memory across cluster nodes.
  • Performance scaling: Computational power increases linearly with added servers.
  • Network bandwidth: Increases with additional computers and network interfaces.
  • No single point of failure: No central node becomes a performance bottleneck.
  • Asynchronous processing: Enables fast read and write operations.

Redis Cluster Data Sharding

Redis Cluster uses hash slots instead of consistent hashing. There are exactly 16384 hash slots; each key is assigned to a slot as CRC16(key) mod 16384, and each node manages a subset of these slots. For example, with 3 master nodes:

  • Node A: slots 0-5460
  • Node B: slots 5461-10922
  • Node C: slots 10923-16383

Slots can be moved between nodes without service interruption, allowing flexible scaling.
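The key-to-slot mapping can be reproduced with CRC16 (the XMODEM variant, which Redis Cluster uses). The HashSlot class below is an illustrative sketch; it also handles hash tags, where only the part between { and } is hashed so related keys can share a slot:

```java
import java.nio.charset.StandardCharsets;

public class HashSlot {

    // CRC16/XMODEM: polynomial 0x1021, initial value 0, no reflection.
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    // Maps a key to one of the 16384 cluster slots, honoring {hash tags}.
    static int slot(String key) {
        int open = key.indexOf('{');
        if (open >= 0) {
            int close = key.indexOf('}', open + 1);
            if (close > open + 1) {
                key = key.substring(open + 1, close); // hash only the tag
            }
        }
        return crc16(key.getBytes(StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        System.out.println(slot("user:1000"));
        // Same tag, so the two keys land in the same slot:
        System.out.println(slot("{user:1000}.following") == slot("{user:1000}.followers")); // prints true
    }
}
```

Keys sharing a hash tag can then be used together in multi-key commands, since they are guaranteed to live on the same node.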

Environment Setup

CentOS 7.5, Redis 6.x

Cluster Deployment

  • Copy the compiled redis/bin directory to redis-cluster:
mkdir redis-cluster
cp -r redis/bin/ redis-cluster/redis_node01
  • Modify redis.conf configuration:
bind 192.168.50.21    # Server IP address
port 7001             # Node 1 port
cluster-enabled yes   # Enable cluster mode
  • Copy the modified redis_node01 directory 5 more times, changing ports to 7002-7006.
  • Start the Redis instances. Consider creating a script for batch startup.

For Redis versions before 5.0, cluster creation relied on ruby scripts (redis-trib.rb). For Redis 5.0+, use redis-cli directly.

  • Copy redis-cli to the redis-cluster directory from the src directory:
cp -r redis-cli /opt/Redis/redis-cluster

Creating the Cluster

./redis-cli --cluster create --cluster-replicas 1 \
192.168.50.21:7001 192.168.50.21:7002 192.168.50.21:7003 \
192.168.50.21:7004 192.168.50.21:7005 192.168.50.21:7006

The --cluster-replicas 1 parameter specifies one replica per master node. During creation, respond "yes" when prompted.

Cluster Operations

  • Connect to any Redis node in cluster mode:
./redis-cli -c -h 192.168.50.21 -p 7001
  • Cluster information commands:
cluster info      # Query cluster information
cluster nodes     # List all cluster nodes

Cluster Management

Redis Cluster automatically handles data distribution and failover. When a master node fails, one of its replicas is promoted and takes over automatically. The cluster continues operating as long as a majority of master nodes are reachable and every hash slot is still served.

Scaling Considerations

While Redis Cluster provides high availability and horizontal scaling, adding nodes requires careful slot redistribution. The cluster can continue operating during slot migration, but performance may be temporarily affected.

Tags: Redis cluster Master-Slave Sentinel High Availability

Posted on Fri, 08 May 2026 16:54:30 +0000 by mvidberg