Using ReentrantReadWriteLock in JUC and Simple Applications

ReentrantReadWriteLock Introduction and Usage

Overview

ReentrantReadWriteLock is ideal for scenarios with far more read operations than write operations. It allows concurrent read access while enforcing mutual exclusion between readers and writers, and between writers themselves, similar to a database shared lock (select ... from ... lock in share mode). JUC provides two read-write lock implementations: ReentrantReadWriteLock and StampedLock.

Basic Tests of ReentrantReadWriteLock

package examples.juc;
import lombok.extern.slf4j.Slf4j;
import java.util.concurrent.locks.ReentrantReadWriteLock;

@Slf4j(topic = "c.DataStorage")
class DataStorage {
    private Object data;
    private final ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
    private final ReentrantReadWriteLock.ReadLock readLock = readWriteLock.readLock();
    private final ReentrantReadWriteLock.WriteLock writeLock = readWriteLock.writeLock();

    public Object read() {
        readLock.lock();
        log.warn("Read lock acquired");
        try {
            log.warn("Reading data");
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt status instead of swallowing it
        } finally {
            log.warn("Read lock released");
            readLock.unlock();
        }
        return data;
    }

    public void write() {
        log.warn("Write lock acquiring");
        writeLock.lock();
        try {
            log.debug("Writing data");
        } finally {
            log.debug("Write lock released");
            writeLock.unlock();
        }
    }
}
Test Concurrent Reads

package examples.juc;
public class ReadWriteLockTest {
    public static void main(String[] args) {
        DataStorage storage = new DataStorage();
        new Thread(storage::read, "Reader-1").start();
        new Thread(storage::read, "Reader-2").start();
    }
}

Output:

08:55:41.208 [Reader-1] WARN c.DataStorage - Read lock acquired
08:55:41.208 [Reader-2] WARN c.DataStorage - Read lock acquired
08:55:41.212 [Reader-2] WARN c.DataStorage - Reading data
08:55:41.212 [Reader-1] WARN c.DataStorage - Reading data
08:55:42.214 [Reader-2] WARN c.DataStorage - Read lock released
08:55:42.214 [Reader-1] WARN c.DataStorage - Read lock released

Threads Reader-1 and Reader-2 read concurrently.

Test Read-Write Exclusion

package examples.juc;
public class ReadWriteLockTest {
    public static void main(String[] args) throws InterruptedException {
        DataStorage storage = new DataStorage();
        new Thread(storage::read, "Reader").start();
        Thread.sleep(10);
        new Thread(storage::write, "Writer").start();
    }
}

Output:

09:02:56.738 [Reader] WARN c.DataStorage - Read lock acquired
09:02:56.743 [Reader] WARN c.DataStorage - Reading data
09:02:57.743 [Reader] WARN c.DataStorage - Read lock released
09:02:57.743 [Writer] WARN c.DataStorage - Write lock acquiring
09:02:57.743 [Writer] DEBUG c.DataStorage - Writing data
09:02:57.743 [Writer] DEBUG c.DataStorage - Write lock released

The write operation waits until the read lock is released.

Test Write-Write Exclusion

Two writer threads, started the same way as in the earlier tests, block each other.

Output:

09:05:10.378 [Writer-1] WARN c.DataStorage - Write lock acquiring
09:05:10.381 [Writer-1] DEBUG c.DataStorage - Writing data
09:05:10.382 [Writer-1] DEBUG c.DataStorage - Write lock released
09:05:10.387 [Writer-2] WARN c.DataStorage - Write lock acquiring
09:05:10.388 [Writer-2] DEBUG c.DataStorage - Writing data
09:05:10.388 [Writer-2] DEBUG c.DataStorage - Write lock released
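The log timestamps above show the writers running strictly one after the other. Write-write exclusion can also be verified deterministically with a shared counter: with the write lock held around every increment, two racing threads can never lose an update. This is a standalone sketch (the class name WriteWriteLockTest is mine, not from the original tests):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WriteWriteLockTest {
    static int counter = 0;
    static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public static void main(String[] args) throws InterruptedException {
        Runnable writer = () -> {
            for (int i = 0; i < 100_000; i++) {
                rw.writeLock().lock();   // writers exclude each other
                try {
                    counter++;           // safe: only one writer inside the critical section
                } finally {
                    rw.writeLock().unlock();
                }
            }
        };
        Thread w1 = new Thread(writer, "Writer-1");
        Thread w2 = new Thread(writer, "Writer-2");
        w1.start();
        w2.start();
        w1.join();
        w2.join();
        // Always 200000: the write lock makes the non-atomic increment safe.
        System.out.println("counter = " + counter);
    }
}
```

Without the write lock the unsynchronized `counter++` would intermittently lose increments.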

Usage Summary

  • A read lock excludes write locks but allows other read locks.
  • A write lock excludes both read and write locks.

Important Notes

  1. No Condition Variables for Read Locks: calling newCondition() on the read lock throws UnsupportedOperationException; only the write lock supports Condition objects.
  2. No Read-to-Write Upgrade: a thread that tries to acquire the write lock while still holding a read lock blocks forever. Always release the read lock first.
  3. Write-to-Read Downgrade: Acquiring a read lock while holding a write lock is allowed (see the official cache example below).
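Notes 2 and 3 can be checked without risking a hang by probing with tryLock(), which returns immediately instead of blocking. A minimal sketch (the class name UpgradeDowngradeTest is mine):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class UpgradeDowngradeTest {
    public static void main(String[] args) {
        ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

        // Upgrade attempt: the write lock is unavailable while this thread
        // still holds a read lock (a blocking lock() here would wait forever).
        rw.readLock().lock();
        System.out.println("upgrade possible: " + rw.writeLock().tryLock());
        rw.readLock().unlock();

        // Downgrade: acquiring the read lock while holding the write lock works.
        rw.writeLock().lock();
        System.out.println("downgrade possible: " + rw.readLock().tryLock());
        rw.readLock().unlock();
        rw.writeLock().unlock();
    }
}
```

Expected output is `upgrade possible: false` followed by `downgrade possible: true`, matching notes 2 and 3.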

Official cache example, adapted from the ReentrantReadWriteLock Javadoc:

class CachedValue {
    Object value;
    volatile boolean valid;
    final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    void process() {
        rwLock.readLock().lock();
        if (!valid) {
            rwLock.readLock().unlock();
            rwLock.writeLock().lock();
            try {
                if (!valid) {
                    value = ...; // Compute value
                    valid = true;
                }
                rwLock.readLock().lock();
            } finally {
                rwLock.writeLock().unlock();
            }
        }
        try {
            use(value);
        } finally {
            rwLock.readLock().unlock();
        }
    }
}

Application: Consistent Caching with ReentrantReadWriteLock

Environment Setup

Create an emp table in MySQL:

CREATE TABLE `emp` (
  `empno` int(11) NOT NULL,
  `ename` varchar(50),
  `job` varchar(50),
  `sal` decimal(65,30) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `emp` (`empno`, `sal`) VALUES (7369, 800), (7351, 200);

Emp Entity:

package examples.cache;
import java.math.BigDecimal;
class Emp {
    private int empno;
    private String ename;
    private String job;
    private BigDecimal sal;

    public int getEmpno() { return empno; }
    public void setEmpno(int empno) { this.empno = empno; }
    public String getEname() { return ename; }
    public void setEname(String ename) { this.ename = ename; }
    public String getJob() { return job; }
    public void setJob(String job) { this.job = job; }
    public BigDecimal getSal() { return sal; }
    public void setSal(BigDecimal sal) { this.sal = sal; }

    @Override
    public String toString() {
        return "Emp{" +
                "empno=" + empno +
                ", ename='" + ename + '\'' +
                ", job='" + job + '\'' +
                ", sal=" + sal +
                '}';
    }
}

Base GenericDao (JDBC):

package examples.cache;
import java.beans.*;
import java.lang.reflect.*;
import java.sql.*;
import java.util.*;

public class GenericDao {
    static String URL = "jdbc:mysql://localhost:3306/test?serverTimezone=GMT%2B8";
    static String USERNAME = "root";
    static String PASSWORD = "123456";

    public <T> List<T> queryList(Class<T> beanClass, String sql, Object... args) {
        System.out.println("sql: [" + sql + "] params:" + Arrays.toString(args));
        BeanRowMapper<T> mapper = new BeanRowMapper<>(beanClass);
        return queryList(sql, mapper, args);
    }

    public <T> T queryOne(Class<T> beanClass, String sql, Object... args) {
        System.out.println("sql: [" + sql + "] params:" + Arrays.toString(args));
        BeanRowMapper<T> mapper = new BeanRowMapper<>(beanClass);
        return queryOne(sql, mapper, args);
    }

    private <T> List<T> queryList(String sql, RowMapper<T> mapper, Object... args) {
        try (Connection conn = DriverManager.getConnection(URL, USERNAME, PASSWORD)) {
            try (PreparedStatement psmt = conn.prepareStatement(sql)) {
                if (args != null) {
                    for (int i = 0; i < args.length; i++) {
                        psmt.setObject(i + 1, args[i]);
                    }
                }
                List<T> list = new ArrayList<>();
                try (ResultSet rs = psmt.executeQuery()) {
                    while (rs.next()) {
                        T obj = mapper.map(rs);
                        list.add(obj);
                    }
                }
                return list;
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    private <T> T queryOne(String sql, RowMapper<T> mapper, Object... args) {
        List<T> list = queryList(sql, mapper, args);
        return list.isEmpty() ? null : list.get(0);
    }

    public int update(String sql, Object... args) {
        System.out.println("sql: [" + sql + "] params:" + Arrays.toString(args));
        try (Connection conn = DriverManager.getConnection(URL, USERNAME, PASSWORD)) {
            try (PreparedStatement psmt = conn.prepareStatement(sql)) {
                if (args != null) {
                    for (int i = 0; i < args.length; i++) {
                        psmt.setObject(i + 1, args[i]);
                    }
                }
                return psmt.executeUpdate();
            }
        } catch (SQLException e) {
            throw new RuntimeException(e);
        }
    }

    interface RowMapper<T> {
        T map(ResultSet rs);
    }

    static class BeanRowMapper<T> implements RowMapper<T> {
        private Class<T> beanClass;
        private Map<String, PropertyDescriptor> propertyMap = new HashMap<>();

        public BeanRowMapper(Class<T> beanClass) {
            this.beanClass = beanClass;
            try {
                BeanInfo beanInfo = Introspector.getBeanInfo(beanClass);
                PropertyDescriptor[] propertyDescriptors = beanInfo.getPropertyDescriptors();
                for (PropertyDescriptor pd : propertyDescriptors) {
                    propertyMap.put(pd.getName().toLowerCase(), pd);
                }
            } catch (IntrospectionException e) {
                throw new RuntimeException(e);
            }
        }

        @Override
        public T map(ResultSet rs) {
            try {
                ResultSetMetaData metaData = rs.getMetaData();
                int columnCount = metaData.getColumnCount();
                T t = beanClass.getDeclaredConstructor().newInstance(); // Class.newInstance() is deprecated
                for (int i = 1; i <= columnCount; i++) {
                    String columnLabel = metaData.getColumnLabel(i);
                    PropertyDescriptor pd = propertyMap.get(columnLabel.toLowerCase());
                    if (pd != null) {
                        pd.getWriteMethod().invoke(t, rs.getObject(i));
                    }
                }
                return t;
            } catch (Exception e) {
                throw new RuntimeException(e);
            }
        }
    }
}

Cached GenericDao Implementation

package examples.cache;
import java.util.*;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class GenericDaoCached extends GenericDao {
    private final GenericDao dao = new GenericDao();
    private final Map<QueryKey, Object> cache = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    @Override
    public <T> List<T> queryList(Class<T> beanClass, String sql, Object... args) {
        // List queries are passed through uncached in this example
        return dao.queryList(beanClass, sql, args);
    }

    @Override
    public <T> T queryOne(Class<T> beanClass, String sql, Object... args) {
        QueryKey key = new QueryKey(sql, args);
        rwLock.readLock().lock();
        try {
            T value = (T) cache.get(key);
            if (value != null) return value;
        } finally {
            rwLock.readLock().unlock();
        }

        rwLock.writeLock().lock();
        try {
            T value = (T) cache.get(key);
            if (value == null) {
                value = dao.queryOne(beanClass, sql, args);
                cache.put(key, value);
            }
            return value;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    @Override
    public int update(String sql, Object... args) {
        rwLock.writeLock().lock();
        try {
            int result = dao.update(sql, args);
            cache.clear();
            return result;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    static class QueryKey {
        private final String sql;
        private final Object[] args;

        QueryKey(String sql, Object[] args) {
            this.sql = sql;
            this.args = args;
        }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (o == null || getClass() != o.getClass()) return false;
            QueryKey queryKey = (QueryKey) o;
            return Objects.equals(sql, queryKey.sql) && Arrays.equals(args, queryKey.args);
        }

        @Override
        public int hashCode() {
            int result = Objects.hash(sql);
            result = 31 * result + Arrays.hashCode(args);
            return result;
        }
    }
}

Test Cached GenericDao

package examples.cache;
public class TestCachedDao {
    public static void main(String[] args) {
        GenericDao dao = new GenericDaoCached();
        System.out.println("============> Query");
        String sql = "select * from emp where empno = ?";
        int empno = 7369;
        Emp emp = dao.queryOne(Emp.class, sql, empno);
        System.out.println(emp);
        emp = dao.queryOne(Emp.class, sql, empno);
        System.out.println(emp);
        emp = dao.queryOne(Emp.class, sql, empno);
        System.out.println(emp);

        System.out.println("============> Update");
        dao.update("update emp set sal = ? where empno = ?", 1000, empno);
        emp = dao.queryOne(Emp.class, sql, empno);
        System.out.println(emp);
    }
}

Output:

============> Query
sql: [select * from emp where empno = ?] params:[7369]
Emp{empno=7369, ename='null', job='null', sal=800.000000000000000000000000000000}
Emp{empno=7369, ename='null', job='null', sal=800.000000000000000000000000000000}
Emp{empno=7369, ename='null', job='null', sal=800.000000000000000000000000000000}
============> Update
sql: [update emp set sal = ? where empno = ?] params:[1000, 7369]
sql: [select * from emp where empno = ?] params:[7369]
Emp{empno=7369, ename='null', job='null', sal=1000.000000000000000000000000000000}

Additional Considerations for Real-World Caching

  • Write Performance: this implementation performs poorly under frequent writes, because every update takes the exclusive write lock and clears the whole cache. Consider partitioning the cache or using finer-grained locks.
  • Cache Eviction: Implement LRU/LFU policies to handle cache expiration.
  • Cache Capacity: Add size limits to prevent OOM.
  • Distributed Systems: The current cache is in-memory and single-node. For distributed systems, use Redis or other distributed caching solutions.
  • Update Strategy: The update method clears the entire cache; optimize by clearing only related keys.
  • Optimistic Locking: Consider using CAS-based atomic operations for read-heavy scenarios with very infrequent writes.
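For the optimistic-locking point, StampedLock (mentioned in the overview) supports exactly this pattern: readers first take a free optimistic stamp and only fall back to a real read lock if a write intervened. A minimal sketch, with names of my own choosing:

```java
import java.util.concurrent.locks.StampedLock;

public class OptimisticCounter {
    private final StampedLock sl = new StampedLock();
    private long value = 0;

    public void increment() {
        long stamp = sl.writeLock();        // exclusive, like a write lock
        try {
            value++;
        } finally {
            sl.unlockWrite(stamp);
        }
    }

    public long read() {
        long stamp = sl.tryOptimisticRead(); // no blocking on the fast path
        long v = value;
        if (!sl.validate(stamp)) {           // a write slipped in: retry under a real read lock
            stamp = sl.readLock();
            try {
                v = value;
            } finally {
                sl.unlockRead(stamp);
            }
        }
        return v;
    }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        c.increment();
        c.increment();
        System.out.println(c.read()); // 2
    }
}
```

Note that StampedLock is not reentrant and has no Condition support, so it is a drop-in replacement only for simple guard-a-field patterns like this one.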

Tags: java JUC ReentrantReadWriteLock ReadWriteLock Caching

Posted on Sun, 17 May 2026 07:45:46 +0000 by !jazz