Java Concurrency: A Deep Dive into Lock Mechanisms

In Java’s multi‑threaded ecosystem, locks govern access to shared resources and ensure consistency. They can be classified along several dimensions, each addressing distinct design concerns: optimistic versus pessimistic concurrency control, blocking versus non‑blocking semantics, fairness policy, reentrance capability, and the ability to share a lock among threads. This article explores these categories, their internal workings, and practical usage patterns.

Pessimistic and Optimistic Locking

Pessimistic locking assumes conflicts are likely. Before touching any data, a thread acquires an exclusive lock, blocking all other contenders. In Java, synchronized and java.util.concurrent.locks.ReentrantLock embody this approach: they lock even when the data might remain unchanged. Databases adopt a similar strategy using row‑level or table‑level locks.

Optimistic locking proceeds without acquiring a lock for reads, expecting minimal interference. Conflicts are checked only at write time. This strategy is typically realized through a version number or Compare‑And‑Swap (CAS). Java’s atomic variable classes (e.g., AtomicInteger) rely on CAS to implement lock‑free, non‑blocking algorithms.

Version Number Approach

A dedicated version column is added to the data store. When a transaction reads a record, it also captures the version number. On update, the write succeeds only if the version matches the value originally read; otherwise the operation is retried. This prevents lost updates in scenarios like financial systems where multiple operators may modify the same balance.
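
The version check can be sketched in plain Java (the Account class and its method names are illustrative; synchronized here stands in for the database's atomic row update):

```java
// Illustrative data holder; the version field mirrors a database version column.
class Account {
    private long balance;
    private long version;

    // Snapshot of balance and version, taken together.
    public synchronized long[] read() {
        return new long[] { balance, version };
    }

    // The write succeeds only if the version still matches the value read earlier.
    public synchronized boolean tryUpdate(long newBalance, long expectedVersion) {
        if (version != expectedVersion) {
            return false;               // another writer got in first
        }
        balance = newBalance;
        version++;
        return true;
    }

    // Optimistic retry loop: re-read and re-apply on conflict.
    public void deposit(long amount) {
        while (true) {
            long[] snapshot = read();
            if (tryUpdate(snapshot[0] + amount, snapshot[1])) {
                return;
            }
        }
    }
}
```

A stale write is simply rejected and retried rather than blocking other operators up front.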

CAS Algorithm

CAS operates on three values: the memory location V, the expected value A, and the new value B. If V == A, the location is atomically set to B. Java exposes CAS through the java.util.concurrent.atomic package. The following snippet demonstrates a thread‑safe counter using AtomicInteger:

import java.util.concurrent.atomic.AtomicInteger;

class ThreadSafeCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public void increment() {
        value.incrementAndGet();
    }

    public void decrement() {
        value.decrementAndGet();
    }

    public int current() {
        return value.get();
    }
}

public class CounterDemo {
    static final int ITERATIONS = 10_000;

    public static void main(String[] args) throws InterruptedException {
        ThreadSafeCounter counter = new ThreadSafeCounter();
        Runnable producer = () -> {
            for (int i = 0; i < ITERATIONS; i++) counter.increment();
        };
        Runnable consumer = () -> {
            for (int i = 0; i < ITERATIONS; i++) counter.decrement();
        };

        Thread t1 = new Thread(producer);
        Thread t2 = new Thread(consumer);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println(counter.current()); // always 0
    }
}

Although CAS avoids blocking, it suffers from the ABA problem: a value may change from A to B and back to A, fooling the check. AtomicStampedReference mitigates this by maintaining a version stamp alongside the value. Additionally, heavy contention can cause prolonged spinning, wasting CPU cycles.
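
A brief sketch of how AtomicStampedReference catches an A→B→A sequence that a plain CAS would miss:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        int[] stampHolder = new int[1];
        String value = ref.get(stampHolder);    // read value and stamp together
        int stamp = stampHolder[0];

        // Simulate another thread changing A -> B -> back to A; each write bumps the stamp.
        ref.compareAndSet("A", "B", stamp, stamp + 1);
        ref.compareAndSet("B", "A", stamp + 1, stamp + 2);

        // The value is "A" again, but the stale stamp exposes the intervening writes.
        boolean swapped = ref.compareAndSet(value, "C", stamp, stamp + 1);
        System.out.println(swapped ? "updated" : "ABA detected"); // prints "ABA detected"
    }
}
```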

Spin Locks

When a thread fails to acquire a lock, it may either be suspended (mutex) or loop actively checking the lock state. The latter is a spin lock. Spinning eliminates context‑switch overhead and is advantageous when the lock is held for very short intervals. However, under high contention or long hold times, spinning degrades performance.

A basic spin lock can be built with AtomicBoolean:

import java.util.concurrent.atomic.AtomicBoolean;

class SimpleSpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void acquire() {
        while (!locked.compareAndSet(false, true)) {
            // busy wait
        }
    }

    public void release() {
        if (!locked.compareAndSet(true, false)) {
            throw new IllegalStateException("Lock was not held");
        }
    }
}

This implementation lacks fairness. To enforce FIFO ordering, developers created queue‑based spin locks. TicketLock assigns sequential tickets to arriving threads and serves them in order, analogous to a deli counter. CLHLock and MCSLock are linked‑list‑based variants that reduce cache coherency traffic by having each thread spin on a local variable.

TicketLock Example

import java.util.concurrent.atomic.AtomicInteger;

class TicketSpinLock {
    private final AtomicInteger serving = new AtomicInteger(0);
    private final AtomicInteger nextTicket = new AtomicInteger(0);

    public int acquire() {
        int myTicket = nextTicket.getAndIncrement();
        while (serving.get() != myTicket) { /* spin */ }
        return myTicket;
    }

    public void release(int ticket) {
        serving.compareAndSet(ticket, ticket + 1);
    }
}

CLHLock (Simplified)

import java.util.concurrent.atomic.AtomicReference;

class CLHLock {
    private static class Node {
        volatile boolean locked = true;
    }

    private final AtomicReference<Node> tail = new AtomicReference<>(null);
    private final ThreadLocal<Node> currentNode = ThreadLocal.withInitial(Node::new);

    public void acquire() {
        Node node = currentNode.get();
        node.locked = true;
        Node pred = tail.getAndSet(node);
        if (pred != null) {
            while (pred.locked) { /* spin on predecessor */ }
        }
    }

    public void release() {
        Node node = currentNode.get();
        node.locked = false;             // let the successor (if any) proceed
        tail.compareAndSet(node, null);  // reset tail if we were last
        currentNode.set(new Node());     // the successor may still observe the old
                                         // node, so never reuse it for re-acquisition
    }
}
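
MCSLock, mentioned above, takes the local-spinning idea further: each thread spins on a field of its own node, and the releasing thread explicitly hands the lock to its successor. A simplified sketch:

```java
import java.util.concurrent.atomic.AtomicReference;

class MCSLock {
    private static class Node {
        volatile boolean locked = false;  // this thread's own spin flag
        volatile Node next = null;        // successor waiting behind us
    }

    private final AtomicReference<Node> tail = new AtomicReference<>(null);
    private final ThreadLocal<Node> currentNode = ThreadLocal.withInitial(Node::new);

    public void acquire() {
        Node node = currentNode.get();
        node.next = null;                  // clean up from any previous use
        Node pred = tail.getAndSet(node);  // enqueue ourselves at the tail
        if (pred != null) {
            node.locked = true;
            pred.next = node;              // publish the link to our predecessor
            while (node.locked) { /* spin on our own node, not the predecessor's */ }
        }
    }

    public void release() {
        Node node = currentNode.get();
        if (node.next == null) {
            // No visible successor: if none is enqueueing, reset the queue.
            if (tail.compareAndSet(node, null)) {
                return;
            }
            // A successor swapped the tail but has not linked yet; wait for the link.
            while (node.next == null) { /* spin */ }
        }
        node.next.locked = false;          // hand the lock to the successor
    }
}
```

Because every thread waits on a flag in its own node, each spin stays in that core's cache, which is what cuts the coherency traffic compared with the basic AtomicBoolean spin lock.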

Synchronized and Lock Inflation

The synchronized keyword is a pessimistic, reentrant lock implemented through monitor objects. Internally, each Java object header contains a Mark Word that encodes hashcode, garbage collection age, and lock state. The lock state evolves through four levels: unlocked, biased, lightweight, and heavyweight.

  • Biased locking assumes a single thread repeatedly acquires the lock. The object header records that thread’s ID, avoiding CAS on subsequent entries/exits.
  • When another thread attempts to acquire the lock, the bias is revoked, and the lock transitions to a lightweight state. The competing thread allocates a lock record in its stack frame and uses CAS to point the object header to that record. The thread then spins briefly hoping the owner releases quickly.
  • If lightweight contention persists or the spinning threshold is exceeded, the lock is inflated to a heavyweight monitor. At this point the operating system’s mutex is used, and waiting threads are suspended.

Lock inflation is one‑way; locks do not deflate. Biased and lightweight locks improve performance in scenarios with low to moderate contention, while heavyweight mode handles extreme contention. Note that biased locking was deprecated and disabled by default in JDK 15 (JEP 374), so on recent JVMs the progression effectively starts at the lightweight state.

Fairness and Reentrancy

Fair vs. non‑fair locks determine acquisition order. A fair lock grants the lock to the longest‑waiting thread (FIFO), while a non‑fair lock allows barging, possibly causing starvation but often yielding higher throughput. ReentrantLock accepts a boolean fairness parameter:

ReentrantLock fairLock = new ReentrantLock(true);
ReentrantLock unfairLock = new ReentrantLock(false);

Internally, fair mode relies on AQS’s hasQueuedPredecessors() to check whether any thread is waiting longer. Non‑fair mode simply attempts a CAS‑based acquisition first, then falls back to queuing.
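
To see where hasQueuedPredecessors() fits, here is a minimal fair, non‑reentrant mutex built on AQS (a sketch only; ReentrantLock’s real Sync classes are considerably more involved):

```java
import java.util.concurrent.locks.AbstractQueuedSynchronizer;

class FairMutex {
    private final Sync sync = new Sync();

    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int unused) {
            // Fair policy: yield to any thread that has been queued longer.
            if (hasQueuedPredecessors()) {
                return false;
            }
            return compareAndSetState(0, 1);   // 0 = free, 1 = held
        }

        @Override
        protected boolean tryRelease(int unused) {
            setState(0);   // sketch: assumes only the holder calls unlock()
            return true;
        }
    }

    public void lock()   { sync.acquire(1); }
    public void unlock() { sync.release(1); }
}
```

A non‑fair variant would simply drop the hasQueuedPredecessors() check, letting a newly arrived thread barge in via the CAS before queued threads are resumed.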

Reentrancy means a thread that already holds a lock may re‑acquire it without blocking. Both synchronized and ReentrantLock are reentrant:

public synchronized void outer() {
    System.out.println("Outer");
    inner();
}

public synchronized void inner() {
    System.out.println("Inner");
}

Without reentrancy, calling inner() from outer() would deadlock.
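
The same re‑acquisition is observable on ReentrantLock through its hold count; each lock() must be balanced by an unlock():

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrancyDemo {
    public static void main(String[] args) {
        ReentrantLock lock = new ReentrantLock();

        lock.lock();                              // first acquisition
        lock.lock();                              // re-entry, does not block
        System.out.println(lock.getHoldCount());  // 2

        lock.unlock();                            // hold count drops back to 1
        lock.unlock();                            // fully released
        System.out.println(lock.isLocked());      // false
    }
}
```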

Shared and Exclusive Locks

Exclusive (write) locks allow only one thread. Shared (read) locks permit multiple concurrent readers but exclude writers. ReentrantReadWriteLock combines a read lock and a write lock:

ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
Lock readLock = rwLock.readLock();
Lock writeLock = rwLock.writeLock();

Readers can enter simultaneously if no writer is active. A writer must wait for all readers to finish. This improves concurrency in read‑heavy workloads.
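
A typical application is a read‑mostly cache (a sketch; the class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class RwCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    // Many readers may hold the read lock at the same time.
    public V get(K key) {
        rwLock.readLock().lock();
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    // Writers are exclusive: they wait for all readers to drain.
    public void put(K key, V value) {
        rwLock.writeLock().lock();
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```

Because gets take only the shared read lock, lookups from many threads proceed in parallel; only put() serializes access.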

Tags: java Concurrency Lock Synchronized ReentrantLock

Posted on Wed, 13 May 2026 04:38:29 +0000 by bhoward3