Thread synchronization coordinates concurrent access to shared resources, preventing race conditions and ensuring memory visibility. Java provides multiple coordination mechanisms ranging from low-level intrinsic monitors to high-level concurrency utilities.
Intrinsic Locking
Every Java object possesses an intrinsic lock (monitor). When a thread acquires this lock, it establishes a happens-before relationship with subsequent lock acquisitions, ensuring visibility of memory writes across threads.
Method-Level Synchronization
Declaring a method synchronized causes the thread to acquire the instance's monitor before execution:
import java.util.concurrent.CountDownLatch;

public class SafeCounter {
    private int value = 0;

    public synchronized void increment() {
        value++;
    }

    public synchronized int fetch() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        CountDownLatch latch = new CountDownLatch(2);
        Runnable worker = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.increment();
            }
            latch.countDown();
        };
        new Thread(worker).start();
        new Thread(worker).start();
        latch.await();
        System.out.println("Result: " + counter.fetch());
    }
}
Block-Level Synchronization
For finer granularity, synchronize specific code segments using arbitrary objects as monitors:
public class GranularCounter {
    private final Object mutex = new Object();
    private int total = 0; // guarded by mutex; volatile is unnecessary when every access is synchronized

    public void add(int delta) {
        synchronized (mutex) {
            total += delta;
        }
    }

    public int current() {
        synchronized (mutex) {
            return total;
        }
    }
}
Block-level locking reduces contention by limiting the critical section scope compared to entire method synchronization.
Explicit Locks with ReentrantLock
The java.util.concurrent.locks.Lock interface provides more flexible locking strategies than intrinsic monitors. ReentrantLock supports interruptible lock acquisition, timeout-based attempts, and fairness policies.
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockBasedAccumulator {
    private final Lock guard = new ReentrantLock();
    private long sum = 0;

    public void add(long amount) {
        guard.lock();
        try {
            sum += amount;
        } finally {
            guard.unlock();
        }
    }

    public long getSum() {
        guard.lock();
        try {
            return sum;
        } finally {
            guard.unlock();
        }
    }
}
Unlike synchronized, explicit locks must be released manually. Placing unlock() in a finally block guarantees release even when the guarded code throws; otherwise the lock would remain held forever, blocking every other thread that tries to acquire it.
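The accumulator above always blocks until the lock is free. ReentrantLock's timed tryLock bounds the wait instead, which is one of the flexible strategies mentioned earlier. A minimal sketch (the class name and withdrawal logic are illustrative, not from the original):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimeoutAccount {
    private final ReentrantLock lock = new ReentrantLock();
    private long balance = 100;

    // Attempts the withdrawal, but gives up after the timeout
    // instead of blocking indefinitely on a contended lock.
    public boolean tryWithdraw(long amount, long timeoutMillis) {
        boolean acquired;
        try {
            acquired = lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false; // interrupted while waiting; lock was never acquired
        }
        if (!acquired) {
            return false; // could not get the lock within the timeout
        }
        try {
            if (balance < amount) {
                return false; // insufficient funds
            }
            balance -= amount;
            return true;
        } finally {
            lock.unlock();
        }
    }
}
```

A caller can treat a false return as a signal to retry later or fail fast, rather than stalling the thread.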
Semaphore-Based Throttling
Semaphores control access to finite resources through permit counting. Unlike mutexes that allow single-thread access, semaphores enable configurable concurrency levels:
import java.util.concurrent.Semaphore;
import java.util.stream.IntStream;

public class ResourcePool {
    private final Semaphore permits = new Semaphore(3);

    public void useResource(int taskId) {
        try {
            permits.acquire();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return; // never acquired a permit, so there is nothing to release
        }
        try {
            System.out.printf("Task %d acquired resource%n", taskId);
            Thread.sleep(1500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            System.out.printf("Task %d releasing resource%n", taskId);
            permits.release();
        }
    }

    public static void main(String[] args) {
        ResourcePool pool = new ResourcePool();
        IntStream.range(1, 8)
                 .forEach(i -> new Thread(() -> pool.useResource(i)).start());
    }
}
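Semaphore also supports non-blocking acquisition via tryAcquire, which turns callers away immediately when no permit is available instead of queueing them. A small sketch of this admission-control style (class and method names are assumptions for illustration):

```java
import java.util.concurrent.Semaphore;

// Non-blocking admission control: at most two callers may be
// inside the guarded section; others are rejected immediately.
public class AdmissionControl {
    private final Semaphore permits = new Semaphore(2);

    // Returns false without blocking when both permits are in use.
    public boolean tryEnter() {
        return permits.tryAcquire();
    }

    // Must be called exactly once per successful tryEnter().
    public void exit() {
        permits.release();
    }
}
```

This load-shedding pattern suits request handlers where failing fast is preferable to unbounded queueing.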
ReadWriteLock Optimization
ReentrantReadWriteLock separates access modes: multiple threads can hold read locks concurrently, while write locks remain exclusive. This pattern suits read-heavy workloads:
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CachedData {
    private final ReadWriteLock rwl = new ReentrantReadWriteLock();
    private int data = 0;

    public void update(int newValue) {
        rwl.writeLock().lock();
        try {
            data = newValue;
        } finally {
            rwl.writeLock().unlock();
        }
    }

    public int read() {
        rwl.readLock().lock();
        try {
            return data;
        } finally {
            rwl.readLock().unlock();
        }
    }
}
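ReentrantReadWriteLock permits downgrading a write lock to a read lock (acquire the read lock before releasing the write lock), though upgrading from read to write is not supported. A hedged sketch of the downgrading pattern (class name and logic are illustrative):

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradingCache {
    private final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();
    private int data = 0;

    // Writes a value, then downgrades to a read lock so other readers
    // can proceed while this thread still reads a consistent value.
    public int refreshAndRead(int newValue) {
        rwl.writeLock().lock();
        try {
            data = newValue;
            // Downgrade: acquire the read lock BEFORE releasing the write lock,
            // so no other writer can slip in between.
            rwl.readLock().lock();
        } finally {
            rwl.writeLock().unlock();
        }
        try {
            return data; // observed under the read lock
        } finally {
            rwl.readLock().unlock();
        }
    }
}
```

Attempting the reverse (acquiring the write lock while holding the read lock) deadlocks the calling thread, which is why the write lock is always taken first here.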
Synchronization Guidelines
- Prefer higher-level utilities: Use ConcurrentHashMap, AtomicInteger, or BlockingQueue before implementing custom locking
- Minimize critical sections: Hold locks for the shortest duration possible to reduce contention
- Lock ordering: When acquiring multiple locks, establish a global ordering to prevent circular wait conditions
- Visibility without locking: Use volatile for single-variable visibility guarantees when atomicity isn't required
- Avoid locking during I/O: Never hold locks during blocking operations like network or disk access
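Following the first guideline, java.util.concurrent often eliminates hand-rolled locking entirely. A minimal sketch combining AtomicInteger and ConcurrentHashMap (the class and its counters are illustrative, not from the original):

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class LockFreeStats {
    private final AtomicInteger total = new AtomicInteger();
    private final ConcurrentHashMap<String, AtomicInteger> perKey = new ConcurrentHashMap<>();

    public void record(String key) {
        total.incrementAndGet(); // atomic compare-and-swap, no monitor needed
        // computeIfAbsent atomically creates the per-key counter on first use.
        perKey.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
    }

    public int total() {
        return total.get();
    }

    public int count(String key) {
        AtomicInteger c = perKey.get(key);
        return c == null ? 0 : c.get();
    }
}
```

Because every operation here is atomic at the library level, there is no critical section to size, order, or accidentally hold across I/O.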