Common Lock Strategies
Detailed Lock Strategies
1. Pessimistic Lock vs. Optimistic Lock
Criteria: Before locking, predict whether the probability of lock contention is high or low.
- If the probability is predicted to be high, do more work up front, accepting a larger locking overhead (time, system resources) → Pessimistic Lock
- If the probability is predicted to be low, do less work, with a smaller locking overhead → Optimistic Lock
synchronized is both optimistic and pessimistic; it adapts. It counts lock contentions to estimate the contention probability. When the probability is low, it behaves as an optimistic lock (faster). When the probability is high, it upgrades to a pessimistic lock (more work).
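The two attitudes can be contrasted in code. A minimal sketch (class and method names are hypothetical): the pessimistic version takes a lock before every update, while the optimistic version updates without a lock and retries only if another thread interfered (using the CAS-based atomic classes discussed later in this document).

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticVsPessimistic {
    private static int pessimisticCount = 0;
    private static final Object lock = new Object();
    private static final AtomicInteger optimisticCount = new AtomicInteger(0);

    // Pessimistic: assume contention is likely, so take the lock up front
    static void pessimisticIncrement() {
        synchronized (lock) {
            pessimisticCount++;
        }
    }

    // Optimistic: assume contention is rare; retry only if the value changed underneath us
    static void optimisticIncrement() {
        int old;
        do {
            old = optimisticCount.get();
        } while (!optimisticCount.compareAndSet(old, old + 1));
    }

    public static void main(String[] args) {
        pessimisticIncrement();
        optimisticIncrement();
        System.out.println(pessimisticCount + " " + optimisticCount.get()); // prints "1 1"
    }
}
```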
2. Heavyweight Lock vs. Lightweight Lock
Criteria: The amount of work done.
- More work → Heavyweight lock
- Less work → Lightweight lock
Generally, pessimistic locks tend to be heavyweight, and optimistic locks tend to be lightweight. synchronized is both heavyweight and lightweight; it supports adaptation.
3. Spin Lock vs. Suspension-Based Lock
Spin Lock
void lock() {
    while (true) {
        if (/* lock is not available */) {
            continue;   // busy-wait: immediately try again
        }
        // Lock is available: acquire it and stop spinning
        break;
    }
}
A spin lock consumes more CPU, but it can grab the lock the instant it is released, so lock acquisition is fast. Because the loop never yields, CPU consumption stays high for the entire wait.
Suspension-Based Lock
This is a typical implementation of heavyweight locks. When a thread attempts to lock and the lock is already held (lock contention occurs), the thread is suspended (blocked). It no longer participates in scheduling until the lock is released and the system wakes it up to retry acquiring the lock. (Slower lock acquisition, saves CPU consumption) The time when the blocked thread is awakened is uncontrollable and may take a long time.
synchronized uses spin locks for its lightweight lock part and suspension-based locks for its heavyweight lock part.
4. Reentrant Lock vs. Non-Reentrant Lock
synchronized is reentrant: a thread can lock the same lock multiple times without causing a deadlock. A non-reentrant lock would cause a deadlock if the same thread tries to lock it twice.
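The reentrancy of synchronized can be shown directly: the same thread enters two nested synchronized blocks on the same lock object without deadlocking (class name is hypothetical).

```java
public class ReentrantDemo {
    private static final Object lock = new Object();
    private static int depth = 0;

    public static void main(String[] args) {
        synchronized (lock) {          // first acquisition
            depth++;
            synchronized (lock) {      // same thread re-acquires the same lock: no deadlock
                depth++;
            }
        }
        System.out.println(depth);     // prints 2
    }
}
```

With a non-reentrant lock, the inner acquisition would block forever waiting for the outer one to be released.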
5. Fair Lock vs. Unfair Lock
- Fair Lock: Threads acquire locks strictly in the order they requested (FIFO).
- Unfair Lock: Threads compete when the lock is released; lock acquisition is independent of how long each thread has waited.
Analogy: a fair lock is like queuing for food; an unfair lock is like everyone grabbing tickets at once. synchronized is an unfair lock: when it is released, all waiting threads compete for it.
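synchronized offers no fairness control, but the standard library's ReentrantLock does: its constructor takes a fairness flag. A minimal sketch (class name is hypothetical):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    public static void main(String[] args) {
        ReentrantLock fairLock = new ReentrantLock(true);    // FIFO ordering among waiting threads
        ReentrantLock unfairLock = new ReentrantLock(false); // default: threads may barge in
        System.out.println(fairLock.isFair());   // prints true
        System.out.println(unfairLock.isFair()); // prints false
    }
}
```

Fairness avoids starvation but costs throughput, which is why unfair is the default.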
6. Mutual Exclusion Lock vs. Read-Write Lock
- Mutual Exclusion Lock: Makes no distinction between reads and writes; any locked operation excludes all others, so reads block writes and writes block reads.
- Read-Write Lock in Java:
- Read locks do not exclude each other.
- Write locks exclude each other.
- Read and write locks exclude each other.
In many scenarios with many reads and few writes, read-write locks improve efficiency.
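Java's read-write lock is ReentrantReadWriteLock. A minimal sketch of the rules above (class and method names are hypothetical): many threads may hold the read lock at once, while the write lock is exclusive against both readers and other writers.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RWDemo {
    private static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private static int value = 0;

    static int read() {
        rw.readLock().lock();     // shared: many readers may hold this simultaneously
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    static void write(int v) {
        rw.writeLock().lock();    // exclusive: blocks all readers and other writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(42);
        System.out.println(read()); // prints 42
    }
}
```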
Synchronized Implementation Principles
synchronized is both pessimistic and optimistic, both lightweight and heavyweight. Lightweight locks are implemented via spin locks; heavyweight locks via suspension-based locks. It is reentrant, not a read-write lock, and unfair.
The adaptive lock upgrade process:
- Unlocked → code starts executing the synchronized block → Biased Lock
- Biased Lock → lock contention occurs → Lightweight Lock
- Lightweight Lock → contention increases → Heavyweight Lock
Biased lock aims to avoid locking if possible, or delay locking as much as possible. Metaphor: Like buying a house: initially unlocked means not bought; expressing interest is biased lock; when someone else wants to buy, it upgrades to lightweight lock (down payment); if many buyers compete, it becomes heavyweight lock.
Lock Elimination
Most typical case: using synchronized on an object that only a single thread ever touches. The compiler removes locks it can prove are unnecessary, guaranteeing equivalent behavior before and after the optimization; because this proof must be conservative, the effect is limited. Eliminating the lock skips the locking steps and improves efficiency.
Lock Coarsening
Lock granularity: The amount of code within a lock. More code → coarser granularity; less code → finer granularity.
synchronized (this) {
    fun1();
}
synchronized (this) {
    fun2();
}
synchronized (this) {
    fun3();
}
Merged:
synchronized (this) {
    fun1();
    fun2();
    fun3();
}
Optimization strategies of synchronized:
- Lock upgrade (important, frequent exam topic: understand and explain biased locks clearly)
- Lock elimination
- Lock coarsening
CAS (Compare and Swap)
CAS is a CPU instruction that performs comparison and exchange atomically.
// Pseudocode: the whole function executes atomically as a single CPU instruction
boolean cas(memoryAddress, register1, register2) {
    if (/* value at */ memoryAddress == register1) {
        // Expected value matched: exchange the memory value with register2
        swap(memoryAddress, register2);
        return true;
    }
    return false;   // Value changed in the meantime; the caller may retry
}
CAS is typically used to update shared variables. In Java, the java.util.concurrent.atomic classes expose CAS operations. Locking often implies lower performance, but CAS allows thread-safe code without locks.
import java.util.concurrent.atomic.AtomicInteger;

public class Test5 {
    private static AtomicInteger counter = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 5000; i++) {
                counter.getAndIncrement(); // count++
                counter.incrementAndGet(); // ++count
                counter.getAndDecrement(); // count--
                counter.decrementAndGet(); // --count
                counter.getAndAdd(10);     // count += 10
            }
        });
        t1.start();
        t1.join();                         // wait for the thread to finish
        System.out.println(counter.get()); // net +10 per iteration: 50000
    }
}
This code uses no locks; it is based on CAS. This approach is called lock-free programming. CAS can also be used to implement a spin lock, but that defeats the purpose of lock-free programming.
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    // The thread holding the lock; null means unlocked
    private final AtomicReference<Thread> owner = new AtomicReference<>(null);

    public void lock() {
        while (!owner.compareAndSet(null, Thread.currentThread())) {
            // If owner is not null, CAS fails and we spin (busy-wait)
        }
    }

    public void unlock() {
        owner.set(null);
    }
}
ABA Problem of CAS
CAS relies on comparing and swapping only if the value matches the expected value. The assumption is that the data hasn't changed in between. However, another thread might change the value from A to B and back to A, making it appear unchanged even though it was modified. This is the ABA problem, which may or may not affect correctness depending on the scenario.
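When ABA does matter, the standard library's AtomicStampedReference pairs the value with a version stamp that only ever grows, so an A→B→A sequence is still detected. A minimal sketch (class name is hypothetical; small Integer values are used deliberately so that reference comparison behaves like value comparison):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class ABADemo {
    public static void main(String[] args) {
        // Value 1 with initial stamp 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int stamp = ref.getStamp();

        // Simulate another thread changing A -> B -> A, bumping the stamp each time
        ref.compareAndSet(1, 2, stamp, stamp + 1);
        ref.compareAndSet(2, 1, stamp + 1, stamp + 2);

        // A plain ABA-vulnerable CAS would succeed here; the stamped CAS fails
        boolean ok = ref.compareAndSet(1, 9, stamp, stamp + 1);
        System.out.println(ok);                 // prints false: the stamp has moved on
        System.out.println(ref.getReference()); // prints 1
    }
}
```

Note that AtomicStampedReference compares references with ==, which is why the sketch sticks to small autoboxed Integers inside the Integer cache range.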