The Implementation Principle of Synchronized#
synchronized ensures that at any given time only one thread can enter the critical section, while also guaranteeing the memory visibility of shared variables to other threads.
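As a minimal sketch of both guarantees (class and method names are my own), a counter whose increment is guarded by synchronized neither loses updates nor exposes stale values:

```java
// Minimal sketch: a counter whose increment is guarded by the intrinsic
// lock of `this`. synchronized makes the read-modify-write atomic and
// publishes the new value to the next thread that acquires the lock.
public class SyncCounter {
    private int count = 0;

    public synchronized void increment() {
        count++; // one critical section: read, add, write back
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter counter = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter.get()); // always 20000; without synchronized, often less
    }
}
```

Without `synchronized`, the two threads could interleave the read-modify-write steps and silently drop increments.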
Java Object Header and Monitor#
The Java object header and monitor are the foundation for implementing synchronized.
Object Header#
The object header in the HotSpot virtual machine consists of two main parts: the Mark Word and the Klass Pointer. The Klass Pointer points to the object's class metadata; the virtual machine uses it to determine which class the object is an instance of. The Mark Word stores the object's own runtime data, such as its hash code (hashCode), GC generational age, lock state flags, the thread holding the lock, the biased thread ID, and the biased timestamp. It is the key to implementing lightweight locks and biased locks.
Monitor#
A monitor can be thought of as the JVM counterpart of a lock (mutex) in the operating system: every object is associated with a monitor, and a thread must acquire that monitor before entering the critical section and release it on exit.
Scope of Action#
Every object in Java can act as a lock, which is the basis of the synchronized implementation:

- Modifying an instance method: the lock is the current instance object.
- Modifying a static method: the lock is the Class object of the current class.
- Modifying a code block: the lock is the object inside the parentheses.
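The three forms can be sketched as follows (class and method names are illustrative):

```java
// The three forms of synchronized and the lock object each one uses.
public class LockScopes {
    private final Object mutex = new Object();

    // Instance method: the lock is the current instance (`this`).
    public synchronized String instanceMethod() {
        return "locked on this";
    }

    // Static method: the lock is the Class object, LockScopes.class.
    public static synchronized String staticMethod() {
        return "locked on LockScopes.class";
    }

    // Code block: the lock is whatever object is in the parentheses.
    public String blockMethod() {
        synchronized (mutex) {
            return "locked on mutex";
        }
    }
}
```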
Synchronized code blocks are implemented with the monitorenter and monitorexit bytecode instructions, while synchronized methods rely on the ACC_SYNCHRONIZED flag in the method's access flags.
Analysis of Lock Competition in Multithreading#
Lock Object#
Situation 1:
The same object is accessed by two threads through two synchronized methods.
Result: Mutual exclusion occurs.
Explanation: Because the lock is attached to the object, while one thread is executing a synchronized method on an object, any other thread calling a synchronized method on that same object must wait until the first thread finishes and releases the lock.
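A small demo of this (names are mine): the `overlapped` flag could only flip to true if the two synchronized methods ever ran at the same time on the same object:

```java
// Two synchronized instance methods on the SAME object cannot run at the
// same time, because both lock `this`.
public class SameObjectDemo {
    private boolean inside = false;            // true while a critical section runs
    private volatile boolean overlapped = false;

    public synchronized void methodA() { enterAndLeave(); }
    public synchronized void methodB() { enterAndLeave(); }

    // Called only while holding the monitor, so `inside` is safely shared.
    private void enterAndLeave() {
        if (inside) overlapped = true;         // only possible if exclusion failed
        inside = true;
        try { Thread.sleep(10); } catch (InterruptedException ignored) { }
        inside = false;
    }

    public boolean overlapDetected() { return overlapped; }

    public static void main(String[] args) throws InterruptedException {
        SameObjectDemo demo = new SameObjectDemo();
        Thread t1 = new Thread(() -> { for (int i = 0; i < 20; i++) demo.methodA(); });
        Thread t2 = new Thread(() -> { for (int i = 0; i < 20; i++) demo.methodB(); });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println("overlap detected: " + demo.overlapDetected()); // false
    }
}
```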
Situation 2:
Different objects call the same synchronized method in two threads.
Result: No mutual exclusion occurs.
Explanation: Because they are two different objects, and the lock targets the object rather than the method, the calls can execute concurrently without mutual exclusion. To visualize: each thread creates its own object when calling the method, so there are two spaces and two keys, and the threads never compete for the same key.
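One way to see that each instance carries its own monitor (sketch, names mine) is `Thread.holdsLock`, which reports whether the current thread owns a given object's monitor:

```java
// Inside a synchronized instance method the thread owns the monitor of
// `this`, but NOT the monitor of any other instance — so synchronized
// methods on a different object remain freely available.
public class TwoObjectsDemo {
    public synchronized boolean[] lockOwnership(Object other) {
        return new boolean[] {
            Thread.holdsLock(this),   // true: we are inside a synchronized method
            Thread.holdsLock(other)   // false: the other object's monitor is free
        };
    }

    public static void main(String[] args) {
        TwoObjectsDemo a = new TwoObjectsDemo();
        TwoObjectsDemo b = new TwoObjectsDemo();
        boolean[] r = a.lockOwnership(b);
        System.out.println("own a's lock: " + r[0] + ", own b's lock: " + r[1]);
    }
}
```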
Class Lock#
Situation 1:
Two threads call two different static synchronized methods of the same class directly (via the class name).
Result: Mutual exclusion occurs.
Explanation: Locking the class (.class) means there is only one class object, which can be understood as having only one space at any time, with N rooms inside, and one lock. Therefore, the rooms (synchronized methods) must be mutually exclusive.
Note: This situation is the same as declaring an object using the singleton pattern to call a non-static method, as there is only this one object. Thus, access to synchronized methods must be mutually exclusive.
Situation 2:
A single static (shared) instance of a class is used by two threads to call its synchronized methods — either both static or both non-static.
Result: Mutual exclusion occurs.
Explanation: Because both threads go through the same lock object (the Class object for static methods, the one shared instance for non-static methods), this reduces to Situation 1.
Situation 3:
An object calls a static synchronized method and a non-static synchronized method in two threads.
Result: No mutual exclusion occurs.
Explanation: Although it is one object making the calls, the two methods use different locks. The static synchronized method locks the Class object, while the non-static one locks the instance. Since these are two different lock objects, the methods do not exclude each other and can execute concurrently.
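`Thread.holdsLock` also makes this situation visible (sketch, names mine): inside a non-static synchronized method the thread owns the instance's monitor but not the Class object's, while a static synchronized method owns the Class object's monitor:

```java
// A static synchronized method locks the Class object; a non-static one
// locks the instance. Two different monitors, so no mutual exclusion.
public class MixedLockDemo {
    public synchronized boolean holdsClassLockInInstanceMethod() {
        // We own `this` here, but NOT MixedLockDemo.class.
        return Thread.holdsLock(MixedLockDemo.class);
    }

    public static synchronized boolean holdsClassLockInStaticMethod() {
        // Here the Class object's monitor IS held.
        return Thread.holdsLock(MixedLockDemo.class);
    }
}
```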
Lock Optimization#
JDK 1.6 introduced a lot of optimizations for lock implementation, such as spin locks, adaptive spin locks, lock elimination, lock coarsening, biased locks, lightweight locks, etc., to reduce the overhead of lock operations. Locks mainly exist in four states: no lock state, biased lock state, lightweight lock state, and heavyweight lock state, which will gradually upgrade with the intensity of competition. Note that locks can upgrade but cannot downgrade; this strategy is to improve the efficiency of acquiring and releasing locks.
Lock Elimination#
To ensure data integrity, we sometimes need to synchronize certain operations. In some cases, however, the JVM can prove — with data from escape analysis — that a lock object cannot possibly be contended by other threads, and it then eliminates that synchronization entirely.
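A classic candidate (sketch, names mine): `StringBuffer`'s methods are synchronized, but when the buffer never escapes the method, the JVM can prove no other thread will ever lock it and elide the locks:

```java
// Classic lock-elimination candidate: `sb` is created here and never
// escapes concat(), so with escape analysis the JVM can prove no other
// thread can ever lock it and drop StringBuffer's internal
// synchronization entirely.
public class LockEliminationDemo {
    public static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer(); // thread-local by construction
        sb.append(a).append(b);               // synchronized calls, lock elidable
        return sb.toString();
    }
}
```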
Lock Coarsening#
A series of consecutive lock and unlock operations can cause unnecessary performance loss, so the JVM applies lock coarsening: it merges adjacent lock/unlock operations on the same object into a single lock with a wider scope.
Spin Lock#
Blocking and waking a thread requires the CPU to switch between user mode and kernel mode. Frequent blocking and waking is therefore a heavy burden on the CPU and puts great pressure on the system's concurrency performance (compare zero-copy, which likewise exists to cut out unnecessary switches and copies).
What is a spin lock? A spin lock lets a thread wait a short while instead of being suspended immediately, on the chance that the thread holding the lock will release it quickly. How does it wait? By executing a busy loop (spinning) that does no useful work.
Spin waiting cannot replace blocking. Leaving aside the requirement for multiple processors (hardly a constraint now that single-core machines are rare), spinning avoids the overhead of a thread switch but burns processor time while it waits. If the thread holding the lock releases it quickly, spinning is very efficient; otherwise, the spinning thread consumes processor time without doing any useful work — holding on to the CPU while accomplishing nothing — which is pure performance waste.
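A minimal spin lock can be built on a CAS loop (illustrative only, not production-grade; names mine):

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal spin lock: instead of blocking, a waiting thread loops on a CAS
// until the holder releases. Not reentrant and not fair — for
// illustration only.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait ("spin") until we flip false -> true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // JDK 9+: hint to the CPU that we are spinning
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public boolean isLocked() {
        return locked.get();
    }
}
```

A waiter here never enters the kernel: it stays on the CPU, repeatedly attempting the CAS — cheap if the hold time is short, wasteful if it is long, exactly as described above.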
Adaptive Spin Lock#
JDK 1.6 introduced a smarter spin lock, namely the adaptive spin lock. Adaptive means that the number of spins is no longer fixed; it is determined by the previous spin time on the same lock and the state of the lock owner. If a thread spins successfully, the next spin count will increase. Conversely, if there are rarely successful spins for a certain lock, the number of spins will decrease or even skip the spinning process to avoid wasting processor resources.
Lightweight Lock#
The performance improvement of lightweight locks is based on the premise that "for the vast majority of locks, there will be no competition throughout their lifecycle." If this premise is broken, there will be additional CAS operations on top of the mutual exclusion overhead. Therefore, in the case of multi-thread competition, lightweight locks are slower than heavyweight locks.
Biased Lock#
The locking and unlocking operations of lightweight locks still rely on CAS atomic instructions. Biased locks go a step further: the first time a thread acquires the lock, its thread ID is recorded in the Mark Word; on each subsequent entry the thread only checks that the Mark Word is still biased toward it and, if so, skips the CAS and executes the synchronized block directly.
Heavyweight Lock#
Heavyweight locks are implemented through the internal monitor of the object, where the essence of the monitor relies on the Mutex Lock implementation of the underlying operating system. The operating system's implementation of thread switching requires switching from user mode to kernel mode, which has a very high switching cost.