Basic Concepts of Java Concurrency#
Hardware Level#
The CPU executes instructions far faster than it can access memory; the gap spans several orders of magnitude. To prevent memory from becoming a bottleneck, a cache is added between the CPU and main memory.
Cache Consistency#
Adding a cache between the CPU and main memory can lead to cache consistency issues in multi-threaded scenarios. In other words, in a multi-core CPU, the cached content of the same data in each core's cache may be inconsistent.
Two solutions to cache consistency:
- By asserting a LOCK# signal on the bus (modern computers have multi-core CPUs, and locking the bus prevents other cores from accessing memory, which is inefficient).
- Through cache coherence protocols (Cache Coherence Protocol).
MESI Cache Coherence Protocol#
The best-known cache coherence protocol is MESI, used by Intel processors; it ensures that the copies of shared variables held in each cache stay consistent.
The core idea of MESI is: when the CPU writes data, if it finds that the variable being operated on is a shared variable, meaning that a copy of this variable exists in other CPUs, it will send a signal to notify other CPUs to invalidate the cache line of that variable. Therefore, when other CPUs need to read this variable, they will find that the cache line for that variable in their own cache is invalid, and they will read it from memory again.
In the MESI protocol, each cache can have four states:
M (Modified)
: This cache line is valid; the data has been modified and is inconsistent with main memory; the data exists only in this cache.
E (Exclusive)
: This cache line is valid; the data is consistent with main memory; the data exists only in this cache.
S (Shared)
: This cache line is valid; the data is consistent with main memory; copies of the data exist in many caches.
I (Invalid)
: This cache line is invalid.
The MESI protocol can ensure cache coherence, but it cannot guarantee that updates propagate in real time.
Processor Optimization and Instruction Reordering#
Processor Optimization
: To keep the processor's internal execution units as fully utilized as possible, the processor may execute the input code out of order.
Instruction Reordering
: Besides the out-of-order optimization performed by many popular processors, many programming-language compilers apply similar optimizations. For example, the Just-In-Time (JIT) compiler of the Java Virtual Machine also reorders instructions.
- Recall:
- In Spark, tasks without dependencies can be executed concurrently to optimize computation.
- Programs without resource contention can execute concurrently. For example, while one program waits on I/O, programs that do not compete for I/O can run first, saving wait time.
Three Concepts in Concurrent Programming#
Atomicity#
Atomicity means that in an operation, the CPU cannot pause midway and then reschedule; it must either complete the operation or not execute it at all (recalling the atomicity of database transaction processing).
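A minimal sketch of why atomicity matters (class and method names here are illustrative, not from the original text): `count++` compiles to a read, an add, and a write back, so two threads incrementing a plain field concurrently can lose updates. Guarding the increment with `synchronized` turns the read-modify-write into one indivisible unit.

```java
public class AtomicityDemo {
    private int count = 0;

    public synchronized void increment() {
        count++; // read count, add 1, write back -- done as one atomic unit
    }

    public synchronized int get() {
        return count;
    }

    public static int run() throws InterruptedException {
        AtomicityDemo demo = new AtomicityDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) demo.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return demo.get(); // always 20000 thanks to synchronized
    }
}
```

Remove `synchronized` from `increment()` and the result can fall below 20000, because interleaved read-modify-write steps overwrite each other's updates.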
Visibility#
Visibility means that when multiple threads access the same variable, if one thread modifies the value of this variable, other threads can immediately see the modified value.
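A small sketch of the visibility guarantee (the class name and timing values are illustrative): the reader thread spins on a flag until the writer's update becomes visible. With `volatile`, the write to `stop` is guaranteed to be seen by the reader; without it, the reader could in principle spin forever on a stale cached value.

```java
public class VisibilityDemo {
    private volatile boolean stop = false;

    public boolean runOnce() throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!stop) {
                // spin until the write to stop becomes visible
            }
        });
        reader.start();
        Thread.sleep(10);   // let the reader spin briefly
        stop = true;        // volatile write: guaranteed visible to the reader
        reader.join(1000);  // with volatile, this returns almost immediately
        return !reader.isAlive();
    }
}
```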
Ordering#
Ordering means that the execution order of the program follows the sequence of the code.
In fact, atomicity, visibility, and ordering are abstract definitions created by people. Underlying this abstraction are the cache consistency, processor optimization, and instruction reordering problems discussed earlier: the cache consistency problem is essentially a visibility problem, processor optimization may lead to atomicity issues, and instruction reordering can lead to ordering issues.
Java Memory Model#
Basic Concepts#
- Java concurrency adopts a "shared memory" model: threads communicate by reading and writing shared state in memory. Threads cannot interact by passing data to each other directly; they can interact only through shared variables.
- The Java Memory Model (JMM) is an abstract concept that does not physically exist. It describes a set of rules or specifications: all variables live in main memory (analogous to ordinary memory), while each thread has its own working memory (analogous to a cache). Thread operations are therefore performed on working memory; a thread can access only its own working memory, and it synchronizes values back to main memory before and after working on them.

ps: This is somewhat similar to a cache + DB system architecture: all variables live in main memory (similar to the DB), and each thread has its own working memory (analogous to a cache).
Implementation of the Java Memory Model#
- The Java Memory Model (JMM) stipulates that all variables are stored in main memory, and each thread has its own working memory:
- The working memory of a thread holds copies of the variables used by that thread (copied from main memory). All operations on variables must be executed in working memory and cannot directly access variables in main memory.
- Different threads cannot directly access each other's working memory variables; the transfer of variable values between threads must be completed through main memory.
- Communication between Java threads is controlled by the memory model JMM (Java Memory Model):
- JMM determines when a thread's write to a variable becomes visible to another thread.
- Shared variables between threads are stored in main memory.
- Each thread has a private local memory that stores copies of read/write shared variables.
- JMM provides memory visibility guarantees for programmers by controlling the interaction between each thread's local memory.
- Memory interaction operations:
- lock (locking): Acts on a variable in main memory, marking it as exclusively owned by one thread.
- unlock (unlocking): Acts on a variable in main memory, releasing a variable that is in a locked state, allowing other threads to lock it afterward.
- read (reading): Acts on a variable in main memory, reading a variable from main memory into working memory.
- load (loading): Acts on working memory, loading the variable read into the working memory's variable copy.
- use (using): Acts on a variable in working memory, passing the variable value to an execution engine.
- assign (assigning): Acts on a variable in working memory, assigning the value received by the execution engine to the working memory variable.
- store (storing): Acts on a variable in working memory, transferring its value to main memory for a subsequent write operation.
- write (writing): Acts on a variable in main memory, writing the value obtained from the store operation into the main-memory variable.
Note: Main memory and working memory are not the same level of memory division as the Java heap, stack, method area, etc., in the JVM memory structure.
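The operations above are performed by the JVM internally and are not visible in Java source code, but a simple statement can be mapped onto them conceptually. The following sketch (class and field names are illustrative) annotates which operations each line corresponds to:

```java
public class InteractionSketch {
    static int shared = 42;   // lives in main memory

    static int copyShared() {
        // read:   fetch `shared` from main memory
        // load:   place the fetched value into this thread's working copy
        // use:    hand the working copy's value to the execution engine
        int local = shared;

        // assign: the execution engine writes a new value to the working copy
        // store:  transfer the working copy's value toward main memory
        // write:  commit that value into `shared` in main memory
        shared = local + 1;
        return local;
    }
}
```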
Implementation of Concurrency in Java#
Implementation of Atomicity#
- In Java, two high-level bytecode instructions, monitorenter and monitorexit, ensure atomicity; they correspond to the synchronized keyword.
- The Atomic classes can also achieve atomicity. They are based on the CAS principle; refer to Why volatile cannot guarantee atomicity while Atomic can.
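Both mechanisms can be sketched side by side (names are illustrative): a `synchronized` block compiles down to monitorenter/monitorexit, while `AtomicInteger.incrementAndGet()` uses a lock-free CAS loop. Under concurrent increments, both counters stay correct.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicImplDemo {
    private static final Object lock = new Object();
    private static int plain = 0;
    private static final AtomicInteger atomic = new AtomicInteger(0);

    static void incrementBoth() {
        synchronized (lock) {        // compiles to monitorenter ... monitorexit
            plain++;
        }
        atomic.incrementAndGet();    // CAS retry loop, no blocking lock
    }

    public static int[] run() throws InterruptedException {
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 5_000; j++) incrementBoth();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();
        return new int[] { plain, atomic.get() }; // both 20000
    }
}
```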
Implementation of Visibility#
- Use the volatile keyword, which inserts memory barriers, to ensure visibility.
- Use the synchronized keyword to define synchronized code blocks or synchronized methods, ensuring visibility.
- Use the Lock interface to ensure visibility.
- Use Atomic types to ensure visibility.
- Use the final keyword to achieve visibility.
  Once a final field has been initialized (as a static variable or in the constructor), and provided the constructor does not let the "this" reference escape (an escaped "this" is very dangerous, because other threads may observe an incompletely initialized object through it), other threads are guaranteed to see the final field's value.
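The final-field guarantee can be sketched as follows (the `Config` class is illustrative): because `value` is final and `this` does not escape the constructor, any thread that obtains a reference to a constructed `Config` is guaranteed to see the fully initialized field.

```java
public class Config {
    private final int value;

    public Config(int value) {
        this.value = value;
        // do NOT register `this` with listeners or start threads here --
        // that would let other threads see a partially constructed object
    }

    public int getValue() {
        return value; // safely published via the final-field guarantee
    }
}
```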
Implementation of Ordering#
In Java, synchronized and volatile can be used to ensure the ordering of operations between multiple threads.
- The volatile keyword will prevent instruction reordering.
- The synchronized keyword ensures that only one thread at a time executes the synchronized code, so that code effectively runs serially.
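A classic example combining both keywords is double-checked locking (a common idiom, shown here as a sketch): `volatile` forbids the reordering that could publish a not-yet-constructed instance, and `synchronized` serializes the construction itself.

```java
public class Singleton {
    private static volatile Singleton instance;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, no lock
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();  // volatile write: safe publication
                }
            }
        }
        return instance;
    }
}
```

Without `volatile`, the write `instance = new Singleton()` could be reordered so that the reference is published before the constructor finishes, and another thread's first check could observe a partially constructed object.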
Happens-Before Principle#
The JMM has some inherent ordering that can be guaranteed without any means, usually referred to as the happens-before principle. The "JSR-133: Java Memory Model and Thread Specification" defines the following happens-before rules:
Program Order Rule
: Within a single thread, execution appears to follow program order (as-if-serial semantics); reordering is permitted only if it does not change the single-threaded result.
Monitor Lock Rule
: An unlock on a monitor lock happens-before every subsequent lock on that same lock.
Volatile Variable Rule
: A write to a volatile field happens-before every subsequent read of that field.
Transitivity
: If A happens-before B, and B happens-before C, then A happens-before C.
start() Rule
: A thread's call to Thread.start() happens-before every action in the started thread. If thread A modifies a shared variable before calling thread B's start() method, those modifications are visible to thread B once it starts.
join() Thread Termination Rule
: All operations in a thread happen-before that thread's termination. If thread A executes ThreadB.join() and returns successfully, then every operation in thread B happens-before thread A's return from ThreadB.join().
interrupt() Thread Interruption Rule
: A call to a thread's interrupt() method happens-before the interrupted thread's code detects the interruption, which can be checked with the Thread.interrupted() method.
finalize() Object Finalization Rule
: The completion of an object's initialization happens-before the start of its finalize() method.
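The start() and join() rules together can be sketched as follows (class name is illustrative): the write to `result` in thread B happens-before A's successful return from `b.join()`, so thread A reads the correct value without any volatile or lock.

```java
public class JoinRuleDemo {
    private static int result = 0; // plain field: no volatile, no lock

    public static int compute() throws InterruptedException {
        Thread b = new Thread(() -> result = 42); // write inside thread B
        b.start();  // start() rule: A's prior actions are visible to B
        b.join();   // join() rule: B's writes are visible once join() returns
        return result; // guaranteed to read 42
    }
}
```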