MySQL Summary#
1. Four Major Characteristics of Transactions (ACID)#
Atomicity#
Atomicity means that a transaction is an indivisible unit of work; the operations within a transaction either all occur or none occur.
Consistency#
If the database is in a consistent state before a transaction executes, it remains in a consistent state after the transaction ends, regardless of whether the transaction succeeded. (A database is in a consistent state when all of its data satisfies all the constraints defined on it.)
Isolation#
Isolation means that when multiple users access the database concurrently, one user's transaction should not be interfered with by another user's transaction; data between multiple concurrent transactions should be isolated from each other.
Durability#
Durability means that once a transaction is committed, the changes it made to the data in the database are permanent, and subsequent database failures should not affect this.
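A minimal sketch of these properties in practice (the account table, ids, and amounts are hypothetical):

```sql
-- A hypothetical transfer: both UPDATEs take effect together, or neither does.
START TRANSACTION;

UPDATE account SET balance = balance - 100 WHERE id = 1;
UPDATE account SET balance = balance + 100 WHERE id = 2;

-- Any failure before this point can be undone with ROLLBACK;
-- once COMMIT returns, the change survives crashes (durability).
COMMIT;
```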
Extensions#
- CAP Theorem and BASE Theory. Reference: CAP and BASE Theory
  - dubbo + zookeeper mainly implements CP
  - springcloud eureka [hystrix] mainly implements AP
- A business scenario to consider: the transaction has been committed, but the data is lost.
2. Database Isolation Levels#
Read Uncommitted#
Reads uncommitted content; at this isolation level, all transactions can see the execution results of other, uncommitted transactions. It is rarely used in practice because its performance is not much better than the other levels. Reading uncommitted data is also known as a Dirty Read.
Read Committed#
Reads committed content; this is the default isolation level for most database systems (but not MySQL's default). It satisfies the simple definition of isolation: a transaction only sees changes made by already-committed transactions. This level can lead to the so-called Nonrepeatable Read, because other transactions may commit new changes while this one is still running, so the same SELECT may return different results.
Repeatable Read#
This is MySQL's default transaction isolation level. It ensures that repeated reads within the same transaction see the same rows. In theory, however, it can still lead to another tricky problem: the Phantom Read.
Serializable#
Serialization; this is the highest isolation level. It prevents transactions from conflicting with each other by forcing them to execute in order, which also solves the phantom read problem. In short, it places a shared lock on every row it reads. At this level, a large number of timeouts and heavy lock contention can occur.
Isolation Level | Dirty Read | Nonrepeatable Read | Phantom Read
---|---|---|---
Read Uncommitted | ✔️ | ✔️ | ✔️
Read Committed | ✖️ | ✔️ | ✔️
Repeatable Read | ✖️ | ✖️ | ✔️
Serializable | ✖️ | ✖️ | ✖️
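For reference, the current isolation level can be inspected and changed per session (the variable is tx_isolation before MySQL 8.0 and transaction_isolation from 8.0 on):

```sql
-- Check the isolation level of the current session (MySQL 8.0 name shown).
SELECT @@transaction_isolation;

-- Change it for this session only; it applies from the next transaction onward.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
```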
Dirty Read#
If one transaction updates a piece of data while another transaction reads that same data, and the first transaction is then rolled back for some reason, the data the second transaction read is invalid.
Phantom Read#
Two identical queries within one transaction return a different number of rows: for example, a transaction selects several rows while another transaction inserts new rows at the same time, so the first transaction finds rows in its later query that did not appear in the earlier one. The InnoDB and Falcon storage engines address this problem with their Multi-Version Concurrency Control (MVCC) mechanism.
3. MySQL Lock Mechanism#
Reference link: MySQL Lock Summary
Associative memory: the Happens-Before semantics in Java (the volatile keyword).
Shared Locks and Exclusive Locks#
- Shared Lock (Read Lock): other transactions can read but cannot write.
- Exclusive Lock (Write Lock): other transactions can neither read nor write.
Lock Granularity#
- Table-Level Lock
  - Low overhead, fast to acquire; no deadlocks; the largest locking granularity, the highest probability of lock conflicts, and the lowest concurrency.
  - Table-level locks suit query-dominated applications with few concurrent users and only small amounts of data updated by index conditions, such as web applications.
  - These storage engines avoid deadlocks by always acquiring all the locks they need at once and always acquiring table locks in the same order.
- Row-Level Lock
  - High overhead, slow to acquire; deadlocks can occur; the smallest locking granularity, the lowest probability of lock conflicts, and the highest concurrency.
  - Supports the most concurrency, but also carries the highest locking overhead.
  - Row-level locks are implemented only at the storage engine level; the MySQL server layer does not implement them. They suit applications with many concurrent updates to small amounts of different data by index conditions, combined with concurrent queries, such as Online Transaction Processing (OLTP) systems.
- Page Lock: overhead and locking time sit between table locks and row locks; deadlocks can occur; locking granularity sits between table locks and row locks, with moderate concurrency.

By default, both table locks and row locks are acquired automatically, without extra commands. In some cases, however, users need to lock tables explicitly or control transactions themselves to guarantee the integrity of the whole transaction, which requires transaction control and locking statements.
Various Engine Locks#
- The MyISAM and MEMORY storage engines use table-level locking.
- The BDB storage engine uses page-level locking, but also supports table-level locking.
- The InnoDB storage engine supports both row-level locking and table-level locking, and defaults to row-level locking. In InnoDB, locks are acquired incrementally (except in transactions consisting of a single SQL statement), which is why deadlocks can occur in InnoDB.
MyISAM Table Locks#
MyISAM Table-Level Locking Modes#
- Table Shared Read Lock: does not block other users' read requests on the same table, but blocks their write requests.
- Table Exclusive Write Lock: blocks other users' read and write operations on the same table.

Read and write operations on MyISAM tables are serialized with respect to each other. When a thread obtains a write lock on a table, only that thread can update the table; other threads' reads and writes wait until the lock is released.
By default, write locks have a higher priority than read locks: when a lock is released, requests waiting in the write-lock queue are granted before requests waiting in the read-lock queue.
This is also why MyISAM tables are not well suited to applications with large numbers of both updates and queries: heavy update traffic makes it hard for queries to obtain a read lock, and they may block indefinitely.
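A minimal sketch of explicit table locking (the table and column names are hypothetical):

```sql
-- Table shared read lock: other sessions can still read t, but writes to t are blocked.
LOCK TABLES t READ;
SELECT COUNT(*) FROM t;
UNLOCK TABLES;

-- Table exclusive write lock: other sessions can neither read nor write t.
LOCK TABLES t WRITE;
UPDATE t SET c = c + 1;
UNLOCK TABLES;
```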
MyISAM Locking Method#
In the case of automatic locking, MyISAM always obtains all the locks required by SQL statements at once, which is why MyISAM tables do not experience deadlocks (Deadlock Free).
The MyISAM storage engine supports concurrent inserts to reduce contention between read and write operations on a given table:
If there are no free blocks in the middle of a MyISAM table's data file, rows are always inserted at the end of the data file. In this case, you can freely mix concurrent INSERT and SELECT statements on the MyISAM table without locking: rows can be inserted while other threads are reading. Free blocks in the middle of the file arise from deleted or updated rows. If such free blocks exist, concurrent inserts are disabled; once all free blocks have been filled with new data, concurrent inserts are automatically re-enabled. This behavior is controlled by MySQL's concurrent_insert system variable:
- When concurrent_insert is set to 0, concurrent inserts are not allowed.
- When concurrent_insert is set to 1, if there are no holes in the MyISAM table (i.e., no deleted rows in the middle of the table), MyISAM allows one thread to read the table while another thread inserts records from the end of the table. This is also the default setting for MySQL.
- When concurrent_insert is set to 2, concurrent inserts are allowed at the end of the table regardless of whether there are holes in the MyISAM table.
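The variable can be inspected and changed at runtime; a small sketch (values 0/1/2 correspond to NEVER/AUTO/ALWAYS in newer versions):

```sql
SHOW VARIABLES LIKE 'concurrent_insert';

-- Allow concurrent inserts at the end of the table even when there are holes.
SET GLOBAL concurrent_insert = 2;
```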
Querying Table-Level Lock Contention#
You can analyze table lock contention on the system by checking the table_locks_waited and table_locks_immediate status variables. If the value of Table_locks_waited is relatively high, it indicates a serious table-level lock contention situation:
```sql
mysql> SHOW STATUS LIKE 'Table%';
+-----------------------+---------+
| Variable_name         | Value   |
+-----------------------+---------+
| Table_locks_immediate | 1151552 |
| Table_locks_waited    | 15324   |
+-----------------------+---------+
```
InnoDB Row-Level Locks and Table-Level Locks#
InnoDB Lock Modes#
InnoDB implements the following two types of row locks:
- Shared Lock (S): allows a transaction to read a row and prevents other transactions from obtaining an exclusive lock on the same data set.
- Exclusive Lock (X): allows the transaction holding it to update the data and prevents other transactions from obtaining shared read locks or exclusive write locks on the same data set.

To let row locks and table locks coexist, InnoDB also has two internal intention locks (Intention Locks), both of which are table-level locks:
- Intention Shared Lock (IS): the transaction intends to place a shared lock on some rows; it must obtain the IS lock on the table before placing an S lock on a row.
- Intention Exclusive Lock (IX): the transaction intends to place an exclusive lock on some rows; it must obtain the IX lock on the table before placing an X lock on a row.
InnoDB Locking Method#
- Intention locks are automatically added by InnoDB, requiring no user intervention.
- For UPDATE, DELETE, and INSERT statements, InnoDB automatically adds exclusive locks (X) to the affected datasets.
- For ordinary SELECT statements, InnoDB does not add any locks.
- Transactions can explicitly add shared or exclusive locks to record sets using the following statements:
  (1) Shared lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE
  (2) Exclusive lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE
Implicit Locking#
InnoDB follows the two-phase locking protocol (2PL) during transaction execution (associative memory: the name resembles the 2PC commit protocol, though the two are different mechanisms):
- Locks can be acquired at any point during the transaction; InnoDB adds them automatically as needed, according to the isolation level.
- Locks are released only at COMMIT or ROLLBACK, and all locks are released at the same time.
Explicit Locking#
```sql
SELECT ... LOCK IN SHARE MODE;   -- shared lock
SELECT ... FOR UPDATE;           -- exclusive lock
```
select ... for update
- When this SELECT executes, the index entries it accesses are locked with exclusive (X) locks, so the statement locks rows with the same effect as an UPDATE.
- Usage scenario: when the data being read must be the latest data and only the current transaction may modify it, add the FOR UPDATE clause.
- Other sessions can still query the record, but they cannot place a shared or exclusive lock on it; they must wait for the lock to be released.
select ... lock in share mode
- The IN SHARE MODE clause places a shared lock on the retrieved rows: other transactions can still run plain SELECTs against this data, but cannot perform DML on it.
- It ensures that the data being read has not been modified by other transactions, i.e., it is the latest data, and that nobody else can modify it while the lock is held. However, the current transaction may not be able to modify the data itself, because other transactions may also have placed an S lock on the same rows with IN SHARE MODE.
- Other sessions can still query the record and can also place their own IN SHARE MODE shared lock on it. But if the current transaction then needs to update the record, a deadlock is likely.
- The difference between the two: FOR UPDATE takes an exclusive (X) lock, so once one transaction holds it, other transactions cannot operate on the data; LOCK IN SHARE MODE takes a shared lock, so multiple transactions can hold it on the same data at the same time.
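A two-session sketch of the behavior described above (the orders table and values are hypothetical):

```sql
-- Session A
START TRANSACTION;
SELECT * FROM orders WHERE id = 10 FOR UPDATE;        -- X lock on the index entry for id = 10

-- Session B (running concurrently)
SELECT * FROM orders WHERE id = 10;                    -- plain consistent read: not blocked
SELECT * FROM orders WHERE id = 10 LOCK IN SHARE MODE; -- blocked: S request conflicts with A's X lock
UPDATE orders SET status = 'paid' WHERE id = 10;       -- also blocked

-- Session A
COMMIT;  -- releases the X lock; session B's waiting statements can now proceed
```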
InnoDB Row Lock Implementation#
- InnoDB row locks are implemented by locking index entries on the index. This is a point where MySQL differs from Oracle, which locks the corresponding data rows in its data blocks. A consequence of this design is that InnoDB uses row-level locks only when data is retrieved through an index condition; otherwise, InnoDB will use table locks!
- Whether the query uses a primary key index, a unique index, or an ordinary secondary index, InnoDB locks the data with row locks.
- Row locks apply only when the execution plan actually uses the index. Even if an indexed column appears in the condition, MySQL decides whether to use the index by comparing the cost of different execution plans; if it judges a full table scan to be cheaper (for example, on very small tables), it will not use the index, and InnoDB then uses table locks instead of row locks. Therefore, when analyzing lock conflicts, remember to check the SQL execution plan to confirm that the index is really being used.
- Because MySQL's row locks are locks on index entries rather than on records, sessions that access different records but use the same index key will still conflict (later sessions using that index key must wait for the first session to release its locks before they can obtain them).
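A sketch of the "no usable index means no row locks" point (table, data, and sessions are hypothetical):

```sql
CREATE TABLE t_user (
  id   INT PRIMARY KEY,
  name VARCHAR(32)          -- note: no index on name
) ENGINE = InnoDB;

-- Session A: the condition cannot use an index, so InnoDB scans and locks
-- every row it examines; in effect the whole table is locked for writers.
START TRANSACTION;
DELETE FROM t_user WHERE name = 'alice';

-- Session B: blocked even though it targets a different logical row.
UPDATE t_user SET name = 'bob2' WHERE name = 'bob';
```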
MySQL Lock Algorithms#
- Record Lock: a lock on a single index record.
- Gap Lock: locks a range (the gap between index records) but not the records themselves. Its purpose is to prevent phantom reads between two current reads in the same transaction.
- Next-Key Lock: record lock + gap lock; locks a range together with the record itself. It is used for range row queries and is the main mechanism for solving the phantom read problem.
- Reference: "No nonsense, what locks are added by various SQL?"
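A sketch of a next-key lock under REPEATABLE READ (the table, index, and values are hypothetical):

```sql
-- Assume person has a secondary index on age and existing ages 10, 20, 30.

-- Session A: a current read over a range takes next-key locks (records + gaps).
START TRANSACTION;
SELECT * FROM person WHERE age BETWEEN 10 AND 20 FOR UPDATE;

-- Session B: blocked, because 15 falls into a gap locked by session A;
-- this is exactly what prevents phantom rows from appearing.
INSERT INTO person (age) VALUES (15);
```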
4. MySQL's MVCC Mechanism#
What Is the MVCC Mechanism?#
MVCC adds two hidden columns to the end of each record, storing the row's creation version number and deletion version number; every transaction is assigned a unique, monotonically increasing version number when it starts. In InnoDB, these two hidden fields on every row are what implement MVCC; both hold transaction version numbers that grow with each new transaction.
Consistent Non-Locking Read#
A consistent read means that InnoDB uses multi-versioning to present the database with a snapshot as of a point in time. If the isolation level is REPEATABLE READ, all consistent reads within the same transaction read the snapshot established by the first such read in that transaction; under READ COMMITTED, each consistent read in a transaction sets and reads its own fresh snapshot. Consistent read is the default mode for ordinary SELECT statements under the READ COMMITTED and REPEATABLE READ isolation levels. A consistent read does not set any locks on the tables it accesses, so other transactions can modify them concurrently.
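A two-session sketch of this snapshot behavior under REPEATABLE READ (the table and values are hypothetical):

```sql
-- Session A (REPEATABLE READ)
START TRANSACTION;
SELECT balance FROM account WHERE id = 1;   -- first read establishes the snapshot

-- Session B
UPDATE account SET balance = balance + 100 WHERE id = 1;   -- autocommits

-- Session A: still sees the old balance, because the same snapshot is reused.
SELECT balance FROM account WHERE id = 1;
COMMIT;
```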
Solving Nonrepeatable Reads#
When an MVCC database needs to update a record, it does not overwrite the old data with the new data directly; instead it marks the old data as obsolete and adds the new version elsewhere. Multiple versions of the data are therefore stored, with only one being the latest. This lets readers see the data as it existed when they started reading, even if it is modified or deleted by someone else in the meantime, without affecting the earlier reader. It guarantees that repeated reads of the same data within one transaction return the same result, which solves the nonrepeatable read problem.
The downside: this multi-version approach avoids the overhead of immediately filling the holes that deletions leave in the in-memory and on-disk structures, but it requires the system to periodically sweep through and physically remove old, obsolete data versions.
In summary: MVCC is a way to temporarily retain multiple versions of the same data to achieve concurrent control.
Reference Links#
- Learn about the MVCC mechanism through etcd: etcd
5. MySQL Storage Engines#
InnoDB#
Overview#
- Supports ACID transactions and the four transaction isolation levels;
- Supports row-level locks and foreign key constraints, and can therefore support write concurrency;
- Does not store the total row count of a table;
- An InnoDB table may live in the shared tablespace (table size is not limited by the operating system; a table may span multiple files), or in its own file when independent tablespaces are enabled via innodb_file_per_table (table size is then limited by the operating system's maximum file size, generally 2 GB); see the sketch after this list;
- The primary key index is a clustered index (its data field stores the row data itself), while the data field of a secondary index stores the primary key value; so a lookup through a secondary index first finds the primary key value and then searches the primary key index for the row;
- It is best to use an auto-increment primary key, so that inserts do not cause large adjustments to the file structure needed to maintain the B+ tree;
- Suitable for OLTP (Online Transaction Processing), with high real-time requirements.
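A small sketch of checking the tablespace layout mentioned above (the table name is hypothetical):

```sql
-- ON: each InnoDB table gets its own .ibd file (independent tablespace);
-- OFF: tables live in the shared system tablespace.
SHOW VARIABLES LIKE 'innodb_file_per_table';

CREATE TABLE t_innodb (
  id INT AUTO_INCREMENT PRIMARY KEY,
  v  VARCHAR(64)
) ENGINE = InnoDB;

SHOW TABLE STATUS LIKE 't_innodb';   -- Engine, Row_format, Data_length, etc.
```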
Main Features#
Insert buffer, double write, adaptive hash, asynchronous IO, flush neighbor page.
References#
- Key Features of InnoDB
- InnoDB Architecture, Understand in One Picture!
- Does InnoDB Support Hash Index?
- InnoDB Concurrency is So High, the Reason is Actually Here?
- InnoDB, Five Best Practices, Know the Reasons?
- Digging Pits, Seven Types of Locks in InnoDB
- InnoDB Concurrent Inserts, Actually Use Intention Locks?
- Inserting into InnoDB Auto-Increment Columns, Actually is Table Lock
- Awesome, Methods to Debug InnoDB Deadlocks!
- InnoDB, Why Does Select Block Insert?
- InnoDB, Snapshot Read, What are the Differences Under RR and RC?
MyISAM#
- Does not support transactions, but each query is atomic;
- Supports table-level locks, meaning each operation locks the entire table;
- Stores the total number of rows in the table (see the example after this list);
- A MyISAM table consists of three files: index file, table structure file, and data file;
- Uses non-clustered indexes, where the data field of the index file stores pointers to the data file. Auxiliary indexes are similar to primary indexes, but auxiliary indexes do not need to guarantee uniqueness.
- Suitable for OLAP (Online Analytical Processing), with low real-time requirements but generally large data volumes.
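A small illustration of the stored row count (the table names are hypothetical):

```sql
CREATE TABLE t_myisam (id INT PRIMARY KEY, v VARCHAR(64)) ENGINE = MyISAM;

-- MyISAM keeps the exact row count in its metadata, so this needs no scan:
SELECT COUNT(*) FROM t_myisam;

-- On InnoDB the same statement has to scan an index, and the Rows column
-- of SHOW TABLE STATUS is only an estimate.
```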
MEMORY#
ARCHIVE#
Reference Links#
- Introduction to MySQL Database Storage Engines
- Understand the Index Differences Between MyISAM and InnoDB in One Minute
- What Storage Engine is More Suitable for Frequent Inserts?
6. MySQL Indexes#
The main types are B+ indexes and hash indexes, with differences:#
- If it is an equality query, then hash indexes have a clear advantage because they only need to go through one algorithm to find the corresponding key value; of course, this premise is that the key values are unique. If the key values are not unique, you need to first find the position of the key and then scan the linked list until you find the corresponding data.
- If it is a range query, hash indexes are useless because the originally ordered key values may become discontinuous after the hash algorithm, making it impossible to utilize the index for range query retrieval.
- Similarly, hash indexes cannot be used to complete sorting or partial fuzzy queries like 'xxx%'. (This type of partial fuzzy query is essentially also a range query.)
- Hash indexes do not support the leftmost matching rule for multi-column joint indexes.
- B+ tree index lookups have a stable, uniform cost for every key (each lookup descends to the leaf level), unlike B trees, whose lookup cost fluctuates. In cases with many duplicate key values, hash index efficiency is also very low because of hash collisions. (A small example of an explicit hash index follows this list.)
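InnoDB's adaptive hash index is internal and not user-controlled, but the MEMORY engine lets you request a hash index explicitly, which makes the trade-offs above easy to see (the table and column names are hypothetical):

```sql
CREATE TABLE kv_cache (
  k VARCHAR(64),
  v VARCHAR(255),
  INDEX idx_k (k) USING HASH
) ENGINE = MEMORY;

-- Equality lookups can use the hash index:
SELECT v FROM kv_cache WHERE k = 'abc';

-- Range predicates and prefix LIKE cannot benefit from it:
SELECT v FROM kv_cache WHERE k LIKE 'ab%';
```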
B+ Index Data Structure and Differences from B Trees#
- B Tree: ordered arrays + a balanced multi-way tree. Its characteristics:
  (1) it is an m-way search rather than a binary search;
  (2) both leaf nodes and non-leaf nodes store data;
  (3) an in-order traversal visits all nodes.
- B+ Tree: ordered arrays linked into a list + a balanced multi-way tree, improving on the B tree:
  (1) non-leaf nodes no longer store data; data is stored only in the leaf nodes, which all sit at the same level;
  (2) a linked list connects the leaves, so visiting all records no longer requires an in-order traversal.
- Improvements the B+ tree brings:
  (1) for range lookups, once min and max are located, the leaf nodes in between are the result set, with no need for in-order backtracking;
  (2) leaf nodes store the actual record rows, packed relatively tightly, which suits disk storage of large data volumes; non-leaf nodes store only the keys (e.g., the PK) to speed up queries, which suits keeping them in memory;
  (3) because non-leaf nodes store only record keys rather than full records, the same amount of memory can hold more of the index.
Why B+ Trees are Suitable as Index Structures#
- Unlike a binary search tree, a B tree is m-way, which greatly reduces tree height and allows large amounts of data to be indexed.
- It is very well suited to disk storage, fully exploiting the principle of locality and disk read-ahead:
  (1) memory reads and writes are fast, while disk reads and writes are much slower;
  (2) disk read-ahead: disks are not read on demand but page by page; a whole page (e.g., 4 KB of data) is read at once, so if data needed later is already in that page, future disk IO is avoided and efficiency improves;
  (3) the principle of locality: software should be designed so that "data access is concentrated" and "using one piece of data makes nearby data likely to be used", letting disk read-ahead pay off.
- Both MyISAM and InnoDB use B+ trees as their index storage structure, but they store data at the leaves differently: MyISAM separates the index file from the data file, and its index leaves store only pointers to the physical location of the records, while InnoDB's clustered index leaves store the data itself and its secondary index leaves store primary key values (so a secondary-index lookup retrieves the row via the primary key, which is effectively a second lookup and increases IO).
Index Classification#
- Normal Index: the most basic index, with no restrictions.
- Unique Index: like a normal index, but the indexed column's values must be unique; NULL values are allowed.
- Primary Key Index: a special unique index that does not allow NULL values.
- Full-Text Index: historically only available on MyISAM tables (InnoDB supports it since MySQL 5.6); building a full-text index on large data is time- and space-consuming. (MATCH ... AGAINST ...)
- Composite Index: a multi-column index built to further improve efficiency; it follows the "leftmost prefix" principle.
- Covering Index: an index that contains (covers) all the columns a query needs, so the query can be answered from the index alone (see the example after this list).
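A sketch tying the index types above together (the article table and its columns are hypothetical):

```sql
CREATE TABLE article (
  id        INT AUTO_INCREMENT,
  slug      VARCHAR(100),
  author_id INT,
  created   DATETIME,
  title     VARCHAR(200),
  body      TEXT,
  PRIMARY KEY (id),                              -- primary key index
  UNIQUE KEY uk_slug (slug),                     -- unique index
  KEY idx_author_created (author_id, created),   -- composite index (leftmost prefix applies)
  FULLTEXT KEY ft_text (title, body)             -- full-text index (InnoDB since 5.6)
) ENGINE = InnoDB;

-- Uses the leftmost prefix (author_id) of the composite index:
SELECT id FROM article WHERE author_id = 42;

-- Covering index: author_id, created, and id all live in idx_author_created,
-- so no lookup into the clustered index is needed.
SELECT id, created FROM article WHERE author_id = 42 AND created > '2024-01-01';
```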
Explain Simulating SQL Query Plans#
Information Contained in Explain Execution Plans#
id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
---|---|---|---|---|---|---|---|---|---
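A minimal example of reading such a plan (it reuses the hypothetical article table from above; the output values are illustrative, not real):

```sql
EXPLAIN SELECT id, created
FROM article
WHERE author_id = 42
ORDER BY created;

-- Illustrative output:
-- id | select_type | table   | type | possible_keys      | key                | key_len | ref   | rows | Extra
-- 1  | SIMPLE      | article | ref  | idx_author_created | idx_author_created | 5       | const | 12   | Using index
```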
Detailed Explanation of Corresponding Meanings#
- id: the sequence number of the SELECT query; a set of numbers indicating the order in which SELECT clauses are executed or tables are operated on in the query.
- select_type: the type of query, one of:
  - SIMPLE: a simple SELECT query containing no subqueries or UNIONs.
  - PRIMARY: if the query contains any complex subparts, the outermost query is marked PRIMARY.
  - SUBQUERY: a subquery in the SELECT or WHERE list.
  - DERIVED: a subquery in the FROM list is marked DERIVED; MySQL executes these subqueries recursively and puts the results in a temporary table.
  - UNION: a second SELECT appearing after a UNION is marked UNION; if the UNION is contained in a subquery in the FROM clause, the outer SELECT is marked DERIVED.
  - UNION RESULT: the SELECT that retrieves the result from the UNION temporary table.
- type: the access type, an important indicator in SQL query optimization; from best to worst the values are:
  system > const > eq_ref > ref > fulltext > ref_or_null > index_merge > unique_subquery > index_subquery > range > index > ALL
  In general, a good query should reach at least the range level, and preferably ref.
  - system: the table has only one row (a system table); a special case of const that rarely occurs in practice and can be ignored.
  - const: the index is found in a single lookup; const is used when comparing against a primary key or unique index. Since only one row has to match, it is very fast. If the primary key is placed in the WHERE list, MySQL can convert the query into a const.
  - eq_ref: unique index scan; for each index key there is exactly one matching row in the table. Commonly seen with primary key or unique index scans.
  - ref: non-unique index scan; returns all rows matching a single value. It is still an index access, but it may find several matching rows, so it is a mix of lookup and scan.
  - range: retrieves only rows in a given range, using an index to select them. The key column shows which index was used. This typically occurs when the WHERE clause contains BETWEEN, <, >, IN, and so on.
  - index: Full Index Scan; it differs from ALL in that only the index tree is traversed. This is usually faster than ALL, because index files are typically smaller than data files. (Both index and ALL read the whole table, but index reads from the index while ALL reads from the data on disk.)
  - ALL: Full Table Scan; the whole table is traversed to find matching rows.
- possible_keys: lists the indexes that exist on the columns involved in the query; they may not actually be used by the query.
- key: the index actually used; NULL means no index was used. If a covering index is used in the query, it appears only in the key column.
- key_len: the number of bytes used in the index, i.e., the maximum possible length of the index used by the query, not the length actually used; in principle, shorter is better. key_len is computed from the table definition, not measured from the data.
- ref: shows which columns or constants are compared against the index, e.g. a constant (const) where possible.
- rows: a rough estimate, based on table statistics and the chosen index, of how many rows must be read to find the required records.
- Extra: important additional information that does not fit in the other columns.
  - Using filesort: MySQL uses an external sort on the data instead of reading in the table's index order; the sort cannot be completed using an index, resulting in a "file sort".
  - Using temporary: a temporary table is used to hold intermediate results while sorting the query result, commonly seen with ORDER BY and GROUP BY.
  - Using index: the SELECT uses a covering index (Covering Index), avoiding access to the table's data rows, which is efficient. If Using where also appears, the index is being used to look up index key values; if it does not, the index is being used only to read data, not to perform lookups.
  - Using where: WHERE filtering was applied.
  - Using join buffer: a join buffer was used.
  - Impossible WHERE: the WHERE clause is always false, so no rows can be retrieved.
  - select tables optimized away: without a GROUP BY clause, MIN/MAX operations were optimized via an index, or COUNT(*) was optimized for the MyISAM storage engine; the optimization is completed while generating the execution plan, without waiting for the execution phase.
  - distinct: the DISTINCT operation is optimized by stopping the search for more rows with the same value after the first matching row is found.
Reference Links#
- MySQL Advanced: Detailed Explanation of Explain Execution Plans
- Detailed Explanation of MySQL Indexes
Differences Between Clustered Indexes and Non-Clustered Indexes#
Clustered Index (Clustered)#
Definition: The physical order of data rows is the same as the logical order of column values (usually the column of the primary key); a table can only have one clustered index. If no primary key is defined, a unique non-null index will be chosen as a substitute; if no such index exists, a primary key will be implicitly defined as the clustered index.
Non-Clustered Index (Unclustered)#
Definition: The logical order of the index does not match the physical storage order of rows on disk; a table can have multiple non-clustered indexes.
References#
7. Master-Slave Replication in Databases#
- Even if MySQL is split into multiple instances, there must be a master and slaves; all write operations must be performed on the master.
- All data on a slave comes from (is synchronized with) the master.
- In a master-slave setup, if a business operation (a method in a service) contains both read (R) and write (W) operations, then since writes must go to the master, all data read within that single transaction must also come from the master.
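On a replica, replication health and lag can be checked as follows (the \G terminator just formats the mysql client output vertically):

```sql
-- Seconds_Behind_Master gives a rough measure of replication lag;
-- Slave_IO_Running / Slave_SQL_Running should both be 'Yes'.
SHOW SLAVE STATUS\G

-- MySQL 8.0.22+ prefers the newer wording:
-- SHOW REPLICA STATUS\G
```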
Extensions#
How to Optimize Long Delays in MySQL Master-Slave
8. Normalization Design#
- The First Normal Form (1NF) is a basic requirement for relational schemas; a database that does not satisfy the first normal form (1NF) is not a relational database, meaning that each column in the database table is an indivisible basic data item, and there cannot be multiple values in the same column.
- The Second Normal Form (2NF) requires that each instance or row in the database table can be uniquely distinguished. That is, there is no partial dependency between fields and the primary key.
- The Third Normal Form (3NF) requires that a database table does not contain non-primary key information that is already included in other tables. That is, based on the second normal form, there is no transitive dependency (redundant data is not allowed).
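A tiny schema sketch of the idea (hypothetical tables): customer attributes live only in the customer table, and orders reference them by key instead of copying them, avoiding the redundancy that 3NF forbids.

```sql
CREATE TABLE customer (
  id   INT PRIMARY KEY,
  name VARCHAR(64),
  city VARCHAR(64)
);

CREATE TABLE orders (
  id          INT PRIMARY KEY,
  customer_id INT,              -- reference, not a copy of name/city (no transitive dependency)
  amount      DECIMAL(10, 2),
  FOREIGN KEY (customer_id) REFERENCES customer(id)
);
```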
9. Extensions#
- 58.com MySQL Practical Experience
- Things You Must Know About the MySQL Kernel!
- Things You Might Not Know About MySQL!
- Three Solutions for Redundant Data in MySQL
- Single KEY Business, Practice of Database Horizontal Sharding Architecture
- Database Smooth Expansion Architecture Plan in Seconds
- Detailed Analysis of MySQL Transaction Logs (Redo Log and Undo Log)