MySQL Summary#
I. Four Major Characteristics of Transactions (ACID)#
Atomicity#
Atomicity means that a transaction is an indivisible unit of work; the operations within a transaction either all occur or none occur.
Consistency#
If the database is in a consistent (integrity) state before the transaction executes, then it remains in a consistent state after the transaction ends, regardless of whether the transaction succeeded. (A database is in an integrity state when all of its data conforms to all constraints defined on the database.)
Isolation#
Isolation refers to the fact that when multiple users concurrently access the database, one user's transaction cannot be interfered with by other users' transactions; data among multiple concurrent transactions must be isolated from each other.
Durability#
Durability means that once a transaction is committed, the changes it made to the data in the database are permanent, and subsequent database failures should not affect it.
Extensions#
- CAP theorem and BASE theory. Reference: CAP and BASE Theory
  - dubbo + zookeeper mainly implements CP
  - spring cloud eureka [hystrix] mainly implements AP
- A business scenario to consider: the transaction has been committed, but data is lost (see the sketch below).
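One common cause of "committed but lost" data is how aggressively the redo log is flushed. As a hedged sketch: the innodb_flush_log_at_trx_commit variable controls this trade-off, and only the default value 1 writes and syncs the log to disk at every commit; 0 and 2 can lose the most recent commits if the server or operating system crashes.

```sql
-- Check the current setting (1 is the default and the only fully durable option).
SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';

-- 0: flush/sync roughly once per second  -> can lose ~1s of commits on a MySQL crash
-- 2: write at commit, sync ~once per sec -> can lose ~1s of commits on an OS crash
-- 1: write and sync at every commit      -> committed data survives a crash
SET GLOBAL innodb_flush_log_at_trx_commit = 1;
```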
II. Database Isolation Levels#
Read Uncommitted#
Reading uncommitted content; at this isolation level, all transactions can see the execution results of other uncommitted transactions. This isolation level is rarely used in practical applications because its performance is not much better than that of the other levels. Reading uncommitted data is also known as a Dirty Read.
Read Committed#
Reading committed content; this is the default isolation level for most database systems (but not MySQL's default). It satisfies the simple definition of isolation: a transaction can only see changes made by already-committed transactions. This isolation level may lead to what is known as a Nonrepeatable Read, because other transactions may commit new changes while this transaction is running, so the same SELECT may return different results.
Repeatable Read#
This is MySQL's default transaction isolation level. It ensures that multiple instances of the same transaction see the same rows of data when reading concurrently. In theory, however, this level can still lead to another tricky problem: the Phantom Read.
Serializable#
Serializable; this is the highest isolation level. It prevents conflicts by forcing transactions to be ordered, thereby solving the phantom read problem. In short, it adds a shared lock to every row that is read. At this level, a large number of timeouts and heavy lock contention may occur.
| Isolation Level | Dirty Read | Nonrepeatable Read | Phantom Read |
|---|---|---|---|
| Read Uncommitted | ✔️ | ✔️ | ✔️ |
| Read Committed | ✖️ | ✔️ | ✔️ |
| Repeatable Read | ✖️ | ✖️ | ✔️ |
| Serializable | ✖️ | ✖️ | ✖️ |
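As a quick reference, a minimal sketch of inspecting and changing the isolation level (the variable is named tx_isolation before MySQL 8.0 and transaction_isolation from 8.0 onward):

```sql
-- Show the session's current isolation level (MySQL 8.0+ variable name shown).
SELECT @@transaction_isolation;

-- Apply to all subsequent transactions in this session.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Apply only to the next transaction started in this session.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
```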
Dirty Read#
If one transaction has updated a piece of data and another transaction reads that same data before the first one commits, then if the first transaction rolls back for some reason, the data read by the second transaction is invalid (dirty).
Phantom Read#
Within one transaction, two identical queries return an inconsistent number of rows; for example, a transaction queries several rows of data while another transaction inserts new rows at the same time, so the first transaction's later query finds rows that did not exist before. The InnoDB and Falcon storage engines solve this problem through a Multi-Version Concurrency Control (MVCC) mechanism.
III. MySQL Lock Mechanism#
Reference link: MySQL Lock Summary
Associative memory: Happens-Before semantics in Java (the volatile keyword).
Shared Locks and Exclusive Locks#
- Shared Lock (Read Lock): Other transactions can read but cannot write.
- Exclusive Lock (Write Lock): Other transactions can neither read nor write.
Lock Granularity#
- Table-Level Lock
  - Low overhead, fast locking; no deadlocks; large locking granularity, highest probability of lock conflicts, and lowest concurrency.
  - Table-level locks are more suitable for query-dominated applications with few concurrent users and only a small amount of data updated by index conditions, such as web applications.
  - These storage engines avoid deadlocks by always acquiring all required locks at once and always acquiring table locks in the same order.
- Row-Level Lock
  - High overhead, slow locking; deadlocks may occur; smallest locking granularity, lowest probability of lock conflicts, and highest concurrency.
  - Maximizes concurrency support, but also brings the highest lock overhead.
  - Row-level locks are implemented only at the storage engine level; the MySQL server layer does not implement them. Row-level locks are more suitable for applications with a large number of concurrent updates to small amounts of different data combined with concurrent queries, such as online transaction processing (OLTP) systems.
- Page Lock: Overhead and locking time fall between table locks and row locks; deadlocks may occur; locking granularity also falls between table locks and row locks, with moderate concurrency.

By default, both table locks and row locks are acquired automatically without additional commands. However, in some cases users need to explicitly lock tables or control transactions themselves to ensure the integrity of the whole transaction, which requires using transaction control and locking statements.
Various Engine Locks#
- MyISAM and MEMORY storage engines use table-level locking.
- BDB storage engine uses page-level locking, but also supports table-level locks.
- InnoDB storage engine supports both row-level locking and table-level locks, but defaults to row-level locking.

In InnoDB, except for transactions consisting of a single SQL statement, locks are acquired incrementally, which means deadlocks can occur in InnoDB.
MyISAM Table Locks#
MyISAM Table-Level Lock Modes#
- Table Shared Read Lock: Does not block other users' read requests on the same table, but blocks their write requests on it.
- Table Exclusive Write Lock: Blocks other users' read and write operations on the same table.

Read operations and write operations on MyISAM tables are serial. When a thread acquires a write lock on a table, only the thread holding the lock can update the table; other threads' read and write operations wait until the lock is released.
By default, write locks have a higher priority than read locks. When a lock is released, it will first be given to the waiting write lock requests, and then to the waiting read lock requests.
This is also why MyISAM tables are not suitable for applications with a large number of update and query operations, because a large number of update operations can make it difficult for query operations to obtain read locks, potentially leading to indefinite blocking.
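As an illustration of the two lock modes above, a minimal sketch of explicit table locking; my_table and its columns are hypothetical:

```sql
-- Table shared read lock: this session may only read my_table;
-- other sessions can still read it, but their writes wait.
LOCK TABLES my_table READ;
SELECT COUNT(*) FROM my_table;
UNLOCK TABLES;

-- Table exclusive write lock: only this session may read or write my_table;
-- other sessions' reads and writes wait until the lock is released.
LOCK TABLES my_table WRITE;
UPDATE my_table SET cnt = cnt + 1 WHERE id = 1;
UNLOCK TABLES;
```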
MyISAM Locking Method#
In the case of automatic locking, MyISAM always acquires all the locks needed for the SQL statement at once, which is why MyISAM tables do not experience deadlocks (Deadlock Free).
The MyISAM storage engine supports concurrent inserts to reduce contention between read and write operations on a given table:
If there are no free blocks in the middle of a MyISAM table's data file, rows are always inserted at the end of the data file. In this case, you can freely mix concurrent INSERT and SELECT statements on the MyISAM table without locking: rows can be inserted into the table while other threads are reading it. Free blocks in the middle of the file can arise from rows that were deleted or updated. If there are free blocks in the middle of the file, concurrent inserts are disabled, but once all free blocks are filled with new data they are automatically re-enabled. This behavior is controlled by MySQL's concurrent_insert system variable (see the sketch after the list below):
- When concurrent_insert is set to 0, concurrent inserts are not allowed.
- When concurrent_insert is set to 1, MyISAM allows one thread to read the table while another thread inserts records at the end of the table, provided there are no holes in the MyISAM table (i.e., no deleted rows in the middle of the table). This is MySQL's default setting.
- When concurrent_insert is set to 2, concurrent inserts are allowed at the end of the table regardless of whether there are holes in the MyISAM table.
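A minimal sketch of checking and adjusting this behavior (concurrent_insert is a global system variable; newer versions also accept the named values NEVER/AUTO/ALWAYS for 0/1/2):

```sql
-- Inspect the current setting.
SHOW VARIABLES LIKE 'concurrent_insert';

-- Allow concurrent inserts at the end of the table even if there are holes.
SET GLOBAL concurrent_insert = 2;
```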
Querying Table-Level Lock Contention#
You can analyze table-lock contention on the system by checking the Table_locks_waited and Table_locks_immediate status variables. If the value of Table_locks_waited is relatively high, it indicates serious table-level lock contention:
mysql> SHOW STATUS LIKE 'Table%';
+-----------------------+---------+
| Variable_name | Value |
+-----------------------+---------+
| Table_locks_immediate | 1151552 |
| Table_locks_waited | 15324 |
+-----------------------+---------+
InnoDB Row-Level Locks and Table-Level Locks#
InnoDB Lock Modes#
InnoDB implements the following two types of row locks:
- Shared Lock (S): Allows a transaction to read a row and prevents other transactions from obtaining an exclusive lock on the same data set.
- Exclusive Lock (X): Allows the transaction holding the exclusive lock to update data and prevents other transactions from obtaining shared read locks or exclusive write locks on the same data set.
To allow row locks and table locks to coexist, InnoDB also has two types of intention locks (Intention Locks) used internally, both of which are table locks:
- Intention Shared Lock (IS): A transaction intends to set a shared lock on a data row; it must first obtain the IS lock on the table before setting a shared lock on the row.
- Intention Exclusive Lock (IX): A transaction intends to set an exclusive lock on a data row; it must first obtain the IX lock on the table before setting an exclusive lock on the row.
InnoDB Locking Method#
- Intention locks are automatically added by InnoDB, requiring no user intervention.
- For UPDATE, DELETE, and INSERT statements, InnoDB automatically adds exclusive locks (X) to the affected data sets.
- For ordinary SELECT statements, InnoDB does not add any locks.
- Transactions can explicitly add shared or exclusive locks to record sets using the following statements:
(1) Shared Lock (S): SELECT * FROM table_name WHERE ... LOCK IN SHARE MODE
(2) Exclusive Lock (X): SELECT * FROM table_name WHERE ... FOR UPDATE
Implicit Locking#
During transaction execution, InnoDB uses the two-phase locking protocol (associative memory: the 2PC protocol):
- Locking can occur at any time; InnoDB will automatically add locks as needed based on the isolation level.
- Locks are only released when commit or rollback is executed, and all locks are released simultaneously.
Explicit Locking#
select ... lock in share mode   -- Shared Lock
select ... for update           -- Exclusive Lock
select for update
- When executing this select query, the corresponding index access entries will be locked with an exclusive lock (X lock), meaning that the lock corresponding to this statement is equivalent to the effect of an update.
- The usage scenario for select ... for update: when I need to ensure that the data I retrieve is the latest data and that only I can modify it, the FOR UPDATE clause is needed.
- Other sessions can query the record but cannot add shared or exclusive locks to it; they must wait to obtain the lock.
select lock in share mode
- The purpose of the in share mode clause is to add a shared lock to the rows found, meaning that other transactions can only perform simple SELECTs on this data and cannot perform DML operations on it.
- It ensures that the data I retrieve is not being modified by other transactions, i.e. that the data I read is the latest, and that others are not allowed to modify it. However, I may not necessarily be able to modify the data myself, because other transactions may also have placed an S lock on it using in share mode.
- Other sessions can still query the record and can also add a shared lock in share mode to it. However, if the current transaction then needs to update that record, it is very likely to cause a deadlock.
- The difference between the two: for update takes an exclusive lock (X lock), so once one transaction holds it, other transactions cannot lock the data; lock in share mode takes a shared lock, so multiple transactions can hold it on the same data at the same time.
InnoDB Row Lock Implementation#
- InnoDB row locks are implemented by locking index entries, which differs from Oracle, which locks the corresponding data rows in the data block. A consequence of this implementation is that InnoDB uses row-level locks only when data is retrieved through an index condition; otherwise, InnoDB uses table locks!
- Whether a primary key index, unique index, or ordinary index is used, InnoDB uses row locks to lock the data.
- Row locks can only be used when the execution plan actually uses the index: even if an indexed column appears in the condition, whether the index is used to retrieve data is decided by MySQL by comparing the costs of different execution plans. If MySQL decides that a full table scan is more efficient, for example on a very small table, it will not use the index, and InnoDB will use table locks instead of row locks. Therefore, when analyzing lock conflicts, don't forget to check the SQL execution plan to confirm whether an index is actually being used.
- Since MySQL's row locks are locks on index entries, not on records, lock conflicts will occur even when sessions access different records if they use the same index key (later sessions using that index key must wait for the session that first locked it to release the lock).
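A minimal sketch of the point above, using a hypothetical table t_lock: only the statement whose WHERE condition can use an index takes row locks; the other effectively locks every row it scans.

```sql
CREATE TABLE t_lock (
  id   INT PRIMARY KEY,
  name VARCHAR(20),
  age  INT
) ENGINE = InnoDB;

-- WHERE uses the primary key index: only the row with id = 1 is locked.
UPDATE t_lock SET name = 'a' WHERE id = 1;

-- No index on age: InnoDB scans the whole table and locks every row it scans,
-- which in practice behaves like a table lock.
UPDATE t_lock SET name = 'b' WHERE age = 18;
```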
MySQL Lock Algorithms#
- Record Lock: A lock on a single index record.
- Gap Lock: Locks a gap (a range) but not the record itself. The purpose of gap locks is to prevent phantom reads between two current reads within the same transaction.
- Next-Key Lock: Record lock + gap lock; locks a range and the record itself. Row queries use this method, primarily to solve the phantom read problem.
- Reference: Don't talk nonsense, what locks do various SQL statements actually add?
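A minimal sketch of a next-key lock under REPEATABLE READ, reusing the hypothetical t_lock table and assuming a secondary index on age:

```sql
-- Assumes: ALTER TABLE t_lock ADD INDEX idx_age (age);
START TRANSACTION;

-- Locks the matching index records plus the gaps around them (next-key locks),
-- so another session's INSERT of a row with age between 20 and 30 will block.
SELECT * FROM t_lock WHERE age BETWEEN 20 AND 30 FOR UPDATE;

COMMIT;
```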
IV. MySQL's MVCC Mechanism#
What is the MVCC Mechanism?#
MVCC adds two hidden columns at the end of each record, recording the creation version number and the deletion version number, and each transaction receives a unique, incrementing version number when it starts. In InnoDB, two hidden fields are added to each row to implement MVCC, both used to store transaction version numbers, and the version number increases with each newly started transaction.
Consistent Non-Locking Reads#
Consistent read allows InnoDB to use multi-versioning to provide a snapshot of the database at a certain point in time. If the isolation level is REPEATABLE READ, all consistent reads in the same transaction read the snapshot obtained by the first such read in that transaction; if it is READ COMMITTED, each consistent read in a transaction reads its own freshly refreshed snapshot. Consistent read is the default mode for ordinary SELECT statements under the READ COMMITTED and REPEATABLE READ isolation levels. A consistent read does not place any kind of lock on the tables it accesses, so other transactions can modify them concurrently.
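A minimal sketch: under REPEATABLE READ the snapshot is normally taken at the first consistent read, but it can also be pinned to the moment the transaction starts (reusing the hypothetical t_lock table):

```sql
-- Take the read view immediately instead of at the first SELECT.
START TRANSACTION WITH CONSISTENT SNAPSHOT;

-- This read comes from the snapshot and takes no locks; concurrent
-- modifications committed by other transactions remain invisible here.
SELECT * FROM t_lock WHERE id = 1;

COMMIT;
```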
Solving Nonrepeatable Reads#
When an MVCC database needs to update a data record, it does not directly overwrite the old data with new data; instead, it marks the old data as obsolete and adds the new version of the data elsewhere. This way, multiple versions of data are stored, but only one is the latest. This method allows readers to read data that existed before they read it, even if that data was modified or deleted by someone else during the read process, it does not affect the user who was reading earlier. It ensures that multiple reads of the same data within the same transaction return the same result, solving the nonrepeatable read problem.
The trade-off: this multi-version approach avoids the overhead of filling in the gaps that delete operations leave in the in-memory and on-disk storage structures, but it requires the system to periodically sweep through and physically delete old, obsolete data versions.
In summary: MVCC is a way to temporarily retain multiple versions of the same data to achieve concurrency control.
Reference Links#
- Learn about the MVCC mechanism through etcd: etcd
V. MySQL Storage Engines#
InnoDB#
Introduction#
- Supports ACID transactions and the four isolation levels of transactions;
- Supports row-level locks and foreign key constraints; thus, it can support write concurrency;
- Does not store the total number of rows;
- An InnoDB table may be stored in a single tablespace (shared tablespace: table size is not constrained by the operating system, and one table may be spread across multiple files), or in multiple files (independent / file-per-table tablespace: table size is limited by the operating system's file size limit, typically around 2 GB);
- Primary key indexes are clustered indexes (the index's data domain stores the data itself), and the data domain of secondary (auxiliary) indexes stores primary key values; therefore, to find data via a secondary index, you first find the primary key value through the secondary index and then look up the row in the primary key (clustered) index;
- It is best to use auto-increment primary keys to prevent large adjustments to the file structure to maintain the B+ tree structure during data insertion;
- Suitable for OLTP (Online Transaction Processing), with high real-time requirements.
Main Features#
Insert buffer, double write, adaptive hash index, async IO, flush neighbor page.
References#
- Key Features of InnoDB
- InnoDB Architecture, Understand it in a Picture!
- Does InnoDB Support Hash Index?
- Why is InnoDB Concurrency So High?
- InnoDB, Five Best Practices, Know the Reasons?
- Digging Pits, Seven Types of Locks in InnoDB
- InnoDB Concurrent Inserts, Actually Using Intention Locks?
- Inserting into InnoDB Auto-Increment Column, Actually a Table Lock
- Awesome, Methods for Debugging Deadlocks in InnoDB!
- InnoDB, Why Does Select Block Insert?
- InnoDB, Snapshot Read, What Are the Differences Under RR and RC?
MyISAM#
- Does not support transactions, but each query is atomic;
- Supports table-level locks, meaning each operation locks the entire table;
- Stores the total number of rows in the table;
- A MyISAM table consists of three files: index file, table structure file, and data file;
- Uses non-clustered indexes, where the data domain of the index file stores pointers to the data file. The auxiliary index is similar to the primary index, but the auxiliary index does not need to guarantee uniqueness.
- Suitable for OLAP (Online Analytical Processing), with low real-time requirements but generally large data volumes.
MEMORY#
ARCHIVE#
Reference Links#
- Introduction to MySQL Database Storage Engines
- Understand the Index Differences Between MyISAM and InnoDB in One Minute
- What Storage Engine is More Suitable for Frequent Inserts?
VI. MySQL Indexes#
Mainly B+ indexes and hash indexes, differences:#
- If it is an equality query, then hash indexes have a clear advantage because they only need to go through the algorithm once to find the corresponding key value; of course, this premise is that the key values are unique. If the key values are not unique, you need to first find the position of the key and then scan through the linked list until you find the corresponding data.
- If it is a range query, hash indexes are useless because the originally ordered key values may become discontinuous after the hash algorithm is applied, making it impossible to utilize the index for range query retrieval.
- Similarly, hash indexes cannot be used to complete sorting or partial fuzzy queries like 'xxx%' (this kind of partial fuzzy query is essentially also a range query).
- Hash indexes do not support the leftmost matching rule for multi-column joint indexes.
- The keyword lookup efficiency of a B+ tree index is relatively uniform, unlike a B tree, where it fluctuates widely. When there are many duplicate key values, the efficiency of a hash index also becomes extremely low because of the so-called hash collision problem.
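A minimal sketch of the contrast: an explicit HASH index can be declared on the MEMORY engine (InnoDB's adaptive hash index is internal and automatic), and it helps equality lookups but not range or prefix predicates. The table and columns are hypothetical.

```sql
CREATE TABLE t_session (
  session_id CHAR(32) NOT NULL,
  user_id    INT      NOT NULL,
  PRIMARY KEY (session_id) USING HASH
) ENGINE = MEMORY;

-- Equality lookup: the hash index applies.
SELECT user_id FROM t_session WHERE session_id = 'abc';

-- Prefix / range predicates cannot use the hash index.
SELECT user_id FROM t_session WHERE session_id LIKE 'ab%';
```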
B+ Index Data Structure and Differences from B Trees#
- B Tree: ordered array + balanced multi-way tree. Its characteristics are:
  (1) No longer a binary search, but an m-way search;
  (2) Both leaf nodes and non-leaf nodes store data;
  (3) In-order traversal can obtain all nodes.
- B+ Tree: ordered array linked list + balanced multi-way tree, with some improvements over the B Tree:
  (1) Non-leaf nodes no longer store data; data is stored only in the leaf nodes, which are all at the same level.
  (2) A linked list is added between the leaves, allowing all nodes to be visited without an in-order traversal.
- Improved characteristics of B+ Trees:
  (1) For range searches, after locating min and max, the leaf nodes in between are the result set, eliminating the need for in-order backtracking.
  (2) Leaf nodes store the actual record rows, packed relatively compactly, suitable for storing large volumes of data on disk; non-leaf nodes store the PK to accelerate queries, suitable for keeping in memory.
  (3) Non-leaf nodes do not store actual records but only the record KEY, so under the same memory conditions a B+ tree can hold more index entries.
Why B+ Trees Are Suitable as Index Structures#
- Unlike binary search trees, B trees are m-way, which greatly reduces tree height and allows large amounts of data to be stored.
- Very suitable for disk storage, making full use of the principle of locality and disk pre-reading:
  (1) Memory reads and writes are fast, while disk reads and writes are much slower;
  (2) Disk pre-reading: disks are not read on demand but pre-read by page, one page (4K of data) at a time. If data needed later is in that page, future disk IO can be avoided, improving efficiency;
  (3) Principle of locality: software should be designed following the ideas that "data access is concentrated" and "when one piece of data is used, nearby data is likely to be used too," so that disk pre-reading can significantly improve disk IO.
- Both MyISAM and InnoDB use B+ trees as the index storage structure, but the data stored at the leaves differs. The former separates index files from data files, and the index file only stores pointers to the pages where records are located (physical locations); the latter either stores the data directly (the clustered primary key index) or stores primary key values (secondary indexes), so retrieving data via a secondary index actually performs a second lookup by primary key, increasing the number of IOs.
Index Classification#
- Ordinary Index: The most basic index with no restrictions.
- Unique Index: Similar to "ordinary index," but the values in the indexed column must be unique, allowing for null values.
- Primary Key Index: A special type of unique index that does not allow null values.
- Full-Text Index: Historically only available for MyISAM tables (InnoDB has supported full-text indexes since MySQL 5.6); generating a full-text index over large data volumes is time- and space-consuming. (MATCH ... AGAINST ...)
- Composite Index: To further improve MySQL efficiency, composite indexes can be established, following the "leftmost prefix" principle.
- Covering Index: An index that contains (covers) all the values of the fields that need to be queried.
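A minimal sketch of several of the index types above on a hypothetical table, including a composite index and a query it covers:

```sql
CREATE TABLE t_user (
  id    INT          NOT NULL AUTO_INCREMENT,
  name  VARCHAR(50)  NOT NULL,
  email VARCHAR(100),
  age   INT,
  PRIMARY KEY (id),               -- primary key index
  UNIQUE KEY uk_email (email),    -- unique index
  KEY idx_name_age (name, age)    -- composite index (leftmost prefix: name, then name+age)
) ENGINE = InnoDB;

-- Covered by idx_name_age: both selected columns live in the index,
-- so no lookup back to the clustered index is needed.
SELECT name, age FROM t_user WHERE name = 'tom';

-- Violates the leftmost-prefix rule: age alone cannot be used to seek into idx_name_age.
SELECT name, age FROM t_user WHERE age = 18;
```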
Explain Simulating SQL Query Plans#
Information Contained in Explain Execution Plans#
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
|---|---|---|---|---|---|---|---|---|---|
Detailed Corresponding Meanings#
- id: The sequence number of the select query; a group of numbers indicating the order in which the select clauses or tables are processed in the query.
- select_type: The type of query, divided into:
  - SIMPLE: A simple select query that contains no subqueries or unions.
  - PRIMARY: The outermost query is marked as PRIMARY if the query contains any complex sub-parts.
  - SUBQUERY: A subquery contained in the select or where list.
  - DERIVED: A subquery in the from list is marked as DERIVED; MySQL executes these subqueries recursively and places the results in a temporary table.
  - UNION: A second select appearing after a UNION is marked as UNION; if the union is contained in a subquery of the from clause, the outer select is marked as DERIVED.
  - UNION RESULT: The select that retrieves the result from the union table.
- type: Access type, a very important indicator in SQL query optimization, with values from good to bad as follows:
  system > const > eq_ref > ref > fulltext > ref_or_null > index_merge > unique_subquery > index_subquery > range > index > ALL
  Generally, a good SQL query should at least reach the range level, and preferably the ref level.
  - system: The table has only one record (equivalent to a system table). This is a special case of const and rarely occurs, so it can be ignored.
  - const: The index is found in one pass; const is used for comparisons against a primary key or unique index. Since only one row needs to be matched, it is very fast. If the primary key is placed in the where list, MySQL can convert the query into a const.
  - eq_ref: Unique index scan; for each index key there is exactly one matching row in the table. Commonly seen with primary key or unique index scans.
  - ref: Non-unique index scan; returns all rows matching a single value. It is essentially also an index access, but it may find several rows meeting the condition, so it is a mixture of lookup and scan.
  - range: Retrieves only rows in a given range, using an index to select the rows. The key column shows which index is used. This typically occurs when the where clause contains between, <, >, in, etc.
  - index: Full index scan; the difference between index and ALL is that the index type only traverses the index tree, which is usually faster than ALL because index files are typically smaller than data files. (Both index and ALL read the whole table, but index reads from the index while ALL reads from the disk.)
  - ALL: Full table scan; traverses the entire table to find matching rows.
- possible_keys: Lists the indexes that exist on the fields involved in the query; they may not actually be used in the query.
- key: The index actually used; NULL means no index was used. If a covering index is used in the query, it appears only in the key list.
- key_len: The number of bytes used in the index; the length of the index used in the query (the maximum possible length), not the actual length used. In theory, shorter is better. key_len is computed from the table definition, not retrieved from the table.
- ref: Shows which column of the index is used; where possible, a constant (const).
- rows: Based on table statistics and index selection, a rough estimate of the number of rows that must be read to find the required records.
- Extra: Additional important information that does not fit in the other columns.
  - Using filesort: MySQL sorts the data with an external sort instead of reading in index order within the table; MySQL cannot use the index to complete the sort, resulting in a "file sort".
  - Using temporary: A temporary table is used to hold intermediate results; MySQL uses a temporary table when sorting the query results. Commonly seen with order by and group by.
  - Using index: The select operation uses a covering index, avoiding access to the table's data rows, which improves efficiency. If it appears together with Using where, the index is used to perform lookups of index key values; if it appears without Using where, the index is used to read data rather than to perform a lookup.
  - Using where: where filtering is used.
  - Using join buffer: A join buffer is used.
  - Impossible WHERE: The where clause is always false, so no rows can be retrieved.
  - Select tables optimized away: Without a group by clause, MIN/MAX operations are optimized based on the index, or COUNT(*) is optimized for the MyISAM storage engine; the optimization is completed during query-plan generation rather than at execution time.
  - Distinct: Optimizes the distinct operation, stopping the search for more rows with the same value once the first matching row is found.
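To tie the columns above together, a minimal sketch of EXPLAIN against the hypothetical t_user table defined in the index section; the expected output is hedged, since the optimizer's choice depends on data volume and statistics:

```sql
EXPLAIN SELECT name, age FROM t_user WHERE name = 'tom';
-- Likely highlights: type = ref (non-unique index lookup), key = idx_name_age,
-- Extra = Using index (the query is covered by the composite index).

EXPLAIN SELECT * FROM t_user WHERE age = 18;
-- Likely highlights: type = ALL (full table scan), key = NULL,
-- Extra = Using where, since age is not a leftmost prefix of any index.
```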
Reference Links#
- MySQL Advanced: Detailed Explanation of Explain Execution Plans
- Detailed Explanation of MySQL Indexes
Differences Between Clustered Index and Non-Clustered Index#
Clustered Index (Clustered)#
Definition: The physical order of data rows is the same as the logical order of column values (usually the column of the primary key); a table can only have one clustered index. If no primary key is defined, a unique non-null index will be chosen instead; if there is no such index, a primary key will be implicitly defined as the clustered index.
Non-Clustered Index (Unclustered)#
Definition: The logical order of the index in this index differs from the physical storage order of rows on disk; a table can have multiple non-clustered indexes.
References#
VII. Master-Slave Replication in Databases#
- Even if MySQL is split into multiple instances, there must be a master and a slave; all write operations must be completed on the master MySQL.
- All data from the slave MySQL comes from (is synchronized with) the master MySQL.
- In MySQL master-slave setups, if a business (a method in a service) has both R operations and W operations, since W operations must occur on the master MySQL, all data must come from a single transaction.
Extensions#
How to Optimize Long Delays in MySQL Master-Slave Replication
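Before optimizing, the delay has to be measured. A minimal sketch of how it is usually observed on a replica (SHOW SLAVE STATUS was renamed SHOW REPLICA STATUS in MySQL 8.0.22+):

```sql
-- Run on the replica; Seconds_Behind_Master (Seconds_Behind_Source on newer
-- versions) gives a rough estimate of the replication delay.
SHOW SLAVE STATUS\G
```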
VIII. Normalization Design#
- The First Normal Form (1NF) is a basic requirement for relational schemas; a database that does not satisfy the first normal form (1NF) is not a relational database, meaning that each column in the database table must be an indivisible basic data item, and the same column cannot have multiple values.
- The Second Normal Form (2NF) requires that each instance or row in the database table can be uniquely distinguished. That is, there is no partial dependency between fields and the primary key.
- The Third Normal Form (3NF) requires that a database table does not contain non-primary key information that is already included in other tables. That is, based on the second normal form, there is no transitive dependency (redundant data is not allowed).
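A minimal sketch of the third normal form idea with hypothetical tables: storing dept_name on every employee row would create a transitive dependency (emp_id -> dept_id -> dept_name), so the department attributes live in their own table instead.

```sql
CREATE TABLE department (
  dept_id   INT         PRIMARY KEY,
  dept_name VARCHAR(50) NOT NULL
);

CREATE TABLE employee (
  emp_id   INT         PRIMARY KEY,
  emp_name VARCHAR(50) NOT NULL,
  dept_id  INT         NOT NULL,   -- reference instead of duplicating dept_name (3NF)
  FOREIGN KEY (dept_id) REFERENCES department (dept_id)
);
```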
IX. Extensions#
- 58.com MySQL Practical Experience
- What You Must Know About the MySQL Kernel!
- Things You Might Not Know About MySQL!
- Three Solutions for Redundant Data in MySQL
- Single KEY Business, Practice of Database Horizontal Sharding Architecture
- Database Smooth Expansion Architecture Scheme in Seconds
- Detailed Analysis of MySQL Transaction Logs (Redo Log and Undo Log)