Introduction
SQL databases run multiple transactions in parallel. This is a must-have in the modern world, where we need to deal with hundreds of active connections at a time. One thing we need to be aware of is that transactions can influence each other, so we need to make a conscious decision about how to configure isolation levels. We already touched upon this topic in our other blog post.
Isolation levels are a practical way of solving an actual engineering problem. It’s worth keeping in mind that there is no such thing as an isolation level in https://en.wikipedia.org/wiki/Relational_algebra - that’s because in the world of mathematics we often assume that things work infinitely fast and do not fail because of a faulty hard drive or a broken power cord. Reality, however, is not that rosy. We need to face hardware issues, the limitations of physics, and non-optimal environments.
Let’s leave theory aside and see how things are actually implemented behind the scenes.
Locks
Locks are synchronization primitives used to protect shared memory from corruption. We need them because many operations cannot be performed atomically. Let’s see what that means in practice.
CPUs operate on memory sliced into cells of various sizes. A typical CPU accesses a fixed-size memory cell that we often call a word (or, loosely, an “int”). The size depends on the architecture of the CPU and the actual operation we want to execute - some operations work on blocks as small as 8 bits, others can process 512 bits at once. Some CPUs default to 32 bits, others to 64, yet others to 36, and so on. Probably the most common CPU architecture today is Intel 64 (not to be confused with IA-64, the Intel Itanium architecture), in which the native data size is 64 bits. This means that the CPU uses 64-bit registers for most of its operations.
What this means in practice is that the CPU cannot write more than 64 bits in one operation. If we were to write 512 bits, the CPU would need to perform 8 independent writes. Sometimes it can run all of them in parallel (so they take one time slice), but most of the time the CPU needs to run the 8 operations one after another. This is especially important when we modify data structures longer than 64 bits, because the CPU can’t update them in one time slice. And since modern CPUs are multi-core, some other core may start reading the structure before it’s fully modified. To protect against this, we need to somehow mark the structure as “unavailable” to other cores, so they wait until the structure is ready to be used again. This is where locks come in.
The memory structures we’re talking about in the SQL world are tables, rows, pages, and the database itself. They are very often much longer than 64 bits. What’s more, one SQL operation may want to update multiple rows at once, which is something the CPU cannot perform in a single operation. Therefore, we need a mechanism that supports data modification without losing data consistency. This is where we use locks.
Lock types
The types of locks we can use depend on the SQL engine we’re dealing with. However, multiple databases use a similar approach. Let’s start with lock modes.
Each lock is acquired in a mode that specifies what other transactions are allowed to do with the locked resource. Some common modes are listed below (a short two-session example follows the list):
Exclusive
- Only the transaction that acquired the lock can access the resource. Other transactions will not be able to acquire any lock on it.
- Used when the transaction modifies the data (DELETE, INSERT, UPDATE).
- Typically acquired at the row or page level.
Shared
- Read lock. Multiple transactions can acquire it to read the data.
- Typically used as a first step, before the lock is converted to another mode that allows modifications.
Update
- Indicates that an update will be performed. Once the owner is ready to modify the data, the lock is converted to the Exclusive lock.
Intent lock
- Indicates that there is a lock on a subpart of the locked structure. For instance, an intent lock can be placed on a table to indicate that a row of this table will be locked as well.
- This prevents others from taking exclusive locks on the structure.
- Let’s take the following situation: imagine that transaction A wants to modify a row inside a table, so it takes an exclusive lock on that row. Now, transaction B wants to modify multiple rows, so B takes an exclusive lock on the whole table. We have a conflict - transaction A already holds a lock on one of the rows, but transaction B has no cheap way of noticing that.
- To solve this, transaction A also takes an intent lock on the table. This way transaction B knows there is something going on inside the table, and it doesn’t need to consult all the rows manually.
- There are various intent lock types: intent exclusive, intent shared, intent update, shared with intent exclusive, shared with intent update, and update with intent exclusive.
Schema
- Locks used when modifying the schema. It takes time for the SQL engine to prepare a schema modification, so other transactions must be prevented from modifying the schema at the same time, but not necessarily from reading or changing the data.
Bulk update
- A lock used for bulk import operations.
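To make the modes more tangible, here is a minimal two-session sketch in MS SQL, assuming the default lock-based read committed behavior (snapshot-based configurations behave differently; the table dbo.Accounts is illustrative):

```sql
-- Session 1: the UPDATE takes an exclusive (X) lock on the row,
-- plus intent exclusive (IX) locks on the page and the table.
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE Id = 1;
-- The locks are held until COMMIT or ROLLBACK.

-- Session 2: this SELECT needs a shared (S) lock on the same row,
-- so it blocks until session 1 finishes.
SELECT Balance FROM dbo.Accounts WHERE Id = 1;

-- Session 1: releasing the locks lets session 2 proceed.
COMMIT TRANSACTION;
```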
That’s a lot of types. To make things even more complicated, each lock can be placed on a different object in the hierarchy. It goes this way:
Database
- Typically takes a shared lock.
Table
- Typically takes an intent lock or exclusive lock.
Page
- A page is a fixed-size block that groups multiple rows.
- Typically takes an intent lock.
Row
- Takes shared, exclusive, or update lock.
Also, locks can be escalated from rows to pages and then to tables, or converted to other lock types. Consult your SQL engine’s documentation to read more about how this works.
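In MS SQL we can observe granted and requested locks, including intent locks, through the sys.dm_tran_locks view:

```sql
-- One row per lock request: the locked resource type
-- (e.g. KEY or RID for rows, PAGE, OBJECT, DATABASE),
-- the mode (S, X, U, IX, ...), and whether the request
-- is granted or still waiting.
SELECT resource_type,
       request_mode,
       request_status,
       request_session_id
FROM sys.dm_tran_locks;
```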
Locks and isolation levels
Let’s now see how locks interact with isolation levels. Depending on the isolation level, different locks are acquired.
Let’s take the read committed isolation level. The SQL-92 standard says that only dirty reads are disallowed at this level. This means the transaction can’t read data that has not been committed yet. Let’s say we scan a table. The way it works is:
- First row is locked with a read lock.
- Data is read and processed.
- The lock is released.
- Second row is locked with a read lock.
- And so on.
Effectively, once a row is fully processed, the lock on that row is released. This means that if some other transaction comes in, modifies some rows in between, and commits, then the reading transaction will pick up the changes. This is why the reading transaction may get duplicates or miss some rows.
If we want to make sure that no other transaction adds rows in between, we need to acquire a lock on the whole table. This, however, will make all other transactions wait until we’re done, which may take some time and decrease performance.
To make things even more interesting, there is no requirement that the various isolation levels be implemented exactly as described - it’s perfectly fine for an engine to implement all isolation levels as serializable. Also, some isolation levels may be implemented with snapshots or Multi-Version Concurrency Control (MVCC) rather than locks.
What can we do about that? Generally, we need to be aware of the anomalies that can happen at the isolation level we use. We need to understand where the level is configured (is it the ORM? the session or connection level? per transaction?). We should also have a clear understanding of the performance we expect and need.
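For instance, in MS SQL the isolation level can be set explicitly, which removes the guesswork about what the ORM or the driver configured (the table name dbo.Orders is illustrative):

```sql
-- Applies to the current session until it is changed again.
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

BEGIN TRANSACTION;
SELECT * FROM dbo.Orders WHERE CustomerId = 42;
COMMIT TRANSACTION;
```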
Lock Hints
Sometimes we might want to improve performance by indicating our intent when reading data. Some SQL engines support this with so-called Table Hints or Lock Hints. For instance, this is how you could do that in MS SQL (a minimal sketch - the table name dbo.Orders is illustrative):
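```sql
BEGIN TRANSACTION;

-- The TABLOCKX hint requests an exclusive lock on the whole table
-- instead of individual row or page locks.
SELECT COUNT(*) FROM dbo.Orders WITH (TABLOCKX);

-- ... work that relies on the table being locked ...

COMMIT TRANSACTION;
```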
This way we take the lock on the whole table immediately. We don’t need to wait for the SQL engine to escalate the lock from the row level to the table level.
However, we shouldn’t apply these hints blindly. Performance characteristics change between engine versions, data volumes, indexes, and so on. We should always verify whether a given hint makes sense.
Advisory locks
Another locking-related question is how to synchronize logic within stored procedures. We could lock a particular row in a table dedicated to locking, but this may be too cumbersome.
To aid that, some SQL engines support Advisory Locks. These are logical locks, identified by a name or a number, that are fully controlled by us.
This is how we can use them in PostgreSQL:
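```sql
-- Takes an advisory lock identified by the number 10.
SELECT pg_advisory_lock(10);
```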
With the statement above we take a lock identified by the number 10. This way we can synchronize various transactions however we need.
Advisory locks can be taken at the transaction level or at the session level. This gives us even better control over the granularity of our operations.
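A sketch contrasting the two scopes (both functions are built into PostgreSQL):

```sql
-- Session-level lock: held until explicitly released
-- or until the session disconnects.
SELECT pg_advisory_lock(10);
SELECT pg_advisory_unlock(10);

-- Transaction-level lock: released automatically on COMMIT
-- or ROLLBACK; there is no explicit unlock for it.
BEGIN;
SELECT pg_advisory_xact_lock(10);
-- ... synchronized work here ...
COMMIT;
```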
Latches
Apart from the logical locks protecting the actual data stored in the database, the SQL engine needs to protect its internal data structures as well. Locks in that area are typically called latches. The engine uses latches when reading and writing memory pages; the same applies to I/O operations and buffer operations.
Just as with locks, latches come in various types and modes. There are:
Shared
- Used for reading a page.
Keep
- Used to keep the page in the buffer (so it is not disposed of).
Update
- Similar to the update lock.
Exclusive
- Similar to the exclusive lock. Prevents other latches from being acquired.
Destroy
- Acquired when a buffer is about to be removed from the cache.
Latches typically cannot be controlled by the users. They are exclusive to the SQL engine (sometimes called the SQL Operating System), and are acquired and released automatically. However, we can observe them to see where and why performance degrades. For example, this is how we can look at latch statistics in MS SQL:
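```sql
-- Aggregated latch wait statistics, one row per latch class.
SELECT latch_class,
       waiting_requests_count,
       wait_time_ms,
       max_wait_time_ms
FROM sys.dm_os_latch_stats
ORDER BY wait_time_ms DESC;
```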
Putting latch statistics on a dashboard can help us monitor performance, track latch contention, or simply correlate the data with the performance of our transactions.
Coordination
Locks and latches are used to synchronize and coordinate operations within one SQL node. However, when multiple nodes are involved, we need other solutions. We’ll briefly mention them in this section and explain their details in other blog posts.
Transactional outbox pattern
The transactional outbox pattern is used when we need to modify data in two storage systems but don’t have a transaction spanning both data sources. A typical use case is modifying SQL entities and publishing a message to a queue or service bus.
The way it works is:
- We create an additional SQL table called Outbox.
- We modify the business entities and add a row to the Outbox table in one SQL transaction. This way, either all the tables are modified or none of them are.
- We implement an additional component, called a Relay, that periodically scans the Outbox table, extracts the entries, and pushes them to the queue or service bus.
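A minimal sketch of the transactional part in SQL (table and column names are illustrative):

```sql
-- The Outbox table stores messages waiting to be relayed.
CREATE TABLE Outbox (
    Id          BIGINT IDENTITY PRIMARY KEY,
    Payload     NVARCHAR(MAX) NOT NULL,
    CreatedAt   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    ProcessedAt DATETIME2 NULL  -- set by the Relay once published
);

-- The business change and the outbox entry commit atomically.
BEGIN TRANSACTION;

UPDATE Orders SET Status = 'Shipped' WHERE Id = 42;

INSERT INTO Outbox (Payload)
VALUES (N'{"event": "OrderShipped", "orderId": 42}');

COMMIT TRANSACTION;
```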
What does this give us? By using the transactional outbox pattern we never end up in a situation where a message is posted to the queue but no entities were modified in the SQL database. We also never modify the database without eventually pushing a message to the queue.
High availability
Our SQL engine may need to be replicated across multiple machines to support both higher performance and redundancy. In such a case, each transaction must be replicated to all the nodes running the SQL engine. There are various approaches: a single active node with multiple standby nodes, or multiple active nodes, with changes applied synchronously or asynchronously.
High availability can be achieved with log shipping, synchronous commits, or multiple other solutions. Depending on what solution we use, we may be able to implement distributed locks between nodes, or we may need completely different solutions based on timeouts and optimistic locking.
2-Phase Commit
2-Phase Commit is a protocol for having a true transaction spanning multiple data sources. It works in two phases: the first phase checks whether every participant is able to commit and acknowledge the transaction; the second phase actually performs the commit.
The advantage of 2PC (as it’s often called) is that we can commit transactions synchronously and have fully atomic transactions across data storages. However, 2PC is typically very slow, so if we want to scale our system to hundreds of transactions per second, it may not be the best solution. On the other hand, it’s definitely very convenient from the developer’s perspective.
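In MS SQL, for instance, a distributed transaction coordinated by MSDTC looks almost like a regular transaction from the developer’s point of view - the two-phase commit happens behind the scenes (the linked server name RemoteServer and the table names are illustrative):

```sql
-- MSDTC runs the two-phase commit protocol for us.
BEGIN DISTRIBUTED TRANSACTION;

UPDATE dbo.Accounts
SET Balance = Balance - 100 WHERE Id = 1;

UPDATE [RemoteServer].Bank.dbo.Accounts
SET Balance = Balance + 100 WHERE Id = 2;

COMMIT TRANSACTION;
```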
Summary
In this blog post we learned how to coordinate various operations inside a SQL server. Whether these are transactions within one node or processes spanning multiple servers, we have multiple solutions and approaches to choose from. We always need to remember that it’s all about the performance and the specific needs of our business operations. Tuning these characteristics is not something we can do blindly - we need to understand how things work behind the scenes in our particular case to get the best performance.