In the previous article, we explored the Consistency aspect of ACID — the fundamental set of properties that define reliable database transactions. As part of this deep dive series on ACID, I’m using PostgreSQL as the reference system, but the principles largely apply to most relational databases with only minor differences in implementation.
Today’s topic is Isolation, the “I” in ACID. Isolation becomes critical when multiple transactions run concurrently. At its strongest, it ensures that the outcome of executing transactions in parallel is the same as if they were executed one after another, avoiding anomalies such as dirty reads, lost updates, and phantom reads. Weaker levels trade away parts of that guarantee for performance.
Imagine two transactions running at the same time: one updating a balance and another reading it. Without proper isolation, the reader might get a value that never actually existed, leading to inconsistencies or even security issues.
Each isolation level represents a tradeoff between performance and consistency. The right choice depends on your application’s needs. Some systems prioritize consistency (e.g., banking apps), while others may prefer availability and speed (e.g., analytics dashboards). We’ll explore all major isolation levels with examples and caveats so you know when and why to use each.
Table of Contents

- Read Uncommitted
- Read Committed
- Repeatable Read
- Serializable Read
- Conclusion
Read Uncommitted
The Read Uncommitted isolation level allows a transaction to read data that has been modified by another transaction but not yet committed. While this may appear beneficial for performance, especially in highly concurrent systems, it introduces a significant risk: the possibility of dirty reads. A dirty read occurs when a transaction reads data that is later rolled back, meaning the data never officially existed in the database.
At this level, no locks are acquired on the data being read, so transactions are free to read rows that might still be in the process of being changed. This non-blocking behavior can improve speed, but it does so at the cost of consistency and reliability. In practice, systems operating under Read Uncommitted may make decisions based on temporary or invalid data, which can be especially dangerous in environments where correctness matters, like financial systems or inventory tracking. Let’s take a look at an example.
Example
Here’s how a dirty read can happen:
1. Transaction A starts and updates a user’s balance from `100` to `200`. However, the change is not yet committed.
2. Transaction B runs while Transaction A is still open and reads the new balance of `200`.
3. Believing the balance is updated, Transaction B takes some action, like approving a purchase or initiating a transfer.
4. Later, Transaction A fails or is rolled back, which means the update to `200` is discarded and the balance remains `100`.
5. Now, Transaction B has already acted on the assumption that the balance was `200`, even though that value was never officially stored.
This is a dirty read: Transaction B relied on data that was never committed and ultimately did not exist. As a result, the system may end up in an incorrect state—something that is especially dangerous when financial or business-critical logic is involved.
Because of such risks, Read Uncommitted provides no isolation guarantees and is rarely suitable for real-world applications. It might be acceptable in certain reporting or analytics tasks where minor inconsistencies are tolerable, but in most cases, more reliable isolation levels like Read Committed are preferred. Notably, PostgreSQL does not actually implement Read Uncommitted: you can request it, but it behaves exactly like Read Committed, so dirty reads cannot occur in PostgreSQL at all.
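The dirty-read scenario above can be sketched as two concurrent psql sessions. The `accounts` table, its columns, and the values are illustrative; note that PostgreSQL accepts the `READ UNCOMMITTED` keyword but runs the transaction as Read Committed:

```sql
-- Session A
BEGIN;
UPDATE accounts SET balance = 200 WHERE account_id = 1;  -- not yet committed

-- Session B
BEGIN TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT balance FROM accounts WHERE account_id = 1;
-- On a database that permits dirty reads, this would return 200.
-- In PostgreSQL, READ UNCOMMITTED behaves as READ COMMITTED,
-- so it returns the last committed value, 100.
COMMIT;

-- Session A
ROLLBACK;  -- the 200 is discarded; it never officially existed
```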
Read Committed
The Read Committed isolation level ensures that a transaction can only read data that has already been committed by other transactions. This provides a significant improvement over Read Uncommitted, as it prevents dirty reads. A transaction will never see partial or rollback-prone changes made by others.
However, while this level avoids dirty reads, it doesn’t completely eliminate concurrency issues: non-repeatable reads and phantom reads remain possible, and so does a write-write conflict often referred to as a lost update. The latter happens when two transactions read the same data, modify it independently, and then overwrite each other’s results, unintentionally discarding one set of changes.
Example
Here’s how a lost update can occur under Read Committed:
1. Transaction A reads a value `x = 0` from the database.
2. Transaction B also reads the same value `x = 0` shortly after.
3. Both transactions independently increment `x` to `1`.
4. Transaction A writes `x = 1` back to the database.
5. Transaction B then also writes `x = 1`, overwriting A’s result.

The final value in the database is `1`, even though both transactions performed an increment, and logically the value should have been `2`.
This type of issue happens because Read Committed does not prevent two transactions from reading the same data at the same time, nor does it coordinate their updates. Without additional mechanisms like explicit locking, optimistic concurrency control, or application-level safeguards, such conflicts can go undetected.
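Two common SQL-level fixes are worth sketching here. The `counters` table and its columns are illustrative; the techniques themselves (atomic updates and `SELECT ... FOR UPDATE` row locking) are standard PostgreSQL features:

```sql
-- Option 1: push the increment into a single atomic statement.
-- Concurrent UPDATEs on the same row serialize on its row lock,
-- so both increments are applied and the final value is 2.
UPDATE counters SET x = x + 1 WHERE id = 1;

-- Option 2: lock the row before reading, then write.
BEGIN;
SELECT x FROM counters WHERE id = 1 FOR UPDATE;  -- blocks other writers until COMMIT
UPDATE counters SET x = 1 WHERE id = 1;          -- value computed by the application
COMMIT;
```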
Read Committed is the default isolation level in PostgreSQL, offering a reasonable balance between performance and safety for many use cases. It’s a safe starting point for most applications, but developers must still be aware of its limitations under concurrent write scenarios.
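You can verify the default and override it per transaction with standard PostgreSQL commands:

```sql
SHOW default_transaction_isolation;  -- 'read committed' on a stock install

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- statements here run under the chosen level
COMMIT;
```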
Repeatable Read
The Repeatable Read isolation level ensures that once a transaction reads data, that data stays the same throughout the life of the transaction. Even if other transactions update or insert rows in the background, your transaction continues to operate on a consistent snapshot of the data. This prevents anomalies like non-repeatable reads and, in PostgreSQL, even phantom reads.
This level is useful when you need strong consistency within a transaction — like generating reports, doing multi-step calculations, or reading the same data multiple times while expecting it not to change.
Example
Here’s what Repeatable Read helps prevent:
Non-Repeatable Reads
1. Transaction A starts and reads a row where `x = 1`.
2. Meanwhile, Transaction B updates that same row to `x = 3` and commits.
3. If Transaction A reads `x` again, it still sees `x = 1`, not the updated value.

This guarantees that any value you read once stays the same, regardless of external changes.
Phantom Reads
1. Transaction A runs a query like `SELECT * FROM users WHERE age > 30` and sees 5 users.
2. At the same time, Transaction B inserts a new user with `{ name: "John", age: 35 }` and commits.
3. If Transaction A reruns the same query, it still sees only the original 5 users.

The new user (“John”) doesn’t appear because the snapshot taken at the start of the transaction excludes any changes made afterward.
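The phantom-read protection can be observed in two psql sessions. The `users` table is illustrative; the starting count of 5 is assumed:

```sql
-- Session A
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT count(*) FROM users WHERE age > 30;  -- assume this returns 5

-- Session B (autocommits immediately)
INSERT INTO users (name, age) VALUES ('John', 35);

-- Session A
SELECT count(*) FROM users WHERE age > 30;
-- still 5: the snapshot was taken at the transaction's first query,
-- so John stays invisible until this transaction ends
COMMIT;
```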
Behind the scenes, PostgreSQL uses MVCC (Multi-Version Concurrency Control) to maintain this behavior. It captures a snapshot of the database when the transaction executes its first query, and all subsequent reads are based on that version. If a conflicting update is attempted, PostgreSQL may throw a serialization error (SQLSTATE 40001), requiring the transaction to retry.
While PostgreSQL offers strong protection under Repeatable Read, behavior varies across databases. For example, in MySQL, phantom reads can still occur unless you use extra locking. So even though the name is the same, the guarantees may differ — always check how your specific database implements it.
Serializable Read
The Serializable isolation level is the strictest isolation guarantee offered by PostgreSQL. It ensures that all concurrent transactions behave as if they were executed one after another, in some serial order — even if they’re actually running in parallel.
PostgreSQL doesn’t use traditional locking to achieve this. Instead, it uses a smarter, more efficient technique called Serializable Snapshot Isolation (SSI). This approach avoids blocking other transactions during reads or writes. Instead of holding locks, PostgreSQL watches for patterns that could lead to inconsistencies and rolls back one of the transactions if a conflict is detected.
This ensures consistency but introduces the possibility of serialization failures, meaning your transaction might be rolled back and you’ll need to retry it.
Example — Preventing Subtle Conflicts
1. Transaction A reads from a table of available coupons where `active = true`.
2. Transaction B simultaneously marks one of those coupons as `active = false` and commits.
3. Transaction A, unaware of the change, proceeds to assign the now-inactive coupon to a user.
4. PostgreSQL detects that the sequence of actions would not be valid in any serial order and rolls back Transaction A.

This rollback ensures the system remains consistent, as if Transaction B had completed before Transaction A even started.
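The conflict above can be sketched in two psql sessions. The `coupons` and `assignments` tables and the ids are illustrative, and whether a particular interleaving is flagged depends on PostgreSQL’s conflict detection, but the error and SQLSTATE shown are the real ones SSI raises:

```sql
-- Session A
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT id FROM coupons WHERE active = true;  -- sees coupon 7 as available

-- Session B
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
UPDATE coupons SET active = false WHERE id = 7;
COMMIT;

-- Session A
INSERT INTO assignments (coupon_id, user_id) VALUES (7, 42);
COMMIT;
-- ERROR: could not serialize access due to read/write dependencies
--        among transactions  (SQLSTATE 40001)
```

The application should catch SQLSTATE `40001` and simply retry the whole transaction from the beginning.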
Key point: In PostgreSQL, Serializable doesn’t block others like traditional locking — it lets transactions run in parallel and only intervenes if the outcome would be invalid. Other databases like MySQL or SQL Server often rely on row or table locking to achieve similar guarantees, which can reduce performance or cause deadlocks.
Conclusion
Isolation is a crucial piece of the puzzle when it comes to building reliable, concurrent applications. Picking the right isolation level is all about understanding your use case — whether you're building a banking system that demands full consistency or a high-throughput analytics app that prioritizes availability.
PostgreSQL gives you excellent control and guarantees, especially through MVCC and SSI. But remember — with stronger isolation often comes increased complexity and performance trade-offs.
Before I end this post, I want to take a moment to express my deep love and unwavering support for our Palestinian Muslim brothers and sisters 🇵🇸. Their lives, their struggles, and their voices matter. In a world that too often turns a blind eye, we must continue to speak up and stand with the oppressed. May justice rise, may peace prevail, and may the people of Palestine live with dignity, freedom, and hope. You are not forgotten; your life matters. The recent surge in terrorist activities and the brutal actions by Israel are unequivocally condemned.