EF Core’s default optimistic concurrency model is a great starting point, but it’s not a silver bullet. When write contention heats up, its limitations can lead to performance bottlenecks and data integrity challenges. Understanding the trade-offs between optimistic and pessimistic concurrency is crucial for building robust, scalable applications.
This article explores the conceptual costs and benefits of each strategy, helping you decide when to stick with the default and when to reach for explicit locking.
A Quick Refresher on Concurrency Strategies
- Optimistic Concurrency: Assumes conflicts are rare. You read data, make changes, and when you go to save, the database checks if anyone else has modified the data in the meantime. EF Core manages this with a concurrency token. If a conflict is found, it throws an exception, and it’s up to you to handle it (usually by retrying).
- Pessimistic Concurrency: Assumes conflicts are likely. You lock the data record(s) when you read them. No other process can modify (or sometimes even read) that data until you release the lock. This prevents conflicts from happening, but at the cost of reduced concurrency.
We’ll focus on the “why” and “when” of these patterns, rather than a step-by-step setup guide.
Representing Concurrency Models in Code
Let’s imagine we’re working with an Order entity.
To enable optimistic concurrency, we can add a RowVersion property, which EF Core will map to a SQL Server rowversion type.
public class Order
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
    public string Status { get; set; }

    [Timestamp] // This tells EF Core to use this for concurrency checks
    public byte[] RowVersion { get; set; }
}
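If you prefer fluent configuration over the [Timestamp] attribute, the same mapping can be declared in the DbContext instead. A minimal sketch, assuming a typical OnModelCreating override:

// Equivalent fluent configuration (instead of the [Timestamp] attribute)
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Order>()
        .Property(o => o.RowVersion)
        .IsRowVersion(); // marks the property as a rowversion-backed concurrency token
}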
An update using the optimistic approach is straightforward. EF Core handles the concurrency check automatically during SaveChangesAsync().
// Optimistic approach (EF Core default)
public async Task UpdateOrderOptimistic(int orderId, decimal newAmount)
{
    var order = await _context.Orders.FindAsync(orderId);
    if (order != null)
    {
        order.Amount = newAmount;
        // Throws DbUpdateConcurrencyException if another user changed the order
        await _context.SaveChangesAsync();
    }
}
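When that exception fires, EF Core hands you both the values you tried to save and the values currently in the database, so you can decide which side wins. A minimal "client wins" (last write wins) sketch, along the lines of the pattern in the EF Core docs:

public async Task SaveWithClientWinsAsync()
{
    try
    {
        await _context.SaveChangesAsync();
        return;
    }
    catch (DbUpdateConcurrencyException ex)
    {
        foreach (var entry in ex.Entries)
        {
            var databaseValues = await entry.GetDatabaseValuesAsync();
            if (databaseValues is null)
            {
                continue; // the row was deleted by another user; nothing to overwrite
            }

            // "Client wins": keep our current values, but refresh the original values
            // (including the row version) so the next SaveChangesAsync() can succeed.
            entry.OriginalValues.SetValues(databaseValues);
        }

        await _context.SaveChangesAsync();
    }
}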
A pessimistic update is more explicit. We must begin a transaction and acquire a database lock on the rows we intend to modify.
// Pessimistic with explicit locking
public async Task UpdateOrderPessimistic(int orderId, decimal newAmount)
{
    using var transaction = await _context.Database.BeginTransactionAsync();
    try
    {
        var order = await _context.Orders
            .FromSqlRaw("SELECT * FROM Orders WITH (UPDLOCK, ROWLOCK) WHERE Id = {0}", orderId)
            .FirstAsync();

        order.Amount = newAmount;
        await _context.SaveChangesAsync();
        await transaction.CommitAsync();
    }
    catch
    {
        await transaction.RollbackAsync();
        throw;
    }
}
Note: The UPDLOCK and ROWLOCK hints are specific to SQL Server. Other databases have different syntax for pessimistic locking.
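For example, on PostgreSQL (with the Npgsql provider) the usual equivalent is a FOR UPDATE clause. A rough sketch, assuming EF Core's default quoted table and column names:

// Rough PostgreSQL equivalent: FOR UPDATE holds a row-level lock until commit
var order = await _context.Orders
    .FromSqlRaw(@"SELECT * FROM ""Orders"" WHERE ""Id"" = {0} FOR UPDATE", orderId)
    .FirstAsync();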
And here is a sequence diagram showing how pessimistic locking makes one client wait while another finishes its transaction:
sequenceDiagram
participant ClientA as Client A
participant ClientB as Client B
participant AppServer as Application Server
participant Database
ClientA->>+AppServer: "Begin transaction to update Order #123"
AppServer->>+Database: "BEGIN TRAN and SELECT * WITH (UPDLOCK) WHERE Id=123"
Database-->>-AppServer: "Returns Order (Row locked)"
AppServer-->>-ClientA: "Shows Order data for editing"
ClientB->>+AppServer: "Try to update Order #123"
AppServer->>+Database: "BEGIN TRAN and SELECT * WITH (UPDLOCK) WHERE Id=123"
Note right of Database: Client B's request is blocked<br/>waiting for lock release
ClientA->>+AppServer: "Submit changes (Amount = $75)"
AppServer->>+Database: "UPDATE Orders SET Amount=75 WHERE Id=123"
AppServer->>+Database: "COMMIT TRAN"
Note right of Database: Lock on Order #123 is released
Database-->>-AppServer: "Update successful"
AppServer-->>-ClientA: "Success"
Note right of Database: Client B's request resumes<br/>fetches latest data
Database-->>-AppServer: "Returns updated Order"
AppServer-->>-ClientB: "Shows latest Order data for editing"
Pessimistic Locking: Client B waits for Client A to finish before proceeding.
The Hidden Cost of Optimistic Concurrency: Retry Storms
Optimistic concurrency seems efficient until you face high write contention. When multiple users try to update the same record simultaneously, only the first one succeeds. The rest receive a DbUpdateConcurrencyException.
The standard solution is to implement a retry loop, often with an exponential backoff delay.
public async Task UpdateWithRetry(int orderId, decimal newAmount, int maxRetries = 3)
{
    for (int attempt = 0; attempt < maxRetries; attempt++)
    {
        try
        {
            await UpdateOrderOptimistic(orderId, newAmount);
            return; // Success!
        }
        catch (DbUpdateConcurrencyException)
        {
            if (attempt == maxRetries - 1) throw; // Max retries reached

            // Clear the change tracker to refetch the latest data
            _context.ChangeTracker.Clear();
            await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
        }
    }
}
Under light load, this works perfectly. But during a high-traffic event (like a flash sale), this pattern can lead to a “retry storm.”
- Increased Latency: Each retry adds delay, increasing the overall response time for the user.
- Thundering Herd Problem: Many failed requests retrying at once can hammer the database, causing a spike in connections and CPU usage.
- Wasted Resources: The work done in a failed attempt (reading data, running business logic) is thrown away, consuming server resources for nothing.
In poorly-designed systems, retry storms can consume more CPU and database resources than the actual business logic itself.
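One common mitigation, beyond capping retries, is to add random jitter to the backoff so competing clients do not all retry at the same instant. A minimal sketch reusing the UpdateOrderOptimistic method from earlier (assumes .NET 6+ for Random.Shared):

public async Task UpdateWithJitteredRetry(int orderId, decimal newAmount, int maxRetries = 3)
{
    for (int attempt = 0; attempt < maxRetries; attempt++)
    {
        try
        {
            await UpdateOrderOptimistic(orderId, newAmount);
            return;
        }
        catch (DbUpdateConcurrencyException)
        {
            if (attempt == maxRetries - 1) throw;

            _context.ChangeTracker.Clear();

            // Exponential backoff plus 0-100 ms of jitter spreads retries out
            // instead of letting every failed caller wake up simultaneously.
            var delayMs = 100 * Math.Pow(2, attempt) + Random.Shared.Next(0, 100);
            await Task.Delay(TimeSpan.FromMilliseconds(delayMs));
        }
    }
}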
The Hidden Cost of Pessimistic Locking: Reduced Throughput
Pessimistic locking avoids retry storms by making competing processes wait. However, this introduces its own set of problems.
The primary cost is blocking. While one transaction holds an update lock on a row, other processes that need to read or write that same row are forced to wait.
- Cascading Delays: A slow transaction can cause a chain reaction, delaying many other operations. A simple reporting query that needs to scan the table could be blocked, grinding a dashboard to a halt.
- Lock Escalation: If a single operation locks too many individual rows, the database engine may “escalate” the lock to cover the entire table, dramatically reducing concurrency.
- Deadlocks: The risk of deadlocks increases. A deadlock occurs when two transactions are waiting for each other to release a lock that the other needs. The database will resolve the deadlock by killing one of the transactions, but this still results in a failed operation that must be handled.
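Because the killed transaction simply fails, callers of a pessimistic update still need a (usually much smaller) retry path. A sketch for SQL Server, where the deadlock victim sees error number 1205, reusing the UpdateOrderPessimistic method from earlier:

// using Microsoft.Data.SqlClient;
// Retries the pessimistic update when this process is chosen as the deadlock victim.
public async Task UpdateOrderPessimisticWithDeadlockRetry(int orderId, decimal newAmount, int maxRetries = 3)
{
    for (int attempt = 0; attempt < maxRetries; attempt++)
    {
        try
        {
            await UpdateOrderPessimistic(orderId, newAmount);
            return;
        }
        catch (Exception ex) when (IsDeadlock(ex))
        {
            if (attempt == maxRetries - 1) throw;
            await Task.Delay(TimeSpan.FromMilliseconds(200 * (attempt + 1)));
        }
    }
}

// SQL Server reports a deadlock victim as SqlException error 1205; it may arrive
// directly from the locking SELECT or wrapped in a DbUpdateException from SaveChangesAsync().
private static bool IsDeadlock(Exception ex) =>
    (ex as SqlException ?? ex.InnerException as SqlException)?.Number == 1205;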
Consider a reporting query trying to get recent orders. If the Orders table has rows locked by pessimistic updates, this simple read operation could be blocked.
// This read could be blocked by pessimistic updates
public async Task<List<OrderSummary>> GetRecentOrders()
{
    return await _context.Orders
        .AsNoTracking() // Good practice for read-only queries
        .Where(o => o.CreatedDate > DateTime.UtcNow.AddHours(-1))
        .Select(o => new OrderSummary { Id = o.Id, Amount = o.Amount })
        .ToListAsync(); // This may wait if rows are locked
}
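If blocked reports become a real problem, one option is to run read-only queries under snapshot isolation so they read committed row versions instead of waiting on write locks. A rough sketch, assuming ALLOW_SNAPSHOT_ISOLATION has been enabled on the SQL Server database:

// Sketch: the reporting query runs under snapshot isolation and is not blocked by writers.
// IsolationLevel lives in System.Data.
public async Task<List<OrderSummary>> GetRecentOrdersSnapshot()
{
    await using var transaction = await _context.Database
        .BeginTransactionAsync(IsolationLevel.Snapshot);

    var orders = await _context.Orders
        .AsNoTracking()
        .Where(o => o.CreatedDate > DateTime.UtcNow.AddHours(-1))
        .Select(o => new OrderSummary { Id = o.Id, Amount = o.Amount })
        .ToListAsync();

    await transaction.CommitAsync();
    return orders;
}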
Finding the Middle Ground: The Hybrid Approach
A “one-size-fits-all” strategy is rarely optimal. The most effective solutions often mix optimistic and pessimistic concurrency based on the specific workload.
- Optimistic by Default: Use optimistic concurrency for the majority of your operations where conflicts are rare.
- Pessimistic for Hotspots: For a small number of known “hotspots” (like updating inventory counts for a popular product), use explicit pessimistic locking.
This hybrid approach gives you the best of both worlds: the high throughput of optimistic concurrency for general use cases, and the data integrity guarantee of pessimistic locking where it matters most.
// Example: Pessimistically lock only the inventory count update
public async Task AdjustInventoryCount(int productId, int adjustment)
{
    using var transaction = await _context.Database.BeginTransactionAsync();
    try
    {
        var product = await _context.Products
            .FromSqlRaw("SELECT * FROM Products WITH (UPDLOCK, ROWLOCK) WHERE Id = {0}", productId)
            .FirstAsync();

        // Only apply the adjustment if it won't drive the stock count negative;
        // otherwise the transaction commits without changing anything.
        if (product.StockCount + adjustment >= 0)
        {
            product.StockCount += adjustment;
            await _context.SaveChangesAsync();
        }

        await transaction.CommitAsync();
    }
    catch
    {
        await transaction.RollbackAsync();
        throw;
    }
}
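To tie the two paths together, a thin dispatcher can route known hotspots to the pessimistic method above and send everything else down the optimistic path. A sketch, where _hotProductIds and AdjustInventoryOptimisticWithRetry are hypothetical pieces of your own code:

// Hypothetical set of high-contention product ids (e.g. loaded from configuration)
private readonly HashSet<int> _hotProductIds = new() { 42, 99 };

public Task AdjustInventory(int productId, int adjustment)
{
    // Hotspots take the pessimistic path; everything else stays optimistic.
    return _hotProductIds.Contains(productId)
        ? AdjustInventoryCount(productId, adjustment)                 // pessimistic (above)
        : AdjustInventoryOptimisticWithRetry(productId, adjustment);  // hypothetical optimistic path
}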
Observability: You Can’t Optimize What You Don’t Measure
Before switching to pessimistic locking, you need data. Instrument your application to track key concurrency metrics:
- DbUpdateConcurrencyException Rate: Log every time this exception occurs. What is the rate of conflicts per entity or operation? A high rate (e.g., >5% of updates) is a strong indicator that optimistic concurrency is struggling.
- Transaction Duration: Measure how long your database transactions take. Are they getting slower under load?
- Retry Attempts: Track how many retries are needed for successful operations. If most operations require 2 or more retries, you have a contention problem.
- Database Lock Waits: Use database-specific tools (like sys.dm_os_wait_stats in SQL Server) to monitor how much time your queries spend waiting for locks.
-- SQL Server query to check for lock-related waits
SELECT
wait_type,
waiting_tasks_count,
wait_time_ms,
signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type LIKE '%LCK_M%' -- Focus on lock waits
ORDER BY wait_time_ms DESC;
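On the application side, .NET's System.Diagnostics.Metrics API is one way to track the conflict and retry counts described above. A minimal, illustrative sketch (meter and counter names are made up):

using System.Diagnostics.Metrics;

public static class ConcurrencyMetrics
{
    private static readonly Meter AppMeter = new("MyApp.Concurrency");

    public static readonly Counter<long> Conflicts =
        AppMeter.CreateCounter<long>("concurrency.conflicts");

    public static readonly Counter<long> Retries =
        AppMeter.CreateCounter<long>("concurrency.retries");
}

// Example: inside the catch block of a retry loop
// ConcurrencyMetrics.Conflicts.Add(1, new KeyValuePair<string, object?>("entity", "Order"));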
Decision Matrix: A Guide to Choosing Your Strategy
This matrix provides a conceptual framework for making a decision. The “conflict rate” is the percentage of update attempts that fail due to a concurrency exception.
| Workload Type | Estimated Conflict Rate | Recommended Approach | Key Considerations |
|---|---|---|---|
| User Profiles, Settings | < 1% | Optimistic | Default choice, minimal overhead. |
| Product Catalog Updates | 1-5% | Optimistic with Retry | Monitor exception rate and retry depth. |
| Inventory, Counters | 5-15% | Hybrid or Pessimistic | High contention makes retries expensive. |
| Financial Transactions | >10% or Critical | Pessimistic | Data integrity is non-negotiable. |
| Real-time Bidding/Scores | Very High | Pessimistic | Transactions must be fast to minimize lock duration. |
Generally, if your conflict rate creeps above 5%, the overhead of handling optimistic concurrency exceptions and retries often starts to outweigh the cost of pessimistic locking.
Final Recommendations
Don’t treat EF Core’s default optimistic strategy as a “free” feature. It’s a powerful tool, but its effectiveness depends entirely on your workload.
- Start with Measurement: Before changing your code, instrument your current optimistic strategy. Log concurrency exceptions and identify which entities or operations are causing the most conflicts.
- Identify the Hotspots: Is the problem widespread, or is it confined to a few specific tables or rows?
- Be Surgical: Apply pessimistic locking only where it’s demonstrably needed. Avoid global changes.
- Tune for Your Database: If you use pessimistic locking, consider enabling snapshot isolation in your database (READ_COMMITTED_SNAPSHOT in SQL Server) to prevent read queries from being blocked by write locks.
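On SQL Server that is a one-time, database-level setting; one way to apply it is from an EF Core migration, though the statement needs near-exclusive access to the database, so many teams run it during provisioning or a maintenance window instead. A rough sketch:

// Sketch: enabling READ_COMMITTED_SNAPSHOT from a migration (SQL Server only)
public partial class EnableReadCommittedSnapshot : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql(
            "ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;",
            suppressTransaction: true);
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.Sql(
            "ALTER DATABASE CURRENT SET READ_COMMITTED_SNAPSHOT OFF WITH ROLLBACK IMMEDIATE;",
            suppressTransaction: true);
    }
}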
Concurrency strategy isn’t a global architectural decision to be made once. It’s a per-entity, per-workload choice that should adapt as your application and its usage patterns evolve. Measure your conflict rates, understand the trade-offs, and choose wisely.