EF Core’s default optimistic concurrency model is a great starting point, but it’s not a silver bullet. When write contention heats up, its limitations can lead to performance bottlenecks and data integrity challenges. Understanding the trade-offs between optimistic and pessimistic concurrency is crucial for building robust, scalable applications.

This article explores the conceptual costs and benefits of each strategy, helping you decide when to stick with the default and when to reach for explicit locking.

A Quick Refresher on Concurrency Strategies

  • Optimistic Concurrency: Assumes conflicts are rare. You read data, make changes, and when you go to save, the database checks if anyone else has modified the data in the meantime. EF Core manages this with a concurrency token. If a conflict is found, it throws an exception, and it’s up to you to handle it (usually by retrying).
  • Pessimistic Concurrency: Assumes conflicts are likely. You lock the data record(s) when you read them. No other process can modify (or sometimes even read) that data until you release the lock. This prevents conflicts from happening, but at the cost of reduced concurrency.

We’ll focus on the “why” and “when” of these patterns, rather than a step-by-step setup guide.

Representing Concurrency Models in Code

Let’s imagine we’re working with an Order entity.

To enable optimistic concurrency, we can add a RowVersion property, which EF Core will map to a SQL Server rowversion type.

public class Order
{
    public int Id { get; set; }
    public decimal Amount { get; set; }
    public string Status { get; set; } = string.Empty;
    
    [Timestamp] // This tells EF Core to use this for concurrency checks
    public byte[] RowVersion { get; set; } = null!;
}

An update using the optimistic approach is straightforward. EF Core handles the concurrency check automatically during SaveChangesAsync().

// Optimistic approach (EF Core default)
public async Task UpdateOrderOptimistic(int orderId, decimal newAmount)
{
    var order = await _context.Orders.FindAsync(orderId);
    if (order != null)
    {
        order.Amount = newAmount;
        // Throws DbUpdateConcurrencyException if another user changed the order
        await _context.SaveChangesAsync();
    }
}

A pessimistic update is more explicit. We must begin a transaction and acquire a database lock on the rows we intend to modify.

// Pessimistic with explicit locking
public async Task UpdateOrderPessimistic(int orderId, decimal newAmount)
{
    using var transaction = await _context.Database.BeginTransactionAsync();
    
    try
    {
        var order = await _context.Orders
            .FromSqlRaw("SELECT * FROM Orders WITH (UPDLOCK, ROWLOCK) WHERE Id = {0}", orderId)
            .FirstAsync();
        
        order.Amount = newAmount;
        await _context.SaveChangesAsync();
        await transaction.CommitAsync();
    }
    catch
    {
        await transaction.RollbackAsync();
        throw;
    }
}

Note: The UPDLOCK and ROWLOCK hints are specific to SQL Server. Other databases have different syntax for pessimistic locking.
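For example, PostgreSQL expresses the same intent with a FOR UPDATE clause appended to the query. A rough equivalent of the pessimistic read above, assuming the Npgsql EF Core provider (the quoted identifiers are illustrative):

```csharp
// PostgreSQL equivalent of the SQL Server hints: FOR UPDATE takes a
// row-level write lock that is held until the transaction ends.
var order = await _context.Orders
    .FromSqlRaw(@"SELECT * FROM ""Orders"" WHERE ""Id"" = {0} FOR UPDATE", orderId)
    .FirstAsync();
```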

And here is a sequence diagram showing how pessimistic locking makes one client wait while another finishes its transaction:




sequenceDiagram
    participant ClientA as Client A
    participant ClientB as Client B
    participant AppServer as Application Server
    participant Database

    ClientA->>+AppServer: "Begin transaction to update Order #123"
    AppServer->>+Database: "BEGIN TRAN and SELECT * WITH (UPDLOCK) WHERE Id=123"
    Database-->>-AppServer: "Returns Order (Row locked)"
    AppServer-->>-ClientA: "Shows Order data for editing"

    ClientB->>+AppServer: "Try to update Order #123"
    AppServer->>+Database: "BEGIN TRAN and SELECT * WITH (UPDLOCK) WHERE Id=123"
    Note right of Database: Client B's request is blocked<br/>waiting for lock release

    ClientA->>+AppServer: "Submit changes (Amount = $75)"
    AppServer->>Database: "UPDATE Orders SET Amount=75 WHERE Id=123"
    AppServer->>Database: "COMMIT TRAN"
    Note right of Database: Lock on Order #123 is released
    Database-->>AppServer: "Update successful"
    AppServer-->>-ClientA: "Success"

    Note right of Database: Client B's request resumes<br/>fetches latest data
    Database-->>-AppServer: "Returns updated Order"
    AppServer-->>-ClientB: "Shows latest Order data for editing"

    

Pessimistic Locking: Client B waits for Client A to finish before proceeding.

The Hidden Cost of Optimistic Concurrency: Retry Storms

Optimistic concurrency seems efficient until you face high write contention. When multiple users try to update the same record simultaneously, only the first one succeeds. The rest receive a DbUpdateConcurrencyException.

The standard solution is to implement a retry loop, often with an exponential backoff delay.

public async Task UpdateWithRetry(int orderId, decimal newAmount, int maxRetries = 3)
{
    for (int attempt = 0; attempt < maxRetries; attempt++)
    {
        try
        {
            await UpdateOrderOptimistic(orderId, newAmount);
            return; // Success!
        }
        catch (DbUpdateConcurrencyException)
        {
            if (attempt == maxRetries - 1) throw; // Max retries reached
            
            // Detach stale entities so the next attempt re-queries the database
            _context.ChangeTracker.Clear();
            
            await Task.Delay(TimeSpan.FromMilliseconds(100 * Math.Pow(2, attempt)));
        }
    }
}

Under light load, this works perfectly. But during a high-traffic event (like a flash sale), this pattern can lead to a “retry storm.”

  • Increased Latency: Each retry adds delay, increasing the overall response time for the user.
  • Thundering Herd Problem: Many failed requests retrying at once can hammer the database, causing a spike in connections and CPU usage.
  • Wasted Resources: The work done in a failed attempt (reading data, running business logic) is thrown away, consuming server resources for nothing.

In poorly designed systems, retry storms can consume more CPU and database resources than the actual business logic itself.
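A common mitigation for the thundering herd effect is to add random jitter to the backoff delay, so competing clients do not all retry at the same instant. A minimal sketch (the helper name and constants are illustrative):

```csharp
using System;

public static class Backoff
{
    private static readonly Random _jitter = new();

    // Exponential backoff with jitter: half the exponential delay is fixed,
    // the other half is randomized, which spreads retries across clients.
    public static TimeSpan WithJitter(int attempt)
    {
        var baseMs = 100 * Math.Pow(2, attempt);
        var jitterMs = _jitter.NextDouble() * baseMs;
        return TimeSpan.FromMilliseconds(baseMs / 2 + jitterMs);
    }
}
```

Swapping the fixed Task.Delay in the retry loop for await Task.Delay(Backoff.WithJitter(attempt)) keeps the average delay roughly the same while decorrelating the herd.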

The Hidden Cost of Pessimistic Locking: Reduced Throughput

Pessimistic locking avoids retry storms by making competing processes wait. However, this introduces its own set of problems.

The primary cost is blocking. While one transaction holds an update lock on a row, other processes that need to read or write that same row are forced to wait.

  • Cascading Delays: A slow transaction can cause a chain reaction, delaying many other operations. A simple reporting query that needs to scan the table could be blocked, grinding a dashboard to a halt.
  • Lock Escalation: If a single operation locks too many individual rows (in SQL Server, roughly 5,000 locks acquired by one statement), the database engine may “escalate” the lock to cover the entire table, dramatically reducing concurrency.
  • Deadlocks: The risk of deadlocks increases. A deadlock occurs when two transactions are waiting for each other to release a lock that the other needs. The database will resolve the deadlock by killing one of the transactions, but this still results in a failed operation that must be handled.
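On SQL Server, the killed transaction surfaces as error number 1205, which you can detect and treat as retryable. A small sketch, assuming the Microsoft.Data.SqlClient provider (the helper name is illustrative):

```csharp
using System;
using Microsoft.Data.SqlClient;

public static class DeadlockDetection
{
    // SQL Server reports the deadlock victim with error number 1205.
    // EF Core wraps the provider error, so inspect the base exception.
    public static bool IsDeadlockVictim(Exception ex) =>
        ex.GetBaseException() is SqlException sqlEx && sqlEx.Number == 1205;
}
```

Callers can pair this check with the same retry loop used for optimistic conflicts, since a deadlock victim is safe to retry once the winning transaction commits.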

Consider a reporting query trying to get recent orders. If the Orders table has rows locked by pessimistic updates, this simple read operation could be blocked.

// This read could be blocked by pessimistic updates
public async Task<List<OrderSummary>> GetRecentOrders()
{
    return await _context.Orders
        .AsNoTracking() // Good practice for read-only queries
        .Where(o => o.CreatedDate > DateTime.UtcNow.AddHours(-1))
        .Select(o => new OrderSummary { Id = o.Id, Amount = o.Amount })
        .ToListAsync(); // This may wait if rows are locked
}

Finding the Middle Ground: The Hybrid Approach

A “one-size-fits-all” strategy is rarely optimal. The most effective solutions often mix optimistic and pessimistic concurrency based on the specific workload.

  1. Optimistic by Default: Use optimistic concurrency for the majority of your operations where conflicts are rare.
  2. Pessimistic for Hotspots: For a small number of known “hotspots” (like updating inventory counts for a popular product), use explicit pessimistic locking.

This hybrid approach gives you the best of both worlds: the high throughput of optimistic concurrency for general use cases, and the data integrity guarantee of pessimistic locking where it matters most.

// Example: Pessimistically lock only the inventory count update
public async Task AdjustInventoryCount(int productId, int adjustment)
{
    using var transaction = await _context.Database.BeginTransactionAsync();
    
    try
    {
        var product = await _context.Products
            .FromSqlRaw("SELECT * FROM Products WITH (UPDLOCK, ROWLOCK) WHERE Id = {0}", productId)
            .FirstAsync();
        
        if (product.StockCount + adjustment < 0)
        {
            // Fail loudly rather than silently committing no change
            throw new InvalidOperationException("Insufficient stock for adjustment.");
        }
        
        product.StockCount += adjustment;
        await _context.SaveChangesAsync();
        
        await transaction.CommitAsync();
    }
    catch
    {
        await transaction.RollbackAsync();
        throw;
    }
}

Observability: You Can’t Optimize What You Don’t Measure

Before switching to pessimistic locking, you need data. Instrument your application to track key concurrency metrics:

  • DbUpdateConcurrencyException Rate: Log every time this exception occurs. What is the rate of conflicts per entity or operation? A high rate (e.g., >5% of updates) is a strong indicator that optimistic concurrency is struggling.
  • Transaction Duration: Measure how long your database transactions take. Are they getting slower under load?
  • Retry Attempts: Track how many retries are needed for successful operations. If most operations require 2 or more retries, you have a contention problem.
  • Database Lock Waits: Use database-specific tools (like sys.dm_os_wait_stats in SQL Server) to monitor how much time your queries spend waiting for locks.
-- SQL Server query to check for lock-related waits
SELECT 
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_os_wait_stats 
WHERE wait_type LIKE '%LCK_M%' -- Focus on lock waits
ORDER BY wait_time_ms DESC;
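On the application side, a lightweight way to track the conflict rate is a counter around SaveChangesAsync. A sketch (field and class names are illustrative; in production you would publish these through System.Diagnostics.Metrics or your APM tool):

```csharp
using System.Threading;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public static class ConcurrencyMetrics
{
    private static long _attempts;
    private static long _conflicts;

    // Fraction of SaveChanges calls that failed with a concurrency conflict.
    public static double ConflictRate
    {
        get
        {
            var attempts = Interlocked.Read(ref _attempts);
            return attempts == 0 ? 0 : (double)Interlocked.Read(ref _conflicts) / attempts;
        }
    }

    public static async Task<bool> TrySaveAsync(DbContext context)
    {
        Interlocked.Increment(ref _attempts);
        try
        {
            await context.SaveChangesAsync();
            return true;
        }
        catch (DbUpdateConcurrencyException)
        {
            Interlocked.Increment(ref _conflicts);
            return false;
        }
    }
}
```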

Decision Matrix: A Guide to Choosing Your Strategy

This matrix provides a conceptual framework for making a decision. The “conflict rate” is the percentage of update attempts that fail due to a concurrency exception.

| Workload Type | Estimated Conflict Rate | Recommended Approach | Key Considerations |
| --- | --- | --- | --- |
| User Profiles, Settings | < 1% | Optimistic | Default choice, minimal overhead. |
| Product Catalog Updates | 1-5% | Optimistic with Retry | Monitor exception rate and retry depth. |
| Inventory, Counters | 5-15% | Hybrid or Pessimistic | High contention makes retries expensive. |
| Financial Transactions | > 10% or Critical | Pessimistic | Data integrity is non-negotiable. |
| Real-time Bidding/Scores | Very High | Pessimistic | Transactions must be fast to minimize lock duration. |

Generally, if your conflict rate creeps above 5%, the overhead of handling optimistic concurrency exceptions and retries often starts to outweigh the cost of pessimistic locking.

Final Recommendations

Don’t treat EF Core’s default optimistic strategy as a “free” feature. It’s a powerful tool, but its effectiveness depends entirely on your workload.

  1. Start with Measurement: Before changing your code, instrument your current optimistic strategy. Log concurrency exceptions and identify which entities or operations are causing the most conflicts.
  2. Identify the Hotspots: Is the problem widespread, or is it confined to a few specific tables or rows?
  3. Be Surgical: Apply pessimistic locking only where it’s demonstrably needed. Avoid global changes.
  4. Tune for Your Database: If you use pessimistic locking, consider enabling snapshot isolation in your database (READ_COMMITTED_SNAPSHOT in SQL Server) to prevent read queries from being blocked by write locks.
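For SQL Server, the snapshot setting in point 4 is a one-time database option. A sketch (the database name is illustrative, and the command needs exclusive access to the database to complete):

```sql
-- Let readers see the last committed row version instead of
-- blocking behind write locks (SQL Server).
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON;
```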

Concurrency strategy isn’t a global architectural decision to be made once. It’s a per-entity, per-workload choice that should adapt as your application and its usage patterns evolve. Measure your conflict rates, understand the trade-offs, and choose wisely.

About the Author

Abhinaw Kumar is a software engineer who builds real-world systems: from resilient ASP.NET Core backends to clean, maintainable Angular frontends. With over 11 years in production development, he shares what actually works when you're shipping software that has to last.

Read more on the About page or connect on LinkedIn.

Frequently Asked Questions

What is optimistic concurrency in EF Core?

Optimistic concurrency assumes conflicts are rare. EF Core uses a version token (like a timestamp or rowversion) to check if data has changed between when it was read and when it’s being saved. If a change is detected, it throws a DbUpdateConcurrencyException.
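If you prefer configuration over attributes, the same token can be declared with the fluent API. A minimal sketch inside your context class (the AppDbContext name is illustrative):

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public DbSet<Order> Orders => Set<Order>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Fluent equivalent of the [Timestamp] attribute: marks RowVersion
        // as a store-generated, automatically checked concurrency token.
        modelBuilder.Entity<Order>()
            .Property(o => o.RowVersion)
            .IsRowVersion();
    }
}
```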

When should you consider pessimistic concurrency?

Consider pessimistic concurrency for high-contention scenarios where data integrity is absolutely critical and the cost of a conflict is high. Examples include financial systems or inventory management during a flash sale. It locks data at the source to prevent conflicts from ever happening.

What are the potential costs of optimistic concurrency?

Under heavy write contention, the application can experience ‘retry storms,’ where multiple clients repeatedly fail to save data and retry. This can lead to increased CPU usage, more database round trips, and higher application latency.

Does pessimistic locking hurt performance?

Yes, it can. By locking rows, pessimistic concurrency can block other operations that need to access that same data, leading to lower throughput. Long-held locks can also increase the risk of deadlocks, where two or more processes are stuck waiting for each other to release a lock.

How can I decide which strategy to use?

A good starting point is to analyze the nature of your data and workload. Start with optimistic concurrency (the default) for low-contention data. For entities prone to frequent write conflicts (e.g., >5% conflict rate), a pessimistic or hybrid approach might be more suitable.

Can I mix optimistic and pessimistic strategies?

Absolutely. A hybrid approach is often the most effective solution. You can use optimistic concurrency by default and apply pessimistic locking selectively only for specific, high-contention operations or on ‘hot’ rows of data.
