TL;DR:

Async/await improves scalability, not speed. Use it strategically for I/O-bound operations or high concurrency. Overusing async in enterprise apps can lead to slower performance, harder debugging, and unnecessary complexity.

I’ll be honest: I used to slap async and await on everything. Database calls, file operations, even simple method calls that returned in milliseconds. I thought I was writing “modern” C# code. Turns out, I was just making my applications more complex without any real performance gain.

If you’ve been following the “async all the things” mentality that dominated C# discussions around 2018, you’re not alone. But here’s the reality: async isn’t a magic performance booster. It’s a tool for specific concurrency scenarios, and overusing it creates more problems than it solves.

Important Distinction: Scalability != Speed
Async improves scalability, the ability of your system to handle more concurrent users without blocking threads.
It usually does not improve the speed of individual operations. In fact, async can make a single request slightly slower due to state machine overhead.

Think of it this way:

  • Synchronous code: Serve 50 people quickly, but only 50 at a time.
  • Async code: Serve 5000 people simultaneously, but each might wait a bit longer.

Use async to handle more requests, not to make a single one faster.
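
To make that concrete, here’s a minimal sketch (my own illustration, not code from the benchmark project discussed later) of two ASP.NET Core endpoints that each take roughly 500 ms. A single caller sees the same response time either way; the difference is what the thread pool thread does while waiting.

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class ReportsController : ControllerBase
{
    [HttpGet("sync")]
    public string GetReportSync()
    {
        Thread.Sleep(500);       // blocks a thread pool thread for the entire 500 ms
        return "done";
    }

    [HttpGet("async")]
    public async Task<string> GetReportAsync()
    {
        await Task.Delay(500);   // thread goes back to the pool while waiting
        return "done";           // same latency for one caller; better scalability under load
    }
}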

This isn’t an anti-async rant. Async/await is powerful when used correctly. The problem is that most enterprise C# applications don’t need it everywhere; they need it strategically.

The Async Hype: How We Got Here

The push for async everywhere started with ASP.NET Core’s scalability guidance. Microsoft’s documentation emphasized how async controllers could handle thousands of concurrent requests without blocking threads. This advice made sense for high-traffic web applications, but it got applied universally.

Blog posts and conference talks reinforced this message: “Use async for all I/O operations!” The problem? Most enterprise applications aren’t Netflix or Stack Overflow. They’re internal business systems serving dozens or hundreds of users, not thousands of concurrent connections.

Personal Experience:
In 2019, I rewrote an entire order processing service to use async/await because I thought it would improve performance. After deployment, response times actually got worse. The service was handling maybe 50 concurrent requests, and the async overhead was more expensive than the alleged “thread blocking” I was trying to avoid.

When Async Actually Helps

Async shines in specific scenarios where you’re genuinely I/O-bound with high latency operations:

  • Remote HTTP calls: Third-party APIs, microservice communication
  • Cloud storage operations: S3, Azure Blob, file uploads
  • Message queue operations: Service Bus, RabbitMQ
  • High-concurrency web applications: 1000+ simultaneous users

The key insight: async improves scalability under load, not raw performance. If your application serves 50 users simultaneously, async won’t make individual requests faster; it might make them slower.

// This makes sense - calling external APIs with high latency
public async Task<WeatherData> GetWeatherAsync(string city)
{
    // In production, reuse a single HttpClient (e.g., via IHttpClientFactory)
    // instead of creating one per call
    using var client = new HttpClient();
    var response = await client.GetAsync($"https://api.weather.com/v1/current?city={city}");
    response.EnsureSuccessStatusCode();
    var json = await response.Content.ReadAsStringAsync();
    return JsonSerializer.Deserialize<WeatherData>(json);
}

Pro Tip:
Async is great when you have 5000+ concurrent connections; most enterprise apps don’t. If you’re not hitting thread pool exhaustion, you’re probably optimizing the wrong thing.

When Async Just Adds Complexity

Database calls are the biggest offender. Modern database connections are fast, especially with connection pooling. Adding async to simple CRUD operations often creates overhead without benefit:

// Probably unnecessary async - EF Core query executes in ~42.7μs with 5% overhead
public async Task<Customer> GetCustomerAsync(int id)
{
    return await _context.Customers.FirstOrDefaultAsync(c => c.Id == id);
}

// This is often better for simple queries - executes in ~41.5μs
public Customer GetCustomer(int id)
{
    return _context.Customers.FirstOrDefault(c => c.Id == id);
}

A Middle Ground: ValueTask for Hot Paths

If you’re worried about allocations but still need async for framework consistency (e.g., your repository interface is async), consider using ValueTask<T> in hot paths where results are often cached or already computed.

This avoids unnecessary Task allocations when no actual asynchrony is involved.

public ValueTask<Customer> GetCustomerAsync(int id)
{
    if (_cache.TryGetValue(id, out var customer))
        return new ValueTask<Customer>(customer);

    // Falls back to actual async DB call only when needed
    return new ValueTask<Customer>(_context.Customers.FirstOrDefaultAsync(c => c.Id == id));
}

Why use ValueTask?

  • Avoids heap allocations when the result is already computed.
  • Reduces async overhead in high-frequency methods.
  • Maintains consistency when your API surface expects async.

Performance Note: Our benchmarks show ValueTask performs significantly better than Task for immediate results - only 1.6x slower than sync (32.7ns vs 19.9ns) compared to Task’s 72x overhead (1,435ns). However, for database operations, ValueTask was surprisingly 42% slower than sync, suggesting EF Core’s async implementation adds complexity that ValueTask doesn’t optimize away.

Marc Gravell’s research shows similar patterns; see his detailed benchmarks for the full numbers.

CPU-bound operations should stay synchronous. Wrapping synchronous code in Task.Run doesn’t improve performance; it adds context-switching overhead:

// Don't do this - you're adding 60% overhead and 10x memory allocations
public async Task<decimal> CalculateTotalAsync(List<OrderItem> items)
{
    return await Task.Run(() => items.Sum(x => x.Price * x.Quantity));
}

// Just do this - 60% faster with 10x less memory usage
public decimal CalculateTotal(List<OrderItem> items)
{
    return items.Sum(x => x.Price * x.Quantity);
}

Debug Note:
I spent hours debugging a deadlock at 2 AM because someone added ConfigureAwait(false) to a method that didn’t need to be async in the first place. The async call stack made the error harder to trace than it needed to be.

The Performance Fallacy: Async Isn’t Always Faster

Async has overhead. The C# compiler generates a state machine for every async method. This adds memory allocation and CPU cycles.

Microsoft’s own research shows that async methods have substantial performance overhead when there’s no actual I/O benefit. Our benchmarks demonstrate this clearly: simple async method calls can be 72x slower than their synchronous counterparts, while CPU-bound work wrapped in Task.Run shows 60% overhead with 10x more memory allocations.

// This generates a state machine with heap allocation
public async Task<string> GetDataAsync()
{
    var data = await _repository.GetDataAsync();
    return data.ToString();
}

// This doesn't
public string GetData()
{
    var data = _repository.GetData();
    return data.ToString();
}

Performance Research: Detailed benchmarks by Marc Gravell show that async operations can have ~300 bytes per operation overhead on x64 platforms. For high-frequency operations, this adds up quickly.

Concrete Performance Numbers: Benchmark Results

To support the claims in this post, I ran actual benchmarks comparing async vs sync performance across different scenarios. Here are the real numbers from a .NET 8 application running on an Intel i5-1345U:

Database Query Performance

| Method              | Mean     | Ratio | Allocated | Alloc Ratio |
|-------------------- |---------:|------:|----------:|------------:|
| GetCustomerSync     | 41.5 μs  | 1.00  |   77,456 B|       1.00  |
| GetCustomerAsync    | 42.7 μs  | 1.05  |   77,608 B|       1.002 |
| GetCustomerValueTask| 57.9 μs  | 1.42  |   77,656 B|       1.003 |

Key findings:

  • Async adds 5% overhead for simple database queries (42.7μs vs 41.5μs)
  • ValueTask is 42% slower than sync for this EF Core scenario
  • Memory allocations are nearly identical, suggesting the overhead is CPU-bound

CPU-Bound Calculation Performance

| Method                    | Mean     | Ratio | Allocated | Alloc Ratio |
|-------------------------- |---------:|------:|----------:|------------:|
| CalculateTotalSync        | 19.3 μs  | 1.00  |      40 B |       1.00  |
| CalculateTotalAsync       | 30.8 μs  | 1.60  |     432 B |      10.8x  |
| CalculateTotalAsyncDirect | 37.7 μs  | 1.95  |     206 B |       5.2x  |

Key findings:

  • Task.Run overhead: 60% slower execution (30.8μs vs 19.3μs)
  • Memory allocations: 10.8x more memory for Task.Run approach
  • Direct async wrapper: 95% slower, still 5x more memory

Simple Method Call Performance

| Method              | Mean      | Ratio | Allocated | Alloc Ratio |
|-------------------- |----------:|------:|----------:|------------:|
| ProcessDataSync     |  19.9 ns  | 1.00  |      80 B |       1.00  |
| ProcessDataAsync    | 1,435 ns  | 72x   |     279 B |       3.5x  |
| ProcessDataValueTask|  32.7 ns  | 1.6x  |      80 B |       1.00  |

Key findings:

  • Async overhead is massive: 72x slower for simple operations (1,435ns vs 19.9ns)
  • ValueTask optimization: Only 1.6x slower when returning immediate results
  • Memory efficiency: ValueTask has same allocations as sync

What These Numbers Tell Us

  1. Database queries: Async overhead is minimal (5%) but still present
  2. CPU-bound work: Async wrappers add significant overhead (60-95% slower)
  3. Simple operations: Async can be catastrophically slow (72x overhead)
  4. ValueTask sweet spot: Good middle ground for immediate results scenarios

These benchmarks confirm that async isn’t free - it has measurable overhead that varies dramatically based on the operation type.
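
If you want to reproduce these kinds of measurements, the sketch below shows roughly how the simple-method-call comparison can be wired up with BenchmarkDotNet. The class and method names are illustrative; they are not copied from my actual benchmark project.

using System.Linq;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

[MemoryDiagnoser]
public class SimpleCallBenchmarks
{
    private readonly int[] _data = Enumerable.Range(1, 100).ToArray();

    [Benchmark(Baseline = true)]
    public int ProcessDataSync() => _data.Sum();

    [Benchmark]
    public async Task<int> ProcessDataAsync() => await Task.FromResult(_data.Sum());

    [Benchmark]
    public ValueTask<int> ProcessDataValueTask() => new ValueTask<int>(_data.Sum());
}

public class Program
{
    // Run with: dotnet run -c Release
    public static void Main() => BenchmarkRunner.Run<SimpleCallBenchmarks>();
}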

Real-World Enterprise Scenarios

API Controllers: If your controller calls the database and returns results, sync is often fine unless you’re CPU-bound or making multiple remote calls:

[ApiController]
[Route("api/[controller]")]
public class CustomersController : ControllerBase
{
    private readonly CustomerService _customerService;
    private readonly OrderService _orderService;

    // This is fine for most enterprise apps
    [HttpGet("{id}")]
    public ActionResult<Customer> GetCustomer(int id)
    {
        var customer = _customerService.GetCustomer(id);
        return customer == null ? NotFound() : Ok(customer);
    }

    // Use async when you're actually making I/O-bound calls
    [HttpGet("{id}/orders")]
    public async Task<ActionResult<List<Order>>> GetCustomerOrders(int id)
    {
        // This makes multiple database calls or calls external services
        var orders = await _orderService.GetOrdersWithShippingStatusAsync(id);
        return Ok(orders);
    }
}

Batch Jobs: Sequential processing rarely benefits from async. If you’re processing records one by one, sync is simpler and often faster:

// For sequential processing, sync is cleaner
public void ProcessOrders(List<Order> orders)
{
    foreach (var order in orders)
    {
        _paymentService.ProcessPayment(order);
        _inventoryService.UpdateInventory(order);
        _emailService.SendConfirmation(order);
    }
}

// Use async when you can parallelize
public async Task ProcessOrdersAsync(List<Order> orders)
{
    var tasks = orders.Select(async order => 
    {
        await _paymentService.ProcessPaymentAsync(order);
        await _inventoryService.UpdateInventoryAsync(order);
        await _emailService.SendConfirmationAsync(order);
    });
    
    await Task.WhenAll(tasks);
}

Personal Experience:
In a financial services project, we added async to a report generation service because it “seemed right.” The reports took longer to generate, debugging became harder, and we gained zero throughput improvement. We reverted to sync and saw better performance.

When Should You Actually Use Async?

A Simple Rule of Thumb: Use Async When L.A.T.E.

Before you decide to make a method async, ask yourself: Is it L.A.T.E.?

L.A.T.E. =

  • Latency-bound
    • Is the operation waiting on something slow like a network or disk?
  • Awaiting remote I/O
    • Are you calling APIs, message queues, or cloud services?
  • Thread pool constrained
    • Is your app under high concurrent load (e.g., web servers, message processors)?
  • Expensive to block
    • Would blocking cause bottlenecks in user-facing APIs or background workers?

If none of these are true, async might not help and could even hurt performance.

Quick Decision Table

| Scenario                     | Use Async? | Why                                     |
|------------------------------|------------|-----------------------------------------|
| Database query (< 100ms)     | Usually No | Connection pooling handles concurrency  |
| HTTP API call                | Yes        | Network latency justifies async         |
| File I/O operations          | Depends    | Small files: no. Large files: yes       |
| CPU-bound calculation        | No         | No I/O waiting involved                 |
| Multiple parallel operations | Yes        | Concurrent processing benefits          |

The real question: Are you waiting on I/O that could benefit from yielding the thread? If not, stay synchronous.

Official Microsoft Guidance

Microsoft’s documentation explicitly states: “In most applications using async will have no noticeable benefits and even could be detrimental”. They recommend profiling and measuring impact before committing to async.

The Entity Framework Core documentation explains when async database operations are beneficial, primarily for web applications under load, not universally.

Best Practices for Async in Enterprise C#

Profile before you async. Use tools like dotMemory or Application Insights to identify actual bottlenecks:

// Measure thread pool usage
ThreadPool.GetAvailableThreads(out int workerThreads, out int completionPortThreads);
_logger.LogInformation("Available threads: {WorkerThreads}/{CompletionPortThreads}", 
    workerThreads, completionPortThreads);

Keep async at application boundaries. Don’t let async infect your domain logic:

// Good - async stays at the controller level
[HttpPost]
public async Task<IActionResult> CreateOrder(CreateOrderRequest request)
{
    var order = _orderService.CreateOrder(request); // Sync domain logic
    await _orderRepository.SaveAsync(order);        // Async I/O
    return CreatedAtAction(nameof(GetOrder), new { id = order.Id }, order);
}

Common Async Overuse Mistakes

Wrapping CPU-bound code in Task.Run:

// Wrong - adds overhead without benefit
public async Task<int> CalculateAsync(int[] numbers)
{
    return await Task.Run(() => numbers.Sum());
}

// Right
public int Calculate(int[] numbers)
{
    return numbers.Sum();
}

Adding async for “future-proofing”:

// Wrong - adds complexity for hypothetical future needs
public async Task<User> GetUserAsync(int id)
{
    return await Task.FromResult(_users.Find(u => u.Id == id));
}

// Right - add async when you actually need it
public User GetUser(int id)
{
    return _users.Find(u => u.Id == id);
}

When Async Infects Your Code: The Library Contamination Problem

Sometimes you don’t even want async, but you’re forced into it because third-party libraries only expose async APIs.

This is what I call “async contamination.”

Common Cases

  • EF Core
    _context.Customers.FirstOrDefaultAsync() forces async into your service, even if you’re just fetching one record.

  • HTTP clients
    HttpClient is async-first: convenience methods like GetAsync() and GetStringAsync() have no sync equivalents (a synchronous Send() only appeared in .NET 5).

  • Cloud SDKs
    Azure, AWS, and GCP libraries are async-first, sometimes async-only.

How to Handle It

Keep async at the boundaries of your application.

  • Controllers: Accept async Task<IActionResult>
  • Repositories / APIs: Use async if the library demands it
  • Domain Layer / Business Logic: Keep synchronous unless you’re truly parallelizing work

Why?
Async everywhere makes your core domain logic harder to test, debug, and reason about.
Async should be an infrastructure concern, not a business logic concern.

Example

// Good: Async stays in infrastructure
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public Customer GetCustomer(int id)
    {
        return _repository.GetCustomer(id); // Sync domain logic
    }
}

// Infrastructure layer - forced async due to EF Core
public class CustomerRepository : ICustomerRepository
{
    private readonly AppDbContext _context;

    public Customer GetCustomer(int id)
    {
        return _context.Customers.FirstOrDefaultAsync(c => c.Id == id).GetAwaiter().GetResult();
    }
}

Warning

Using .GetAwaiter().GetResult() can cause deadlocks in UI apps or ASP.NET if not handled carefully.

In ASP.NET Core, it’s generally safe because there’s no synchronization context, but be cautious.
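
When the library does offer a genuine synchronous path (EF Core’s LINQ operators do, as in the earlier GetCustomer example), prefer it over blocking on the async version. Here’s a minimal sketch of the repository above rewritten that way, reusing the same Customer, AppDbContext, and ICustomerRepository types from the example:

// Sketch: use EF Core's synchronous query API instead of blocking on the async one
public class CustomerRepository : ICustomerRepository
{
    private readonly AppDbContext _context;

    public CustomerRepository(AppDbContext context) => _context = context;

    public Customer GetCustomer(int id)
    {
        // Synchronous EF Core query: no sync-over-async bridge, no deadlock risk
        return _context.Customers.FirstOrDefault(c => c.Id == id);
    }
}

This keeps the repository contract synchronous without hiding a blocked Task inside it.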

Performance Benchmarks and Research

For detailed performance analysis, you can clone and run the benchmark project yourself to see the differences on your own hardware. It includes tests for database operations, CPU-bound calculations, and simple method calls.

Wrapping Up: My Async Rule of Thumb

Here’s my personal rule: If I’m not solving a concurrency problem, I start synchronous.

Most enterprise applications have clear bottlenecks: database queries, external API calls, file operations. Profile your application, identify the real performance issues, then apply async strategically to those specific problems.

Start simple. Optimize later when you prove there’s a need. I’ve seen more bugs from async overuse than from sync bottlenecks. Your future self (and your teammates) will thank you for keeping the code simple until complexity is justified.

Heads-up:
The next time someone suggests “making everything async,” ask them to show you the performance metrics that justify the added complexity. You might be surprised by the answer.

Remember: async/await is a tool, not a best practice. Use it when it solves actual concurrency problems, not because it feels more “modern.”

About the Author

Abhinaw Kumar is a software engineer who builds real-world systems: from resilient ASP.NET Core backends to clean, maintainable Angular frontends. With 11+ years in production development, he shares what actually works when you're shipping software that has to last.

Read more on the About page or connect on LinkedIn.
