TL;DR: Speed vs User Experience Trade-offs
Task.WhenAll() wins for raw performance:
- 10-40x faster total processing time
- Maximum throughput and parallel execution
- Best when you need all data before proceeding
IAsyncEnumerable wins for user experience:
- Immediate responsiveness - show results as they arrive
- Constant memory usage regardless of dataset size
- Graceful error handling - continue processing when some items fail
- Better perceived performance - users see progress immediately
The choice isn’t about which is “better”—it’s about what your users need most: speed or responsiveness.
Picture this: you’re building a dashboard that needs to load 10,000 customer records from multiple APIs. Your first instinct? Create a List<Task>, fire off all requests simultaneously with Task.WhenAll(), and wait for the magic to happen.
This works great for raw throughput: our benchmarks show Task.WhenAll() is 10-40x faster for total processing time. But what about user experience? Your users stare at a loading spinner for 15 seconds while all data loads, even though the first results are available in milliseconds.
I’ve seen this exact scenario in production dashboards where users complained about “slow” APIs that were actually fast—they just had to wait for everything to complete before seeing anything.
The choice isn’t about which approach is objectively “better.” It’s about trade-offs: speed vs responsiveness, throughput vs user experience.
Task.WhenAll() maximizes performance through parallel execution, while IAsyncEnumerable<T> optimizes for immediate feedback and a streaming user experience, even though it is significantly slower in total processing time.
The Trade-offs: List<Task> + Task.WhenAll()
Here’s the pattern most C# developers know by heart:
public async Task<List<CustomerData>> LoadCustomersAsync(IEnumerable<int> customerIds)
{
    // HttpClient should be injected via DI in production to avoid socket exhaustion
    var tasks = customerIds.Select(async id =>
    {
        using var client = new HttpClient();
        var response = await client.GetAsync($"https://api.example.com/customers/{id}");
        return await response.Content.ReadFromJsonAsync<CustomerData>();
    }).ToList();

    return (await Task.WhenAll(tasks)).ToList();
}
This code provides excellent throughput and works well for many scenarios. However, it has specific trade-offs:
Memory Scaling Issues
With 50,000 customer records at 2KB each, you’re buffering 100MB+ of data before processing a single item. The actual memory footprint is often 3-5x higher due to task overhead, HTTP client buffers, and GC pressure.
User Experience Impact
Users can’t see any results until every single API call completes. If one endpoint takes 30 seconds due to network issues, your entire UI waits 30 seconds—even though 90% of the data arrived in the first few seconds.
All-or-Nothing Error Handling
One failing task can cause Task.WhenAll() to throw, potentially losing all successfully retrieved data. You need additional error handling logic to make operations resilient.
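One common mitigation is to wrap each per-item call so that no individual task ever faults, then partition the results afterwards. Here is a minimal sketch of that pattern; LoadCustomerAsync is a hypothetical stand-in for whatever per-item call you make:

```csharp
public async Task<(List<CustomerData> Succeeded, List<Exception> Failed)>
    LoadCustomersResilientAsync(IEnumerable<int> customerIds)
{
    // Wrap each call so the task itself never faults; Task.WhenAll
    // then can't discard work that already succeeded.
    async Task<(CustomerData? Customer, Exception? Error)> LoadOneAsync(int id)
    {
        try
        {
            return (await LoadCustomerAsync(id), null); // hypothetical per-item call
        }
        catch (Exception ex)
        {
            return (null, ex);
        }
    }

    var results = await Task.WhenAll(customerIds.Select(LoadOneAsync));

    return (
        results.Where(r => r.Error is null).Select(r => r.Customer!).ToList(),
        results.Where(r => r.Error is not null).Select(r => r.Error!).ToList());
}
```

The caller still waits for everything to finish, but a single flaky endpoint no longer costs you the other 499 successful responses.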
Real-World Example: I once worked on a reporting dashboard where users complained about “slow” data loading. The APIs were actually fast (50-200ms each), but users waited 10+ seconds to see anything because we waited for all 500 API calls to complete. Switching to streaming results improved perceived performance dramatically.
These aren’t flaws; they’re trade-offs. Task.WhenAll() optimizes for maximum throughput, while IAsyncEnumerable optimizes for responsiveness and user experience.
Streaming Data in C# with IAsyncEnumerable<T>
IAsyncEnumerable<T> represents an asynchronous sequence of data. Instead of loading everything upfront, it produces and consumes items one at a time as they become available.
Here’s the streaming version of our customer loader:
public async IAsyncEnumerable<CustomerData> LoadCustomersStreamAsync(
    IEnumerable<int> customerIds,
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    // Inject HttpClient through DI in production
    using var client = _httpClientFactory.CreateClient();

    foreach (var id in customerIds)
    {
        cancellationToken.ThrowIfCancellationRequested();

        CustomerData? customer = null;
        try
        {
            var response = await client.GetAsync($"https://api.example.com/customers/{id}", cancellationToken);
            customer = await response.Content.ReadFromJsonAsync<CustomerData>(cancellationToken: cancellationToken);
        }
        catch (Exception ex)
        {
            // Log error but continue processing other customers
            _logger.LogError(ex, "Failed to load customer {CustomerId}", id);
            continue;
        }

        if (customer != null)
            yield return customer;
    }
}
// Consuming the stream
await foreach (var customer in LoadCustomersStreamAsync(customerIds))
{
    // Process each customer immediately as it arrives
    await ProcessCustomerAsync(customer);
}
Production Note: The CustomerData type must be serializable for JSON deserialization. Use [JsonPropertyName] attributes for property mapping if needed.
Immediate Benefits
Constant memory usage. You only hold one customer record in memory at a time, regardless of whether you’re processing 100 or 100,000 records.
Instant responsiveness. The UI can display the first customer as soon as it arrives, creating a smooth streaming experience.
Graceful error handling. Failed requests don’t break the entire stream. You can log errors, skip problematic items, and continue processing.
When C# Async Streaming Becomes Essential
EF Core Database Queries with Large Result Sets
Entity Framework Core has native support for streaming query results:
public async IAsyncEnumerable<Order> GetLargeOrderSetAsync()
{
    await foreach (var order in _context.Orders
        .Where(o => o.CreatedDate > DateTime.Now.AddYears(-1))
        .AsAsyncEnumerable())
    {
        yield return order;
    }
}
EF Core Streaming Tip: Use AsAsyncEnumerable() with LINQ queries to stream database results directly without buffering. This works particularly well with SQL Server when processing millions of rows.
File Processing Operations
Reading massive CSV files or processing log files becomes memory-efficient:
public async IAsyncEnumerable<LogEntry> ParseLogFileAsync(string filePath)
{
    using var reader = new StreamReader(filePath);

    string? line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        if (TryParseLogEntry(line, out var entry))
            yield return entry;
    }
}
External API Integrations
When consuming paginated APIs or webhook streams, you can process data as it arrives instead of buffering everything.
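A paginated API maps naturally onto an async stream: fetch a page, yield its items, follow the "next page" link, repeat. The sketch below assumes a hypothetical endpoint and Page<T> DTO shape; adapt both to your API:

```csharp
public async IAsyncEnumerable<CustomerData> GetAllCustomersAsync(
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    var client = _httpClientFactory.CreateClient();

    // Hypothetical endpoint; real APIs vary in how they expose paging
    string? nextUrl = "https://api.example.com/customers?page=1";

    while (nextUrl != null)
    {
        cancellationToken.ThrowIfCancellationRequested();

        var page = await client.GetFromJsonAsync<Page<CustomerData>>(nextUrl, cancellationToken);
        if (page is null)
            yield break;

        foreach (var customer in page.Items)
            yield return customer;   // callers see each page as soon as it lands

        nextUrl = page.NextPageUrl;  // null on the last page ends the stream
    }
}

// Assumed response shape for this sketch
public record Page<T>(List<T> Items, string? NextPageUrl);
```

Callers consume it with a plain await foreach and never hold more than one page in memory, no matter how many pages the API returns.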
Advanced IAsyncEnumerable Patterns in .NET
Cancellation Support for Responsive Apps
public async IAsyncEnumerable<T> ProcessItemsAsync<T>(
    IEnumerable<T> items,
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    foreach (var item in items)
    {
        cancellationToken.ThrowIfCancellationRequested();

        var processed = await ProcessSingleItemAsync(item);
        yield return processed;
    }
}
Exception Handling Within Streams
Prevent complete failure with proper error handling:
public async IAsyncEnumerable<Result<T>> SafeProcessItemsAsync<T>(IEnumerable<T> items)
{
    foreach (var item in items)
    {
        Result<T> result;
        try
        {
            var processed = await ProcessItemAsync(item);
            result = Result<T>.Success(processed);
        }
        catch (Exception ex)
        {
            result = Result<T>.Failure(ex.Message);
        }

        yield return result;
    }
}
ConfigureAwait Best Practices
Prevent deadlocks in synchronization contexts:
public async IAsyncEnumerable<T> GetDataAsync<T>()
{
    var data = await FetchDataAsync().ConfigureAwait(false);

    foreach (var item in data)
    {
        yield return await TransformAsync(item).ConfigureAwait(false);
    }
}
Combining Both Approaches: Parallel Streaming
You can get the best of both worlds by combining controlled parallelism with streaming:
public async IAsyncEnumerable<CustomerData> LoadCustomersOptimizedAsync(
    IEnumerable<int> customerIds,
    int degreeOfParallelism = 10,
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    var semaphore = new SemaphoreSlim(degreeOfParallelism, degreeOfParallelism);
    var channel = Channel.CreateUnbounded<CustomerData>();
    var writer = channel.Writer;

    // Start all tasks but don't await them yet
    var tasks = customerIds.Select(async id =>
    {
        await semaphore.WaitAsync(cancellationToken);
        try
        {
            var customer = await _apiSimulator.GetCustomerAsync(id);
            await writer.WriteAsync(customer, cancellationToken);
        }
        catch (Exception ex)
        {
            _logger.LogError(ex, "Failed to load customer {CustomerId}", id);
        }
        finally
        {
            semaphore.Release();
        }
    }).ToList();

    // Complete the writer when all tasks finish, even on cancellation,
    // so the reader below can drain and exit
    var allTasks = Task.WhenAll(tasks);
    _ = allTasks.ContinueWith(_ => writer.Complete(), TaskScheduler.Default);

    // Yield results as they become available
    await foreach (var customer in channel.Reader.ReadAllAsync(cancellationToken))
    {
        yield return customer;
    }

    await allTasks; // Ensure all tasks completed
}
This approach provides:
- Controlled parallelism: Process multiple items concurrently without overwhelming the system
- Immediate results: Stream data as soon as it becomes available
- Memory efficiency: Don’t buffer all results before yielding them
- Error resilience: Individual failures don’t stop the entire stream
Expert Tip: Use IAsyncEnumerable with parallel processing by combining it with Parallel.ForEachAsync() in .NET 6+. You get streaming benefits plus controlled parallelism without the memory explosion of Task.WhenAll(). Monitor performance with dotnet-counters to track memory usage and throughput.
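A sketch of that Parallel.ForEachAsync() variant (.NET 6+), reusing the _apiSimulator and channel idea from the previous example: parallelism is bounded by MaxDegreeOfParallelism, and results still stream through a channel instead of being buffered:

```csharp
public async IAsyncEnumerable<CustomerData> LoadCustomersParallelAsync(
    IEnumerable<int> customerIds,
    [EnumeratorCancellation] CancellationToken cancellationToken = default)
{
    var channel = Channel.CreateUnbounded<CustomerData>();

    // Producer: fan out with bounded parallelism, writing into the channel
    var producer = Task.Run(async () =>
    {
        try
        {
            await Parallel.ForEachAsync(
                customerIds,
                new ParallelOptions
                {
                    MaxDegreeOfParallelism = 10,
                    CancellationToken = cancellationToken
                },
                async (id, ct) =>
                {
                    var customer = await _apiSimulator.GetCustomerAsync(id);
                    await channel.Writer.WriteAsync(customer, ct);
                });
        }
        finally
        {
            // Always complete the writer so the consumer loop can finish
            channel.Writer.Complete();
        }
    });

    // Consumer: yield each result as soon as any worker produces it
    await foreach (var customer in channel.Reader.ReadAllAsync(cancellationToken))
        yield return customer;

    await producer; // surface any failure from the parallel loop
}
```

Unlike the semaphore version, a thrown exception here faults the whole loop; add a try/catch inside the lambda if you want per-item resilience instead.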
The Reality Check: Performance Benchmarks
Based on actual benchmarks using BenchmarkDotNet, here’s what the data reveals:
| Approach | Item Count | Total Processing Time | Performance Difference | Memory Allocation |
| --- | --- | --- | --- | --- |
| Task.WhenAll() | 10 | 15.54 ms | Baseline | 49.98 KB |
| IAsyncEnumerable | 10 | 152.49 ms | 10x slower | 49.1 KB |
| Task.WhenAll() | 25 | 15.61 ms | Baseline | 124.26 KB |
| IAsyncEnumerable | 25 | 383.36 ms | 25x slower | 122.58 KB |
| Task.WhenAll() | 50 | 12.66 ms | Baseline | 248.1 KB |
| IAsyncEnumerable | 50 | 581.84 ms | 46x slower | 243.61 KB |
The Uncomfortable Truth
Task.WhenAll() is objectively faster. There’s no sugar-coating this: for raw processing speed, parallel execution wins by a massive margin.
But speed isn’t everything. The benchmark shows total processing time, not user experience metrics like:
- Time until first result appears
- UI responsiveness during operation
- Memory pressure on large datasets
- Error recovery capabilities
Key Performance Insights
Why Task.WhenAll() is faster: Parallel execution means all operations run simultaneously. With minimal API latency in our benchmark, this approach dominates.

Why IAsyncEnumerable is slower: Sequential processing means each item waits for the previous one to complete. For pure throughput, this is inefficient.

Memory efficiency: IAsyncEnumerable uses 2-4% less memory per operation, but the real advantage comes with streaming processing: you never buffer the full result set in memory.

Time to first result: Both approaches show similar time to first result (~13-15 ms) because we’re simulating fast API calls. In real-world scenarios with variable latency, IAsyncEnumerable can show results much sooner.
Making the Right Choice: Speed vs Experience
The benchmark results clearly show that IAsyncEnumerable doesn’t “beat” Task.WhenAll() in raw performance. So when should you choose each approach?
Choose Task.WhenAll() when:
- Speed is paramount: You need maximum throughput and can wait for all results
- All-or-nothing processing: Your application requires the complete dataset before proceeding
- Simple error handling: You can handle failures globally and retry entire operations
- Batch processing: You’re processing data in background jobs where user experience isn’t a factor
Choose IAsyncEnumerable when:
- User experience matters: You want to show progress and prevent UI freezing
- Progressive processing: You can act on partial results while waiting for more
- Memory constraints: You’re dealing with large datasets that shouldn’t be buffered entirely
- Error resilience: You want to continue processing even when some operations fail
- Real-time feel: Users expect immediate feedback and responsiveness
The Real-World Impact
Consider a search results page loading 1000 items:
With Task.WhenAll():
- Users wait 2-3 seconds
- See all 1000 results at once
- Get “This page is slow” perception
- Lose all results if something fails
With IAsyncEnumerable:
- Users see first results in 50-100ms
- Results populate progressively
- Get “This is fast and responsive” perception
- Keep successful results even if some fail
The slower total processing time becomes irrelevant when users perceive better performance.
IAsyncEnumerable isn’t about being faster; it’s about being smarter with user experience, memory usage, and error handling. Sometimes the “slower” approach provides better overall value.
Frequently Asked Questions
Is IAsyncEnumerable faster than Task.WhenAll in C#?
No. For total processing time, Task.WhenAll() is 10-46x faster in these benchmarks. IAsyncEnumerable wins on time to first result, memory behavior with large datasets, and perceived responsiveness.

Does EF Core support IAsyncEnumerable?
Yes. Call AsAsyncEnumerable() on a LINQ query to stream results from the database without buffering the entire result set.

How do I handle exceptions in IAsyncEnumerable streams?
Catch exceptions inside the loop body (C# doesn’t allow yield return inside a try block that has a catch), log the failure or wrap it in a Result<T>, and continue so one bad item doesn’t end the stream.

Can I use IAsyncEnumerable with parallel processing?
Yes. Fan work out with a bounded set of tasks or Parallel.ForEachAsync(), push results through a Channel<T>, and yield from the channel reader, as in the parallel streaming section above.
References
- Microsoft Documentation: IAsyncEnumerable
- Entity Framework Core: Streaming Query Results
- .NET 6 Performance Improvements
- Async Streams in C# 8
- BenchmarkDotNet Performance Testing