Lock-Free Thread Synchronization
Interlocked operations provide atomic, thread-safe updates without the overhead of locks. For simple updates they are typically faster than traditional locking and never block the calling thread, making them well suited to high-throughput scenarios where many threads touch the same shared state.
The Interlocked class maps directly to CPU-level atomic instructions. When you increment a counter with Interlocked.Increment, the processor guarantees that the read-modify-write sequence executes as a single operation. No other thread can observe the variable in an intermediate state. This eliminates race conditions on shared counters, flags, and references without the complexity of lock statements.
You'll learn when to use Interlocked instead of locks, how to implement common patterns like thread-safe counters and compare-and-swap operations, and understand the performance benefits these techniques provide in concurrent applications.
Thread-Safe Counters with Interlocked.Increment
The most common use case for Interlocked is maintaining shared counters across threads. Without synchronization, incrementing a counter from multiple threads produces incorrect results because the increment operation isn't atomic—it involves reading the value, adding one, and writing back. Interlocked.Increment performs all three steps atomically.
This pattern appears frequently in metrics collection, connection pools, and request tracking, where multiple threads update the same counter simultaneously. Because threads never block, latency stays predictable even under load, although heavy contention on a single counter still incurs cache-line traffic between cores.
public class RequestCounter
{
    private int _unsafeCount;
    private int _interlockedCount;
    private readonly object _lockObj = new();
    private int _lockedCount;

    // UNSAFE: Race condition possible
    public void IncrementUnsafe()
    {
        _unsafeCount++; // Three operations: read, add, write
    }

    // SAFE: Atomic operation
    public void IncrementInterlocked()
    {
        Interlocked.Increment(ref _interlockedCount);
    }

    // SAFE: But slower than Interlocked
    public void IncrementLocked()
    {
        lock (_lockObj)
        {
            _lockedCount++;
        }
    }

    public void SimulateConcurrentRequests()
    {
        const int iterations = 100_000;
        var threads = new Thread[10];

        for (int i = 0; i < threads.Length; i++)
        {
            threads[i] = new Thread(() =>
            {
                for (int j = 0; j < iterations; j++)
                {
                    IncrementUnsafe();
                    IncrementInterlocked();
                    IncrementLocked();
                }
            });
            threads[i].Start();
        }

        foreach (var thread in threads)
            thread.Join();

        int expected = iterations * threads.Length;
        Console.WriteLine($"Expected: {expected:N0}");
        Console.WriteLine($"Unsafe count: {_unsafeCount:N0} (likely wrong)");
        Console.WriteLine($"Interlocked count: {_interlockedCount:N0}");
        Console.WriteLine($"Locked count: {_lockedCount:N0}");
    }
}
The unsafe increment will almost certainly show a count lower than expected due to lost updates when threads interleave operations. Both Interlocked and lock versions produce correct counts, but Interlocked avoids the overhead of acquiring and releasing locks on every operation.
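Increment has siblings covering the rest of the counter family: Interlocked.Decrement, Interlocked.Add for arbitrary deltas, and long overloads plus Interlocked.Read for 64-bit counters, which matters on 32-bit platforms where a plain long access can tear. A minimal sketch (the byte-count scenario is illustrative):

```csharp
using System;
using System.Threading;

long bytesReceived = 0; // 64-bit counter shared by all threads

var threads = new Thread[4];
for (int i = 0; i < threads.Length; i++)
{
    threads[i] = new Thread(() =>
    {
        for (int j = 0; j < 25_000; j++)
            Interlocked.Add(ref bytesReceived, 10); // add an arbitrary delta atomically
    });
    threads[i].Start();
}
foreach (var t in threads) t.Join();

// Interlocked.Read guarantees an untorn 64-bit read even on 32-bit platforms
Console.WriteLine(Interlocked.Read(ref bytesReceived)); // prints 1000000
```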
Compare-and-Swap with CompareExchange
Interlocked.CompareExchange implements the compare-and-swap pattern: it compares a variable to an expected value and only updates it if they match. This atomic operation returns the original value, letting you know whether the swap succeeded. It's the foundation for building lock-free data structures and optimistic concurrency patterns.
You'll use CompareExchange when updating shared state based on its current value. For example, implementing a lazy initialization pattern where multiple threads might try to initialize simultaneously, but you only want one initialization to succeed.
public class ExpensiveResource
{
    public ExpensiveResource()
    {
        Console.WriteLine("Initializing expensive resource...");
        Thread.Sleep(100); // Simulate expensive initialization
    }

    public string Id { get; } = Guid.NewGuid().ToString();
}

public class LazyResourceManager
{
    private ExpensiveResource? _resource;

    public ExpensiveResource GetResource()
    {
        // If already initialized, return immediately
        if (_resource != null)
            return _resource;

        // Create new instance
        var newResource = new ExpensiveResource();

        // Try to set it atomically - only succeeds if still null
        var original = Interlocked.CompareExchange(
            ref _resource, newResource, null);

        // If original was null, we won (our instance is now _resource)
        // If original wasn't null, another thread won (use their instance)
        return original ?? newResource;
    }
}

// Usage
var manager = new LazyResourceManager();
var tasks = Enumerable.Range(0, 5)
    .Select(_ => Task.Run(() =>
    {
        var resource = manager.GetResource();
        Console.WriteLine($"Got resource: {resource.Id}");
    }))
    .ToArray();
Task.WaitAll(tasks);
Even though five threads attempt initialization simultaneously, CompareExchange guarantees only one instance is ever published: every caller ends up with the same ExpensiveResource. Note that losing threads may still construct an instance and immediately discard it, so if construction has side effects, Lazy&lt;T&gt; is the safer choice. The atomic comparison and swap prevents the race without blocking any thread.
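CompareExchange also underpins the general lock-free retry loop: read the current value, compute the desired value, attempt the swap, and retry if another thread got there first. As a sketch, here is an atomic "record the maximum" helper built that way (InterlockedMax is our own name, not a framework method):

```csharp
using System;
using System.Threading;

// Usage: eight threads race to record a peak value
int peak = 0;
var threads = new Thread[8];
for (int i = 0; i < threads.Length; i++)
{
    int sample = (i + 1) * 100; // samples 100..800
    threads[i] = new Thread(() => AtomicMath.InterlockedMax(ref peak, sample));
    threads[i].Start();
}
foreach (var t in threads) t.Join();
Console.WriteLine(peak); // prints 800

public static class AtomicMath
{
    // Atomically raises location to candidate if candidate is larger.
    public static void InterlockedMax(ref int location, int candidate)
    {
        int current = Volatile.Read(ref location);
        while (candidate > current)
        {
            // Attempt the swap; the return value is what was actually there
            int witnessed = Interlocked.CompareExchange(ref location, candidate, current);
            if (witnessed == current)
                return;          // Swap succeeded
            current = witnessed; // Lost the race; re-check against the fresh value
        }
    }
}
```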
Atomic Value Replacement with Exchange
Interlocked.Exchange atomically replaces a variable's value and returns the previous value. Unlike CompareExchange which only swaps if a condition is met, Exchange always performs the replacement. This is useful for implementing flags, state switches, and single-writer scenarios where you need to know the previous value.
public enum CircuitState { Closed = 0, Open = 1 }

public class SimpleCircuitBreaker
{
    private int _state = (int)CircuitState.Closed;
    private int _failureCount;

    public bool TryExecute(Action operation)
    {
        // Read the current state atomically. CompareExchange with identical
        // value and comparand is a no-op write that returns the current value;
        // Volatile.Read(ref _state) would work equally well here.
        var currentState = (CircuitState)Interlocked.CompareExchange(
            ref _state, (int)CircuitState.Closed, (int)CircuitState.Closed);

        if (currentState == CircuitState.Open)
        {
            Console.WriteLine("Circuit is OPEN - request rejected");
            return false;
        }

        try
        {
            operation();
            Interlocked.Exchange(ref _failureCount, 0); // Reset on success
            return true;
        }
        catch (Exception ex)
        {
            var failures = Interlocked.Increment(ref _failureCount);
            Console.WriteLine($"Failure #{failures}: {ex.Message}");

            if (failures >= 3)
            {
                var previous = (CircuitState)Interlocked.Exchange(
                    ref _state, (int)CircuitState.Open);
                Console.WriteLine($"Circuit state: {previous} → Open");
            }
            return false;
        }
    }

    public void Reset()
    {
        Interlocked.Exchange(ref _state, (int)CircuitState.Closed);
        Interlocked.Exchange(ref _failureCount, 0);
        Console.WriteLine("Circuit manually reset to Closed");
    }
}
The circuit breaker tracks failures and opens after three consecutive failures, all without locks. Exchange updates the state atomically while Increment tracks failures thread-safely. This pattern is common in resilience libraries where multiple threads call the same protected operation.
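Exchange also makes a compact run-once guard: the first thread to swap a flag from 0 to 1 observes the old value 0 and proceeds, while every later caller observes 1 and skips. A minimal sketch (the OneShot type is our own, not a framework class):

```csharp
using System;
using System.Threading;

// Usage: five threads race, but the action runs exactly once
var runner = new OneShot();
int executions = 0;
var threads = new Thread[5];
for (int i = 0; i < threads.Length; i++)
{
    threads[i] = new Thread(() =>
        runner.TryRunOnce(() => Interlocked.Increment(ref executions)));
    threads[i].Start();
}
foreach (var t in threads) t.Join();
Console.WriteLine(executions); // prints 1

public class OneShot
{
    private int _hasRun; // 0 = not yet run, 1 = already run

    public bool TryRunOnce(Action action)
    {
        // Only the thread that swaps 0 -> 1 observes the old value 0
        if (Interlocked.Exchange(ref _hasRun, 1) == 0)
        {
            action();
            return true;
        }
        return false; // Another thread already won
    }
}
```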
Benchmark Lock vs Interlocked
Compare the performance of traditional locking and Interlocked operations. Note that this benchmark is single-threaded: it measures per-call overhead rather than behavior under contention, but the overhead gap alone is substantial.
Steps
- Scaffold project:
dotnet new console -n InterlockedBench
- Navigate:
cd InterlockedBench
- Install BenchmarkDotNet:
dotnet add package BenchmarkDotNet
- Replace Program.cs with benchmark code
- Run with
dotnet run -c Release
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="BenchmarkDotNet" Version="0.13.*" />
  </ItemGroup>
</Project>
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

BenchmarkRunner.Run<CounterBenchmarks>();

[MemoryDiagnoser]
public class CounterBenchmarks
{
    private int _interlockedCounter;
    private int _lockedCounter;
    private readonly object _lock = new();

    [Benchmark(Baseline = true)]
    public void InterlockedIncrement()
    {
        Interlocked.Increment(ref _interlockedCounter);
    }

    [Benchmark]
    public void LockedIncrement()
    {
        lock (_lock)
        {
            _lockedCounter++;
        }
    }

    [Benchmark]
    public int InterlockedAdd()
    {
        return Interlocked.Add(ref _interlockedCounter, 5);
    }

    [Benchmark]
    public int LockedAdd()
    {
        lock (_lock)
        {
            return _lockedCounter += 5;
        }
    }
}
What You'll See
| Method               | Mean     | Allocated |
|----------------------|---------:|----------:|
| InterlockedIncrement | ~2.5 ns  | -         |
| LockedIncrement      | ~15.0 ns | -         |
| InterlockedAdd       | ~3.0 ns  | -         |
| LockedAdd            | ~16.0 ns | -         |
In this uncontended microbenchmark, Interlocked operations run roughly 5-6x faster than their lock equivalents. Exact numbers vary with hardware and runtime version, and heavy multi-thread contention slows both approaches, but the relative ordering typically holds.
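The table above reflects a single thread. For a rough look at several threads hammering the same counter, a Stopwatch-based sketch like the one below can be used. The timings it prints are noisy and illustrative only (BenchmarkDotNet remains the right tool for real measurement), but the final totals confirm that both approaches stay correct under contention:

```csharp
using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

const int Iterations = 1_000_000;
const int ThreadCount = 4;

// Runs body on ThreadCount tasks concurrently and returns elapsed milliseconds
long Measure(Action body)
{
    var sw = Stopwatch.StartNew();
    var tasks = new Task[ThreadCount];
    for (int i = 0; i < ThreadCount; i++)
        tasks[i] = Task.Run(body);
    Task.WaitAll(tasks);
    return sw.ElapsedMilliseconds;
}

int interlockedCounter = 0, lockedCounter = 0;
var gate = new object();

long interlockedMs = Measure(() =>
{
    for (int i = 0; i < Iterations; i++)
        Interlocked.Increment(ref interlockedCounter);
});

long lockedMs = Measure(() =>
{
    for (int i = 0; i < Iterations; i++)
    {
        lock (gate) { lockedCounter++; }
    }
});

Console.WriteLine($"Interlocked: {interlockedMs} ms, total {interlockedCounter}");
Console.WriteLine($"lock:        {lockedMs} ms, total {lockedCounter}");
```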