Understanding Automatic Memory Management
It's tempting to sprinkle GC.Collect calls throughout your code when memory usage climbs. It works until your application starts pausing unpredictably, response times spike, and throughput drops. Manual garbage collection disrupts the runtime's carefully tuned algorithms that balance memory pressure against application performance.
.NET's memory manager handles allocation and cleanup automatically using a generational garbage collector. Instead of fighting the GC with manual interventions, you'll get better results by understanding how it works and aligning your code with its design. The runtime makes smart decisions about when to collect based on patterns it observes in your application.
You'll learn about the stages objects go through from allocation to collection, how the generational heap optimizes performance, when and why finalization happens, and how to work with the garbage collector instead of against it. This knowledge helps you write code that uses memory efficiently without micromanaging cleanup.
Object Allocation and the Managed Heap
When you create an object with new, the CLR allocates space on the managed heap and returns a reference. The heap is divided into segments, and allocation happens by bumping a pointer forward. This makes allocation extremely fast compared to native malloc, which searches for free blocks. The next object simply goes at the end of used space.
Objects smaller than 85,000 bytes are allocated on the Small Object Heap, while anything at or above that threshold (in practice, almost always arrays) goes to the Large Object Heap. This separation exists because large objects are expensive to move during compaction. The LOH uses a free list instead of sequential allocation and doesn't compact by default, which can lead to fragmentation if you repeatedly allocate and free large buffers.
using System;
using System.Runtime;
// Small object allocation
var smallObject = new byte[1024]; // 1KB
Console.WriteLine($"Small object allocated: {smallObject.Length} bytes");
// Large object allocation (LOH)
var largeObject = new byte[100_000]; // 100,000 bytes, above the 85,000-byte LOH threshold
Console.WriteLine($"Large object allocated: {largeObject.Length} bytes");
Console.WriteLine($"Managed heap before allocation: {GC.GetTotalMemory(false):N0}");
// Check which generation objects are in
var gen0Object = new object();
Console.WriteLine($"\nNewly allocated object is in Gen{GC.GetGeneration(gen0Object)}");
// Check LOH compaction mode
var compactionMode = GCSettings.LargeObjectHeapCompactionMode;
Console.WriteLine($"LOH compaction mode: {compactionMode}");
// Allocate multiple objects to observe patterns
var objects = new object[5];
for (int i = 0; i < objects.Length; i++)
{
objects[i] = new byte[50_000];
Console.WriteLine($"Object {i}: Gen{GC.GetGeneration(objects[i])}");
Output:
Small object allocated: 1024 bytes
Large object allocated: 100000 bytes
Managed heap in use: 123,456 bytes
Newly allocated object is in Gen0
LOH compaction mode: Default
Object 0: Gen0
Object 1: Gen0
Object 2: Gen0
Object 3: Gen0
Object 4: Gen0
All new allocations start in Gen0 unless they reach the LOH threshold. The GC.GetGeneration method shows which generation holds an object. Most short-lived objects never leave Gen0 because they get collected before promotion happens.
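Large objects never pass through Gen0 at all. Because the LOH is collected together with Gen2, GC.GetGeneration reports 2 for a large array from the moment it's allocated. A quick check (a minimal sketch; the 200,000-byte size is just an arbitrary value above the threshold):
using System;
// A large allocation lands directly on the LOH, which the GC sweeps
// as part of Gen2 collections, so it reports Gen2 immediately.
var lohArray = new byte[200_000]; // above the 85,000-byte threshold
Console.WriteLine($"LOH array generation: {GC.GetGeneration(lohArray)}"); // prints 2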
Generational Collection Strategy
The garbage collector uses three generations based on the observation that most objects die young. Gen0 holds new objects and collects frequently. Objects surviving a Gen0 collection promote to Gen1, and Gen1 survivors move to Gen2. This strategy makes collections faster because the GC only scans portions of the heap most likely to contain garbage.
Gen0 collections happen often and complete quickly because they examine a small memory region. Gen1 acts as a buffer between short-lived and long-lived objects. Gen2 collections are expensive because they scan the entire heap, but they happen infrequently. The runtime triggers collections based on memory pressure and allocation rates rather than fixed intervals.
using System;
// Create objects and watch them promote through generations
var longLived = new object();
Console.WriteLine($"Initial generation: {GC.GetGeneration(longLived)}");
Console.WriteLine($"Gen0 collections: {GC.CollectionCount(0)}");
Console.WriteLine($"Gen1 collections: {GC.CollectionCount(1)}");
Console.WriteLine($"Gen2 collections: {GC.CollectionCount(2)}\n");
// Force collections to demonstrate promotion
GC.Collect(0);
GC.WaitForPendingFinalizers();
Console.WriteLine("After Gen0 collection:");
Console.WriteLine($"Object generation: {GC.GetGeneration(longLived)}");
Console.WriteLine($"Gen0 collections: {GC.CollectionCount(0)}\n");
GC.Collect(1);
GC.WaitForPendingFinalizers();
Console.WriteLine("After Gen1 collection:");
Console.WriteLine($"Object generation: {GC.GetGeneration(longLived)}");
Console.WriteLine($"Gen1 collections: {GC.CollectionCount(1)}\n");
// Check maximum generation supported
Console.WriteLine($"Max generation: {GC.MaxGeneration}");
// Create temporary objects that won't survive collection
for (int i = 0; i < 1000; i++)
{
var temp = new byte[1024];
}
Console.WriteLine($"\nAfter creating temp objects:");
Console.WriteLine($"Long-lived object still in Gen{GC.GetGeneration(longLived)}");
Output:
Initial generation: 0
Gen0 collections: 0
Gen1 collections: 0
Gen2 collections: 0
After Gen0 collection:
Object generation: 1
Gen0 collections: 1
After Gen1 collection:
Object generation: 2
Gen1 collections: 1
Max generation: 2
After creating temp objects:
Long-lived object still in Gen2
The longLived object promotes to Gen1 after surviving a Gen0 collection, then to Gen2 after surviving Gen1. Temporary objects created in the loop die in Gen0 without promoting. This demonstrates why generational collection is efficient: short-lived objects never burden higher generations.
Determining Object Reachability
The garbage collector reclaims objects that are no longer reachable from your application. Reachability starts from roots like static fields, local variables, and CPU registers. The GC traces references from these roots to find all live objects. Anything not reached is garbage and gets collected.
Strong references keep objects alive. Weak references let you hold onto objects without preventing collection. This is useful for caches where you want to keep objects if memory is available but don't want to prevent cleanup when memory gets tight.
using System;
// Strong reference keeps object alive
var strongRef = new byte[1024];
Console.WriteLine("Created strong reference");
// Weak reference doesn't prevent collection
var target = new byte[1024];
var weakRef = new WeakReference(target);
Console.WriteLine($"WeakReference is alive: {weakRef.IsAlive}");
Console.WriteLine($"Target exists: {weakRef.Target != null}");
// Remove strong reference
target = null;
// Force collection
GC.Collect();
GC.WaitForPendingFinalizers();
Console.WriteLine($"\nAfter GC.Collect:");
Console.WriteLine($"WeakReference is alive: {weakRef.IsAlive}");
Console.WriteLine($"Strong ref still exists: {strongRef != null}");
// Demonstrate cache-like behavior
var cache = new WeakReference(new ExpensiveObject());
GC.Collect(); // nothing holds a strong reference, so the cached object is collectible
Console.WriteLine($"\nCache has value: {cache.IsAlive}");
if (cache.Target is ExpensiveObject obj)
{
Console.WriteLine("Using cached object");
}
else
{
Console.WriteLine("Cache was collected, recreating...");
}
class ExpensiveObject
{
public byte[] Data = new byte[10_000];
}
Output:
Created strong reference
WeakReference is alive: True
Target exists: True
After GC.Collect:
WeakReference is alive: False
Strong ref still exists: True
Cache has value: False
Cache was collected, recreating...
A weak reference doesn't keep its target alive: once no strong references remain, any collection can reclaim it. Strong references must be cleared for objects to become collectible. This pattern works well for caches that should release memory automatically rather than growing unbounded.
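On modern .NET you'd typically reach for the generic WeakReference<T> instead, since TryGetTarget checks liveness and retrieves the target in a single call, avoiding the gap between testing IsAlive and reading Target. A minimal cache sketch reusing the ExpensiveObject class from the example above (the WeakCache name is illustrative):
using System;
class WeakCache
{
    private WeakReference<ExpensiveObject>? _entry;
    public ExpensiveObject GetOrCreate()
    {
        // TryGetTarget atomically tests liveness and retrieves the object
        if (_entry is not null && _entry.TryGetTarget(out var cached))
            return cached; // survived collection: reuse it
        var fresh = new ExpensiveObject(); // collected (or never cached): rebuild
        _entry = new WeakReference<ExpensiveObject>(fresh);
        return fresh;
    }
}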
Finalization and Resource Cleanup
Finalizers run when the GC collects objects, providing a last chance to release unmanaged resources. However, finalization adds overhead because objects with finalizers require two collection cycles to fully reclaim. When the GC finds such an object unreachable, it places it on the ready-to-finalize queue instead of freeing it, a dedicated finalizer thread runs the finalizer, and a later collection actually frees the memory.
Implementing IDisposable is better than relying on finalizers. Dispose lets you clean up deterministically when you're done with a resource. Use finalizers only as a safety net in case someone forgets to call Dispose. The Dispose pattern combines both approaches for robust resource management.
using System;
// Proper disposal pattern
using (var resource = new ManagedResource())
{
resource.DoWork();
} // Dispose called automatically here
Console.WriteLine("Resource disposed via using statement\n");
// Without using statement (manual disposal)
var resource2 = new ManagedResource();
try
{
resource2.DoWork();
}
finally
{
resource2.Dispose();
}
class ManagedResource : IDisposable
{
private bool _disposed = false;
public void DoWork()
{
if (_disposed)
throw new ObjectDisposedException(nameof(ManagedResource));
Console.WriteLine("Working with resource");
}
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this); // Prevent finalizer from running
}
protected virtual void Dispose(bool disposing)
{
if (_disposed) return;
if (disposing)
{
// Release managed resources
Console.WriteLine("Disposing managed resources");
}
// Release unmanaged resources (if any)
Console.WriteLine("Cleaning up unmanaged resources");
_disposed = true;
}
~ManagedResource()
{
// Finalizer as safety net
Console.WriteLine("Finalizer called (someone forgot Dispose!)");
Dispose(false);
}
}
Output:
Working with resource
Disposing managed resources
Cleaning up unmanaged resources
Resource disposed via using statement
Working with resource
Disposing managed resources
Cleaning up unmanaged resources
GC.SuppressFinalize tells the garbage collector that the finalizer doesn't need to run because Dispose already cleaned up. This saves the overhead of finalization. The finalizer only runs if someone forgets to call Dispose, protecting against resource leaks.
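To watch the safety net fire, you can deliberately skip Dispose. A short sketch reusing ManagedResource from above; note that in Debug builds the JIT may keep the local alive until the end of the method, so run it in Release to see the finalizer reliably:
using System;
ManagedResource? leaked = new ManagedResource();
leaked = null;                 // drop the only strong reference without disposing
GC.Collect();                  // the object is collected and queued for finalization
GC.WaitForPendingFinalizers(); // prints "Finalizer called (someone forgot Dispose!)"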
Try It Yourself
Build a program that demonstrates generation promotion and measures collection frequency. This helps you see how the GC behaves under different allocation patterns.
using System;
var monitor = new GCMonitor();
Console.WriteLine("Creating short-lived objects...");
for (int i = 0; i < 10000; i++)
{
var temp = new byte[100];
}
monitor.ReportStats("After short-lived allocations");
Console.WriteLine("\nCreating long-lived objects...");
var longLived = new object[100];
for (int i = 0; i < longLived.Length; i++)
{
longLived[i] = new byte[1000];
}
monitor.ReportStats("After long-lived allocations");
Console.WriteLine("\nCreating more short-lived objects...");
for (int i = 0; i < 10000; i++)
{
var temp = new byte[100];
}
monitor.ReportStats("After more allocations");
class GCMonitor
{
private int _lastGen0;
private int _lastGen1;
private int _lastGen2;
public GCMonitor()
{
_lastGen0 = GC.CollectionCount(0);
_lastGen1 = GC.CollectionCount(1);
_lastGen2 = GC.CollectionCount(2);
}
public void ReportStats(string label)
{
var gen0 = GC.CollectionCount(0);
var gen1 = GC.CollectionCount(1);
var gen2 = GC.CollectionCount(2);
Console.WriteLine($"\n{label}:");
Console.WriteLine($" Memory: {GC.GetTotalMemory(false) / 1024:N0} KB");
Console.WriteLine($" Gen0 collections: {gen0} (+{gen0 - _lastGen0})");
Console.WriteLine($" Gen1 collections: {gen1} (+{gen1 - _lastGen1})");
Console.WriteLine($" Gen2 collections: {gen2} (+{gen2 - _lastGen2})");
_lastGen0 = gen0;
_lastGen1 = gen1;
_lastGen2 = gen2;
}
}
Output:
Creating short-lived objects...
After short-lived allocations:
Memory: 1,234 KB
Gen0 collections: 2 (+2)
Gen1 collections: 0 (+0)
Gen2 collections: 0 (+0)
Creating long-lived objects...
After long-lived allocations:
Memory: 1,567 KB
Gen0 collections: 2 (+0)
Gen1 collections: 0 (+0)
Gen2 collections: 0 (+0)
Creating more short-lived objects...
After more allocations:
Memory: 1,234 KB
Gen0 collections: 4 (+2)
Gen1 collections: 0 (+0)
Gen2 collections: 0 (+0)
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>net8.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>
</Project>
This monitor shows that short-lived objects trigger Gen0 collections while long-lived objects stay in memory without immediate collection. You can modify the allocation sizes and patterns to see how they affect collection frequency.
Knowing the Limits
Don't fight the garbage collector with manual collection calls. GC.Collect disrupts carefully tuned heuristics and usually degrades performance. The runtime tracks allocation rates, survival patterns, and memory pressure to optimize collection timing. Manual collection ignores this information and often promotes short-lived objects to higher generations prematurely, making future collections slower.
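If you ever do have a defensible reason to request a collection, for instance right after tearing down a large subsystem, you can at least leave the runtime a veto. A hedged sketch using the overload that takes a GCCollectionMode:
using System;
// GCCollectionMode.Optimized lets the GC skip the collection entirely if it
// judges one unproductive, and blocking: false requests a background
// collection rather than a stop-the-world pause.
GC.Collect(2, GCCollectionMode.Optimized, blocking: false);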
Avoid relying on finalizers for critical resources. Finalization is non-deterministic and can delay significantly under high memory pressure or when the finalizer queue backs up. Database connections, file handles, and network sockets need deterministic cleanup through IDisposable. Use finalizers only as a safety net, never as your primary cleanup mechanism.
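In practice that means scoping such resources with using so cleanup happens at a known point. A minimal sketch (the "app.log" path is a placeholder):
using System;
using System.IO;
// The using declaration disposes the stream when it leaves scope,
// even if an exception is thrown, so the file handle is released
// deterministically instead of waiting on the finalizer queue.
using var stream = new FileStream("app.log", FileMode.OpenOrCreate);
stream.WriteByte(0x42);
// stream.Dispose() runs here, at the end of the enclosing scope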
Don't assume setting references to null immediately frees memory. The GC decides when to reclaim objects based on memory pressure. Nulling references only matters for long-lived containers holding onto temporary objects. In short-lived methods, local variables become unreachable when the method returns regardless of whether you null them explicitly.
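The case where nulling does matter is a long-lived container rooting a temporary. A minimal sketch:
using System;
// Imagine this array lives for the application's lifetime.
object?[] slots = new object?[4];
slots[0] = new byte[100_000]; // buffer needed only briefly
// ... use the buffer ...
slots[0] = null; // clear the slot so the GC can reclaim the buffer;
                 // a local in a short method needs no such treatment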
Large object allocations need different thinking. If your application allocates many objects at or above the 85,000-byte threshold, the LOH can fragment badly. Consider using smaller chunks, reusing buffers with ArrayPool, or enabling LOH compaction selectively. The LOH doesn't compact automatically, so fragmentation accumulates until you run out of contiguous space even though total free memory exists.
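Both mitigations take only a few lines. A sketch combining buffer reuse through ArrayPool with a one-shot LOH compaction (CompactOnce applies only to the next blocking Gen2 collection, then resets to Default):
using System;
using System.Buffers;
using System.Runtime;
var pool = ArrayPool<byte>.Shared;
byte[] buffer = pool.Rent(100_000); // may hand back a larger cached array
try
{
    // ... fill and process the buffer ...
}
finally
{
    pool.Return(buffer); // return it to the pool instead of abandoning it
}
// Request LOH compaction on the next full blocking collection.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect();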