Building a Middleware-Style Pipeline in C#: Generic Handlers, Short-Circuiting & Cross-Cutting Concerns Without MediatR

What You're Actually Buying When You Add MediatR

MediatR solves a real problem elegantly: it decouples request senders from request handlers, layers cross-cutting concerns through pipeline behaviours, and enforces a consistent command/query shape across your application. Most teams add it without questioning the trade-off because the trade-off is invisible until it isn't. The invisible part is a reflection-based dispatch table, an assembly-scanning registration step, a transitive dependency on MediatR.Contracts, and a layer of indirection that makes the call path from API endpoint to handler non-obvious to anyone reading the code for the first time. For most applications, that trade-off is entirely worth it. For some — teams that care about AOT compilation, teams with strict dependency auditing policies, teams that only need the pipeline behaviour subset and not the notification system — it is not.

The good news is that the core value of MediatR — a composable pipeline of handlers with cross-cutting behaviour wrapping a core operation — is not difficult to build from first principles. It requires one generic interface, one delegate type, and one builder that chains them. The result is a pipeline with explicit compile-time-verified registrations, zero reflection at dispatch time, full async and cancellation token support, and complete compatibility with the Result pattern. This article builds it step by step.

The Core Abstractions: Handler Interface & Pipeline Delegate

The entire pipeline rests on two type definitions. The handler interface gives each pipeline step a typed contract. The pipeline delegate is the function signature that flows through the chain — each step receives the request and the next delegate, does its work, and either calls next to continue or returns a result to short-circuit. Everything else in the pipeline is composition of these two types.

Defining Result<T> as the return type rather than T directly is a deliberate choice that pays dividends the moment you add a validation step or a not-found case. Every step in the pipeline can inspect the result shape, return a typed failure without throwing, and let the caller pattern-match the outcome. The pipeline and the Result pattern are designed for each other.

Pipeline/Abstractions.cs — Core Interface & Delegate Definitions
// ── Result: the typed return value for every handler ───────────────────
// Represents either a successful outcome with a payload, or a structured
// failure with error information — without throwing exceptions for
// expected failure cases like validation errors or not-found responses.
public sealed class Result<T>
{
    public bool    IsSuccess { get; }
    public T?      Value     { get; }
    public string? Error     { get; }
    public string? ErrorCode { get; }

    private Result(bool isSuccess, T? value, string? error, string? errorCode)
    {
        IsSuccess = isSuccess;
        Value     = value;
        Error     = error;
        ErrorCode = errorCode;
    }

    public static Result<T> Success(T value)   => new(true,  value, null,  null);
    public static Result<T> Failure(string error, string errorCode = "GENERAL_ERROR")
                                               => new(false, default, error, errorCode);

    // Pattern-match the result without null-checking
    public TOut Match<TOut>(Func<T, TOut> onSuccess, Func<string, TOut> onFailure) =>
        IsSuccess ? onSuccess(Value!) : onFailure(Error!);
}

// ── Pipeline delegate: the function signature flowing through the chain ────
// Each step in the pipeline is (or wraps) this delegate.
// TRequest  → the strongly-typed request object (command or query)
// TResponse → the strongly-typed success payload
//
// The CancellationToken is first-class — every step must pass it through.
// Swallowing it breaks cancellation for every step that follows in the chain.
public delegate Task<Result<TResponse>> PipelineDelegate<TRequest, TResponse>(
    TRequest          request,
    CancellationToken cancellationToken);

// ── IHandler: the contract for pipeline steps ─────────
// Every step in the pipeline — the core handler, validation, caching,
// logging — implements this interface.
// The next delegate is the remainder of the pipeline after this step.
// Calling next() passes control forward. Not calling it short-circuits.
public interface IHandler<TRequest, TResponse>
{
    Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken);
}

// ── IRequestHandler: the terminal handler ────────────
// The core business logic handler sits at the end of the pipeline.
// It does not receive a next delegate — there is nothing after it to call.
// This separation makes it clear which implementations are pipeline steps
// and which are the actual business logic handlers.
public interface IRequestHandler<TRequest, TResponse>
{
    Task<Result<TResponse>> HandleAsync(
        TRequest          request,
        CancellationToken cancellationToken);
}

// ── Example request and response shapes ──────────────────────────────────
// Requests are plain records — no base class required, no marker interface.
public sealed record GetOrderQuery(Guid OrderId);
public sealed record GetOrderResponse(Guid Id, string Status, decimal Total);

public sealed record CreateOrderCommand(Guid CustomerId, IReadOnlyList<OrderLine> Lines);
public sealed record CreateOrderResponse(Guid OrderId);
public sealed record OrderLine(Guid ProductId, int Quantity, decimal UnitPrice);

Notice that IHandler<TRequest, TResponse> and IRequestHandler<TRequest, TResponse> are deliberately separate interfaces. Pipeline steps — validation, logging, caching — implement IHandler and receive the next delegate. The terminal business logic handler implements IRequestHandler and has no knowledge of the pipeline surrounding it. This separation means core handlers are pure functions of request → result, easily unit-testable in complete isolation, with no pipeline infrastructure in scope.

Building the Pipeline: Delegate Chaining & the Builder

Pipeline construction works by composing delegates in reverse registration order — each step wraps around the next, so the first-registered step executes first and calls inward toward the terminal handler. This is identical to how ASP.NET Core's IApplicationBuilder.Use pipeline is assembled. The builder accumulates handler registrations and produces a single PipelineDelegate<TRequest, TResponse> that, when called, triggers the entire chain.

Pipeline/PipelineBuilder.cs — Delegate Composition & Builder
// ── PipelineBuilder ──────────────────────────────────
// Accumulates IHandler steps and a terminal IRequestHandler,
// then composes them into a single PipelineDelegate via delegate chaining.
public sealed class PipelineBuilder<TRequest, TResponse>
{
    private readonly List<IHandler<TRequest, TResponse>> _steps = [];
    private IRequestHandler<TRequest, TResponse>?        _terminalHandler;

    // ── Register pipeline steps (order matters — first registered runs first) ─
    public PipelineBuilder<TRequest, TResponse> Use(
        IHandler<TRequest, TResponse> handler)
    {
        _steps.Add(handler);
        return this;   // fluent API for readable registration chains
    }

    // ── Register the terminal handler (must be called exactly once) ────────
    public PipelineBuilder<TRequest, TResponse> Run(
        IRequestHandler<TRequest, TResponse> handler)
    {
        _terminalHandler = handler;
        return this;
    }

    // ── Build: compose all steps into a single PipelineDelegate ───────────
    public PipelineDelegate<TRequest, TResponse> Build()
    {
        if (_terminalHandler is null)
            throw new InvalidOperationException(
                "A terminal handler must be registered via Run() before calling Build().");

        // Start with the terminal handler wrapped as a PipelineDelegate.
        // This is the innermost function in the chain — it receives the request
        // and produces the final Result without calling any next delegate.
        // (Copy to a local so the lambda captures a proven-non-null reference.)
        var terminal = _terminalHandler;
        PipelineDelegate<TRequest, TResponse> pipeline =
            (req, ct) => terminal.HandleAsync(req, ct);

        // Wrap each step around the current pipeline in REVERSE order.
        // _steps = [Logging, Validation, Caching]
        // After loop: pipeline = Logging(Validation(Caching(terminal)))
        // Execution order when called: Logging → Validation → Caching → terminal
        for (var i = _steps.Count - 1; i >= 0; i--)
        {
            var step         = _steps[i];
            var currentPipeline = pipeline;   // capture for the closure

            // Each lambda captures 'step' and 'currentPipeline'.
            // When called, it invokes step.HandleAsync with the captured
            // currentPipeline as the 'next' delegate.
            pipeline = (req, ct) => step.HandleAsync(req, currentPipeline, ct);
        }

        return pipeline;
    }
}

// ── Usage: building and invoking the pipeline ─────────────────────────────
//
// var pipeline = new PipelineBuilder<GetOrderQuery, GetOrderResponse>()
//     .Use(new LoggingHandler<GetOrderQuery, GetOrderResponse>(logger))
//     .Use(new ValidationHandler<GetOrderQuery, GetOrderResponse>(validators))
//     .Use(new CachingHandler<GetOrderQuery, GetOrderResponse>(cache, logger))
//     .Run(new GetOrderQueryHandler(repository))    // (caching assumes GetOrderQuery
//     .Build();                                     //  implements ICacheableRequest)
//
// Result<GetOrderResponse> result = await pipeline(
//     new GetOrderQuery(orderId),
//     cancellationToken);
//
// result.Match(
//     onSuccess: order => Results.Ok(order),
//     onFailure: error  => Results.Problem(error));

// ── What delegate chaining looks like in memory ───────────────────────────
//
// After Build(), the pipeline delegate is equivalent to this nested lambda:
//
// (req, ct) =>
//     loggingHandler.HandleAsync(req,
//         (req2, ct2) =>
//             validationHandler.HandleAsync(req2,
//                 (req3, ct3) =>
//                     cachingHandler.HandleAsync(req3,
//                         (req4, ct4) => terminalHandler.HandleAsync(req4, ct4),
//                     ct3),
//             ct2),
//         ct);
//
// No reflection. No service locator. No assembly scanning.
// Every call is a direct, compiler-verified function invocation.

The reverse-order composition loop is the conceptual heart of the pattern. If you register steps in the order [Logging, Validation, Caching], the loop processes them as [Caching, Validation, Logging] — wrapping each one around the delegate built so far. The result is a chain where Logging is outermost (runs first, sees the raw request and the final result), Validation is next (short-circuits before reaching Caching or the terminal), and Caching sits immediately before the terminal handler (only reached if validation passes). Registration order is execution order — the builder makes that relationship explicit and predictable.
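The loop's effect is easy to verify in isolation. The following self-contained sketch uses toy `Func`-based delegates (a stand-in for the article's `PipelineDelegate`, not the builder itself) and composes two trace steps around a terminal delegate with the same reverse-order loop:

```csharp
using System;
using System.Collections.Generic;

// Each step is (request, next) → response; the terminal is request → response.
var steps = new List<Func<string, Func<string, string>, string>>
{
    (req, next) => $"A({next(req)})",   // registered first  → outermost wrapper
    (req, next) => $"B({next(req)})",   // registered second → inner wrapper
};

// Same reverse-order composition as PipelineBuilder.Build().
Func<string, string> pipeline = req => $"terminal:{req}";
for (var i = steps.Count - 1; i >= 0; i--)
{
    var step    = steps[i];
    var current = pipeline;             // capture the chain built so far
    pipeline    = req => step(req, current);
}

Console.WriteLine(pipeline("x"));       // prints: A(B(terminal:x))
```

The trace confirms the guarantee: the first-registered step `A` ends up outermost, wrapping everything registered after it.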

Short-Circuiting Steps & Cross-Cutting Behaviours

Three concrete step implementations cover the majority of real-world pipeline requirements: a validation step that short-circuits before the core handler runs, a caching step that short-circuits on a cache hit, and a logging/timing step that wraps the entire remaining pipeline without short-circuiting. Together they demonstrate all three execution patterns a pipeline step can follow: fail early and return, succeed early and return, or always pass through while observing the outcome.

Pipeline/Steps.cs — Validation, Caching & Logging Steps
using System.Diagnostics;
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Logging;

// ── Step 1: Validation — short-circuit on invalid request ─────────────────
// Runs BEFORE the terminal handler. Returns a failure Result immediately
// if the request fails validation — next() is never called.
// The terminal handler only ever receives valid, well-formed requests.
public sealed class ValidationHandler<TRequest, TResponse>(
    IEnumerable<IValidator<TRequest>> validators)
    : IHandler<TRequest, TResponse>
{
    public async Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken)
    {
        // Collect all validation failures across all registered validators
        var failures = new List<string>();

        foreach (var validator in validators)
        {
            var validationResult = await validator.ValidateAsync(request, cancellationToken);
            if (!validationResult.IsValid)
                failures.AddRange(validationResult.Errors);
        }

        if (failures.Count > 0)
        {
            // ── Short-circuit: return failure without calling next() ────────
            // The remainder of the pipeline — caching, terminal handler —
            // never executes. The caller receives a structured validation error.
            return Result<TResponse>.Failure(
                error:     string.Join("; ", failures),
                errorCode: "VALIDATION_ERROR");
        }

        // Validation passed — delegate to the next step in the pipeline
        return await next(request, cancellationToken);
    }
}

// ── Marker contract for cacheable requests ────────────────────────────────
// Requests that opt into caching expose a deterministic cache key.
public interface ICacheableRequest
{
    string CacheKey { get; }
}

// ── Step 2: Caching — short-circuit on cache hit ──────────────────────────
// Runs AFTER validation (only reached for valid requests).
// Returns a cached Result immediately on a hit — the terminal handler
// and any steps between caching and terminal never execute on a cache hit.
public sealed class CachingHandler<TRequest, TResponse>(
    IDistributedCache cache,
    ILogger<CachingHandler<TRequest, TResponse>> logger)
    : IHandler<TRequest, TResponse>
    where TRequest : ICacheableRequest   // constraint: exposes the CacheKey property
{
    public async Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken)
    {
        var cacheKey = request.CacheKey;
        var cached   = await cache.GetStringAsync(cacheKey, cancellationToken);

        if (cached is not null)
        {
            logger.LogDebug("Cache hit for key {CacheKey}", cacheKey);

            // ── Short-circuit: return cached result without calling next() ──
            var cachedValue = JsonSerializer.Deserialize<TResponse>(cached)!;
            return Result<TResponse>.Success(cachedValue);
        }

        // Cache miss — call next to reach the terminal handler
        var result = await next(request, cancellationToken);

        // Populate the cache with the successful result
        if (result.IsSuccess)
        {
            var serialised = JsonSerializer.Serialize(result.Value);
            await cache.SetStringAsync(cacheKey, serialised,
                new DistributedCacheEntryOptions
                {
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromMinutes(5)
                },
                cancellationToken);
        }

        return result;
    }
}

// ── Step 3: Logging & Timing — always pass through, observe the outcome ───
// Runs FIRST (outermost). Calls next() unconditionally.
// Observes the request, the elapsed time, and the result without
// interfering with the pipeline flow in either direction.
// This is a pure cross-cutting concern — zero business logic, zero
// knowledge of what TRequest or TResponse contain beyond their type names.
public sealed class LoggingHandler<TRequest, TResponse>(
    ILogger<LoggingHandler<TRequest, TResponse>> logger)
    : IHandler<TRequest, TResponse>
{
    public async Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken)
    {
        var requestName = typeof(TRequest).Name;
        var sw          = Stopwatch.StartNew();

        logger.LogInformation("→ Handling {RequestName}", requestName);

        try
        {
            // ── Always pass through: call next() unconditionally ───────────
            var result = await next(request, cancellationToken);
            sw.Stop();

            if (result.IsSuccess)
            {
                logger.LogInformation(
                    "✓ {RequestName} succeeded in {ElapsedMs}ms",
                    requestName, sw.ElapsedMilliseconds);
            }
            else
            {
                logger.LogWarning(
                    "✗ {RequestName} failed in {ElapsedMs}ms — [{ErrorCode}] {Error}",
                    requestName, sw.ElapsedMilliseconds,
                    result.ErrorCode, result.Error);
            }

            return result;
        }
        catch (OperationCanceledException)
        {
            sw.Stop();
            logger.LogWarning(
                "⊘ {RequestName} cancelled after {ElapsedMs}ms",
                requestName, sw.ElapsedMilliseconds);
            throw;   // re-throw — do not swallow cancellation
        }
        catch (Exception ex)
        {
            sw.Stop();
            logger.LogError(ex,
                "⚡ {RequestName} threw after {ElapsedMs}ms",
                requestName, sw.ElapsedMilliseconds);
            throw;
        }
    }
}

// ── Execution flow summary ────────────────────────────────────────────────
// Request enters → LoggingHandler (starts timer)
//               → ValidationHandler (invalid? → return Failure, timer stops)
//               → CachingHandler    (cache hit? → return Success, timer stops)
//               → TerminalHandler   (business logic, returns Success or Failure)
//               ← CachingHandler    (populates cache on Success)
//               ← ValidationHandler (passes result through unchanged)
//               ← LoggingHandler    (logs outcome, stops timer)
// Result exits  → caller

The logging step's exception handling pattern — catching, logging, and re-throwing — is the correct behaviour for an observability-only pipeline step. Swallowing exceptions here would make the pipeline appear to succeed when the terminal handler actually threw, producing misleading logs and hiding bugs. Re-throwing after logging preserves the exception's original stack trace and lets the caller's exception handling policy apply. The one exception to the re-throw rule: OperationCanceledException should be re-thrown without logging as an error — cancellation is an expected outcome, not a fault.

Terminal Handler & DI Registration With .NET 8 Keyed Services

The terminal handler is where business logic lives — it receives a guaranteed-valid request (validation has already run) and produces a domain result. It has no knowledge of the pipeline, no dependency on any pipeline infrastructure, and is trivially unit-testable in isolation. DI registration uses .NET 8's keyed services to bind each pipeline configuration to a named key, making it possible to have multiple pipelines registered simultaneously without ambiguity.

Handlers/GetOrderHandler.cs + Program.cs — Terminal Handler & DI Wiring
// ── Terminal handler: pure business logic ─────────────────────────────────
// Implements IRequestHandler — no next delegate,
// no pipeline knowledge, no logging, no validation.
// Receives only valid requests (validation step has already run).
// Unit-testable with a mock repository and no pipeline infrastructure.
public sealed class GetOrderQueryHandler(IOrderRepository repository)
    : IRequestHandler<GetOrderQuery, GetOrderResponse>
{
    public async Task<Result<GetOrderResponse>> HandleAsync(
        GetOrderQuery     request,
        CancellationToken cancellationToken)
    {
        var order = await repository.FindByIdAsync(request.OrderId, cancellationToken);

        if (order is null)
        {
            return Result<GetOrderResponse>.Failure(
                error:     $"Order {request.OrderId} was not found.",
                errorCode: "ORDER_NOT_FOUND");
        }

        return Result<GetOrderResponse>.Success(
            new GetOrderResponse(order.Id, order.Status, order.Total));
    }
}

// ── IValidator: lightweight validation contract ─────────────────────────
public interface IValidator<in T>
{
    Task<ValidationResult> ValidateAsync(T request, CancellationToken ct);
}

public sealed record ValidationResult(bool IsValid, IReadOnlyList<string> Errors)
{
    public static ValidationResult Success()                        => new(true,  []);
    public static ValidationResult Failure(params string[] errors)  => new(false, errors);
}

// ── GetOrderQuery validator ────────────────────────────────────────────────
public sealed class GetOrderQueryValidator : IValidator<GetOrderQuery>
{
    public Task<ValidationResult> ValidateAsync(
        GetOrderQuery request, CancellationToken ct)
    {
        if (request.OrderId == Guid.Empty)
            return Task.FromResult(
                ValidationResult.Failure("OrderId must not be an empty GUID."));

        return Task.FromResult(ValidationResult.Success());
    }
}

// ── Program.cs — DI registration with .NET 8 keyed services ───────────────
var builder = WebApplication.CreateBuilder(args);

// Register validators
builder.Services.AddScoped<IValidator<GetOrderQuery>, GetOrderQueryValidator>();

// Register terminal handlers
builder.Services.AddScoped<IRequestHandler<GetOrderQuery, GetOrderResponse>,
    GetOrderQueryHandler>();

// Register pipeline steps (transient — stateless, safe to share)
builder.Services.AddTransient(typeof(LoggingHandler<,>));
builder.Services.AddTransient(typeof(ValidationHandler<,>));

// ── Keyed pipeline factory: one registration per request type ─────────────
// .NET 8 keyed services allow multiple registrations of the same interface
// disambiguated by a key — perfect for per-request-type pipeline configuration.
builder.Services.AddKeyedScoped<PipelineDelegate<GetOrderQuery, GetOrderResponse>>(
    serviceKey: "GetOrderPipeline",
    implementationFactory: (sp, _) =>
        new PipelineBuilder<GetOrderQuery, GetOrderResponse>()
            .Use(sp.GetRequiredService<LoggingHandler<GetOrderQuery, GetOrderResponse>>())
            .Use(new ValidationHandler<GetOrderQuery, GetOrderResponse>(
                sp.GetServices<IValidator<GetOrderQuery>>()))
            .Run(sp.GetRequiredService<IRequestHandler<GetOrderQuery, GetOrderResponse>>())
            .Build());

// ── Alternative: extension method for clean registration ──────────────────
// builder.Services.AddPipeline<GetOrderQuery, GetOrderResponse>(options =>
//     options.Use<LoggingHandler<GetOrderQuery, GetOrderResponse>>()
//            .Use<ValidationHandler<GetOrderQuery, GetOrderResponse>>()
//            .Run<GetOrderQueryHandler>());
//
// The extension method pattern keeps Program.cs clean when registering
// many pipelines — one AddPipeline call per request type, encapsulating
// the builder configuration behind a readable fluent API.

var app = builder.Build();
app.Run();

Keyed services are not strictly required for disambiguation by request type: multiple pipelines registered as PipelineDelegate<TRequest, TResponse> with different type parameters are already distinct — the generic type parameters make them unambiguous to the DI container. Use keyed services when you need multiple pipelines for the same TRequest type (for example, a fast path and a full path for the same query), or when you want self-documenting registration with a string key that matches the pipeline's purpose. For the common case of one pipeline per request type, generic type parameter disambiguation is sufficient and keyed services add no value.
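To make the same-request-type scenario concrete, here is a minimal sketch of two keyed registrations for one delegate shape. It uses `Func<string, string>` as a stand-in for the pipeline delegate and assumes the Microsoft.Extensions.DependencyInjection 8.0+ package; the "FastPath"/"FullPath" keys and the `fast:`/`full:` behaviours are illustrative:

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;

// Two pipelines for the SAME request/response shape can only coexist under
// distinct keys — the generic type parameters alone no longer disambiguate.
var services = new ServiceCollection();

services.AddKeyedSingleton<Func<string, string>>(
    "FastPath", (_, _) => req => $"fast:{req}");   // e.g. cache-only pipeline
services.AddKeyedSingleton<Func<string, string>>(
    "FullPath", (_, _) => req => $"full:{req}");   // e.g. validation + DB pipeline

using var provider = services.BuildServiceProvider();

var fast = provider.GetRequiredKeyedService<Func<string, string>>("FastPath");
var full = provider.GetRequiredKeyedService<Func<string, string>>("FullPath");

Console.WriteLine(fast("order-1"));   // prints: fast:order-1
Console.WriteLine(full("order-1"));   // prints: full:order-1
```

Resolving without a key (or with the wrong key) throws, which surfaces misconfigured registrations at first use rather than silently picking an arbitrary pipeline.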

Consuming the Pipeline From a Minimal API Endpoint

From the caller's perspective — a controller action or a minimal API endpoint handler — consuming the pipeline is a single awaited call that returns a Result<TResponse>. The endpoint has no knowledge of which steps are in the pipeline, how validation is implemented, or whether caching is involved. It receives a typed result and pattern-matches it to an HTTP response. This is the same abstraction boundary MediatR's ISender.Send() provides, without the package dependency.

Endpoints/OrderEndpoints.cs — Pipeline Consumption in Minimal APIs
// ── Minimal API endpoint: consume the pipeline via DI ────────────────────
// The endpoint receives the pipeline delegate directly from the DI container.
// It has no knowledge of what steps the pipeline contains.
// All cross-cutting concerns — logging, validation, caching — are invisible
// to the endpoint code, applied consistently to every request of this type.

app.MapGet("/api/orders/{orderId:guid}", async (
    Guid              orderId,
    [FromKeyedServices("GetOrderPipeline")]
    PipelineDelegate<GetOrderQuery, GetOrderResponse> pipeline,
    CancellationToken ct) =>
{
    var result = await pipeline(new GetOrderQuery(orderId), ct);

    // Pattern-match the Result to an HTTP response.
    // The endpoint knows nothing about validation, caching, or logging —
    // it only handles the two possible outcomes: Success and Failure.
    // Branch on the structured ErrorCode, not the human-readable message.
    return result.Match(
        onSuccess: order => Results.Ok(order),
        onFailure: error => result.ErrorCode == "ORDER_NOT_FOUND"
            ? Results.NotFound(new { detail = error })
            : Results.UnprocessableEntity(new { detail = error }));
})
.WithName("GetOrder")
.WithOpenApi();

// ── Controller-based consumption (equivalent pattern) ────────────────────
[ApiController]
[Route("api/orders")]
public class OrdersController(
    [FromKeyedServices("GetOrderPipeline")]
    PipelineDelegate<GetOrderQuery, GetOrderResponse> _pipeline)
    : ControllerBase
{
    [HttpGet("{orderId:guid}")]
    public async Task<IActionResult> GetOrder(Guid orderId, CancellationToken ct)
    {
        var result = await _pipeline(new GetOrderQuery(orderId), ct);

        // Branch on the structured ErrorCode, not the human-readable message.
        return result.Match<IActionResult>(
            onSuccess: order => Ok(order),
            onFailure: error => result.ErrorCode == "ORDER_NOT_FOUND"
                ? NotFound(new { detail = error })
                : UnprocessableEntity(new { detail = error }));
    }
}

// ── Comparison with MediatR ────────────────────────────────────────────────
//
// MediatR:
//   var result = await _mediator.Send(new GetOrderQuery(orderId), ct);
//
// Hand-rolled pipeline:
//   var result = await _pipeline(new GetOrderQuery(orderId), ct);
//
// The call sites are nearly identical. The difference is entirely in
// the infrastructure: MediatR uses reflection to locate the handler at
// runtime; the hand-rolled pipeline uses a compile-time-bound delegate.
// Both return a typed result. Both support async and cancellation.
// Both support pipeline behaviours layered around the core handler.
// The hand-rolled version adds zero transitive dependencies and is
// fully visible in the debugger as a standard async call stack.

// ── Unit testing the terminal handler in isolation ─────────────────────────
// [Fact]
// public async Task GetOrderQueryHandler_ReturnsFailure_WhenOrderNotFound()
// {
//     var repository = Substitute.For<IOrderRepository>();
//     repository.FindByIdAsync(Arg.Any<Guid>(), Arg.Any<CancellationToken>())
//               .Returns((Order?)null);
//
//     var handler = new GetOrderQueryHandler(repository);
//     var result  = await handler.HandleAsync(
//         new GetOrderQuery(Guid.NewGuid()), CancellationToken.None);
//
//     result.IsSuccess.Should().BeFalse();
//     result.ErrorCode.Should().Be("ORDER_NOT_FOUND");
// }
//
// No pipeline infrastructure in scope. No MediatR test helpers.
// Just the handler, a mock repository, and an assertion.

The unit test stub at the bottom of the code block illustrates the concrete payoff of separating IRequestHandler from IHandler: the terminal handler is testable with a single mock, zero pipeline infrastructure, and a straightforward arrange-act-assert structure. No ServiceCollection setup, no pipeline builder invocation, no in-memory DI container. The handler is a function — test it as a function. The pipeline steps are also individually testable: construct a step, call HandleAsync with a mock next delegate, and assert on whether next was called and what result was returned. The separation of concerns that makes the pipeline composable in production makes it testable in isolation.
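The step-testing pattern described above can be sketched without any mocking library at all. This self-contained example uses tuples as a simplified stand-in for Result<TResponse> and a local function as the fake next delegate — the names and shapes are illustrative, not the article's actual types:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Fake 'next': records whether it was invoked and returns success.
var nextCalled = false;
Task<(bool Ok, string? Error)> Next(string req, CancellationToken ct)
{
    nextCalled = true;
    return Task.FromResult<(bool, string?)>((true, null));
}

// Step under test: a validation-style step that short-circuits on an
// empty request and otherwise delegates to next — the same shape as
// ValidationHandler, with tuples standing in for Result<TResponse>.
async Task<(bool Ok, string? Error)> ValidationStep(
    string request,
    Func<string, CancellationToken, Task<(bool Ok, string? Error)>> next,
    CancellationToken ct)
{
    if (string.IsNullOrWhiteSpace(request))
        return (false, "request must not be empty");   // short-circuit: next never runs
    return await next(request, ct);
}

// Arrange-act-assert — no pipeline builder, no DI container.
var invalid = await ValidationStep("", Next, CancellationToken.None);
Console.WriteLine($"short-circuited: ok={invalid.Ok}, nextCalled={nextCalled}");

var valid = await ValidationStep("order-42", Next, CancellationToken.None);
Console.WriteLine($"passed through:  ok={valid.Ok}, nextCalled={nextCalled}");
```

Two assertions cover the step's whole contract: an invalid request fails without touching next, and a valid request reaches next and returns its result unchanged.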

Extension Points: Adding Steps Without Touching Existing Code

The pipeline's open/closed property — you can add new steps without modifying the handler or any existing step — is its most practically valuable characteristic. An audit step, an idempotency check, a rate limiting step, a distributed tracing step: each is a new class implementing IHandler<TRequest, TResponse> registered in the builder. No existing code changes. No base class modifications. No attribute-driven magic.

The following examples show three common extension points that teams typically add after the initial pipeline is running: an idempotency step for command deduplication, a timeout step for enforcing per-request SLA budgets, and a generic error-handling step that catches unhandled exceptions from downstream steps and converts them to structured Result failures rather than propagating exceptions to the API layer.

Pipeline/Extensions.cs — Idempotency, Timeout & Exception Guard Steps
using System.Text.Json;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Logging;

// ── Marker contract for idempotent requests ───────────────────────────────
// Commands that opt into deduplication expose a client-supplied key.
public interface IIdempotentRequest
{
    string IdempotencyKey { get; }
}

// ── Idempotency step: deduplicate commands by request ID ──────────────────
// Useful for commands that must not be processed twice — payment charges,
// order creations, email sends. Client supplies an idempotency key;
// the step returns the stored result on a duplicate request.
public sealed class IdempotencyHandler<TRequest, TResponse>(
    IDistributedCache cache,
    ILogger<IdempotencyHandler<TRequest, TResponse>> logger)
    : IHandler<TRequest, TResponse>
    where TRequest : IIdempotentRequest   // constraint: exposes the IdempotencyKey property
{
    public async Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken)
    {
        var key    = $"idempotency:{request.IdempotencyKey}";
        var stored = await cache.GetStringAsync(key, cancellationToken);

        if (stored is not null)
        {
            logger.LogInformation(
                "Idempotency hit for key {Key} — returning stored result", key);
            // Short-circuit: return the stored result from the previous execution
            return Result<TResponse>.Success(
                JsonSerializer.Deserialize<TResponse>(stored)!);
        }

        var result = await next(request, cancellationToken);

        // Store the successful result so duplicate requests return the same value
        if (result.IsSuccess)
        {
            await cache.SetStringAsync(key, JsonSerializer.Serialize(result.Value),
                new DistributedCacheEntryOptions
                {
                    // Keep for 24 hours — long enough for client retry windows
                    AbsoluteExpirationRelativeToNow = TimeSpan.FromHours(24)
                },
                cancellationToken);
        }

        return result;
    }
}

// ── Timeout step: enforce a per-request SLA budget ────────────────────────
// Wraps the remaining pipeline in a timeout. If the inner pipeline does not
// complete within the budget, returns a structured failure rather than
// letting the request hang indefinitely.
public sealed class TimeoutHandler<TRequest, TResponse>(TimeSpan timeout)
    : IHandler<TRequest, TResponse>
{
    public async Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken)
    {
        using var timeoutCts = CancellationTokenSource.CreateLinkedTokenSource(
            cancellationToken);
        timeoutCts.CancelAfter(timeout);

        try
        {
            return await next(request, timeoutCts.Token);
        }
        catch (OperationCanceledException) when (!cancellationToken.IsCancellationRequested)
        {
            // The timeout fired — not the caller's cancellation token.
            // Return a structured failure rather than propagating the exception.
            return Result<TResponse>.Failure(
                error:     $"Request exceeded the {timeout.TotalSeconds}s SLA budget.",
                errorCode: "REQUEST_TIMEOUT");
        }
        // If cancellationToken itself was cancelled, the OperationCanceledException
        // is NOT caught here — it propagates up correctly.
    }
}

// ── Exception guard step: convert unhandled exceptions to Result failures ──
// Sits at the outermost position (registered first, executes first).
// Any exception thrown by an inner step or the terminal handler is caught
// and converted to a structured Result.Failure — the API layer never
// sees an unhandled exception from the pipeline.
// The logging step (if registered) should be inside this guard so it
// sees the converted result rather than the raw exception.
public sealed class ExceptionGuardHandler<TRequest, TResponse>(
    ILogger<ExceptionGuardHandler<TRequest, TResponse>> logger)
    : IHandler<TRequest, TResponse>
{
    public async Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken)
    {
        try
        {
            return await next(request, cancellationToken);
        }
        catch (OperationCanceledException)
        {
            throw;   // never swallow cancellation
        }
        catch (Exception ex)
        {
            logger.LogError(ex,
                "Unhandled exception in pipeline for {RequestType}",
                typeof(TRequest).Name);

            return Result.Failure(
                error:     "An unexpected error occurred processing your request.",
                errorCode: "INTERNAL_ERROR");
            // The correlation ID, request details, and full exception are in the log.
            // The caller receives a safe, non-leaking error message.
        }
    }
}

// ── Recommended registration order for a production pipeline ─────────────
// new PipelineBuilder()
//     .Use(exceptionGuard)    // outermost: catches everything below
//     .Use(loggingHandler)    // sees the converted Result, not raw exceptions
//     .Use(timeoutHandler)    // enforces SLA budget for the remaining chain
//     .Use(idempotencyHandler) // deduplicates before validation runs
//     .Use(validationHandler) // short-circuits invalid requests
//     .Run(terminalHandler)   // business logic — only receives valid, deduped requests
//     .Build();

The registration order comment at the bottom of the code block is the most important guidance in this section. Exception guard outermost, logging inside it, timeout inside logging, business steps innermost. Each step should only be responsible for what it can see — the exception guard can only protect what is inside it, the timeout only constrains what is below it, and the logging step should see the final result shape including any failures produced by the exception guard. Get the order wrong and you end up with a timeout that the exception guard swallows silently, or a logging step that only records exceptions rather than structured failures. The builder makes order explicit — use it deliberately.

What Developers Want to Know

When should I build a custom pipeline instead of using MediatR?

Consider a custom pipeline when MediatR's reflection-based dispatch is measurable overhead in your hot path, when you want to eliminate a transitive dependency tree, when your team only needs the pipeline behaviour subset and not MediatR's notification broadcasting or polymorphic dispatch, or when you want explicit compile-time-verified handler registration rather than assembly scanning. A hand-rolled pipeline is also easier to reason about in code review — every handler registration is visible in the DI composition root and every pipeline step is traceable through ordinary C# delegate invocations rather than through a framework dispatch table.

How does delegate chaining produce a middleware-style pipeline?

Each pipeline step is a function that accepts a request and a next delegate, does its work, and either calls next to pass control forward or returns a result to short-circuit. The pipeline is built by composing these functions in reverse registration order — wrapping each step around the previous — so calling the outermost function triggers the entire chain inward. This is identical to how ASP.NET Core's middleware pipeline is assembled: Use wraps each component around the next in registration order. The pattern requires no framework, no reflection, and no base classes — only functions and closures.
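As a condensed sketch, the builder's Build method can perform this reverse-order wrap in a single loop. The shapes of IHandler<TRequest, TResponse> and PipelineDelegate<TRequest, TResponse> follow the conventions used in this article's code; the builder built earlier in the article may differ in detail:

```csharp
public sealed class PipelineBuilder<TRequest, TResponse>
{
    private readonly List<IHandler<TRequest, TResponse>> _steps = [];
    private PipelineDelegate<TRequest, TResponse>? _terminal;

    public PipelineBuilder<TRequest, TResponse> Use(
        IHandler<TRequest, TResponse> step)
    {
        _steps.Add(step);
        return this;
    }

    public PipelineBuilder<TRequest, TResponse> Run(
        PipelineDelegate<TRequest, TResponse> terminal)
    {
        _terminal = terminal;
        return this;
    }

    public PipelineDelegate<TRequest, TResponse> Build()
    {
        var pipeline = _terminal
            ?? throw new InvalidOperationException("Call Run(...) before Build().");

        // Compose in reverse registration order so the first Use(...) call
        // becomes the outermost function in the chain.
        for (var i = _steps.Count - 1; i >= 0; i--)
        {
            var step = _steps[i];
            var next = pipeline;   // capture the chain composed so far
            pipeline = (request, ct) => step.HandleAsync(request, next, ct);
        }

        return pipeline;
    }
}
```

Calling the returned delegate invokes the outermost step, which calls next inward until the terminal handler runs.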

What is short-circuiting and when should a pipeline step do it?

Short-circuiting means returning a result from a pipeline step without calling the next delegate — the remainder of the pipeline does not execute. A validation step short-circuits when the request is invalid, returning Result.Failure with structured error information. A caching step short-circuits on a cache hit, returning the cached result without hitting the database. An idempotency step short-circuits on a duplicate request key. Short-circuiting is the correct, expected behaviour in all these cases — it is not an error path, it is the pipeline operating exactly as designed.
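A validation step makes the mechanics concrete. IValidator<TRequest> and its Validate method are illustrative assumptions, not types defined in this article; the Result.Failure shape matches the one used in the pipeline code above:

```csharp
// Assumed validation abstraction — not part of the article's pipeline types.
public interface IValidator<in TRequest>
{
    IReadOnlyList<string> Validate(TRequest request);
}

public sealed class ValidationHandler<TRequest, TResponse>(
    IValidator<TRequest> validator)
    : IHandler<TRequest, TResponse>
{
    public async Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken)
    {
        var errors = validator.Validate(request);
        if (errors.Count > 0)
        {
            // Short-circuit: next is never invoked, so no inner step or
            // terminal handler runs for an invalid request.
            return Result.Failure(
                error:     string.Join("; ", errors),
                errorCode: "VALIDATION_FAILED");
        }

        return await next(request, cancellationToken);
    }
}
```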

How is this different from the Decorator pattern?

The Decorator pattern wraps a concrete implementation behind an interface and adds behaviour by delegating to the wrapped instance — the chain is fixed at construction time through object nesting. The middleware pipeline pattern uses delegates rather than interface wrapping, meaning the chain is constructed by function composition and can be built dynamically from a list of registered steps. The practical difference is flexibility and inspectability: a middleware pipeline's steps are a list you can iterate, log at startup, and reorder without changing any step's implementation. Both patterns produce the same behaviour from the caller's perspective.

Can I use this pipeline pattern with the Result pattern?

Yes — they compose naturally and are designed for each other. The Result pattern gives each handler a typed return value representing either success with a payload or a structured failure. The pipeline pattern gives each step the ability to inspect that result shape, return early with a typed failure, or pass a transformed result forward. A validation step returns Result.Failure before the core handler runs. The core handler returns Result.Success or Result.Failure based on domain logic. A post-processing step can map the result shape without knowing which step produced it. The pipeline becomes a composable transformation chain over Result<T> values.
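As a sketch of that composition, a terminal handler can return either branch of the Result shape directly. CreateOrderRequest, OrderConfirmation, and IOrderRepository are illustrative names, not types from this article:

```csharp
public sealed class CreateOrderHandler(IOrderRepository orders)
{
    public async Task<Result<OrderConfirmation>> HandleAsync(
        CreateOrderRequest request,
        CancellationToken  cancellationToken)
    {
        // Domain rule: reject orders for items with no available stock.
        if (!await orders.HasStockAsync(request.Sku, request.Quantity, cancellationToken))
        {
            return Result.Failure(
                error:     $"Insufficient stock for SKU {request.Sku}.",
                errorCode: "OUT_OF_STOCK");
        }

        var orderId = await orders.SaveAsync(request, cancellationToken);
        return Result.Success(new OrderConfirmation(orderId));
    }
}
```

By the time this handler runs inside the pipeline, the validation step has already short-circuited malformed requests, so the terminal handler contains only domain logic.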

Does a hand-rolled pipeline support async handlers and cancellation tokens?

Yes, and it should be async-first by design. Define the pipeline delegate as Func<TRequest, CancellationToken, Task<Result<TResponse>>> so every step is awaitable and receives the caller's cancellation token. Pass the same CancellationToken through every next() call so cancellation propagates correctly through the entire chain — including validation steps, cache lookups, database calls, and downstream HTTP requests. A pipeline step that swallows or ignores the cancellation token breaks the cancellation contract for every step that follows it. Never catch OperationCanceledException without re-throwing unless you are explicitly converting it to a structured Result.Failure in a timeout step.
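A minimal token-forwarding step illustrates the contract. AuditHandler is an illustrative name, reusing the IHandler and PipelineDelegate shapes from this article:

```csharp
public sealed class AuditHandler<TRequest, TResponse>
    : IHandler<TRequest, TResponse>
{
    public async Task<Result<TResponse>> HandleAsync(
        TRequest                              request,
        PipelineDelegate<TRequest, TResponse> next,
        CancellationToken                     cancellationToken)
    {
        // Honour cancellation before doing any work of our own.
        cancellationToken.ThrowIfCancellationRequested();

        // Forward the caller's token unchanged; substituting a different
        // token here would break cancellation for every inner step.
        return await next(request, cancellationToken);
    }
}
```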
