Modern C# 12 Patterns: Building a Robust "Pipeline" (Result, Errors, Mapping, Testing)
22 min read
Intermediate
You've seen the pattern before. A method throws ValidationException. The caller catches it, formats a message, and throws a new ServiceException. Something upstream catches that and converts it into an HTTP 400. Every layer adds a try/catch, and the actual business logic drowns in error-handling boilerplate. The error paths are invisible until they explode in production.
There's a better way. Model errors as data, not exceptions. Compose operations through a typed pipeline where every step either succeeds and passes its output to the next step, or fails and short-circuits cleanly — no catch blocks, no hidden control flow. That's what this tutorial builds.
What You'll Build
A Text Import pipeline (tutorials/csharp-language/TextImportPipeline/) — a console app and a test project demonstrating every pattern in a realistic, end-to-end flow:
A Result<T> type with explicit Success and Failure cases, no exceptions for expected errors
A typed error model with safe user-facing messages and debug detail kept separate
Four pipeline stages — parse CSV, validate records, transform to domain objects, persist to store
Map and Bind operators that compose stages without nesting
C# 12 features used throughout: primary constructors, collection expressions, and pattern matching over the closed Result hierarchy
Async-aware pipeline stages that compose cleanly with Task<Result<T>>
Structured logging integration — safe vs debug error fields in log events
Unit tests covering every stage and the full pipeline, including failure path coverage
Project Setup & Structure
Two projects: a console app for the pipeline itself, and a separate test project. No external frameworks required beyond xUnit — the entire Result<T> pattern is hand-built in about 60 lines.
Terminal
mkdir TextImportPipeline && cd TextImportPipeline
dotnet new sln -n TextImportPipeline
# Console app — the pipeline
dotnet new console -n TextImport.Core
dotnet sln add TextImport.Core/TextImport.Core.csproj
# Test project
dotnet new xunit -n TextImport.Tests
dotnet sln add TextImport.Tests/TextImport.Tests.csproj
dotnet add TextImport.Tests reference TextImport.Core
# FluentAssertions for readable test assertions (optional but recommended)
dotnet add TextImport.Tests package FluentAssertions
Folder Layout
Project Structure
TextImportPipeline/
├── TextImport.Core/
│ ├── Core/
│ │ ├── Result.cs # Result<T> type + Error model
│ │ └── ResultExtensions.cs # Map, Bind, Match operators
│ ├── Pipeline/
│ │ ├── ParseStage.cs # CSV text → raw records
│ │ ├── ValidateStage.cs # raw records → validated records
│ │ ├── TransformStage.cs # validated → domain objects
│ │ └── PersistStage.cs # domain objects → store (async)
│ ├── Models/
│ │ ├── RawRecord.cs
│ │ ├── ValidatedRecord.cs
│ │ └── ImportedProduct.cs
│ └── Program.cs # pipeline composition + logging
└── TextImport.Tests/
├── ParseStageTests.cs
├── ValidateStageTests.cs
├── TransformStageTests.cs
└── PipelineIntegrationTests.cs
Console App + Class Library Pattern
Putting all business logic in TextImport.Core with a thin Program.cs entry point means the test project can reference the core logic directly without spinning up a host or running the console app. This structure scales cleanly to a web API or worker service later — swap Program.cs for an ASP.NET Core endpoint, everything else stays identical.
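To make that last claim concrete, here is a hypothetical sketch of the same pipeline behind an ASP.NET Core minimal API. ImportPipeline, ImportSummary, and Match are the types this tutorial builds below; the route and response shape are illustrative only, not part of the project:

```csharp
// Hypothetical sketch — not part of the tutorial project.
// ImportPipeline, ImportSummary, and Match are defined later in this tutorial.
app.MapPost("/import", async (HttpRequest request, ImportPipeline pipeline) =>
{
    using var reader = new StreamReader(request.Body);
    var csvText = await reader.ReadToEndAsync();

    var result = await pipeline.RunAsync(csvText);

    // Only the safe Code/Message cross the API boundary — Detail stays internal
    return result.Match(
        onSuccess: summary => Results.Ok(summary),
        onFailure: error => Results.BadRequest(new { error.Code, error.Message }));
});
```

Nothing in TextImport.Core changes — only the edge that collapses the Result into a response.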
Defining Result<T> and the Error Model
The Result<T> type is the foundation. It's a discriminated union: either a Success carrying a value of type T, or a Failure carrying an Error. We express this with a sealed record hierarchy — the private base constructor keeps the hierarchy closed, so pattern matching over Success and Failure covers every possible state.
The Error Model
Define errors before the Result type. Two fields: a safe message (caller-visible, log-safe) and debug detail (internal context, never exposed to external consumers).
Core/Result.cs — Error Model
/// <summary>
/// Represents a pipeline error with two representations:
/// - Message: safe for logging, API responses, and UI display
/// - Detail: internal debug context — never exposed externally
/// </summary>
public sealed record Error(
string Code, // machine-readable error code e.g. "PARSE_INVALID_COLUMN_COUNT"
string Message, // safe human-readable description
string? Detail = null, // debug detail — internal only
ErrorSeverity Severity = ErrorSeverity.Error)
{
// Convenience factories for common error categories
public static Error Validation(string code, string message, string? detail = null)
=> new(code, message, detail, ErrorSeverity.Error);
public static Error Parse(string code, string message, string? detail = null)
=> new(code, message, detail, ErrorSeverity.Error);
public static Error Unexpected(string message, Exception? ex = null)
=> new("UNEXPECTED_ERROR", message,
Detail: ex is null ? null : $"{ex.GetType().Name}: {ex.Message}",
Severity: ErrorSeverity.Critical);
public static Error NotFound(string entity, string identifier)
=> new("NOT_FOUND",
Message: $"{entity} not found.",
Detail: $"Lookup key: {identifier}");
}
public enum ErrorSeverity { Warning, Error, Critical }
The Result<T> Type
Core/Result.cs — Result Type
/// <summary>
/// A discriminated union: either a Success(Value) or a Failure(Error).
/// Pattern-match to handle both cases — the closed hierarchy guarantees these are the only two states.
/// </summary>
public abstract record Result<T>
{
// Prevent external subclassing — only Success and Failure exist
private Result() { }
public sealed record Success(T Value) : Result<T>;
public sealed record Failure(Error Error) : Result<T>;
// ─── Convenience constructors ──────────────────────────────────
public static Result<T> Ok(T value) => new Success(value);
public static Result<T> Fail(Error error) => new Failure(error);
// ─── Query properties ──────────────────────────────────────────
public bool IsSuccess => this is Success;
public bool IsFailure => this is Failure;
// Unsafe accessors — use pattern matching or Match() instead
public T Value => (this as Success)?.Value
?? throw new InvalidOperationException("Result is Failure");
public Error Error => (this as Failure)?.Error
?? throw new InvalidOperationException("Result is Success");
}
// Static factory helpers — let callers write Result.Ok(value) with T inferred
public static class Result
{
    public static Result<T> Ok<T>(T value) => Result<T>.Ok(value);
    public static Result<T> Fail<T>(Error error) => Result<T>.Fail(error);
}
Why a Sealed Private Constructor?
The private Result() {} constructor prevents any code outside Result<T> from subclassing it. Combined with Success and Failure being sealed, those two are provably the only possible states. One caveat: C#'s flow analysis doesn't treat class hierarchies as closed, so switch expressions over Result<T> still want a discard arm — that's why the extension methods in the next section throw UnreachableException there. Even so, this is as close to a true discriminated union as C# gets without a third-party library.
Map, Bind, and Match Operations
Three operations turn Result<T> from a container into a composable pipeline primitive. They do the same job as LINQ's Select and SelectMany — but for the success/failure dimension instead of the sequence dimension.
Core/ResultExtensions.cs
using System.Diagnostics; // UnreachableException

public static class ResultExtensions
{
// ─── Map ──────────────────────────────────────────────────────────────
// Transforms the success value. If already a Failure, passes it through untouched.
// Use when: the transformation cannot itself produce an error.
public static Result<TNext> Map<T, TNext>(
this Result<T> result,
Func<T, TNext> transform) => result switch
{
Result<T>.Success s => Result<TNext>.Ok(transform(s.Value)),
Result<T>.Failure f => Result<TNext>.Fail(f.Error),
_ => throw new UnreachableException()
};
// ─── Bind ─────────────────────────────────────────────────────────────
// Chains a step that itself returns a Result. Flattens Result<Result<T>>.
// Use when: the next step can succeed or fail independently.
public static Result<TNext> Bind<T, TNext>(
this Result<T> result,
Func<T, Result<TNext>> next) => result switch
{
Result<T>.Success s => next(s.Value),
Result<T>.Failure f => Result<TNext>.Fail(f.Error),
_ => throw new UnreachableException()
};
// ─── Match ────────────────────────────────────────────────────────────
// Terminal operation — collapses the Result into a single plain value.
// Use at the edge of the pipeline to produce a final response or side-effect.
public static TOut Match<T, TOut>(
this Result<T> result,
Func<T, TOut> onSuccess,
Func<Error, TOut> onFailure) => result switch
{
Result<T>.Success s => onSuccess(s.Value),
Result<T>.Failure f => onFailure(f.Error),
_ => throw new UnreachableException()
};
// ─── Async overloads ──────────────────────────────────────────────────
// Allow async pipeline steps to compose with the same operators
public static async Task<Result<TNext>> BindAsync<T, TNext>(
this Task<Result<T>> resultTask,
Func<T, Task<Result<TNext>>> next)
{
var result = await resultTask;
return result switch
{
Result<T>.Success s => await next(s.Value),
Result<T>.Failure f => Result<TNext>.Fail(f.Error),
_ => throw new UnreachableException()
};
}
public static async Task<Result<TNext>> MapAsync<T, TNext>(
this Task<Result<T>> resultTask,
Func<T, TNext> transform)
{
var result = await resultTask;
return result.Map(transform);
}
public static async Task<TOut> MatchAsync<T, TOut>(
this Task<Result<T>> resultTask,
Func<T, TOut> onSuccess,
Func<Error, TOut> onFailure)
{
var result = await resultTask;
return result.Match(onSuccess, onFailure);
}
}
Map vs Bind at a Glance
Map vs Bind — Side by Side
// Map: transform always succeeds — just change the value inside the Result
Result<string> raw = Result<string>.Ok(" hello world ");
Result<string> trimmed = raw.Map(s => s.Trim()); // Ok("hello world")
Result<int> length = trimmed.Map(s => s.Length); // Ok(11)
// Bind: next step might fail — returns its own Result
Result<string> input = Result<string>.Ok("42");
Result<int> parsed = input.Bind(s =>
int.TryParse(s, out var n)
? Result<int>.Ok(n)
: Result<int>.Fail(Error.Parse("PARSE_INT", $"'{s}' is not a valid integer")));
// Ok(42)
Result<string> badInput = Result<string>.Ok("abc");
Result<int> badParsed = badInput.Bind(s =>
int.TryParse(s, out var n)
? Result<int>.Ok(n)
: Result<int>.Fail(Error.Parse("PARSE_INT", $"'{s}' is not a valid integer")));
// Failure("PARSE_INT", "'abc' is not a valid integer")
// Failures short-circuit automatically — Map/Bind on a Failure returns the same Failure
var result = Result<string>.Fail(Error.Validation("V001", "Input is empty"))
.Map(s => s.Trim()) // skipped
.Bind(s => ParseCsv(s)) // skipped
.Map(r => r.ToUpperInvariant()); // skipped
// Still: Failure("V001", "Input is empty")
Don't Use .Value Without Checking IsSuccess First
The .Value and .Error shortcut properties on Result<T> throw InvalidOperationException if you access the wrong one. They exist as a convenience for test assertions and REPL exploration. In production pipeline code, always use Match(), Map(), Bind(), or an explicit pattern match switch expression — never .Value on an unchecked result.
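A minimal illustration of the difference, using the Result<T> and Error types defined above:

```csharp
var parsed = Result<int>.Fail(Error.Parse("PARSE_INT", "'abc' is not a valid integer"));

// ❌ Unsafe — parsed is a Failure, so this throws InvalidOperationException
// var n = parsed.Value;

// ✅ Match forces both cases to be handled; no exception possible
var message = parsed.Match(
    onSuccess: n => $"Parsed: {n}",
    onFailure: e => $"Could not parse: {e.Message}");
// message == "Could not parse: 'abc' is not a valid integer"
```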
C# 12 Features in the Pipeline
Two C# 12 additions — primary constructors and collection expressions — plus pattern matching over the closed Result hierarchy keep the pipeline code tight without sacrificing clarity. None of this is cosmetic: each one removes a specific category of boilerplate that cluttered the same code in C# 9 and earlier.
Primary Constructors
Primary Constructors on Stage Classes
// C# 11 and earlier — constructor + field declarations + assignment
public class PersistStage
{
    private readonly IProductRepository _repo;
    private readonly ILogger<PersistStage> _logger;

    public PersistStage(IProductRepository repo, ILogger<PersistStage> logger)
    {
        _repo = repo;
        _logger = logger;
    }
}

// C# 12 — primary constructor: parameters available throughout the class body
public class PersistStage(IProductRepository repo, ILogger<PersistStage> logger)
{
    // repo and logger are available directly — no field declarations needed
    public Task<int> SaveAsync(IReadOnlyList<ImportedProduct> products) =>
        repo.InsertBatchAsync(products);
}
Collection Expressions
Collection Expressions — Error Lists & Row Data
// C# 11 — verbose array/list literals
var expectedColumns = new string[] { "Name", "Price", "Category", "Stock" };
var errors = new List<Error>();
// C# 12 — collection expressions unify all collection types
string[] expectedColumns = ["Name", "Price", "Category", "Stock"];
List<Error> errors = [];
// Spread operator ('..' ) to concatenate collections without LINQ
string[] requiredFields = ["Name", "Price"];
string[] optionalFields = ["Description", "Tags"];
string[] allFields = [..requiredFields, ..optionalFields]; // ["Name", "Price", "Description", "Tags"]
// Practical use: building an error list from multiple checks.
// Note: spreads are only valid inside a collection expression ([..]),
// not inside a classic collection initializer (new List<Error> { .. })
List<Error> validationErrors =
[
    ..CheckName(record),
    ..CheckPrice(record),
    ..CheckCategory(record)
];
// validationErrors contains errors from all checks, not just the first failure
Exhaustive Pattern Matching
Exhaustive Switch Expressions on Result
// C# has no true exhaustiveness checking for class hierarchies: even though
// Success and Failure are the only possible subtypes of Result<T>, a switch
// expression without a catch-all arm produces warning CS8509 ("The switch
// expression does not handle all possible values of its input type").
string Summarise<T>(Result<T> result) => result switch
{
    Result<T>.Success s => $"OK: {s.Value}",
    Result<T>.Failure f => $"FAIL [{f.Error.Code}]: {f.Error.Message}",
    // Architecturally impossible — present only to satisfy the compiler
    _ => throw new UnreachableException()
};
// Nested property patterns for richer error handling
string Categorise(Result<ImportedProduct> result) => result switch
{
Result<ImportedProduct>.Success { Value.Category: "Electronics" } s
=> $"Electronic product: {s.Value.Name}",
Result<ImportedProduct>.Success s
=> $"General product: {s.Value.Name}",
Result<ImportedProduct>.Failure { Error.Severity: ErrorSeverity.Critical } f
=> $"Critical failure: {f.Error.Message}",
Result<ImportedProduct>.Failure f
    => $"Error [{f.Error.Code}]: {f.Error.Message}",
_ => throw new UnreachableException() // hierarchy is closed, but the compiler can't see that
};
Use UnreachableException as the Default Arm
Because C#'s switch analysis doesn't treat sealed hierarchies as closed, switch expressions on Result<T> need a discard arm (_ => ...) to avoid warning CS8509. Make that arm throw new UnreachableException() rather than throwing a generic NotImplementedException or returning a dummy value. UnreachableException (in System.Diagnostics, introduced in .NET 7) signals that this code path is architecturally impossible, making the intent clear and the error diagnostic useful if it ever fires.
Stage 1 — Parsing: CSV Text to Raw Records
The parse stage takes raw text input and produces a list of RawRecord objects. Errors here are structural: wrong column count, empty input, unreadable encoding. None of these should throw — they're expected failure modes that return a typed Failure.
Models/RawRecord.cs
// A row parsed from CSV — all values are still strings at this stage
public record RawRecord(
int LineNumber,
string Name,
string PriceText,
string Category,
string StockText);
// The target domain object — populated after validation and transformation
public record ImportedProduct(
string Name,
decimal Price,
string Category,
int Stock);
Pipeline/ParseStage.cs
public static class ParseStage
{
private const int ExpectedColumnCount = 4;
private static readonly string[] ExpectedHeader =
["Name", "Price", "Category", "Stock"];
public static Result<IReadOnlyList<RawRecord>> Parse(string csvText)
{
if (string.IsNullOrWhiteSpace(csvText))
return Result<IReadOnlyList<RawRecord>>.Fail(
Error.Parse("PARSE_EMPTY_INPUT", "Input text is empty."));
var lines = csvText.Split('\n',
StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries);
if (lines.Length == 0)
return Result<IReadOnlyList<RawRecord>>.Fail(
Error.Parse("PARSE_NO_LINES", "No lines found after splitting."));
// Validate header row
var header = lines[0].Split(',', StringSplitOptions.TrimEntries);
if (!header.SequenceEqual(ExpectedHeader))
return Result<IReadOnlyList<RawRecord>>.Fail(
Error.Parse(
"PARSE_INVALID_HEADER",
"CSV header does not match expected columns.",
Detail: $"Expected: [{string.Join(", ", ExpectedHeader)}] " +
$"Got: [{string.Join(", ", header)}]"));
// Parse data rows
var records = new List<RawRecord>();
var rowErrors = new List<string>();
foreach (var (line, index) in lines.Skip(1).Select((l, i) => (l, i + 2)))
{
var cols = line.Split(',', StringSplitOptions.TrimEntries);
if (cols.Length != ExpectedColumnCount)
{
rowErrors.Add($"Line {index}: expected {ExpectedColumnCount} columns, got {cols.Length}");
continue;
}
records.Add(new RawRecord(
LineNumber: index,
Name: cols[0],
PriceText: cols[1],
Category: cols[2],
StockText: cols[3]));
}
if (rowErrors.Count > 0)
return Result<IReadOnlyList<RawRecord>>.Fail(
Error.Parse(
"PARSE_MALFORMED_ROWS",
$"{rowErrors.Count} row(s) had incorrect column counts.",
Detail: string.Join("; ", rowErrors)));
return Result<IReadOnlyList<RawRecord>>.Ok(records);
}
}
Stage 2 — Validation: Raw Records to Validated Records
Validation checks business rules against each row: price must be a positive decimal, stock must be a non-negative integer, category must be known. The key decision here is whether to fail-fast (return on first error) or collect all errors. For import pipelines, collecting all errors and reporting them together is dramatically better UX.
Pipeline/ValidateStage.cs
public static class ValidateStage
{
private static readonly HashSet<string> KnownCategories =
["Electronics", "Clothing", "Food", "Books", "Hardware"];
// Validates a batch — collects ALL row errors before returning
public static Result<IReadOnlyList<ValidatedRecord>> ValidateBatch(
IReadOnlyList<RawRecord> records)
{
var validated = new List<ValidatedRecord>();
var rowErrors = new List<string>();
foreach (var raw in records)
{
var result = ValidateRow(raw);
result.Match(
onSuccess: v => validated.Add(v),
onFailure: e => rowErrors.Add(
$"Line {raw.LineNumber}: [{e.Code}] {e.Message}"));
}
return rowErrors.Count == 0
? Result<IReadOnlyList<ValidatedRecord>>.Ok(validated)
: Result<IReadOnlyList<ValidatedRecord>>.Fail(
Error.Validation(
"VALIDATE_BATCH_FAILED",
$"{rowErrors.Count} record(s) failed validation.",
Detail: string.Join("\n", rowErrors)));
}
// Validates a single row — used by ValidateBatch and in tests
public static Result<ValidatedRecord> ValidateRow(RawRecord raw)
{
// Collect all field errors before returning
var errors = new List<string>
{
..CheckName(raw),
..CheckPrice(raw),
..CheckCategory(raw),
..CheckStock(raw)
};
if (errors.Count > 0)
return Result<ValidatedRecord>.Fail(
Error.Validation(
"VALIDATE_ROW_FAILED",
$"Row {raw.LineNumber} has {errors.Count} validation error(s).",
Detail: string.Join("; ", errors)));
// All checks passed — the TryParse calls above guarantee these Parse calls succeed
// (both use the current culture; prefer CultureInfo.InvariantCulture for locale-independent CSV)
return Result<ValidatedRecord>.Ok(new ValidatedRecord(
LineNumber: raw.LineNumber,
Name: raw.Name.Trim(),
Price: decimal.Parse(raw.PriceText),
Category: raw.Category.Trim(),
Stock: int.Parse(raw.StockText)));
}
private static IEnumerable<string> CheckName(RawRecord raw)
{
if (string.IsNullOrWhiteSpace(raw.Name))
yield return "Name is required";
else if (raw.Name.Length > 200)
yield return $"Name exceeds 200 characters ({raw.Name.Length})";
}
private static IEnumerable<string> CheckPrice(RawRecord raw)
{
if (!decimal.TryParse(raw.PriceText, out var price))
yield return $"Price '{raw.PriceText}' is not a valid decimal";
else if (price <= 0)
yield return $"Price must be greater than zero (got {price})";
}
private static IEnumerable<string> CheckCategory(RawRecord raw)
{
if (!KnownCategories.Contains(raw.Category.Trim()))
yield return $"Category '{raw.Category}' is not recognised. " +
$"Valid: [{string.Join(", ", KnownCategories)}]";
}
private static IEnumerable<string> CheckStock(RawRecord raw)
{
if (!int.TryParse(raw.StockText, out var stock))
yield return $"Stock '{raw.StockText}' is not a valid integer";
else if (stock < 0)
yield return $"Stock cannot be negative (got {stock})";
}
}
public record ValidatedRecord(
int LineNumber,
string Name,
decimal Price,
string Category,
int Stock);
Collect-All vs Fail-Fast Validation
Fail-fast stops at the first error — useful when processing a single entity where later checks would be meaningless without earlier data. Collect-all reports every problem in one pass — indispensable for batch imports where forcing the user to fix-retry-fix-retry is a terrible experience. The yield return pattern in the private check methods makes collect-all natural without allocating intermediate collections per check.
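For contrast, a fail-fast variant of the batch validator might look like this sketch (hypothetical — the tutorial project uses only the collect-all version above):

```csharp
// Hypothetical fail-fast variant — returns on the FIRST invalid row
public static Result<IReadOnlyList<ValidatedRecord>> ValidateBatchFailFast(
    IReadOnlyList<RawRecord> records)
{
    var validated = new List<ValidatedRecord>();
    foreach (var raw in records)
    {
        switch (ValidateStage.ValidateRow(raw))
        {
            case Result<ValidatedRecord>.Success s:
                validated.Add(s.Value);
                break;
            case Result<ValidatedRecord>.Failure f:
                // Short-circuit: later rows are never examined
                return Result<IReadOnlyList<ValidatedRecord>>.Fail(f.Error);
        }
    }
    return Result<IReadOnlyList<ValidatedRecord>>.Ok(validated);
}
```

The user would see only the first bad row per import attempt — exactly the fix-retry loop collect-all avoids.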
Stage 3 — Transformation: Domain Mapping
Once records are validated, transformation is pure and infallible — it's just a projection from ValidatedRecord to ImportedProduct. This is where Map shines: no error can occur, so there's no Result wrapping inside the transform function itself.
Pipeline/TransformStage.cs
public static class TransformStage
{
// Pure transformation — cannot fail, so returns T not Result<T>
// This is used as the function argument to .Map() in the pipeline
public static ImportedProduct ToProduct(ValidatedRecord record) =>
new(
Name: NormaliseName(record.Name),
Price: RoundPrice(record.Price),
Category: record.Category,
Stock: record.Stock);
// Batch overload — transforms a list via Map
public static IReadOnlyList<ImportedProduct> ToProducts(
IReadOnlyList<ValidatedRecord> records) =>
[..records.Select(ToProduct)]; // collection expression with spread
// ─── Private normalisations ─────────────────────────────────────
private static string NormaliseName(string name) =>
// Title case, trim internal whitespace
string.Join(' ',
name.Split(' ', StringSplitOptions.RemoveEmptyEntries)
.Select(word => char.ToUpper(word[0]) + word[1..].ToLower()));
private static decimal RoundPrice(decimal price) =>
Math.Round(price, 2, MidpointRounding.AwayFromZero);
}
// Transformation used directly in .Map() — no ceremony:
// result.Map(TransformStage.ToProducts)
// result.Map(r => r.Select(TransformStage.ToProduct).ToList())
Stage 4 — Persistence with Async Result
Persistence is async and can fail: database unavailable, unique constraint violation, timeout. This is where BindAsync connects an async step into the pipeline. The caller never sees exceptions — only a Task<Result<T>>.
Pipeline/PersistStage.cs
public interface IProductRepository
{
Task<int> InsertBatchAsync(
IReadOnlyList<ImportedProduct> products,
CancellationToken ct = default);
}
public class PersistStage(IProductRepository repository, ILogger<PersistStage> logger)
{
public async Task<Result<ImportSummary>> PersistAsync(
IReadOnlyList<ImportedProduct> products,
CancellationToken ct = default)
{
if (products.Count == 0)
return Result<ImportSummary>.Fail(
Error.Validation("PERSIST_EMPTY_BATCH", "No products to import."));
try
{
var inserted = await repository.InsertBatchAsync(products, ct);
logger.LogInformation(
"Persisted {Count} products successfully",
inserted);
return Result<ImportSummary>.Ok(new ImportSummary(
TotalInserted: inserted,
CompletedAt: DateTime.UtcNow));
}
catch (Exception ex) when (ex is not OperationCanceledException)
{
// Log the full exception internally; return a safe message to the caller
logger.LogError(ex,
"Database insert failed for batch of {Count} products", products.Count);
return Result<ImportSummary>.Fail(
Error.Unexpected(
"Failed to save products. Please try again or contact support.",
ex));
}
}
}
public record ImportSummary(int TotalInserted, DateTime CompletedAt);
Catch Broadly at the Boundary, Not Throughout the Pipeline
The try/catch in PersistStage is at the infrastructure boundary — the point where your code hands off to an external system (the database). That's the right place for broad exception catching. Inside pure pipeline stages (parse, validate, transform), there's no try/catch at all — those stages either produce a Result or have bugs. Catch only at boundaries where you're converting infrastructure exceptions into typed errors.
Composing the Full Pipeline
Four stages. Each one returns a Result (or Task<Result>). Compose them with Bind and BindAsync — the failure short-circuit means you never write a nested if-statement to check each step.
Program.cs — Pipeline Composition
// Full pipeline: text → parsed → validated → transformed → persisted
public class ImportPipeline(PersistStage persistStage, ILogger<ImportPipeline> logger)
{
public async Task<Result<ImportSummary>> RunAsync(
string csvText,
CancellationToken ct = default)
{
var result = await
// Stage 1: Parse — sync, returns Result<IReadOnlyList<RawRecord>>
ParseStage.Parse(csvText)
// Stage 2: Validate — sync bind, returns Result<IReadOnlyList<ValidatedRecord>>
.Bind(ValidateStage.ValidateBatch)
// Stage 3: Transform — sync map (can't fail), returns Result<IReadOnlyList<ImportedProduct>>
.Map(TransformStage.ToProducts)
// Stage 4: Persist — async bind, returns Task<Result<ImportSummary>>
.BindAsync(products => persistStage.PersistAsync(products, ct));
        // Log the outcome at the pipeline boundary — a plain switch statement
        // avoids forcing Match to invent a dummy value on the failure path
        switch (result)
        {
            case Result<ImportSummary>.Success s:
                logger.LogInformation(
                    "Import completed: {Inserted} products at {Time}",
                    s.Value.TotalInserted, s.Value.CompletedAt);
                break;
            case Result<ImportSummary>.Failure f:
                logger.LogWarning(
                    "Import failed: [{Code}] {Message} | Detail: {Detail}",
                    f.Error.Code, f.Error.Message, f.Error.Detail ?? "none");
                break;
        }
return result;
}
}
// Entry point usage
var pipeline = host.Services.GetRequiredService<ImportPipeline>();
var outcome = await pipeline.RunAsync(await File.ReadAllTextAsync("products.csv"));
Console.WriteLine(outcome.Match(
onSuccess: s => $"✓ Imported {s.TotalInserted} products.",
onFailure: e => $"✗ Import failed: {e.Message}"));
Without the Pipeline Pattern (for contrast)
// ❌ The equivalent without Result — exception-driven, invisible error paths
public async Task ImportAsync(string csvText)
{
if (string.IsNullOrWhiteSpace(csvText))
throw new ArgumentException("Input is empty"); // invisible to callers
List<RawRecord> rawRecords;
try { rawRecords = ParseCsv(csvText); }
catch (ParseException ex) { throw new ImportException("Parse failed", ex); } // re-wrap
List<ValidatedRecord> validated;
try { validated = Validate(rawRecords); }
catch (ValidationException ex) { throw new ImportException("Validation failed", ex); }
// ...every step needs its own try/catch, error type, and re-wrapping
// The happy path is buried in noise; failure paths are invisible in signatures
}
// ✅ With Result — every possible outcome is visible in the method signature
// Task<Result<ImportSummary>> RunAsync(string csvText, CancellationToken ct)
Structured Logging-Friendly Errors
The two-field error model (safe Message + internal Detail) pays off most visibly in logging. You can emit the full technical context to your log sink for debugging while exposing only the safe message in API responses, error emails, and third-party integrations.
Log Template Design
Structured Log Events for Pipeline Errors
// Log helper — centralises the safe vs debug split
public static class PipelineLogger
{
public static void LogPipelineError(
ILogger logger,
Error error,
string operation,
object? context = null)
{
// SAFE fields: go into the structured log — may be shipped to external sinks
// NEVER include Detail in fields that go to third-party log aggregators
using var safeScope = logger.BeginScope(new Dictionary<string, object?>
{
["ErrorCode"] = error.Code,
["ErrorMessage"] = error.Message, // safe — no internal paths/IDs
["ErrorSeverity"] = error.Severity.ToString(),
["Operation"] = operation
});
// Debug detail is logged separately at Debug level
// Configure your log sink to drop Debug in production if shipping externally
logger.LogDebug(
"Pipeline error detail — Code: {Code}, Detail: {Detail}",
error.Code,
error.Detail ?? "(none)");
// The warning uses only safe fields
logger.LogWarning(
"Pipeline operation '{Operation}' failed: [{Code}] {Message}",
operation, error.Code, error.Message);
}
}
// Usage in pipeline stages
logger.LogInformation(
"Import batch starting: {LineCount} lines, {RecordCount} candidate records",
lineCount, recordCount);
PipelineLogger.LogPipelineError(logger, error, "ValidateBatch",
context: new { RecordCount = records.Count });
// What a structured log event looks like in JSON (e.g. Seq, Elastic, Splunk)
// {
// "Timestamp": "2026-02-24T10:23:45Z",
// "Level": "Warning",
// "Message": "Pipeline operation 'ValidateBatch' failed: [VALIDATE_BATCH_FAILED] 3 record(s) failed validation.",
// "ErrorCode": "VALIDATE_BATCH_FAILED",
// "ErrorMessage": "3 record(s) failed validation.", <-- safe, no internal paths
// "ErrorSeverity": "Error",
// "Operation": "ValidateBatch"
// // Detail is NOT included here — only in Debug level events
// }
Error Codes Are Your Queryable Index
The Code field in the error model — e.g. "PARSE_INVALID_HEADER", "VALIDATE_BATCH_FAILED" — is designed to be queried in your log aggregator. In Seq: ErrorCode = 'PARSE_INVALID_HEADER'. In Kibana: filter on ErrorCode.keyword. You can build dashboards showing which error codes are trending, alert on UNEXPECTED_ERROR spikes, and trace the history of a specific error type without writing free-text search queries.
Unit Testing Every Stage
Because each stage is a pure function returning Result<T>, tests are remarkably straightforward. No mocking required for parse, validate, or transform — just call the function and assert on the result shape. The Match operator makes assertions read like specifications.
Parse Stage Tests
ParseStageTests.cs
public class ParseStageTests
{
[Fact]
public void Parse_EmptyString_ReturnsFailure()
{
var result = ParseStage.Parse("");
result.Should().BeOfType<Result<IReadOnlyList<RawRecord>>.Failure>();
result.Error.Code.Should().Be("PARSE_EMPTY_INPUT");
}
[Fact]
public void Parse_InvalidHeader_ReturnsFailure_WithCode()
{
const string csv = "WrongCol,Price,Category,Stock\nWidget,9.99,Electronics,10";
var result = ParseStage.Parse(csv);
result.IsFailure.Should().BeTrue();
result.Error.Code.Should().Be("PARSE_INVALID_HEADER");
}
[Fact]
public void Parse_ValidCsv_ReturnsCorrectRecordCount()
{
var csv = """
Name,Price,Category,Stock
Widget,9.99,Electronics,10
Gadget,24.50,Electronics,5
""";
var result = ParseStage.Parse(csv);
result.IsSuccess.Should().BeTrue();
result.Value.Should().HaveCount(2);
result.Value[0].Name.Should().Be("Widget");
}
[Fact]
public void Parse_MalformedRow_ReturnsFailure_WithDetail()
{
var csv = """
Name,Price,Category,Stock
Widget,9.99,Electronics
"""; // missing Stock column
var result = ParseStage.Parse(csv);
result.IsFailure.Should().BeTrue();
result.Error.Code.Should().Be("PARSE_MALFORMED_ROWS");
result.Error.Detail.Should().Contain("Line 2");
}
}
Validate Stage Tests
ValidateStageTests.cs
public class ValidateStageTests
{
private static RawRecord ValidRecord(
string name = "Widget",
string price = "9.99",
string category = "Electronics",
string stock = "10",
int line = 2) =>
new(line, name, price, category, stock);
[Fact]
public void ValidateRow_ValidRecord_ReturnsSuccess()
{
var result = ValidateStage.ValidateRow(ValidRecord());
result.IsSuccess.Should().BeTrue();
result.Value.Price.Should().Be(9.99m);
result.Value.Stock.Should().Be(10);
}
[Theory]
[InlineData("", "9.99", "Electronics", "10", "VALIDATE_ROW_FAILED")] // empty name
[InlineData("Widget", "-5", "Electronics", "10", "VALIDATE_ROW_FAILED")] // negative price
[InlineData("Widget", "9.99", "Weapons", "10", "VALIDATE_ROW_FAILED")] // unknown category
[InlineData("Widget", "9.99", "Electronics", "-1", "VALIDATE_ROW_FAILED")] // negative stock
[InlineData("Widget", "abc", "Electronics", "10", "VALIDATE_ROW_FAILED")] // non-numeric price
public void ValidateRow_InvalidFields_ReturnsFailure(
string name, string price, string category, string stock, string expectedCode)
{
var raw = ValidRecord(name, price, category, stock);
var result = ValidateStage.ValidateRow(raw);
result.IsFailure.Should().BeTrue();
result.Error.Code.Should().Be(expectedCode);
}
[Fact]
public void ValidateBatch_MultipleInvalidRows_CollectsAllErrors()
{
var records = new[]
{
ValidRecord(price: "abc", line: 2), // bad price
ValidRecord(stock: "-5", line: 3), // bad stock
ValidRecord(line: 4) // valid
};
var result = ValidateStage.ValidateBatch(records);
result.IsFailure.Should().BeTrue();
result.Error.Detail.Should().Contain("Line 2");
result.Error.Detail.Should().Contain("Line 3");
// Detail mentions both failures, not just the first one
}
}
Full Pipeline Integration Test
PipelineIntegrationTests.cs
public class PipelineIntegrationTests
{
private const string ValidCsv = """
Name,Price,Category,Stock
Widget Pro,9.99,Electronics,50
Travel Mug,14.50,Hardware,100
""";
[Fact]
public async Task Pipeline_ValidInput_ReturnsSuccessWithCorrectCount()
{
var repo = new InMemoryProductRepository();
var logger = NullLogger<ImportPipeline>.Instance;
var persist = new PersistStage(repo, NullLogger<PersistStage>.Instance);
var pipeline = new ImportPipeline(persist, logger);
var result = await pipeline.RunAsync(ValidCsv);
result.IsSuccess.Should().BeTrue();
result.Value.TotalInserted.Should().Be(2);
repo.Products.Should().HaveCount(2);
repo.Products[0].Name.Should().Be("Widget Pro");
}
[Fact]
public async Task Pipeline_EmptyInput_ShortCircuitsAtParseStage()
{
var repo = new InMemoryProductRepository();
var persist = new PersistStage(repo, NullLogger<PersistStage>.Instance);
var pipeline = new ImportPipeline(persist, NullLogger<ImportPipeline>.Instance);
var result = await pipeline.RunAsync("");
result.IsFailure.Should().BeTrue();
result.Error.Code.Should().Be("PARSE_EMPTY_INPUT");
repo.Products.Should().BeEmpty(); // persist was never called
}
[Fact]
public async Task Pipeline_ValidationFailure_DoesNotCallPersist()
{
var repo = new InMemoryProductRepository();
var persist = new PersistStage(repo, NullLogger<PersistStage>.Instance);
var pipeline = new ImportPipeline(persist, NullLogger<ImportPipeline>.Instance);
var csvWithBadRow = """
Name,Price,Category,Stock
Widget,NOT_A_PRICE,Electronics,10
""";
var result = await pipeline.RunAsync(csvWithBadRow);
result.IsFailure.Should().BeTrue();
result.Error.Code.Should().Be("VALIDATE_BATCH_FAILED");
repo.Products.Should().BeEmpty();
}
// Lightweight in-memory repository for tests — no DbContext needed
private class InMemoryProductRepository : IProductRepository
{
public List<ImportedProduct> Products { get; } = [];
public Task<int> InsertBatchAsync(
IReadOnlyList<ImportedProduct> products,
CancellationToken ct = default)
{
Products.AddRange(products);
return Task.FromResult(products.Count);
}
}
}
Test the Failure Paths Explicitly
Most unit test suites over-index on the happy path. For a pipeline built on Result&lt;T&gt;, the failure paths are first-class code — give them first-class test coverage too. The integration tests above explicitly verify that repo.Products remains empty after a parse or validation failure. This proves the short-circuit works: the persist stage was never called, which is exactly the guarantee the pipeline is supposed to provide.
Next Steps & Further Reading
The Text Import pipeline is small by design — the patterns it demonstrates are the point, not the business logic. These same patterns apply to any domain where you compose multiple fallible operations: HTTP request pipelines, event sourcing command handlers, multi-step form submissions, data transformation jobs.
Three natural extensions, in rough order of difficulty:
Add a partial-success mode to the pipeline — a batch that succeeds on 80% of rows and returns both the inserted records and a list of row-level errors.
Integrate the pipeline into an ASP.NET Core Minimal API endpoint: ParseStage.Parse(csvText).Bind(...).Map(...).MatchAsync(...) maps directly to an HTTP response with zero try/catch at the controller layer.
Extend the error model with an ErrorList variant that carries multiple simultaneous failures rather than only the first — the final step toward full railway-oriented programming.
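The Minimal API integration mentioned above can be sketched like this — assuming the tutorial's ImportPipeline is registered in DI and Result&lt;T&gt; exposes a Match(onSuccess, onFailure) helper; the route and response payload shapes are illustrative, not part of the tutorial's code:

```csharp
// Sketch — ImportPipeline comes from DI; Match is an assumed helper on Result<T>.
app.MapPost("/import", async (HttpRequest request, ImportPipeline pipeline) =>
{
    using var reader = new StreamReader(request.Body);
    var csv = await reader.ReadToEndAsync();

    var result = await pipeline.RunAsync(csv);

    // One Match replaces the try/catch tower: Success -> 200, Failure -> 400.
    // Only the safe Message crosses the HTTP boundary; Detail stays in the logs.
    return result.Match(
        onSuccess: report => Results.Ok(new { inserted = report.TotalInserted }),
        onFailure: error => Results.BadRequest(new { code = error.Code, message = error.Message }));
});
```

Note that there is no catch block anywhere in the endpoint — every expected failure arrives as data and is translated to a status code in one place.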
Frequently Asked Questions
Why use Result<T> instead of throwing exceptions for errors?
Exceptions are for exceptional circumstances — things that should never happen under normal operation. Validation failures, parsing errors, and business rule violations are expected outcomes, not exceptions. Using exceptions for these makes error paths invisible in method signatures and forces callers to know which exceptions to catch. Result<T> makes the failure path explicit in the type system — the compiler forces you to handle both Success and Failure cases.
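As a minimal illustration of the signature difference (a stripped-down sketch — the tutorial's full Result&lt;T&gt; carries a structured Error rather than a plain string, but the shape of the idea is the same):

```csharp
using System.Globalization;

// Stripped-down Result: failure is visible in the signature, not hidden in a throw.
public readonly record struct Result<T>(bool IsSuccess, T? Value, string? Error)
{
    public static Result<T> Success(T value) => new(true, value, null);
    public static Result<T> Failure(string error) => new(false, default, error);
}

public static class PriceParser
{
    // Compare with `decimal ParsePrice(string)` + throw: this version forces the
    // caller to confront the failure case instead of discovering it at runtime.
    public static Result<decimal> ParsePrice(string input) =>
        decimal.TryParse(input, NumberStyles.Number, CultureInfo.InvariantCulture, out var price)
            && price >= 0
            ? Result<decimal>.Success(price)
            : Result<decimal>.Failure($"'{input}' is not a valid non-negative price");
}
```

ParsePrice("9.99") yields a Success carrying 9.99m; ParsePrice("abc") yields a Failure — and neither path involves a throw or a catch.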
What is the difference between Map and Bind on a Result type?
Map transforms the success value without changing the wrapper — the function returns a plain T, not a Result<T>. Bind threads a Result through a step that itself returns a Result<T>, flattening the nested Result<Result<T>> that would otherwise occur. Use Map for pure transformations that can't fail; use Bind for operations that might produce their own errors — validation, parsing, database writes.
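In code the distinction looks like this — sketched against a simplified Result&lt;T&gt; with IsSuccess/Value/Error members, not the tutorial's exact type:

```csharp
public static class ResultExtensions
{
    // Map: fn cannot fail — it returns a plain TNext, and the wrapper is preserved.
    public static Result<TNext> Map<T, TNext>(this Result<T> result, Func<T, TNext> fn) =>
        result.IsSuccess
            ? Result<TNext>.Success(fn(result.Value!))
            : Result<TNext>.Failure(result.Error!);

    // Bind: fn returns its own Result<TNext> — Bind flattens what would otherwise
    // be Result<Result<TNext>> and short-circuits on the first failure.
    public static Result<TNext> Bind<T, TNext>(this Result<T> result, Func<T, Result<TNext>> fn) =>
        result.IsSuccess ? fn(result.Value!) : Result<TNext>.Failure(result.Error!);
}
```

A typical chain mixes both (ParsePrice and quantity here are hypothetical names): ParsePrice(input).Bind(p => p > 0 ? Result&lt;decimal&gt;.Success(p) : Result&lt;decimal&gt;.Failure("non-positive")).Map(p => p * quantity) — Bind for the fallible check, Map for the pure multiplication.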
How do I keep error messages safe for external consumers without losing debug detail?
Store two representations in your error model: a safe user-facing Message (suitable for API responses, UI display, and logs shipped to third parties) and a Detail string (full technical context — never exposed externally). Log both to your internal sink but only return Message to callers. This means you never have to choose between useful developer context and information-disclosure safety.
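One way to shape that split — a sketch only; member names beyond Code/Message/Detail are assumptions, not the tutorial's exact error type:

```csharp
// Safe-by-default error: Message is fit for external eyes, Detail is not.
public sealed record Error(string Code, string Message, string? Detail = null)
{
    // The only representation that should ever leave the process boundary.
    public object ToPublicPayload() => new { code = Code, message = Message };
}

// Internally, log both fields as structured properties, e.g. with ILogger:
// logger.LogWarning("Import failed: {ErrorCode} {ErrorMessage} {ErrorDetail}",
//     error.Code, error.Message, error.Detail);
```

Because Detail never appears in ToPublicPayload, a new API endpoint cannot accidentally leak stack traces or file paths — the safe boundary is enforced by the type, not by reviewer discipline.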
Should I build my own Result<T> or use a library?
For a new project, building your own takes about 60 lines and gives you full control over the error model shape. For larger teams or projects already using multiple functional patterns, ErrorOr and CSharpFunctionalExtensions are both well-maintained and available on NuGet. The patterns in this tutorial — Map, Bind, Match — are identical regardless of which implementation you use, so this code translates directly.
Does railway-oriented programming work well with async/await in C#?
Yes, with small adaptations. Your Map and Bind methods need async-aware overloads that accept Func<T, Task<TNext>> and return Task<Result<TNext>>. The key rule: never nest Task<Result<Task<T>>> — if a bind step is async, the outer type should be Task<Result<T>> directly. The tutorial shows both sync and async stages wired together cleanly using BindAsync.
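Those overloads can be sketched as follows, again against a simplified Result&lt;T&gt; with IsSuccess/Value/Error members rather than the tutorial's exact type:

```csharp
public static class ResultAsyncExtensions
{
    // A sync Result flowing into an async step: the outer type becomes Task<Result<TNext>>.
    public static async Task<Result<TNext>> BindAsync<T, TNext>(
        this Result<T> result, Func<T, Task<Result<TNext>>> fn) =>
        result.IsSuccess ? await fn(result.Value!) : Result<TNext>.Failure(result.Error!);

    // A Task<Result<T>> flowing into the next async step — awaits the previous
    // stage first, so the type stays Task<Result<TNext>>, never a nested Task.
    public static async Task<Result<TNext>> BindAsync<T, TNext>(
        this Task<Result<T>> resultTask, Func<T, Task<Result<TNext>>> fn)
    {
        var result = await resultTask;
        return result.IsSuccess ? await fn(result.Value!) : Result<TNext>.Failure(result.Error!);
    }
}
```

With both overloads in place, sync and async stages chain uniformly — e.g. Parse(csv).BindAsync(ValidateAsync).BindAsync(PersistAsync) — and a failure at any stage skips every await downstream.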