🐳 Hands-On Tutorial

Dockerizing ASP.NET Core: Multi-Stage Builds, Clean Images & a Production-Ready Ship Workflow

The Dockerfile that ships with a new dotnet new webapi project works. It builds. It runs. It is also running as root, copying your entire source tree (bin and obj folders included) into the build context, and producing an image three times larger than it needs to be — because no one ever updated the template defaults.

This tutorial replaces that template with a Dockerfile that is actually production-ready: multi-stage, minimal, non-root, health-checked, and cache-optimised. You will build a containerized ASP.NET Core Todo API end to end, then ship it to a registry with a workflow you can repeat on every project from here forward.

What You'll Build

A containerized ASP.NET Core Minimal API (tutorials/cloud-native/DockerizedTodoApi/) that demonstrates every layer of a production container workflow:

  • Multi-stage Dockerfile — SDK stage for restore/build/publish, aspnet:8.0 runtime stage for the final image, no build tooling in production
  • Non-root security baseline — a dedicated app user, correct file ownership, port 8080 instead of 80
  • Environment-variable config — no secrets in the image, runtime injection via -e / --env-file, full appsettings.* layering preserved
  • Health check stack — ASP.NET Core /health endpoint for Kubernetes probes + Docker HEALTHCHECK instruction for standalone and Compose deployments
  • Image hygiene — a .dockerignore that prevents cache poisoning, pinned base image tags, slim layer ordering for maximum cache reuse
  • Local run workflow — build, run, inspect logs, exec into the container, and debug a misbehaving container
  • Ship workflow — tag and push to Docker Hub and GHCR, plus notes for Azure Container Apps and Kubernetes deployment

Project Setup & Folder Layout

A Minimal API Todo service — simple enough that the Dockerfile is the focus, not the application logic. The project structure mirrors what a real production API would look like before containerization.

Terminal — Scaffold the Project
dotnet new webapi -n DockerizedTodoApi --no-https
# Minimal APIs are the default for the webapi template on .NET 8;
# no extra flag is needed (pass --use-controllers to opt out)
cd DockerizedTodoApi

# Health checks are included in the ASP.NET Core shared framework (.NET 8)
# No additional package needed — AddHealthChecks() is available in Program.cs directly
# If using a specific DB health check (e.g. SQL Server), add the provider package:
# dotnet add package AspNetCore.HealthChecks.SqlServer

# Verify the app runs locally before touching Docker
dotnet run
# Expected: Application started. Press Ctrl+C to shut down.
# Expected: GET http://localhost:5000/todos → 200 OK
# (the port comes from Properties/launchSettings.json; substitute yours if it differs)

Target Folder Layout

DockerizedTodoApi/ — Complete File Tree
DockerizedTodoApi/
├── Dockerfile                  # multi-stage, non-root, health-checked
├── .dockerignore               # prevents bin/obj/secrets entering build context
├── docker-compose.yml          # local run with health-check dependency
├── appsettings.json            # base config — committed, no secrets
├── appsettings.Development.json
├── appsettings.Production.json # structure only — runtime env vars supply values
├── Models/
│   └── TodoItem.cs
├── Endpoints/
│   └── TodoEndpoints.cs
├── Health/
│   └── DatabaseHealthCheck.cs  # custom health check registered in Section 6
└── Program.cs

The Todo API Domain

Models/TodoItem.cs & Endpoints/TodoEndpoints.cs
// Models/TodoItem.cs
public record TodoItem(int Id, string Title, bool IsComplete);

// Endpoints/TodoEndpoints.cs
public static class TodoEndpoints
{
    private static readonly List<TodoItem> _store =
    [
        new(1, "Write the Dockerfile", false),
        new(2, "Add health checks",    false),
        new(3, "Ship to registry",     false)
    ];

    public static void MapTodoEndpoints(this WebApplication app)
    {
        var group = app.MapGroup("/todos").WithTags("Todos");

        group.MapGet("/",          ()        => Results.Ok(_store));
        group.MapGet("/{id:int}", (int id)   => _store.FirstOrDefault(t => t.Id == id)
                                               is { } item ? Results.Ok(item) : Results.NotFound());
        group.MapPost("/", (TodoItem item)   => { _store.Add(item); return Results.Created($"/todos/{item.Id}", item); });
        group.MapPut("/{id:int}", (int id, TodoItem updated) =>
        {
            var idx = _store.FindIndex(t => t.Id == id);
            if (idx == -1) return Results.NotFound();
            _store[idx] = updated;
            return Results.NoContent();
        });
        group.MapDelete("/{id:int}", (int id) =>
        {
            _store.RemoveAll(t => t.Id == id);
            return Results.NoContent();
        });
    }
}
Verify Locally Before Containerizing

Always confirm the application starts and responds correctly with dotnet run before writing a single line of Dockerfile. A broken application produces a broken container — and debugging application logic inside a container is harder than debugging it directly. The Dockerfile should be the last piece you add, not the first.

Docker Concepts Every .NET Developer Needs

Not a Docker 101 — just the mental model gaps that cause the most confusion when containerizing .NET applications for the first time.

Images vs Containers vs Registries

An image is a read-only, layered snapshot — the blueprint. A container is a running instance of an image — the process. A registry is where images are stored and pulled from (Docker Hub, GHCR, ACR). You build images, push them to registries, and run containers from pulled images.

The distinction matters for .NET because dotnet publish produces the artifact that goes into the image. The image is built once and deployed many times as containers. Each container gets its own writable layer on top of the read-only image layers — shared image layers are never duplicated on disk.

Tags and Why You Should Pin Them

Image Tags — What They Mean and Which to Use
# :latest — always points to the most recent published tag
# NEVER use in production Dockerfiles — you get a different image on every pull
FROM mcr.microsoft.com/dotnet/aspnet:latest          # ❌ unpinned

# Major.minor — stable channel, receives patch updates automatically
FROM mcr.microsoft.com/dotnet/aspnet:8.0             # ✅ good default

# Major.minor.patch — exact version, fully deterministic
FROM mcr.microsoft.com/dotnet/aspnet:8.0.13          # ✅ maximum reproducibility

# OS variant suffixes
FROM mcr.microsoft.com/dotnet/aspnet:8.0             # Debian bookworm-slim (recommended default)
FROM mcr.microsoft.com/dotnet/aspnet:8.0-alpine      # Alpine — smaller, musl libc (test thoroughly)
FROM mcr.microsoft.com/dotnet/aspnet:8.0-jammy       # Ubuntu 22.04

# SDK image — for the build stage ONLY, never the final runtime stage
FROM mcr.microsoft.com/dotnet/sdk:8.0                # ✅ build stage
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS final       # ❌ SDK in production image — 800MB+

Port Mapping: Container Ports vs Host Ports

Port Mapping — Container Inside vs Host Outside
# Container listens on 8080 internally (non-root cannot bind 80)
# Host maps 5000 → container 8080
docker run -p 5000:8080 my-todo-api

# Format: -p HOST_PORT:CONTAINER_PORT
# You access http://localhost:5000 — Docker routes it to :8080 inside the container

# Expose multiple ports
docker run -p 5000:8080 -p 5001:8081 my-todo-api

# Bind to a specific host interface (more secure — not accessible from other machines)
docker run -p 127.0.0.1:5000:8080 my-todo-api

# EXPOSE in the Dockerfile is documentation only — it does NOT publish the port
# You still need -p at docker run time, or ports: in docker-compose.yml
ASP.NET Core 8 Defaults to Port 8080 in Containers

Since .NET 8, Microsoft changed the default container port for ASP.NET Core from 80 to 8080. This aligns with the non-root security baseline — processes running as non-root cannot bind to ports below 1024. If you are upgrading a containerized .NET 7 or earlier project, update your -p flags, Compose ports:, and Kubernetes containerPort from 80 to 8080. The environment variable ASPNETCORE_HTTP_PORTS=8080 is also set by default in the official .NET 8 base images.
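If you prefer the port to be explicit in your own Dockerfile rather than inherited silently from the base image, a two-line fragment (restating the .NET 8 default, so it changes nothing at runtime) documents the intent:

```dockerfile
# Restates the .NET 8 image default: the app listens on 8080 inside the container
ENV ASPNETCORE_HTTP_PORTS=8080
EXPOSE 8080
```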

The Multi-Stage Dockerfile: Build Once, Run Lean

A multi-stage Dockerfile uses multiple FROM instructions, each defining a named stage. Only the final stage ends up in the image you ship — earlier stages are discarded after their artifacts are copied forward.

The Complete Production Dockerfile

Dockerfile — Multi-Stage, Annotated
# ──────────────────────────────────────────────────────────────────
# Stage 1: restore
# Separate restore from build to maximise Docker layer cache.
# The restore layer rebuilds only when .csproj files change —
# not when source files change.
# ──────────────────────────────────────────────────────────────────
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS restore
WORKDIR /src

# Copy project file(s) first — triggers restore layer rebuild only on csproj change
COPY ["DockerizedTodoApi/DockerizedTodoApi.csproj", "DockerizedTodoApi/"]
RUN dotnet restore "DockerizedTodoApi/DockerizedTodoApi.csproj" --locked-mode

# ──────────────────────────────────────────────────────────────────
# Stage 2: build
# Copy all source, then build — restore cache is already warm
# ──────────────────────────────────────────────────────────────────
FROM restore AS build
COPY . .
RUN dotnet build "DockerizedTodoApi/DockerizedTodoApi.csproj" \
    -c Release \
    --no-restore \
    -o /build

# ──────────────────────────────────────────────────────────────────
# Stage 3: publish
# Build on the compiled source from Stage 2 and produce the
# framework-dependent publish output; no SDK tooling ships forward
# ──────────────────────────────────────────────────────────────────
FROM build AS publish
RUN dotnet publish "DockerizedTodoApi/DockerizedTodoApi.csproj" \
    -c Release \
    --no-restore \
    -o /publish \
    /p:UseAppHost=false          # no native binary wrapper — not needed in Linux container

# ──────────────────────────────────────────────────────────────────
# Stage 4: final runtime image
# Only the publish output lands here — no SDK, no source, no build tools
# ──────────────────────────────────────────────────────────────────
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
WORKDIR /app

# Copy only the published output from Stage 3
COPY --from=publish /publish .

# Non-root user setup — covered in detail in Section 4
RUN addgroup --system appgroup && \
    adduser  --system --ingroup appgroup --no-create-home appuser
USER appuser

# Document the port the app listens on (informational — does NOT publish it)
EXPOSE 8080

# Health check — covered in detail in Section 6
# (curl is not guaranteed in the aspnet base image: verify it exists,
#  or install it earlier in this stage, before the USER switch)
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1

ENTRYPOINT ["dotnet", "DockerizedTodoApi.dll"]

Understanding the Layer Cache Strategy

Why Separate restore From build — Cache Impact
# Without the separate restore stage (naive approach):
COPY . .                         # ← any source file change invalidates this layer
RUN dotnet restore               # ← re-downloads all NuGet packages every time
RUN dotnet publish -c Release    # ← builds from scratch

# With the separate restore stage (correct approach):
COPY *.csproj .                  # ← only csproj files — rarely changes
RUN dotnet restore --locked-mode # ← cached unless .csproj changes
COPY . .                         # ← source files
RUN dotnet build --no-restore    # ← restore cache is already warm

# Result: editing Program.cs and rebuilding takes ~5 seconds (build only)
# instead of ~60 seconds (restore + build). On CI this compounds significantly.

# --locked-mode requires packages.lock.json to exist and be committed.
# Generate it once: dotnet restore --use-lock-file
Generate packages.lock.json Before Using --locked-mode

The --locked-mode flag makes dotnet restore use the committed packages.lock.json exactly, refusing to resolve different versions. This is the correct behaviour for a reproducible build: the same packages every time, regardless of what is published to NuGet between your builds. Generate the lock file once by running dotnet restore --use-lock-file locally and commit the resulting packages.lock.json. Without it, --locked-mode fails the build with a clear error message.
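If you would rather not rely on anyone remembering the flag, the same behaviour can be switched on in the project file; RestorePackagesWithLockFile is the standard MSBuild property for this (sketch):

```xml
<!-- DockerizedTodoApi.csproj: every restore now writes/uses packages.lock.json -->
<PropertyGroup>
  <RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>
</PropertyGroup>
```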

UseAppHost=false Is Important for Linux Containers

/p:UseAppHost=false tells the publish step not to generate a native executable wrapper alongside the DLL. On Linux containers you never need this wrapper — you launch the app with dotnet DockerizedTodoApi.dll. Including the native wrapper adds unnecessary bytes and can cause confusion because both the DLL and the executable appear in the publish output. Omit UseAppHost=false only if you specifically need to run the native executable instead of dotnet as the entry point.
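The same setting can live in the project file instead of the Dockerfile, so local and containerized publishes behave identically; a sketch using the standard MSBuild property:

```xml
<!-- Equivalent to passing /p:UseAppHost=false on every publish -->
<PropertyGroup>
  <UseAppHost>false</UseAppHost>
</PropertyGroup>
```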

Non-Root User & File Permission Baseline

By default, Docker containers run as root (UID 0). This is not the same as root on the host — container isolation limits the blast radius — but it still means your application process has the highest privilege level available inside the container. A non-root user is a practical baseline that every production container should enforce.

Dockerfile — Non-Root User, Correct Ownership Pattern
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
WORKDIR /app

# Copy the publish output first (owned by root at this point)
COPY --from=publish /publish .

# Create a system group and system user with no home directory
# --system users have UID/GID in the system range (typically < 1000)
# --no-create-home reduces the filesystem footprint
RUN addgroup --system appgroup && \
    adduser  --system \
             --ingroup appgroup \
             --no-create-home \
             --disabled-password \
             appuser

# Change ownership of the app directory to the new user
# This ensures the process can read its own files but cannot write outside /app
RUN chown -R appuser:appgroup /app

# Switch to the non-root user — all subsequent RUN, CMD, ENTRYPOINT run as appuser
USER appuser

EXPOSE 8080
ENTRYPOINT ["dotnet", "DockerizedTodoApi.dll"]

Using the Built-In Non-Root User in .NET 8 Images

Dockerfile — Shortcut: Use Microsoft's Built-In app User
# .NET 8 base images ship with a pre-created non-root user named 'app' (UID 1654)
# This is the simplest non-root setup and Microsoft's recommended approach

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
WORKDIR /app

# Use COPY --chown to set ownership in a single layer (more efficient than RUN chown)
COPY --from=publish --chown=app:app /publish .

# Switch to the built-in non-root app user
USER app

EXPOSE 8080
ENTRYPOINT ["dotnet", "DockerizedTodoApi.dll"]

# Verify the running user from inside the container:
# docker exec <container-id> whoami  → app
# docker exec <container-id> id      → uid=1654(app) gid=1654(app)
Non-Root Containers Cannot Bind to Port 80

Linux prevents non-root processes from binding to ports below 1024. If you configure your ASP.NET Core app to listen on port 80 and run it as a non-root user, the container will fail to start with Permission denied. Use port 8080 inside the container — it is the .NET 8 default and has no privilege restriction. Map it to any host port you need with -p HOST:8080. Update ASPNETCORE_HTTP_PORTS=8080 in your environment if the app was previously configured for port 80.

Verify Kubernetes PodSecurity Admission Compatibility

If you are deploying to a Kubernetes cluster with the Restricted PodSecurity admission profile enforced, your pods must run as non-root, must not allow privilege escalation, and must drop all capabilities. The non-root user pattern in this section satisfies the first requirement; the others are declared in the pod's securityContext. A read-only root filesystem is not required by the Restricted profile, but it is a common additional hardening step: add securityContext.readOnlyRootFilesystem: true to your container spec and mount writable emptyDir volumes for any paths the app writes to (ASP.NET Core writes temp files to /tmp and, by default, stores data protection keys under the user's profile, so both locations need writable mounts).
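As a sketch, the hardening described above maps onto a pod spec like this (the volume name tmp and the image tag are illustrative):

```yaml
spec:
  containers:
    - name: todo-api
      image: ghcr.io/yourusername/todo-api:sha-abc1234
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
      volumeMounts:
        - name: tmp
          mountPath: /tmp    # writable scratch space on an otherwise read-only filesystem
  volumes:
    - name: tmp
      emptyDir: {}
```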

Configuration via Environment Variables

Secrets must never appear in a Docker image. Image layers are permanent, history is queryable, and registries are not always private. Every piece of configuration that varies between environments — connection strings, API keys, signing secrets — arrives at runtime via environment variables, not at build time via COPY or ENV.

How ASP.NET Core Config Layering Works in a Container

Configuration Precedence Inside a Container
# ASP.NET Core's config provider stack runs the same inside a container:
# appsettings.json → appsettings.{ASPNETCORE_ENVIRONMENT}.json → Environment Variables

# Set the environment at docker run time
docker run \
  -e ASPNETCORE_ENVIRONMENT=Production \
  -e ConnectionStrings__DefaultConnection="Server=db;Database=todos;..." \
  -e Jwt__SigningKey="runtime-secret-never-in-image" \
  -p 5000:8080 \
  my-todo-api

# Double underscore __ is the hierarchy separator for env vars in .NET
# ConnectionStrings__DefaultConnection → ConnectionStrings:DefaultConnection in config

# Load from a file for local development (file must be gitignored)
docker run --env-file .env.local -p 5000:8080 my-todo-api

# .env.local (gitignored — never committed):
# ASPNETCORE_ENVIRONMENT=Development
# ConnectionStrings__DefaultConnection=Data Source=dev.db
# Jwt__SigningKey=local-dev-signing-key-not-a-real-secret

appsettings.Production.json — Structure Without Values

appsettings.Production.json — Keys Only, No Secrets
{
  "Logging": {
    "LogLevel": {
      "Default": "Warning",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "ConnectionStrings": {
    "DefaultConnection": ""
  },
  "Jwt": {
    "Issuer":     "https://your-domain.com",
    "Audience":   "todos-api",
    "SigningKey":  ""
  },
  "AllowedHosts": "*"
}
ENV in a Dockerfile Bakes the Value Into the Image Layer

ENV MY_SECRET=value in a Dockerfile writes that value into the image layer permanently. Anyone who runs docker history <image> or inspects the image manifest can read it — even after you overwrite it with -e at runtime. Use ENV only for non-sensitive configuration defaults like ASPNETCORE_HTTP_PORTS=8080 or DOTNET_RUNNING_IN_CONTAINER=true. Never use ENV for passwords, API keys, connection strings, or signing secrets.

Docker Compose secrets Block for Local Development

For local multi-container development with Docker Compose, the secrets: block mounts secret values as files at /run/secrets/<name> rather than environment variables. ASP.NET Core can read these using AddKeyPerFile("/run/secrets") in your configuration builder. This approach avoids exposing secrets in docker inspect output (which shows environment variables) and more closely mimics how Kubernetes mounts secrets as files. It is particularly useful when working with Docker Compose before moving to Kubernetes.
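A minimal sketch of that pattern (the secret name and file path are illustrative):

```yaml
# docker-compose.yml: secrets mounted as files under /run/secrets/
services:
  api:
    image: my-todo-api
    ports:
      - "5000:8080"
    secrets:
      - jwt_signing_key
secrets:
  jwt_signing_key:
    file: ./secrets/jwt_signing_key.txt   # gitignored local file
```

On the app side, builder.Configuration.AddKeyPerFile("/run/secrets", optional: true) turns each mounted file into a configuration key named after the file.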

Health Checks: Application Endpoint & Docker Instruction

There are two distinct health check layers in a containerized ASP.NET Core application. They serve different orchestrators and have different semantics. Both are needed for a complete production setup.

Layer 1 — ASP.NET Core Health Check Endpoint

Program.cs — Health Check Registration & Endpoint
var builder = WebApplication.CreateBuilder(args);

// Register health checks — add as many as needed
builder.Services.AddHealthChecks()
    // Built-in: always returns Healthy
    // Use for liveness — "is the process alive?"
    .AddCheck("self", () => HealthCheckResult.Healthy("Process is running"))

    // Custom check: "can we reach the database?"
    // Use for readiness — "is the app ready to serve traffic?"
    .AddCheck<DatabaseHealthCheck>("database", tags: ["readiness"]);

var app = builder.Build();

app.MapTodoEndpoints();

// Liveness probe — Kubernetes restarts the pod if this returns unhealthy
// Returns: 200 OK with { "status": "Healthy" }
app.MapHealthChecks("/health/live", new HealthCheckOptions
{
    Predicate = _ => false    // run no registered checks; a 200 here just means the process is alive
});

// Readiness probe — Kubernetes stops sending traffic if this returns unhealthy
// Returns: 200 OK when all readiness checks pass, 503 Service Unavailable otherwise
app.MapHealthChecks("/health/ready", new HealthCheckOptions
{
    Predicate = check => check.Tags.Contains("readiness")
});

// Combined endpoint — used by Docker HEALTHCHECK and load balancers
app.MapHealthChecks("/health");

app.Run();

Custom Health Check — Database Connectivity

Health/DatabaseHealthCheck.cs
public class DatabaseHealthCheck(IConfiguration config) : IHealthCheck
{
    public async Task<HealthCheckResult> CheckHealthAsync(
        HealthCheckContext context,
        CancellationToken  cancellationToken = default)
    {
        try
        {
            var connStr = config.GetConnectionString("DefaultConnection");
            if (string.IsNullOrEmpty(connStr))
                return HealthCheckResult.Degraded("Connection string is not configured");

            // Simulated connectivity check: in production, open a real
            // connection and run a trivial query (e.g. SELECT 1) instead
            await Task.Delay(10, cancellationToken);

            return HealthCheckResult.Healthy("Database is reachable");
        }
        catch (Exception ex)
        {
            return HealthCheckResult.Unhealthy(
                "Database connectivity check failed",
                exception: ex);
        }
    }
}

Layer 2 — Docker HEALTHCHECK Instruction

Dockerfile — HEALTHCHECK Instruction & What Each Flag Does
# HEALTHCHECK tells the Docker daemon how to probe the container
# Status appears in: docker ps, docker inspect, and compose depends_on conditions

# --interval=30s     how often Docker runs the check
# --timeout=10s      how long a single probe may run before it counts as failed
# --start-period=15s grace period on startup before failures count
# --retries=3        consecutive failures before the container is marked unhealthy
HEALTHCHECK --interval=30s --timeout=10s --start-period=15s --retries=3 \
  CMD curl -f http://localhost:8080/health || exit 1

# curl must be available in the base image: verify before relying on it
#   docker run --rm mcr.microsoft.com/dotnet/aspnet:8.0 which curl
# If it is missing, install it in the final stage (before the USER switch):
#   Debian: RUN apt-get update && apt-get install -y --no-install-recommends curl \
#               && rm -rf /var/lib/apt/lists/*
#   Alpine: RUN apk add --no-cache curl
# wget is an equivalent probe where it is available:
#   HEALTHCHECK CMD wget -qO- http://localhost:8080/health || exit 1

# Check container health status:
docker inspect --format='{{.State.Health.Status}}' <container-id>
# Possible values: starting | healthy | unhealthy | none
The Docker HEALTHCHECK Does Not Replace Kubernetes Probes

When running on Kubernetes, the Docker HEALTHCHECK instruction is ignored — Kubernetes manages liveness and readiness through its own probe mechanism configured in the pod spec. The Docker HEALTHCHECK is only evaluated by the Docker daemon itself (standalone docker run or docker compose). Do not rely on it for production availability management on Kubernetes. Define livenessProbe and readinessProbe in your Kubernetes deployment manifest pointing to /health/live and /health/ready respectively.

docker compose depends_on Uses HEALTHCHECK Status

In a docker-compose.yml, setting depends_on: condition: service_healthy makes Compose wait until the dependency container reports a healthy Docker HEALTHCHECK status before starting the dependent service. This is valuable for local development: your API container won't start until the database container passes its health check, preventing the "connection refused on startup" race condition. Without HEALTHCHECK in the database image, the condition falls back to service_started (container running, not necessarily ready), and the race condition returns.
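A sketch of that wiring (the Postgres image and its pg_isready health command are illustrative; substitute your own database):

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: local-dev-only
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5
  api:
    image: my-todo-api
    ports:
      - "5000:8080"
    depends_on:
      db:
        condition: service_healthy   # wait for the db healthcheck to pass
```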

Image Hygiene: .dockerignore, Layer Caching & Pinned Tags

A clean image is deterministic, fast to build, and contains nothing it does not need to run. Three practices achieve this: a thorough .dockerignore, a layer order that maximises cache reuse, and pinned base image tags.

The .dockerignore File

.dockerignore — Complete File for ASP.NET Core Projects
## Build output — the single most important exclusion
## Without this, every dotnet build invalidates all COPY layers in Docker
**/bin/
**/obj/

## Git history — never needed in an image, can be large
.git/
.gitignore
.gitattributes

## IDE and editor files
.vs/
.vscode/
*.user
*.suo

## Test projects — not needed in production images
**/*.Tests/
**/*.Test/
**/*Tests.csproj

## Secret files — critical: these must never enter the build context
**/.env
**/.env.*
**/secrets.json
**/appsettings.Local.json
**/*.pfx
**/*.key

## CI/CD and documentation
.github/
docs/
README.md
CHANGELOG.md
*.md

## Node modules (if frontend assets present)
**/node_modules/

## Docker files themselves (informational — reduces build context size slightly)
Dockerfile*
docker-compose*
.dockerignore
Measure Your Build Context Before and After

Run docker build --no-cache . 2>&1 | head -5 and look for the context-size line: Sending build context to Docker daemon X.XXkB with the classic builder, or transferring context in BuildKit output (the default builder in current Docker releases). Without a .dockerignore, a typical .NET project sends 50–200MB of build artifacts and IDE files. With a correct .dockerignore, it drops to under 1MB. The difference in build time on a remote Docker daemon or CI runner is significant — and the cache invalidation savings (not re-running expensive restore steps because a bin/Debug file changed) compound across every developer on the team.

Pinning Base Images for Reproducibility

Dockerfile — Pinned Tags with SHA Digest for Maximum Reproducibility
# Level 1: Pin to major.minor — gets patch security updates automatically
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS restore        # ✅ recommended for most teams
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final

# Level 2: Pin to exact patch version — fully deterministic, requires manual updates
FROM mcr.microsoft.com/dotnet/sdk:8.0.404 AS restore
FROM mcr.microsoft.com/dotnet/aspnet:8.0.13 AS final

# Level 3: Pin to SHA256 digest — byte-for-byte identical every build
# Find the digest: docker pull mcr.microsoft.com/dotnet/aspnet:8.0 && docker inspect
FROM mcr.microsoft.com/dotnet/aspnet:8.0@sha256:abc123... AS final

# Practical recommendation:
# Development / feature branches: major.minor (8.0) — automatic security patches
# Production releases / compliance: exact patch (8.0.13) — change-controlled updates
# Air-gapped / regulated environments: SHA256 digest — absolute reproducibility

# Check for available .NET 8 base image digests:
# https://mcr.microsoft.com/v2/dotnet/aspnet/tags/list
COPY --chown Avoids a Separate RUN chown Layer

Each RUN instruction creates a new image layer. Running RUN chown -R appuser /app after copying files creates an additional layer that duplicates all the file data with new ownership metadata — effectively doubling the layer size for large publish outputs. Use COPY --chown=app:app --from=publish /publish . instead. The ownership is applied as part of the COPY instruction in a single layer, producing a smaller image without the duplication. This is the approach shown in Section 4.

Don't Combine Unrelated RUN Commands Indiscriminately

A common Docker anti-pattern is combining every RUN into a single layer to "reduce layer count". Docker's layer cache is per-instruction: if you combine apt-get install curl with your user creation and ownership commands into one RUN, a change to any part of that combined command invalidates the entire cached layer and re-runs everything. Keep RUN instructions grouped by how frequently they change: OS package installs (rarely), user creation (rarely), application-specific setup (occasionally). Separate them so the cache is as granular as possible.
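A sketch of that grouping in a final stage (the curl install is an example of an OS-package layer):

```dockerfile
# Layer 1: OS packages (changes rarely, cached for months)
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Layer 2: user creation (changes rarely; kept separate so a package
# bump does not re-run it, and vice versa)
RUN addgroup --system appgroup && \
    adduser --system --ingroup appgroup --no-create-home appuser

# Layer 3: app-specific setup (changes with every release)
COPY --from=publish --chown=appuser:appgroup /publish .
```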

Shipping the Image: Registry Push & Cloud Deployment Notes

With a production-ready Dockerfile in place, the ship workflow is three steps: build with a deterministic tag, push to a registry, and reference the tag in your deployment. This section covers that workflow for Docker Hub, GHCR, and the two most common cloud destinations.

Build, Tag, and Push — The Repeatable Workflow

Terminal — Build, Tag & Push to Docker Hub and GHCR
# ── Build ────────────────────────────────────────────────────────────────────
# Use a git SHA as the version tag for deterministic, traceable deployments
export VERSION=$(git rev-parse --short HEAD)

# Run from the directory that contains DockerizedTodoApi/ (the parent folder):
# the Dockerfile's COPY paths assume that build context
docker build \
  -t my-todo-api:${VERSION} \
  -t my-todo-api:latest \
  -f DockerizedTodoApi/Dockerfile \
  .

# Verify the image is clean — check size and layers
docker image inspect my-todo-api:${VERSION} --format '{{.Size}}'
docker history my-todo-api:${VERSION}

# ── Push to Docker Hub ────────────────────────────────────────────────────────
docker login                                        # prompts for username/password
docker tag  my-todo-api:${VERSION} yourusername/todo-api:${VERSION}
docker tag  my-todo-api:latest     yourusername/todo-api:latest
docker push yourusername/todo-api:${VERSION}
docker push yourusername/todo-api:latest

# ── Push to GitHub Container Registry (GHCR) ─────────────────────────────────
# Authenticate with a GitHub Personal Access Token (write:packages scope)
echo $GITHUB_TOKEN | docker login ghcr.io -u USERNAME --password-stdin

docker tag  my-todo-api:${VERSION} ghcr.io/yourusername/todo-api:${VERSION}
docker push ghcr.io/yourusername/todo-api:${VERSION}

# Pull and run from GHCR to verify the push succeeded
docker pull  ghcr.io/yourusername/todo-api:${VERSION}
docker run --rm \
  -p 5000:8080 \
  -e ASPNETCORE_ENVIRONMENT=Production \
  -e ConnectionStrings__DefaultConnection="Data Source=/data/todos.db" \
  ghcr.io/yourusername/todo-api:${VERSION}

GitHub Actions — Automated Build and Push

.github/workflows/docker-push.yml — CI Build and Push to GHCR
name: Build and Push Docker Image

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  build-and-push:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write   # required to push to GHCR

    steps:
      - uses: actions/checkout@v4

      - name: Log in to GHCR
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}  # automatically available, no setup needed

      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha                   # ghcr.io/owner/repo:sha-abc1234
            type=raw,value=latest,enable={{is_default_branch}}

      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha       # GitHub Actions cache for Docker layers
          cache-to:   type=gha,mode=max

Cloud Deployment Notes

Azure Container Apps & Kubernetes — Deployment Snippets
# ── Azure Container Apps ──────────────────────────────────────────────────────
az containerapp create \
  --name todo-api \
  --resource-group my-rg \
  --environment my-env \
  --image ghcr.io/yourusername/todo-api:sha-abc1234 \
  --target-port 8080 \
  --ingress external \
  --env-vars \
      ASPNETCORE_ENVIRONMENT=Production \
      ConnectionStrings__DefaultConnection=secretref:db-conn \
  --min-replicas 1 \
  --max-replicas 5

# Secrets are managed by Container Apps — referenced via secretref:
az containerapp secret set --name todo-api --resource-group my-rg \
  --secrets "db-conn=Server=prod-db;Database=todos;..."

# ── Kubernetes Deployment Manifest ────────────────────────────────────────────
# kubernetes/deployment.yaml (abbreviated — key sections)
apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: todo-api
          image: ghcr.io/yourusername/todo-api:sha-abc1234
          ports:
            - containerPort: 8080
          env:
            - name: ASPNETCORE_ENVIRONMENT
              value: Production
            - name: ConnectionStrings__DefaultConnection
              valueFrom:
                secretKeyRef:
                  name: todo-api-secrets
                  key: db-connection-string
          livenessProbe:              # uses ASP.NET Core /health/live endpoint
            httpGet: { path: /health/live, port: 8080 }
            initialDelaySeconds: 15
            periodSeconds: 30
          readinessProbe:             # uses ASP.NET Core /health/ready endpoint
            httpGet: { path: /health/ready, port: 8080 }
            initialDelaySeconds: 5
            periodSeconds: 10
Tag With git SHA, Not Just :latest, in Every Deployment

Using :latest in a Kubernetes deployment manifest means every kubectl rollout restart or pod reschedule may pull a different image than the one currently running — because :latest is resolved at pull time, not at deploy time. Tag every production deployment with the git commit SHA. This makes deployments auditable (you can trace an image back to the exact commit that built it), rollbacks trivial (kubectl set image to the previous SHA), and incidents easier to diagnose (the tag in your pod spec tells you exactly what code is running).
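A minimal SHA-tagging loop, sketched with the image and deployment names from the snippets above:

```shell
# Build and push an immutable, commit-addressed tag
SHA="sha-$(git rev-parse --short HEAD)"
docker build -t "ghcr.io/yourusername/todo-api:${SHA}" .
docker push "ghcr.io/yourusername/todo-api:${SHA}"

# Deploy that exact build
kubectl set image deployment/todo-api \
  todo-api="ghcr.io/yourusername/todo-api:${SHA}"

# Rollback is just another `set image` pointing at the previous SHA,
# or let Kubernetes replay the previously recorded revision:
kubectl rollout undo deployment/todo-api
```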

Set imagePullPolicy: Always When Using Mutable Tags

Kubernetes caches images on each node. If you push a new image to the same :latest tag and Kubernetes has already pulled it, the node will continue using the cached version and your new code will not deploy. Set imagePullPolicy: Always to force a fresh pull on every pod creation. This is the correct setting when using mutable tags. When using immutable SHA-tagged images, imagePullPolicy: IfNotPresent is safe and faster — the SHA guarantees the cached image is identical to the remote.
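To check or fix the policy on a live deployment (the `todo-api` names match the manifest above):

```shell
# What pull policy is the deployment actually using?
kubectl get deployment todo-api \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'

# Force a fresh pull on every pod creation (appropriate for mutable tags)
kubectl patch deployment todo-api --type=json -p \
  '[{"op":"replace","path":"/spec/template/spec/containers/0/imagePullPolicy","value":"Always"}]'
```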

Common Questions

Why use a multi-stage Dockerfile for ASP.NET Core instead of a single stage?

A single-stage build copies the entire SDK into your final image — the C# compiler, MSBuild, NuGet cache, and hundreds of megabytes of build tooling your running application never uses. A multi-stage build separates the SDK stage (restore, build, publish) from the runtime stage. Only the published output lands in the final image. A single-stage image is typically 800MB–1GB; a multi-stage image using aspnet:8.0 is around 200–250MB — a smaller attack surface, faster pulls, and cheaper registry storage.
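You can measure the gap on your own machine by building the SDK stage and the final stage as separate images. A sketch, assuming the tutorial's Dockerfile names its SDK stage `build`:

```shell
# Build just the SDK stage, then the full multi-stage image
docker build --target build -t todo-api:sdk-stage .
docker build -t todo-api:runtime .

# Compare the results side by side
docker images todo-api --format 'table {{.Tag}}\t{{.Size}}'
```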

What does running a Docker container as non-root actually protect against?

Running as root inside a container means any vulnerability that allows arbitrary command execution runs with the highest privileges available inside that container. A non-root user limits the blast radius: the process cannot write to system directories, cannot install packages, and cannot bind to privileged ports below 1024. On Kubernetes, a non-root user is a requirement for clusters enforcing the Restricted PodSecurity admission policy — and a sensible baseline for all clusters regardless of enforcement level.
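Verifying the baseline takes two commands against a running container (assuming the container name `todo-api` and the dedicated `app` user from this tutorial's Dockerfile):

```shell
# The process inside the container should not be root
docker exec todo-api whoami   # expect the dedicated app user, not root
docker exec todo-api id -u    # expect a non-zero UID

# The image itself should declare the non-root user
docker inspect --format '{{.Config.User}}' todo-api:runtime
```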

Should I use alpine or slim base images for production ASP.NET Core containers?

Alpine images use musl libc instead of the glibc that .NET's native components are compiled against. Microsoft's primary .NET images are Debian-based (glibc). Alpine-based .NET images exist but have historically had compatibility issues with certain native dependencies and globalization packages. The recommended default is aspnet:8.0 (Debian bookworm-slim) — around 200MB with strong compatibility. If image size is critical and your app has no native dependency edge cases, aspnet:8.0-alpine is viable but requires thorough testing before production.
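If you want hard numbers for your own architecture, pull both runtime bases and compare:

```shell
docker pull mcr.microsoft.com/dotnet/aspnet:8.0
docker pull mcr.microsoft.com/dotnet/aspnet:8.0-alpine
docker images mcr.microsoft.com/dotnet/aspnet --format 'table {{.Tag}}\t{{.Size}}'
```

One caveat worth knowing: Alpine-based .NET images have historically shipped with invariant globalization enabled by default, so if your API formats dates or currencies per culture, exercise those code paths explicitly before switching.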

How do I pass secrets to a Docker container without baking them into the image?

Never use ENV or COPY to put secrets into an image — ENV values become part of the image metadata, visible to anyone via docker history, and COPY'd files live on in the layer tarballs even if a later instruction deletes them. For local development, pass secrets via -e or an --env-file that is gitignored. For Docker Compose, use the secrets: block. For Kubernetes, use Secret objects mounted as environment variables or files. For Azure Container Apps, use managed identity and Key Vault references. Secrets arrive at runtime, never at build time.
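A quick way to see the leak for yourself, and the runtime alternative (the image and file names here are illustrative):

```shell
# Every ENV and ARG value is recorded in layer metadata, permanently:
docker history --no-trunc todo-api:runtime

# Instead, keep secrets in a gitignored env file and inject them at startup
echo 'ConnectionStrings__DefaultConnection=Server=prod-db;Database=todos;...' \
  > .env.production
docker run -d --name todo-api -p 8080:8080 \
  --env-file .env.production todo-api:runtime
```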

What is the difference between the ASP.NET Core health check endpoint and the Docker HEALTHCHECK instruction?

They operate at different layers and serve different orchestrators. The ASP.NET Core /health endpoint is an HTTP endpoint used by Kubernetes liveness and readiness probes and load balancers. The Docker HEALTHCHECK instruction tells the Docker daemon how to probe the container in standalone docker run and docker compose deployments. On Kubernetes, the Docker HEALTHCHECK is ignored — Kubernetes uses its own probe mechanism. The HEALTHCHECK is most valuable for Compose workflows where depends_on: condition: service_healthy prevents dependent containers from starting before their dependencies are ready.
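Both layers are easy to probe by hand (the container name and port match the rest of this tutorial):

```shell
# Layer 1: the ASP.NET Core endpoint, usable by any orchestrator or LB
curl -f http://localhost:8080/health

# Layer 2: the Docker daemon's verdict, driven by the HEALTHCHECK instruction
docker inspect --format '{{.State.Health.Status}}' todo-api
# one of: starting | healthy | unhealthy
```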

Why does my .dockerignore file matter if I'm using a multi-stage build?

.dockerignore controls what Docker sends to the daemon as the build context before the build begins — not just what lands in the final image. Without it, docker build sends everything including bin, obj, .git history, and local secret files. This has two costs: build context transfer time (significant for large repos), and cache invalidation — if your bin folder changed since the last build, Docker treats the COPY layer as changed and re-runs the expensive restore and build steps. Multi-stage builds separate what reaches the final image; .dockerignore controls what enters the build context at all.
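As a minimal sketch, a starting .dockerignore for a project like this one (extend it with anything else that should never reach the daemon):

```shell
# Create a .dockerignore that keeps build junk and local secrets out of the context
cat > .dockerignore <<'EOF'
bin/
obj/
.git/
.vs/
**/*.user
.env*
EOF
```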
