Works Locally, Breaks in CI: The Pattern Behind Every Docker Mistake
Dockerizing an ASP.NET Core application for the first time produces a container that runs on your machine. It builds, it starts, the logs look right. Then it goes to CI, or a colleague pulls it, or it hits a staging server — and something breaks. The container exits immediately. The port is unreachable. Logs are empty. The app starts but cannot connect to the database. HTTPS fails with a certificate error.
These failures share a common root: assumptions baked into local development that do not survive the container boundary. Kestrel's default URL binds to localhost, not all interfaces. The dev HTTPS certificate lives in your user profile, not in the image. Environment variables you set in launchSettings.json do not exist inside Docker. Logs written to a file are invisible to docker logs. None of these are obscure edge cases — they are the predictable collision points between how ASP.NET Core is configured for development and what a container runtime actually provides. This article names each mistake, shows exactly what it looks like when it fails, and gives you the fix.
Mistake 1 & 2: Port Binding and Kestrel URL Configuration
Port problems are the most common first failure when Dockerizing ASP.NET Core. They surface as connection refused on the host, a container that starts and immediately exits, or a container that runs but returns no response. There are two distinct mistakes that produce identical symptoms and are frequently confused for each other: misconfigured Kestrel URLs and a misunderstanding of what EXPOSE actually does.
# ════════════════════════════════════════════════════════════════════════════
# MISTAKE 1: Kestrel binds to localhost — unreachable from outside the container
# ════════════════════════════════════════════════════════════════════════════
# launchSettings.json (development only — ignored inside Docker):
# "applicationUrl": "http://localhost:5000"
#
# When this setting drives Kestrel inside a container, it binds to 127.0.0.1 —
# the container's loopback interface. Docker's port mapping delivers traffic to
# the container's network interface (eth0), never to its loopback, so the
# connection never reaches Kestrel.
#
# Symptom: docker run -p 8080:8080 myapp → curl http://localhost:8080 → Connection refused
# Kestrel IS running. It is just not listening where Docker is sending traffic.
# ── FIX: set ASPNETCORE_HTTP_PORTS or ASPNETCORE_URLS ─────────────────────
# In your Dockerfile (runtime stage):
ENV ASPNETCORE_HTTP_PORTS=8080
# This tells Kestrel to listen on ALL interfaces on port 8080 inside the container.
# Equivalent alternative:
# ENV ASPNETCORE_URLS=http://+:8080
# The + wildcard binds all addresses (IPv4 and IPv6); 0.0.0.0 is IPv4-only.
# In practice both mean "listen on all interfaces".
# Do NOT use http://localhost:8080 — localhost inside a container is 127.0.0.1.
EXPOSE 8080
# EXPOSE documents intent. It does NOT publish the port to the host.
# The host mapping always happens at container start:
# docker run -p 8080:8080 myapp ← host_port:container_port
# ════════════════════════════════════════════════════════════════════════════
# MISTAKE 2: Believing EXPOSE publishes the port
# ════════════════════════════════════════════════════════════════════════════
# WRONG mental model:
# EXPOSE 8080 → "now port 8080 is accessible on my host at localhost:8080"
# CORRECT mental model:
# EXPOSE 8080 → "this image intends to use port 8080" (documentation only)
# The port becomes accessible on the host ONLY when you map it at runtime:
# docker run -p 8080:8080 myapp # explicit mapping — works
# docker run -p 5001:8080 myapp # host 5001 → container 8080 — works
# docker run myapp # no mapping — port NOT accessible on host
# docker-compose.yml equivalent:
# services:
# api:
# image: myapp
# ports:
# - "8080:8080" # host:container — this is the actual publication step
# ════════════════════════════════════════════════════════════════════════════
# COMPLETE CORRECT PORT CONFIGURATION IN THE DOCKERFILE
# ════════════════════════════════════════════════════════════════════════════
FROM mcr.microsoft.com/dotnet/aspnet:8.0-bookworm-slim AS runtime
WORKDIR /app
USER app
COPY --from=build --chown=app:app /app/publish .
# Explicit Kestrel binding — all interfaces, port 8080
# .NET 8 non-root default is 8080; set it explicitly so intent is documented
ENV ASPNETCORE_HTTP_PORTS=8080
# Inform orchestrators and tooling of the intended port (documentation)
EXPOSE 8080
ENTRYPOINT ["dotnet", "MyApp.dll"]
# Then at runtime: docker run -p 8080:8080 myapp
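The runtime stage above copies from a `build` stage that this article elides (the companion article covers it in full). For completeness, a minimal sketch of that stage — the project name MyApp.csproj is illustrative:

```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
# Copy the project file alone first so `dotnet restore` stays in the layer
# cache while only source files change
COPY MyApp.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /app/publish
```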
The Kestrel localhost binding mistake is particularly confusing because the error message — connection refused — is identical to what you see when the container is not running at all. Always verify with docker ps that the container is actually running before diagnosing the port. If the container is running and the port is mapped, but the connection is refused, the cause is almost always Kestrel binding to 127.0.0.1 instead of 0.0.0.0. Confirm by running docker exec <container> ss -tlnp or netstat -tlnp and checking which address Kestrel is bound to.
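The diagnostic sequence described above looks like this in practice — myapp-test is a hypothetical container name, and note that slim base images may not ship ss (iproute2) or netstat (net-tools) at all:

```shell
# 1. Is the container actually running, and is the port mapped?
docker ps --filter name=myapp-test --format '{{.Names}}  {{.Status}}  {{.Ports}}'

# 2. Which address is Kestrel bound to inside the container?
#    (slim images may first need: apt-get update && apt-get install -y iproute2)
docker exec myapp-test ss -tlnp

# A healthy binding shows a wildcard address such as *:8080 or 0.0.0.0:8080.
# The broken case shows 127.0.0.1:8080 instead — if so, fix ASPNETCORE_HTTP_PORTS
# or ASPNETCORE_URLS in the image, not the -p mapping.
```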
Mistake 3: Baking Secrets Into Image Layers With ENV
The ENV instruction is the right tool for non-sensitive configuration that belongs in the image — like ASPNETCORE_HTTP_PORTS or DOTNET_RUNNING_IN_CONTAINER. It is the wrong tool for anything that should not be visible to everyone who can pull the image. Connection strings, API keys, JWT secrets, and OAuth credentials set with ENV in a Dockerfile are baked into the image layer permanently and are trivially readable by anyone with access to the image.
# ════════════════════════════════════════════════════════════════════════════
# MISTAKE: secrets in ENV instructions — baked into image layers forever
# ════════════════════════════════════════════════════════════════════════════
# WRONG — do not do this:
ENV ConnectionStrings__DefaultConnection="Server=prod-db;Database=myapp;User=sa;Password=SuperSecret123!"
ENV JwtSettings__SecretKey="my-super-secret-jwt-key-that-is-now-public"
ENV ExternalApi__ApiKey="sk-live-abc123def456"
# These values are now:
# ✗ Visible in docker inspect myapp
# ✗ Visible in docker history myapp
# ✗ Present in every image layer pushed to your registry
# ✗ Inherited by every image built FROM yours
# ✗ Readable by anyone with registry pull access
# ════════════════════════════════════════════════════════════════════════════
# FIX: runtime injection — secrets never enter the image
# ════════════════════════════════════════════════════════════════════════════
# ── Option A: docker run -e (local testing / simple deployments) ──────────
# docker run -p 8080:8080 \
# -e ConnectionStrings__DefaultConnection="Server=...;Password=..." \
# -e JwtSettings__SecretKey="..." \
# myapp
# Double underscore __ is ASP.NET Core's configuration hierarchy separator
# for environment variables.
# ConnectionStrings__DefaultConnection maps to ConnectionStrings:DefaultConnection
# in appsettings.json — the container runtime variable overrides the file value.
# ── Option B: docker-compose with .env file (never commit .env) ───────────
# docker-compose.yml:
# services:
# api:
# image: myapp
# environment:
# - ConnectionStrings__DefaultConnection=${DB_CONNECTION_STRING}
# - JwtSettings__SecretKey=${JWT_SECRET}
#
# .env (in .gitignore AND .dockerignore — never committed):
# DB_CONNECTION_STRING=Server=prod-db;Database=myapp;User=sa;Password=...
# JWT_SECRET=my-actual-jwt-secret
#
# docker-compose reads .env automatically. The secret never enters the image.
# ── Option C: Kubernetes Secrets (production orchestration) ───────────────
# Create the secret:
# kubectl create secret generic myapp-secrets \
# --from-literal=db-connection="Server=...;Password=..." \
# --from-literal=jwt-secret="..."
#
# Reference in your Deployment spec:
# env:
# - name: ConnectionStrings__DefaultConnection
# valueFrom:
# secretKeyRef:
# name: myapp-secrets
# key: db-connection
# ── What SHOULD go in ENV in your Dockerfile ─────────────────────────────
# Non-sensitive configuration that is the same across all environments:
ENV ASPNETCORE_HTTP_PORTS=8080
ENV DOTNET_RUNNING_IN_CONTAINER=true
# DOTNET_RUNNING_IN_CONTAINER is already set to true in the official .NET base
# images; declaring it explicitly documents intent. Application code can read it
# to detect a containerised run (e.g. to skip HTTPS redirection). It does NOT
# select which appsettings files load — that is ASPNETCORE_ENVIRONMENT.
# Environment-specific but non-secret configuration belongs in runtime -e flags
# or orchestrator config maps, NOT in the Dockerfile.
The double-underscore convention (ConnectionStrings__DefaultConnection) is the container-friendly equivalent of the JSON configuration hierarchy separator. ASP.NET Core's configuration system maps __ in environment variable names to : in configuration keys — so ConnectionStrings__DefaultConnection overrides ConnectionStrings:DefaultConnection from appsettings.json. This is how you inject any nested configuration value at runtime without modifying the application or the image. The pattern works identically in docker run -e, Docker Compose environment blocks, and Kubernetes environment variable specs.
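As a concrete sketch of the mapping, here is an illustrative nested appsettings.json with the environment variable that overrides each key shown alongside (key names are examples, not a prescribed schema):

```jsonc
// appsettings.json                      // environment variable that overrides it
{
  "ConnectionStrings": {
    "DefaultConnection": "..."           // ConnectionStrings__DefaultConnection
  },
  "JwtSettings": {
    "SecretKey": "...",                  // JwtSettings__SecretKey
    "Issuer": "myapp"                    // JwtSettings__Issuer
  },
  "Kestrel": {
    "Certificates": {
      "Default": {
        "Path": "..."                    // ASPNETCORE_Kestrel__Certificates__Default__Path
      }                                  // (the ASPNETCORE_ prefix is stripped by the
    }                                    //  default host builder; the unprefixed
  }                                      //  Kestrel__... form also works)
}
```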
Mistake 4: Expecting Development HTTPS Certificates to Work in Containers
ASP.NET Core's development HTTPS setup works seamlessly on a developer machine because dotnet dev-certs https --trust generates a certificate, installs it in your user certificate store, and registers it as trusted. None of that exists inside a Docker container. When an ASP.NET Core application configured for HTTPS starts in a container without a valid certificate, Kestrel fails to bind to the HTTPS URL and the container exits — sometimes silently, sometimes with a certificate exception that is easy to misread as a different problem.
# ════════════════════════════════════════════════════════════════════════════
# MISTAKE: ASPNETCORE_URLS includes https:// without a valid certificate
# ════════════════════════════════════════════════════════════════════════════
# This configuration works on a dev machine. It fails inside Docker.
ENV ASPNETCORE_URLS="https://+:443;http://+:80"
# Inside the container: no dev cert, no user certificate store, no trust store.
# Kestrel: "Unable to configure HTTPS endpoint. No server certificate was specified."
# Container: exits immediately. docker logs shows the exception. Port is unreachable.
# ════════════════════════════════════════════════════════════════════════════
# FIX A (recommended — production): TLS termination at the reverse proxy
# ════════════════════════════════════════════════════════════════════════════
# Kestrel in the container listens on plain HTTP only.
# TLS is handled upstream by nginx, Traefik, AWS ALB, Azure App Gateway, etc.
# The container never needs a certificate. This is the standard production pattern.
ENV ASPNETCORE_HTTP_PORTS=8080
# No HTTPS URL. No certificate needed. Clean and correct.
# nginx upstream configuration (outside the container):
# upstream myapp {
# server myapp-container:8080; # plain HTTP to the container
# }
# server {
# listen 443 ssl;
# ssl_certificate /etc/ssl/certs/myapp.crt;
# ssl_certificate_key /etc/ssl/private/myapp.key;
# location / { proxy_pass http://myapp; } # TLS terminated here
# }
# ════════════════════════════════════════════════════════════════════════════
# FIX B (development only): mount dev cert and configure Kestrel via ENV
# ════════════════════════════════════════════════════════════════════════════
# Step 1: Export your dev cert to a file (run once on your machine)
# dotnet dev-certs https --export-path ./certs/aspnetapp.pfx --password "devpassword"
# Step 2: Mount the cert and configure Kestrel at container start
# docker run -p 8080:8080 -p 8443:8443 \
# -v $(pwd)/certs:/https:ro \
# -e ASPNETCORE_URLS="https://+:8443;http://+:8080" \
# -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx \
# -e ASPNETCORE_Kestrel__Certificates__Default__Password=devpassword \
# myapp
# The cert is mounted read-only. The password is injected at runtime (not in image).
# This is development-only — never mount a production cert this way.
# ── launchSettings.json: a Docker profile with HTTP-only settings ──────────
# (used by IDE container tooling at launch — not read inside the image itself)
# {
# "profiles": {
# "Docker": {
# "environmentVariables": {
# "ASPNETCORE_HTTP_PORTS": "8080"
# }
# }
# }
# }
# In Program.cs — skip HTTPS redirection when running in a container:
# var inContainer = Environment.GetEnvironmentVariable(
#     "DOTNET_RUNNING_IN_CONTAINER") == "true";
# if (!inContainer)
# {
#     app.UseHttpsRedirection();
# }
# In production containers, the reverse proxy handles redirection — the app
# should not redirect HTTP→HTTPS because it only speaks HTTP inside the cluster.
TLS termination at the reverse proxy is not a compromise — it is the architecturally correct pattern for containerised applications. Inside a Kubernetes cluster or a Docker network, traffic between the load balancer and your container travels on a private network segment. TLS inside that segment adds encryption overhead with minimal security benefit relative to the operational complexity of managing certificates inside every container. Terminate TLS once at the edge, pass plain HTTP to Kestrel, and use mutual TLS (mTLS) at the service mesh layer if internal traffic encryption is a compliance requirement.
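A minimal Compose sketch of the edge-termination pattern — service names, certificate paths, and the mounted nginx.conf are illustrative, not prescribed:

```yaml
services:
  api:
    image: myapp
    environment:
      - ASPNETCORE_HTTP_PORTS=8080     # plain HTTP inside the private network
    # no ports: entry — the API is not reachable from the host directly

  proxy:
    image: nginx:1.27
    ports:
      - "443:443"                      # TLS terminates here, at the edge
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
      - ./certs:/etc/ssl/myapp:ro      # certs live with the proxy, never in the app image
```

Inside nginx.conf the upstream is simply proxy_pass http://api:8080 — Compose's internal DNS resolves the service name, and the certificate material never touches the application image.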
Mistake 5 & 6: Image Bloat and Logging to Files Instead of Stdout
These two mistakes do not cause immediate failures, but they create serious operational problems at scale. Image bloat — from single-stage builds, broad COPY instructions without a .dockerignore, or SDK images used as runtime bases — slows every pull, every deployment, and every autoscaling event. Log file output — writing to a path inside the container filesystem instead of stdout — makes your application invisible to every log aggregation tool in your infrastructure, from docker logs to CloudWatch to Datadog.
# ════════════════════════════════════════════════════════════════════════════
# MISTAKE 5: Image bloat — SDK in runtime image, no .dockerignore, root user
# ════════════════════════════════════════════════════════════════════════════
# These three decisions combine to produce a large, insecure image:
# FROM mcr.microsoft.com/dotnet/sdk:8.0 ← ~750MB SDK used as runtime
# COPY . . ← bin/, obj/, .git/ all included
# RUN dotnet publish ...
# ENTRYPOINT ["dotnet", "MyApp.dll"] ← running as root
# Result: 900MB+ image, root process, local build artefacts in layers.
# ── FIX: multi-stage + .dockerignore + non-root user ─────────────────────
# See the companion article on production-safe Dockerfiles for the complete
# multi-stage pattern. The three rules:
#
# 1. Runtime stage uses aspnet:8.0-bookworm-slim — NOT the SDK image
# 2. .dockerignore excludes bin/, obj/, .git/, .env, secrets (see below)
# 3. COPY --from=build --chown=app:app plus USER app — files land with app
#    ownership and the process runs non-root (USER alone does not change
#    the ownership of copied files; --chown does)
# ── Size impact of each fix (approximate, .NET 8 API project) ─────────────
#
# Single-stage (sdk:8.0, no .dockerignore): ~900MB
# Multi-stage (aspnet:8.0, no .dockerignore): ~280MB (69% reduction)
# Multi-stage (bookworm-slim): ~230MB (74% reduction)
# Multi-stage (alpine): ~130MB (86% reduction)
# Multi-stage (chiseled): ~125MB (86% reduction)
#
# Each 100MB reduction multiplies across every pull in CI, every deploy,
# every pod scale-out event. On 50 deployments/day the difference is gigabytes
# of registry transfer per day.
# ── .dockerignore — minimum viable for bloat reduction ───────────────────
# (Full version in the production Dockerfile article — these are the high-impact entries)
# **/bin/
# **/obj/
# .git/
# .env
# .env.*
# **/appsettings.Development.json
# ════════════════════════════════════════════════════════════════════════════
# MISTAKE 6: Writing logs to files instead of stdout
# ════════════════════════════════════════════════════════════════════════════
# WRONG — Serilog configuration writing to a file inside the container:
# Log.Logger = new LoggerConfiguration()
# .WriteTo.File("/var/log/myapp/app.log", rollingInterval: RollingInterval.Day)
# .CreateLogger();
#
# Problems:
# ✗ docker logs myapp → (empty)
# ✗ Kubernetes log aggregation sees nothing
# ✗ CloudWatch / Datadog / Stackdriver receive nothing
# ✗ Logs are lost when the container restarts (container filesystem is ephemeral)
# ✗ Log files grow until the container filesystem fills — container crashes
# ── FIX: write to stdout/stderr — let the runtime collect logs ────────────
# Default ASP.NET Core host (no Serilog) — console logging is on by default.
# Verify it is not being removed in Program.cs:
# builder.Logging.ClearProviders(); ← this line removes ALL providers including console
# builder.Logging.AddConsole(); ← add it back explicitly if you use ClearProviders
# Serilog — write to console, not file, in containerised environments:
# Log.Logger = new LoggerConfiguration()
# .WriteTo.Console(new JsonFormatter()) // structured JSON to stdout
# .CreateLogger();
#
# JSON format is preferred — log aggregators parse it automatically.
# Plain text works but loses structure (log level, trace ID, etc. as fields).
# Microsoft.Extensions.Logging — appsettings.json logging configuration:
# {
# "Logging": {
# "LogLevel": {
# "Default": "Information",
# "Microsoft.AspNetCore": "Warning"
# }
# }
# }
# The Console provider reads this and writes structured output to stdout.
# No file path. No rolling interval. The container runtime collects and ships it.
# ── Verify logs are reaching stdout ──────────────────────────────────────
# docker run -d --name myapp-test -p 8080:8080 myapp
# curl http://localhost:8080/healthz # trigger a request
# docker logs myapp-test # should show request log lines
# If docker logs is empty: application is writing to a file, not stdout.
Beyond visibility, the ephemeral filesystem is the most consequential reason to avoid file-based logging in containers. When a container restarts — due to a crash, a deployment, a node eviction, or an OOM kill — the container filesystem is recreated from the image. Any logs written to the container filesystem are gone. If your only log store is a file inside the container, you lose the logs from the period immediately before the crash — exactly the logs you need most. Stdout logging means the container runtime captures and persists log output independently of the container's lifecycle. The logs survive the crash because they were never stored inside the container to begin with.
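Once the runtime owns the log stream, cap its disk usage at the runtime level rather than in the app. For Docker's default json-file driver this is a Compose-level setting (the size values here are illustrative):

```yaml
services:
  api:
    image: myapp
    logging:
      driver: json-file
      options:
        max-size: "10m"   # rotate each log file at 10 MB
        max-file: "3"     # keep at most 3 rotated files per container
```

This replaces the rolling-interval logic you removed from Serilog: rotation and retention become the runtime's job, applied uniformly to every container on the host.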