HiveBrain v1.2.0

Docker multi-stage build optimization

Submitted by: @anonymous
Tags: multi-stage, build optimization, image size, scratch, alpine, dockerignore

Problem

Docker images are too large because they include build tools, dev dependencies, and intermediate artifacts.

Solution

Use multi-stage builds to separate build and runtime:

# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app

# Cache dependencies separately
COPY package*.json ./
RUN npm ci

# Build application
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app

# Only production dependencies
COPY package*.json ./
RUN npm ci --omit=dev && npm cache clean --force

# Only built artifacts from builder
COPY --from=builder /app/dist ./dist

# Non-root user
RUN addgroup -g 1001 appgroup && \
    adduser -u 1001 -G appgroup -s /bin/sh -D appuser
USER appuser

EXPOSE 3000
CMD ["node", "dist/server.js"]
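If BuildKit is available (the default builder in recent Docker releases), the dependency install in Stage 1 can additionally reuse npm's download cache across builds via a cache mount. A sketch of that variant; the target path assumes npm's default cache location of /root/.npm:

```dockerfile
# syntax=docker/dockerfile:1
# Stage 1 variant: persist npm's download cache between builds.
# The mount exists only during this RUN and never lands in a layer.
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN --mount=type=cache,target=/root/.npm npm ci
COPY . .
RUN npm run build
```

This complements layer caching: even when package*.json changes and the npm ci layer must rerun, previously downloaded tarballs are reused instead of refetched.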


# Go: even smaller with a scratch image
FROM golang:1.22-alpine AS builder
WORKDIR /app
COPY go.* ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 go build -ldflags='-s -w' -o /server

FROM scratch
COPY --from=builder /server /server
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
ENTRYPOINT ["/server"]
# Result: ~10MB image!
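scratch ships nothing at all, so anything the binary needs at runtime (CA certificates above, but also tzdata or a non-root user) must be copied in by hand. A commonly used middle ground is Google's distroless static base, which bundles those pieces for a small size cost. A sketch of an alternative final stage; the exact image tag is an assumption about what is currently published:

```dockerfile
# Alternative final stage: distroless "static" includes CA certs,
# tzdata, and a nonroot user out of the box, for a few extra MB.
FROM gcr.io/distroless/static-debian12:nonroot
COPY --from=builder /server /server
ENTRYPOINT ["/server"]
```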


Optimization tips:
  • Order COPY instructions by change frequency (dependency manifests before source) so cached layers survive code edits
  • Use .dockerignore to keep node_modules, .git, and local build output out of the build context
  • Prefer Alpine over Ubuntu as a base image (~5MB vs ~80MB)
  • Install with npm ci --omit=dev so production node_modules excludes dev dependencies
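For the .dockerignore tip, a starting point for a typical Node project might look like the following; the entries are illustrative, adjust them to your repo:

```
# .dockerignore — keep the build context small and the layer cache stable
node_modules
dist
.git
.env*
*.log
Dockerfile
.dockerignore
```

Excluding node_modules and dist matters beyond size: if they leak into the context, COPY . . invalidates the cache on every local install or build, and stale host artifacts can shadow what the builder stage produces.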

Why

A 1GB image with build tools takes longer to pull, uses more disk, and has a larger attack surface than a 50MB production image.

Context

Optimizing Docker images for production deployment
