Dev Dissection — Week 7: Docker Fundamentals & Containerization
In Week 6, you added unit and integration tests to your TODO app. This week, we’re solving a different problem: “But it works on my machine!” We’ll containerize your entire application with Docker so it runs identically anywhere with a single command.
By the end of this lesson, anyone will be able to run your complete TODO app (frontend, backend, and database) with a single docker compose up.
Prerequisites
Before you start, you need:
- Your Week 6 TODO app
- Docker Desktop installed on your machine
- Basic terminal/command line knowledge
What is Docker?
Docker is like a shipping container for your applications. Just as shipping containers can be moved from ships to trucks to trains without unpacking, Docker containers can run on any machine that has Docker installed.
The Problem Docker Solves
Without Docker:
Developer: "The app works on my machine!"
Colleague: "I get errors when I run it"
Developer: "Did you install Node 18? MongoDB? Set up the environment variables?"
Colleague: "Which versions exactly? Where do I put the .env file?"
With Docker:
Developer: "Run: docker compose up"
Colleague: "It works!"
Key Docker Concepts
Image: A blueprint for creating containers (like a class in programming)
Container: A running instance of an image (like an object instance)
Dockerfile: Instructions for building an image
Docker Compose: Tool for running multi-container applications
Think of it this way:
- Dockerfile = Recipe for making a cake
- Image = The actual cake you baked
- Container = A slice of cake you’re eating
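In command form, the lifecycle looks like this (todo-backend here is just an illustrative image name; any directory with a Dockerfile works):
# Dockerfile -> image: bake the recipe into an image
docker build -t todo-backend .
# Image -> container: run a disposable instance of that image
docker run --rm -p 4000:4000 todo-backend
# List your images (the cakes) and running containers (the slices)
docker image ls
docker ps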
Part 1: Dockerizing the Backend
Let’s start by containerizing your Express API.
Step 1: Create Backend Dockerfile
Create a Dockerfile in your backend directory:
# Use the official Node.js runtime as base image
FROM node:22-alpine
# Set the working directory inside the container
WORKDIR /app
# Copy package.json and package-lock.json first
COPY package*.json ./
# Install ALL dependencies (including dev dependencies)
RUN npm ci
# Copy the rest of the application code
COPY . .
# Build the TypeScript code
RUN npm run build
# Remove dev dependencies to keep image smaller
RUN npm prune --production
# Expose the port the app runs on
EXPOSE 4000
# Define the command to run the application
CMD ["npm", "start"]
Step 2: Create .dockerignore
Create a .dockerignore file in your backend directory to exclude unnecessary files:
node_modules
dist
.env*
.git
.gitignore
README.md
Dockerfile
.dockerignore
npm-debug.log
.nyc_output
.coverage
.DS_Store
Step 3: Install missing dev dependency
We installed mongodb-memory-server for our tests in Week 6, but the TypeScript build now throws a type error that is fixed by adding the semver type declarations. Install them as a dev dependency:
npm i -D @types/semver
Part 2: Dockerizing the Frontend
Now let’s containerize your Next.js frontend.
Step 1: Create Frontend Dockerfile
Create a Dockerfile in your frontend directory:
# Multi-stage build for smaller production image
FROM node:22-alpine AS builder
# Set working directory
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install all dependencies (including devDependencies for build)
RUN npm ci
# Copy source code
COPY . .
# Build the Next.js application
RUN npm run build
# Production stage
FROM node:22-alpine AS runner
WORKDIR /app
# Create a non-root user
RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs
# Copy built application from builder stage
COPY --from=builder /app/public ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
# Change ownership of the app directory to the nextjs user
RUN chown -R nextjs:nodejs /app
USER nextjs
# Expose port
EXPOSE 3000
# Set environment variable for production
ENV NODE_ENV=production
ENV PORT=3000
# Start the application
CMD ["node", "server.js"]
Step 2: Update next.config.ts
Update your next.config.ts to enable standalone output:
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  /* config options here */
  output: 'standalone',
  outputFileTracingRoot: process.cwd(),
};

export default nextConfig;
Step 3: Create Frontend .dockerignore
Create a .dockerignore file in your frontend directory:
node_modules
.next
.git
.gitignore
README.md
Dockerfile
.dockerignore
npm-debug.log
.DS_Store
.env*.local
.env
Part 3: Docker Compose – Running Everything Together
Now let’s create a Docker Compose setup that runs your entire application stack.
Create Docker Compose File
Create docker-compose.yml in your project root:
version: '3.8'

services:
  # MongoDB Database
  mongodb:
    image: mongo:7.0
    container_name: todo-mongodb
    restart: unless-stopped
    ports:
      - "27018:27017" # Using 27018 to avoid conflicts with local MongoDB
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: password123
      MONGO_INITDB_DATABASE: todos
    volumes:
      - mongodb_data:/data/db
    networks:
      - todo-network

  # Backend API
  backend:
    build:
      context: ./backend
      dockerfile: Dockerfile
    container_name: todo-backend
    restart: unless-stopped
    ports:
      - "4000:4000"
    environment:
      NODE_ENV: production
      PORT: 4000
      MONGO_URI: mongodb://admin:password123@mongodb:27017/todos?authSource=admin
      JWT_SECRET: docker-super-secret-jwt-key-change-in-production
    depends_on:
      - mongodb
    networks:
      - todo-network

  # Frontend Application
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    container_name: todo-frontend
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      NODE_ENV: production
      NEXT_PUBLIC_API_URL: http://localhost:4000
    depends_on:
      - backend
    networks:
      - todo-network

# Define networks
networks:
  todo-network:
    driver: bridge

# Define volumes for data persistence
volumes:
  mongodb_data:
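YAML is indentation-sensitive, so before starting anything it is worth letting Compose validate the file and print the fully resolved configuration:
# Validate docker-compose.yml and show the resolved config
docker compose config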
Part 4: Project Structure
Your project should now look like this:
your-todo-app/
├── backend/
│   ├── src/
│   ├── package.json
│   ├── Dockerfile
│   ├── .dockerignore
│   └── .env.development
├── frontend/
│   ├── app/
│   ├── components/
│   ├── lib/
│   ├── package.json
│   ├── next.config.ts
│   ├── Dockerfile
│   └── .dockerignore
└── docker-compose.yml
Part 5: Running Your Dockerized Application
Build and Run Everything
From your project root directory:
# Build and start all services
docker compose up --build
# Run in background (detached mode)
docker compose up --build -d
# View logs
docker compose logs
# View logs for a specific service
docker compose logs backend
Useful Docker Commands
# Stop all services
docker compose down
# Stop and remove volumes (clears database)
docker compose down -v
# Rebuild a specific service
docker compose build backend
# Start a specific service after rebuild
docker compose up -d backend
# If you want to rebuild and restart everything (not just backend):
docker compose up -d --build
# View running containers
docker compose ps
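You can also open a shell inside a running container. For example, to inspect the database with mongosh using the credentials from docker-compose.yml:
# Open a Mongo shell inside the running database container
docker compose exec mongodb mongosh -u admin -p password123 --authenticationDatabase admin todos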
Part 6: Individual Dockerfiles (Backend and Frontend Only)
Sometimes you might want to run just the backend or frontend in Docker while keeping other services local.
Running Backend Only
Make sure your Dockerized MongoDB is running (you can start just that service with docker compose up -d mongodb).
From the backend directory:
# Build the backend image
docker build -t todo-backend .
# Run, pointing at the Dockerized MongoDB exposed on host port 27018
docker run -p 4000:4000 \
  -e NODE_ENV=development \
  -e PORT=4000 \
  -e MONGO_URI=mongodb://host.docker.internal:27018/todos-dev \
  -e JWT_SECRET=your-dev-secret \
  todo-backend
Running Frontend Only
From the frontend directory:
# Build the frontend image
docker build -t todo-frontend .
# Run with local backend
docker run -p 3000:3000 \
  -e NEXT_PUBLIC_API_URL=http://host.docker.internal:4000 \
  todo-frontend
Note: host.docker.internal allows Docker containers to connect to services running on your host machine.
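On Linux, host.docker.internal is not defined automatically; a common workaround is to map it to the host gateway when running the container (add the same flag to the commands above):
# Linux only: make host.docker.internal resolve to the host
docker run --add-host=host.docker.internal:host-gateway -p 4000:4000 todo-backend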
Part 7: Testing Your Dockerized Application
Step 1: Start the Complete Stack
docker compose up --build
You should see logs from all three services starting up.
Step 2: Test the Application
- Visit Frontend: http://localhost:3000
- Test Registration: Create a new account
- Test Login: Sign in with your credentials
- Test Todos: Create, update, and delete todos
- Test Persistence: Stop containers, restart, verify data persists
Step 3: Verify Services
# Check if all containers are running
docker compose ps
# Should show:
# todo-mongodb running
# todo-backend running
# todo-frontend running
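You can also check the API directly from your host. Assuming your backend exposes a /health route (one is sketched in Part 8 below), a quick curl confirms it is reachable:
# Any 200 response means the backend is up and the port mapping works
curl -i http://localhost:4000/health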
Part 8: Docker Best Practices
1. Multi-stage Builds
Use multi-stage builds for smaller production images (as shown in the frontend Dockerfile).
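Once you have built the images (Part 6 shows the commands), you can check how much the multi-stage approach saves by looking at the final image size:
# Show the size of the built frontend image
docker image ls todo-frontend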
2. Layer Caching
Copy package.json and package-lock.json before the rest of the source code so the dependency-install layer stays cached and only re-runs when dependencies change.
3. Security
- Run containers as non-root users
- Use specific image versions (not latest)
- Don’t include secrets in Dockerfiles
4. Health Checks
Add health checks to your services:
services:
  backend:
    # ... other config
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:4000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
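This assumes your API actually serves a /health route. If it does not yet, here is a minimal sketch (the file path and router name are hypothetical; wire it into your Express app with app.use(healthRouter)):
// Hypothetical file: src/routes/health.ts
import { Router, Request, Response } from 'express';

export const healthRouter = Router();

// Returns 200 while the process is up; extend it to ping MongoDB for a deeper check
healthRouter.get('/health', (_req: Request, res: Response) => {
  res.status(200).json({ status: 'ok', uptime: process.uptime() });
});
Also note that node:22-alpine does not ship with curl. Either add RUN apk add --no-cache curl to the backend Dockerfile, or switch the test to the built-in BusyBox wget, e.g. ["CMD", "wget", "-qO-", "http://localhost:4000/health"].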
Part 9: Troubleshooting Common Issues
Issue 1: Port Conflicts
Error: Port 27017 already in use
Solution: Use different ports in docker-compose.yml or stop local services.
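For example, to move MongoDB to another free host port, change only the left-hand side of the mapping; the container side stays 27017:
services:
  mongodb:
    ports:
      - "27019:27017" # host port : container port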
Issue 2: Container Can’t Connect to Database
Error: connect ECONNREFUSED
Solution: Make sure services are in the same network and use container names as hostnames.
Issue 3: Frontend Can’t Reach Backend
Error: Failed to fetch
Solution: Check your CORS configuration and make sure NEXT_PUBLIC_API_URL is correct.
Issue 4: Permission Errors
Error: EACCES: permission denied
Solution: Check file permissions and user configuration in Dockerfile.
Part 10: Sharing Your Dockerized App
Creating a Setup Script
Create start-app.sh for easy setup:
#!/bin/bash
echo "Starting TODO App with Docker..."
# Check if Docker is running
if ! docker info > /dev/null 2>&1; then
  echo "Docker is not running. Please start Docker and try again."
  exit 1
fi
# Build and start services
echo "Building and starting services..."
docker compose up --build -d
# Wait for services to be ready
echo "Waiting for services to start..."
sleep 10
echo "TODO App is ready!"
echo "Frontend: http://localhost:3000"
echo "Backend API: http://localhost:4000"
echo "Database: localhost:27018"
echo ""
echo "To stop the app, run: docker-compose down"
Make it executable:
chmod +x start-app.sh
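Then run it from the project root:
./start-app.sh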
Stopping the App
docker compose down
Accessing Services
- Frontend: http://localhost:3000
- Backend: http://localhost:4000
- MongoDB: localhost:27018
Next Up: GCP Deployment with Docker + Multi Envs
Your app is containerized and running consistently — but how do you get it live for the world to see? Next week, we’ll deploy your Docker containers to Google Cloud Platform:
- Deploy your containerized app to GCP Cloud Run
- Set up staging and production environments
- Configure environment variables and secrets management
- Manage deployments for team collaboration
You’ll take your containers from local development to global scale, learning how to deploy safely with proper environment separation.
From local containers to global scale. Let’s get your app live and production-ready.