Dev Dissection — Week 8: GCP Deployment with Docker + Multiple Environments

In Week 7, you containerized your TODO app with Docker, solving the “it works on my machine” problem. This week, we’re taking your containers global. You’ll deploy your Docker containers to Google Cloud Platform using Cloud Run, set up staging and production environments, and learn proper team deployment workflows.

By the end of this lesson, your TODO app will be live on the internet with proper staging/production separation, just like real-world applications.

Prerequisites

Before you start, you need:

  • Your Week 7 Docker setup working locally
  • A Google Cloud Platform account with billing enabled
  • Basic understanding of environment variables
  • Git repository (GitHub/GitLab) for your code

Why Cloud Run for Docker Deployment?

Cloud Run vs App Engine (from Week 5):

  • App Engine: Platform-as-a-Service (PaaS) – you deploy code, Google handles infrastructure
  • Cloud Run: Container-as-a-Service (CaaS) – you deploy containers, Google handles scaling

Why we’re switching:

  • Consistency: Same Docker containers run locally and in production
  • Flexibility: Any language, any framework, any dependencies
  • Cost: Pay only when requests are being processed (serverless)
  • Scalability: Auto-scales from 0 to 1000+ instances
  • Simplicity: One deployment method for all environments

Part 1: Understanding Multi-Environment Architecture

Environment Strategy

We’ll create three environments:

  • Development: Your local Docker setup (Week 7)
  • Staging: Testing environment that mirrors production
  • Production: Live environment for real users

Architecture Overview

┌─────────────────┐    ┌─────────────────┐    ┌─────────────────┐
│   Development   │    │     Staging     │    │   Production    │
│                 │    │                 │    │                 │
│ Docker Compose  │    │   Cloud Run     │    │   Cloud Run     │
│ Local MongoDB   │    │ MongoDB Atlas   │    │ MongoDB Atlas   │
│ Local Storage   │    │ Cloud Storage   │    │ Cloud Storage   │
└─────────────────┘    └─────────────────┘    └─────────────────┘
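
In a Node backend, this strategy often boils down to branching configuration on `NODE_ENV`. A minimal illustrative sketch (the file name, settings, and `getConfig` helper are hypothetical, not from the app itself):

```javascript
// config.js (illustrative): one settings object per environment.
// Staging and production read their connection string from the environment;
// development falls back to the local Docker Compose MongoDB.
const ENVIRONMENTS = {
  development: { mongoUri: 'mongodb://localhost:27017/todos', logLevel: 'debug' },
  staging:     { mongoUri: process.env.MONGO_URI, logLevel: 'info' },
  production:  { mongoUri: process.env.MONGO_URI, logLevel: 'warn' },
};

function getConfig(env = process.env.NODE_ENV || 'development') {
  const config = ENVIRONMENTS[env];
  // Fail fast on an unknown environment instead of silently misbehaving
  if (!config) throw new Error(`Unknown environment: ${env}`);
  return config;
}

module.exports = { getConfig };
```

The point is that the same container image runs everywhere; only the injected environment variables differ between the three columns in the diagram.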

Part 2: Preparing for Cloud Deployment

Step 1: Update Docker Configuration

First, let’s modify your containers to work better in cloud environments.

Update backend/Dockerfile:

# Use the AMD64 variant explicitly so Cloud Run (x86_64) can execute all binaries
FROM --platform=linux/amd64 node:22-alpine

# Install dumb-init for proper signal handling
RUN apk add --no-cache dumb-init

# Create app directory and user
WORKDIR /app
RUN addgroup -g 1001 -S nodejs \
 && adduser -S nextjs -u 1001

# Copy package.json and package-lock.json first, install all deps
COPY package*.json ./
RUN npm ci

# Copy the rest of your source code
COPY . .

# Build the TypeScript output
RUN npm run build

# Remove dev dependencies to slim the image
RUN npm prune --production

# Fix permissions & switch to non-root
RUN chown -R nextjs:nodejs /app
USER nextjs

# Tell Docker (and Cloud Run) which port we’ll listen on
EXPOSE 8080

# Use dumb-init as PID 1 for clean signal forwarding
ENTRYPOINT ["dumb-init", "--"]

# Start the compiled app
CMD ["npm", "start"]

Update frontend/Dockerfile:

# 1) Builder: install, copy, set build-time env, build
FROM --platform=linux/amd64 node:22-alpine AS builder

# Accept the public API URL as a build argument:
ARG NEXT_PUBLIC_API_URL
# Make it available to Next.js at build time:
ENV NEXT_PUBLIC_API_URL=${NEXT_PUBLIC_API_URL}

WORKDIR /app

# Copy and install deps
COPY package*.json ./
RUN npm ci

# Copy source and build with the correct API_URL baked in
COPY . .
RUN npm run build

# 2) Runner: slim image, copy only what's needed, run
FROM --platform=linux/amd64 node:22-alpine AS runner

# dumb-init for PID 1 signal handling
RUN apk add --no-cache dumb-init

WORKDIR /app

# non-root user
RUN addgroup --system --gid 1001 nodejs \
 && adduser  --system --uid 1001 nextjs

# Pull in only the built bits
COPY --from=builder /app/public       ./public
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static     ./.next/static

# Fix perms & switch user
RUN chown -R nextjs:nodejs /app
USER nextjs

# Cloud Run will set $PORT; we listen on 8080 by default
EXPOSE 8080

ENTRYPOINT ["dumb-init", "--"]
CMD ["node", "server.js"]
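
Why is the API URL a build `ARG` instead of a runtime env var? Next.js inlines `NEXT_PUBLIC_*` values into the client bundle during `npm run build`, so they must be present at build time; changing them later on the running Cloud Run service has no effect on browser code. A small illustrative helper (the file name and `apiUrl` function are hypothetical):

```javascript
// lib/api.js (illustrative): NEXT_PUBLIC_API_URL is frozen into the bundle
// at build time, which is why each environment needs its own image build.
const API_URL = process.env.NEXT_PUBLIC_API_URL || 'http://localhost:8080';

function apiUrl(path) {
  // Normalize slashes so callers can pass 'todos' or '/todos'
  return `${API_URL.replace(/\/$/, '')}/${String(path).replace(/^\//, '')}`;
}

module.exports = { apiUrl };
```

This is also why the staging and production frontend images below are built separately rather than retagged.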

Part 3: Setting Up Google Cloud Platform

Step 1: Create and Configure GCP Project

gcloud auth login
gcloud config set project your-project-id

# Enable required APIs
gcloud services enable \
  cloudbuild.googleapis.com \
  run.googleapis.com \
  artifactregistry.googleapis.com \
  secretmanager.googleapis.com

Step 2: Create Artifact Registry Repositories

Artifact Registry is GCP’s private registry for Docker images. We’ll create a single repository and separate staging from production using image tags (:staging and :production).

# Create one repository for both environments to minimize storage costs
gcloud artifacts repositories create todo-app \
  --repository-format=docker \
  --location=us-central1 \
  --description="TODO app container images"

# Configure Docker to authenticate with Artifact Registry
gcloud auth configure-docker us-central1-docker.pkg.dev

Step 3: Set Up MongoDB Atlas (Multi-Environment)

Since we’re moving away from local MongoDB, let’s set up proper databases:

  1. Go to MongoDB Atlas and create two databases in your cluster:
    • todo-staging (for staging)
    • todo (for production)
  2. Create separate users for each environment with database-specific permissions
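
To keep environments honest, it helps to sanity-check at startup that `MONGO_URI` points at the database you expect. A hedged sketch (the helper names and the URI-parsing approach are illustrative, not part of the app):

```javascript
// db-guard.js (illustrative): catch the classic mistake of pointing a
// staging deployment at the production database.
function databaseName(mongoUri) {
  // mongodb+srv://user:pass@host/<dbname>?options — grab the path segment
  const match = /^mongodb(\+srv)?:\/\/[^/]+\/([^?]+)/.exec(mongoUri);
  if (!match) throw new Error('Could not parse database name from MONGO_URI');
  return match[2];
}

function assertEnvMatchesDb(env, mongoUri) {
  const db = databaseName(mongoUri);
  if (env === 'staging' && !db.includes('staging')) {
    throw new Error(`Staging is pointed at non-staging database "${db}"`);
  }
  return db;
}

module.exports = { databaseName, assertEnvMatchesDb };
```

A few lines like this fail the deployment loudly instead of silently writing test todos into real users' data.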

Part 4: Deploying to Staging Environment

Step 1: Build and Push Backend to Staging

# Navigate to backend directory
cd backend

# Build the Docker image with staging tag
docker build -t us-central1-docker.pkg.dev/<your-project-id>/todo-app/backend:staging .

# Push to Artifact Registry (uses free 0.5GB tier)
docker push us-central1-docker.pkg.dev/<your-project-id>/todo-app/backend:staging

Step 2: Deploy Backend to Cloud Run (Staging)

# Deploy backend to Cloud Run with free tier optimizations
gcloud run deploy todo-backend-staging \
  --image us-central1-docker.pkg.dev/<your-project-id>/todo-app/backend:staging \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars NODE_ENV=production,MONGO_URI="mongodb+srv://user:pass@mongodb.net/todo-staging?retryWrites=true&w=majority&appName=Cluster0",JWT_SECRET="jwt-staging-secret" \
  --max-instances 2 \
  --memory 512Mi \
  --cpu 1 \
  --port 8080 \
  --concurrency 80 \
  --cpu-throttling \
  --no-cpu-boost

Note: We’re setting environment variables via the CLI for simplicity; the best practice is to store secrets in Google Secret Manager, which we’ll cover in the next lesson.

Step 3: Build and Deploy Frontend to Staging

# Navigate to frontend directory
cd ../frontend

# Build with the staging backend URL baked in at build time
docker build -t us-central1-docker.pkg.dev/<your-project-id>/todo-app/frontend:staging \
  --build-arg NEXT_PUBLIC_API_URL=<your-staging-backend-url> .

# Push to Artifact Registry
docker push us-central1-docker.pkg.dev/<your-project-id>/todo-app/frontend:staging

# Deploy to Cloud Run with free tier settings
gcloud run deploy todo-frontend-staging \
  --image us-central1-docker.pkg.dev/<your-project-id>/todo-app/frontend:staging \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars NODE_ENV=production \
  --max-instances 2 \
  --memory 512Mi \
  --cpu 1 \
  --port 8080 \
  --concurrency 100 \
  --cpu-throttling

Part 5: Production Deployment

Step 1: Deploy Backend to Production

# Build production backend
cd backend

docker build -t us-central1-docker.pkg.dev/<your-project-id>/todo-app/backend:production .

docker push us-central1-docker.pkg.dev/<your-project-id>/todo-app/backend:production

# Deploy to production
gcloud run deploy todo-backend-production \
  --image us-central1-docker.pkg.dev/<your-project-id>/todo-app/backend:production \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars NODE_ENV=production,MONGO_URI="mongodb+srv://prod:pass@cluster0.qqovh0p.mongodb.net/todo?retryWrites=true&w=majority&appName=Cluster0",JWT_SECRET="jwt-prod-secret" \
  --max-instances 2 \
  --memory 512Mi \
  --cpu 1 \
  --port 8080 \
  --concurrency 80 \
  --cpu-throttling \
  --no-cpu-boost

Note: The service name changes here to “todo-backend-production”; for staging it was “todo-backend-staging”.

Step 2: Deploy Frontend to Production

# Build production frontend
cd ../frontend

docker build -t us-central1-docker.pkg.dev/<your-project-id>/todo-app/frontend:production \
  --build-arg NEXT_PUBLIC_API_URL=<your-production-backend-url> .

docker push us-central1-docker.pkg.dev/<your-project-id>/todo-app/frontend:production

# Deploy to production
gcloud run deploy todo-frontend-production \
  --image us-central1-docker.pkg.dev/<your-project-id>/todo-app/frontend:production \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated \
  --set-env-vars NODE_ENV=production \
  --max-instances 2 \
  --memory 512Mi \
  --cpu 1 \
  --port 8080 \
  --concurrency 100 \
  --cpu-throttling

Part 6: Team Deployment Scripts

Let’s create scripts that make deployment easy for your team.

Create scripts/deploy-staging.sh:

#!/bin/bash

set -e  # Exit on error

PROJECT_ID="your-project-id"
REGION="us-central1"

echo "Deploying TODO App to Staging..."

# Get the backend URL first
BACKEND_URL=$(gcloud run services describe todo-backend-staging \
  --region=$REGION --format="value(status.url)" 2>/dev/null || echo "")

if [ -z "$BACKEND_URL" ]; then
  echo "Backend not found. Deploying backend first..."
  
  # Build and push backend
  cd backend
  docker build -t $REGION-docker.pkg.dev/$PROJECT_ID/todo-app/backend:staging .
  docker push $REGION-docker.pkg.dev/$PROJECT_ID/todo-app/backend:staging
  
  # Deploy backend with free tier settings
  gcloud run deploy todo-backend-staging \
    --image $REGION-docker.pkg.dev/$PROJECT_ID/todo-app/backend:staging \
    --platform managed \
    --region $REGION \
    --allow-unauthenticated \
    --set-env-vars NODE_ENV=production,MONGO_URI="mongodb+srv://user:pass@mongodb.net/todo-staging?retryWrites=true&w=majority&appName=Cluster0",JWT_SECRET="jwt-staging-secret" \
    --max-instances 2 \
    --memory 512Mi \
    --cpu 1 \
    --port 8080 \
    --concurrency 80 \
    --cpu-throttling \
    --no-cpu-boost \
    --quiet
  
  BACKEND_URL=$(gcloud run services describe todo-backend-staging \
    --region=$REGION --format="value(status.url)")
  
  cd ..
fi

echo "Backend URL: $BACKEND_URL"

# Build and push frontend
cd frontend

docker build -t $REGION-docker.pkg.dev/$PROJECT_ID/todo-app/frontend:staging \
  --build-arg NEXT_PUBLIC_API_URL=$BACKEND_URL .

# Push the image so Cloud Run can pull it
docker push $REGION-docker.pkg.dev/$PROJECT_ID/todo-app/frontend:staging

# Deploy frontend with free tier settings
gcloud run deploy todo-frontend-staging \
  --image $REGION-docker.pkg.dev/$PROJECT_ID/todo-app/frontend:staging \
  --platform managed \
  --region $REGION \
  --allow-unauthenticated \
  --set-env-vars NODE_ENV=production \
  --max-instances 2 \
  --memory 512Mi \
  --cpu 1 \
  --port 8080 \
  --concurrency 100 \
  --cpu-throttling \
  --quiet

FRONTEND_URL=$(gcloud run services describe todo-frontend-staging \
  --region=$REGION --format="value(status.url)")

echo "Staging deployment complete!"
echo "Frontend: $FRONTEND_URL"
echo "Backend: $BACKEND_URL"

cd ..

Create scripts/deploy-production.sh:

#!/bin/bash

set -e  # Exit on error

PROJECT_ID="your-project-id"
REGION="us-central1"

echo "Deploying TODO App to Production..."
echo "Make sure you've tested in staging first!"

read -p "Are you sure you want to deploy to production? (y/N): " -n 1 -r
echo
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
  echo "Deployment cancelled"
  exit 1
fi

# Similar structure as staging but with production configurations
# ... (rest of script similar to staging)

echo "Production deployment complete!"
echo "Your app is live at: $FRONTEND_URL"

Make scripts executable:

chmod +x scripts/deploy-staging.sh
chmod +x scripts/deploy-production.sh

Part 7: Monitoring and Troubleshooting

Viewing Logs

# View backend logs
gcloud run services logs read todo-backend-staging --region=us-central1

# View frontend logs  
gcloud run services logs read todo-frontend-staging --region=us-central1

# Follow logs in real-time
gcloud run services logs tail todo-backend-staging --region=us-central1

Common Issues and Solutions

Issue 1: Container startup timeout

Error: The request failed because the container failed to start

Solution: Check your Dockerfile’s CMD and make sure the app binds to 0.0.0.0:$PORT

Issue 2: CORS errors

Error: CORS policy blocked

Solution: Update your backend CORS configuration:

const cors = require('cors');

// In production, allow only the deployed frontend origin;
// during local development, allow any origin
app.use(cors({
  origin: process.env.NODE_ENV === 'production'
    ? ['https://your-frontend-url.run.app']
    : true
}));

Testing Your Deployed Application

Step 1: Test Staging Environment

  1. Visit your staging frontend URL
  2. Create a test account
  3. Add some todos
  4. Verify data persists (check staging database)
  5. Test all features work the same as local

Security Best Practices

1. Environment Separation

  • Separate databases for staging/production
  • Different secrets for each environment
  • Isolated service accounts and permissions

2. Secret Management

  • Use Google Secret Manager (never hardcode secrets)
  • Rotate secrets regularly
  • Use least-privilege access

3. Network Security

  • HTTPS enforced by default on Cloud Run
  • Implement proper CORS policies
  • Consider VPC connector for database access

What You’ve Learned

This week, you:

  • Deployed Docker containers to GCP Cloud Run – Your containers now run in Google’s serverless infrastructure
  • Set up multi-environment architecture – Proper staging and production separation

Coming Up: Environment Variables & Secret Management

Keeping your keys and tokens in plain sight is a recipe for disaster. Next week, we’ll lock things down:

  • .env files 101 – How to store local configs without committing secrets
  • Google Secret Manager – Inject secrets from a managed vault rather than env files
  • Rotation & revocation – Best practices for changing and retiring secrets

By the end of Week 9, you’ll have a rock-solid process for managing every credential and configuration value—no more accidental exposures, and full confidence that your app’s secrets stay secret.


Need Help?

Deployment can be tricky! If you get stuck:

  • Check Cloud Run logs: gcloud run services logs read your-service
  • Test locally first: docker-compose up
  • Join our Discord for help