Ojasa Mirai


āš™ļø GCP Compute Overview

Introduction

Google Cloud Platform offers multiple compute services for running applications. Each service is designed for different use cases, workload types, and management preferences. Understanding which service to use is crucial for building scalable, cost-effective applications on GCP.

Key Learning Outcomes

By the end of this lesson, you'll understand:

  • Different GCP compute options and their use cases
  • How to choose the right compute service for your application
  • Basic comparison of pricing models and scalability
  • Getting started with each compute option
  • Real-world scenarios for each service

GCP Compute Services Overview

Google Cloud offers four main compute services:

1. Compute Engine (Infrastructure as a Service - IaaS)

Overview: Virtual machines running on Google's infrastructure.

Best For:

  • Complex applications requiring full OS control
  • Legacy applications with specific OS requirements
  • Workloads requiring high customization
  • Long-running processes (24/7 services)

Key Characteristics:

  • Full control over operating system and runtime
  • Persistent storage required
  • Manual scaling (or auto-scaling groups)
  • Pay for VM running time
  • Supports all programming languages

Basic CLI Usage:

# Create a Compute Engine instance
gcloud compute instances create my-server \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud

# SSH into the instance
gcloud compute ssh my-server --zone=us-central1-a

# Stop the instance (stopped VMs still incur charges for attached disks and reserved static IPs)
gcloud compute instances stop my-server --zone=us-central1-a

# Delete the instance (stops all charges)
gcloud compute instances delete my-server --zone=us-central1-a

Pricing Model:

  • Pay per second (with a one-minute minimum) for running instances
  • Different pricing for different machine types
  • Committed use discounts (CUDs) for 1- or 3-year commitments (up to roughly 55-70% off, depending on machine type)
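To make the model concrete, here is a back-of-envelope monthly comparison. The hourly rate and discount percentage are hypothetical placeholders, not current GCP prices; check the pricing page for real numbers.

```javascript
// Rough monthly cost of an always-on VM, on demand vs. with a committed
// use discount. Both rates below are illustrative placeholders only.
const hourlyRate = 0.035;      // hypothetical USD/hour for a small VM
const hoursPerMonth = 730;     // average hours in a month

const onDemand = hourlyRate * hoursPerMonth;
const withCud = onDemand * (1 - 0.37);  // assuming a 37% committed use discount

console.log(`On-demand: $${onDemand.toFixed(2)}/month`);
console.log(`With CUD:  $${withCud.toFixed(2)}/month`);
```

The point of the exercise: an always-on VM is billed around the clock, so discounts on committed usage matter far more here than on services that scale to zero.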

Example Application:

# Setting up a Compute Engine instance with Node.js
gcloud compute instances create nodejs-app \
  --zone=us-central1-a \
  --machine-type=e2-medium \
  --image-family=ubuntu-2204-lts \
  --image-project=ubuntu-os-cloud \
  --metadata-from-file startup-script=startup.sh

# startup.sh content:
#!/bin/bash
apt-get update
apt-get install -y nodejs npm
mkdir -p /app
cd /app
npm init -y
cat > server.js << 'EOF'
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/') {
    res.writeHead(200);
    res.end('Hello from Compute Engine!');
  } else {
    res.writeHead(404);
    res.end('Not Found');
  }
});

server.listen(3000, () => {
  console.log('Server running on port 3000');
});
EOF

# Run in the background so the startup script can finish
# (you also need a firewall rule allowing traffic to port 3000)
nohup node server.js > /var/log/app.log 2>&1 &

2. App Engine (Platform as a Service - PaaS)

Overview: Managed platform for building web applications and APIs.

Best For:

  • Web applications and REST APIs
  • Traditional backend services
  • Applications with predictable traffic patterns
  • Teams that prefer managed infrastructure

Key Characteristics:

  • Automatic scaling based on traffic
  • Integrated deployment from source code
  • Built-in security and DDoS protection
  • Pay for instance usage
  • Supports Node.js, Python, Java, Go, and more

Basic CLI Usage:

# Create app.yaml configuration file
cat > app.yaml << 'EOF'
runtime: nodejs18

env: standard

handlers:
  - url: /.*
    script: auto

automatic_scaling:
  min_instances: 1
  max_instances: 10
EOF

# Deploy application
gcloud app deploy

# View the last 50 log lines
gcloud app logs read --limit=50

# Open application in browser
gcloud app browse

Pricing Model:

  • Pay per instance hour
  • Automatic scaling reduces idle costs
  • Free tier: 28 instance hours per day

Example Application:

// server.js for App Engine
const express = require('express');
const app = express();

// Health check endpoint (used by App Engine flexible's legacy health checks; optional in the standard environment)
app.get('/_ah/health', (req, res) => {
  res.status(200).send('OK');
});

// Main endpoint
app.get('/', (req, res) => {
  res.send('Hello from App Engine!');
});

// API endpoint
app.get('/users/:id', (req, res) => {
  const userId = req.params.id;
  res.json({
    id: userId,
    name: `User ${userId}`,
    email: `user${userId}@example.com`
  });
});

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`App Engine app listening on port ${PORT}`);
});

app.yaml Configuration:

runtime: nodejs18
env: standard

env_variables:
  DATABASE_URL: "your-database-url"
  API_KEY: "your-api-key"

handlers:
  - url: /static
    static_dir: public
  - url: /.*
    script: auto

automatic_scaling:
  min_instances: 1
  max_instances: 20
  target_cpu_utilization: 0.65
  target_throughput_utilization: 0.75

3. Cloud Run (Serverless Container Platform)

Overview: Run containerized applications without managing servers.

Best For:

  • Event-driven applications
  • Microservices and APIs
  • Workloads with variable traffic
  • Applications that need to scale to zero
  • Docker-based applications

Key Characteristics:

  • Automatic scaling including to zero
  • Pay only for compute time used (per 100ms)
  • Request-driven (no idle charges)
  • Stateless containers
  • Quick deployment (seconds)

Basic CLI Usage:

# Create Dockerfile
cat > Dockerfile << 'EOF'
FROM node:18-alpine

WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]
EOF

# Build and push to Container Registry
docker build -t gcr.io/my-project/my-api:v1.0.0 .
docker push gcr.io/my-project/my-api:v1.0.0

# Deploy to Cloud Run
gcloud run deploy my-api \
  --image gcr.io/my-project/my-api:v1.0.0 \
  --region us-central1 \
  --memory 512Mi \
  --cpu 1 \
  --allow-unauthenticated

Pricing Model:

  • Pay for CPU and memory while handling requests, rounded up to the nearest 100ms, plus a small per-request fee
  • Scales to zero (no costs when idle)
  • Often far cheaper than always-on instances for spiky or low-traffic workloads
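As an illustration of the request-based model, the sketch below estimates a monthly bill. All rates here are made-up placeholders; only the 100ms rounding behavior reflects how Cloud Run actually bills.

```javascript
// Sketch of Cloud Run's request-based billing. Billable time per request
// is rounded up to the nearest 100ms. All rates are placeholder values.
const requests = 1_000_000;
const avgRequestMs = 120;
const billableMs = Math.ceil(avgRequestMs / 100) * 100;  // 120ms bills as 200ms

const cpuSecondRate = 0.000024;      // hypothetical USD per vCPU-second
const perMillionRequests = 0.40;     // hypothetical USD per million requests

const total = requests * (billableMs / 1000) * cpuSecondRate
            + (requests / 1_000_000) * perMillionRequests;

console.log(`~$${total.toFixed(2)} for ${requests.toLocaleString()} requests`);
```

Notice that with zero traffic the same formula yields zero cost, which is exactly why Cloud Run suits variable workloads.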

Example Application:

// server.js for Cloud Run
const express = require('express');
const app = express();

app.use(express.json());

// Must listen on PORT environment variable
const PORT = process.env.PORT || 3000;

// Health check endpoint
app.get('/health', (req, res) => {
  res.json({ status: 'healthy' });
});

// Main endpoint
app.get('/', (req, res) => {
  res.send('Hello from Cloud Run!');
});

// API endpoint
app.post('/process', (req, res) => {
  const data = req.body;
  console.log('Processing:', data);
  res.json({ success: true, id: Math.random() });
});

// Error handling
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({ error: 'Internal server error' });
});

app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});

Dockerfile for Cloud Run:

# Use lightweight base image
FROM node:18-alpine

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy application code
COPY . .

# Document the container port (Cloud Run ignores EXPOSE and injects $PORT)
EXPOSE 3000

# Exec form keeps Node as PID 1 so it receives SIGTERM for graceful shutdown
ENTRYPOINT ["node", "server.js"]

4. Google Kubernetes Engine (GKE)

Overview: Managed Kubernetes cluster for running containerized applications at scale.

Best For:

  • Large-scale microservices architectures
  • Complex container orchestration needs
  • Applications requiring fine-grained control
  • Multi-region deployments
  • Teams experienced with Kubernetes

Key Characteristics:

  • Kubernetes-native workload management
  • Automatic node scaling
  • Built-in monitoring and logging
  • Advanced networking capabilities
  • Pay for compute resources

Basic CLI Usage:

# Create a Kubernetes cluster
gcloud container clusters create my-cluster \
  --zone us-central1-a \
  --num-nodes 3 \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 10

# Deploy application using kubectl
kubectl create deployment my-app \
  --image=gcr.io/my-project/my-app:v1.0.0

# Expose deployment as service
kubectl expose deployment my-app \
  --type=LoadBalancer \
  --port 80 \
  --target-port 3000

# View running pods
kubectl get pods

# View service and get external IP
kubectl get services

Kubernetes YAML Example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-app:v1.0.0
        ports:
        - containerPort: 3000
        env:
        - name: PORT
          value: "3000"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000

Comparison Table

| Feature | Compute Engine | App Engine | Cloud Run | GKE |
| --- | --- | --- | --- | --- |
| Abstraction level | IaaS | PaaS | Serverless | Orchestration |
| Scaling | Manual / managed instance groups | Automatic | Automatic, incl. to zero | Automatic |
| Idle cost | Always charged | Low (with min. 1 instance) | None | Running nodes |
| Deployment | VM images | Source code | Containers | Kubernetes manifests |
| Best for | Control, legacy apps | Web apps, APIs | Event-driven, variable load | Microservices, complex orchestration |
| Cold start | Minutes | Seconds | Sub-second to seconds | Seconds |
| Pricing model | Per VM second | Per instance hour | Per 100ms CPU + request | Per node hour |
| Learning curve | Moderate | Moderate | Gentlest | Steepest |

Choosing the Right Compute Service

Decision Tree

Does your application need to respond to requests?
├─ YES
│  └─ Can it be stateless and containerized?
│     ├─ YES
│     │  └─ Do you need Kubernetes features?
│     │     ├─ YES → Use GKE
│     │     └─ NO → Use Cloud Run
│     └─ NO
│        └─ Is it a traditional web app?
│           ├─ YES → Use App Engine
│           └─ NO → Use Compute Engine
└─ NO (background jobs, long-running processes)
   └─ Use Compute Engine or Cloud Run jobs
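The same logic can be expressed as a small helper function. This is only a sketch of the tree above; the function and parameter names are mine, not an official API.

```javascript
// The decision tree above as a function: each answer is a boolean,
// and the return value is the suggested compute service.
function pickComputeService({ servesRequests, statelessContainer,
                              needsKubernetes, traditionalWebApp }) {
  if (!servesRequests) return 'Compute Engine or Cloud Run jobs';
  if (statelessContainer) return needsKubernetes ? 'GKE' : 'Cloud Run';
  return traditionalWebApp ? 'App Engine' : 'Compute Engine';
}

console.log(pickComputeService({
  servesRequests: true,
  statelessContainer: true,
  needsKubernetes: false,
}));  // → Cloud Run
```

Encoding the tree this way makes the priority order explicit: statelessness is checked before the web-app question, so a containerizable app never falls through to App Engine.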

Real-World Scenarios

Scenario 1: Startup Building a REST API

Requirements:

  • Quick deployment
  • Variable traffic (scaling to zero during off-hours)
  • Minimal infrastructure management
  • Cost-conscious

Solution: Cloud Run

# 1. Create Node.js Express API
mkdir my-api && cd my-api
npm init -y
npm install express

# 2. Create server.js with API endpoints
# 3. Create Dockerfile
# 4. Deploy to Cloud Run (5 minutes total)

gcloud run deploy my-api \
  --source . \
  --region us-central1 \
  --allow-unauthenticated

Scenario 2: Enterprise Web Application

Requirements:

  • Traditional multi-tier application
  • Predictable traffic patterns
  • Built-in security and DDoS protection
  • Easy deployment from source code

Solution: App Engine

# 1. Create app.yaml
# 2. Deploy source code
gcloud app deploy

# Application scales automatically
# Built-in monitoring and logging included

Scenario 3: Video Processing System

Requirements:

  • Long-running batch jobs
  • Full OS control needed
  • Consistent 24/7 workload
  • Custom dependencies and configurations

Solution: Compute Engine

# 1. Create VM with startup script
# 2. Install video processing tools (FFmpeg, etc.)
# 3. Run processing jobs
# 4. Store results in Cloud Storage

Scenario 4: Complex Microservices Platform

Requirements:

  • Multiple interconnected services
  • Advanced traffic management
  • Service discovery
  • Team expertise in Kubernetes

Solution: GKE

# 1. Create Kubernetes cluster
# 2. Deploy multiple services
# 3. Configure networking and service mesh
# 4. Implement monitoring and logging

Getting Started Path

For Beginners: Start with Cloud Run

Why? Simplest entry point to cloud computing

# Step 1: Create application
cat > server.js << 'EOF'
const http = require('http');
const server = http.createServer((req, res) => {
  res.end('Hello Cloud Run!');
});
server.listen(process.env.PORT || 3000);
EOF

npm init -y

# Step 2: Create Dockerfile (CMD needs double quotes --
# Docker's exec form requires valid JSON)
cat > Dockerfile << 'EOF'
FROM node:18-alpine
WORKDIR /app
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF

# Step 3: Deploy
docker build -t my-app:v1 .
docker tag my-app:v1 gcr.io/my-project/my-app:v1
docker push gcr.io/my-project/my-app:v1

gcloud run deploy my-app \
  --image gcr.io/my-project/my-app:v1 \
  --region us-central1 \
  --allow-unauthenticated

Key Takeaways

  • **Compute Engine** provides full control but requires more management; best for complex, long-running applications
  • **App Engine** is a managed platform ideal for traditional web applications with built-in scaling
  • **Cloud Run** is the easiest for containerized applications with variable traffic, scaling to zero for cost efficiency
  • **GKE** provides Kubernetes orchestration for complex microservices architectures
  • **Start with Cloud Run** for simplicity, graduate to other services as requirements grow
  • **Cost varies significantly:** Cloud Run (pay per use) vs. Compute Engine (pay per hour) vs. App Engine (middle ground)
  • **Choose based on:** workload type, management preference, traffic patterns, and team expertise

Next Steps

Dive deeper into your chosen compute service with dedicated lessons on Cloud Run Deployment for modern applications, App Engine for traditional web apps, or GKE for enterprise Kubernetes workloads.


© 2026 Ojasa Mirai. All rights reserved.
