
Google Cloud Platform offers multiple compute services for running applications. Each service is designed for different use cases, workload types, and management preferences. Understanding which service to use is crucial for building scalable, cost-effective applications on GCP.
By the end of this lesson, you'll understand Google Cloud's four main compute services and when to use each:

- Compute Engine: virtual machines (IaaS)
- App Engine: managed application platform (PaaS)
- Cloud Run: serverless containers
- Google Kubernetes Engine (GKE): managed Kubernetes
## Compute Engine

Overview: Virtual machines running on Google's infrastructure.
Best For:
Key Characteristics:
Basic CLI Usage:
# Create a Compute Engine instance
gcloud compute instances create my-server \
--zone=us-central1-a \
--machine-type=e2-medium \
--image-family=ubuntu-2204-lts \
--image-project=ubuntu-os-cloud
# SSH into the instance
gcloud compute ssh my-server --zone=us-central1-a
# Stop the instance (you still pay for attached disks and reserved IPs)
gcloud compute instances stop my-server --zone=us-central1-a
# Delete the instance (stops all charges)
gcloud compute instances delete my-server --zone=us-central1-a

Pricing Model:
Example Application:
# Setting up a Compute Engine instance with Node.js
gcloud compute instances create nodejs-app \
--zone=us-central1-a \
--machine-type=e2-medium \
--image-family=ubuntu-2204-lts \
--image-project=ubuntu-os-cloud \
--metadata-from-file startup-script=startup.sh
# startup.sh content:
#!/bin/bash
apt-get update
apt-get install -y nodejs npm
mkdir -p /app
cd /app
cat > server.js << 'EOF'
const http = require('http');
const server = http.createServer((req, res) => {
  if (req.url === '/') {
    res.writeHead(200);
    res.end('Hello from Compute Engine!');
  } else {
    res.writeHead(404);
    res.end('Not Found');
  }
});
server.listen(3000, () => {
  console.log('Server running on port 3000');
});
EOF
npm init -y
# Run in the background so the startup script can finish
nohup node server.js > /app/server.log 2>&1 &

## App Engine

Overview: Managed platform for building web applications and APIs.
Best For:
Key Characteristics:
Basic CLI Usage:
# Create app.yaml configuration file
cat > app.yaml << 'EOF'
runtime: nodejs18
env: standard
handlers:
- url: /.*
  script: auto
automatic_scaling:
  min_instances: 1
  max_instances: 10
EOF
# Deploy application
gcloud app deploy
# View application logs
gcloud app logs read --limit 50
# Open application in browser
gcloud app browse

Pricing Model:
Example Application:
// server.js for App Engine
const express = require('express');
const app = express();

// Health check endpoint (required by App Engine)
app.get('/_ah/health', (req, res) => {
  res.status(200).send('OK');
});

// Main endpoint
app.get('/', (req, res) => {
  res.send('Hello from App Engine!');
});

// API endpoint
app.get('/users/:id', (req, res) => {
  const userId = req.params.id;
  res.json({
    id: userId,
    name: `User ${userId}`,
    email: `user${userId}@example.com`
  });
});

const PORT = process.env.PORT || 8080;
app.listen(PORT, () => {
  console.log(`App Engine app listening on port ${PORT}`);
});

app.yaml Configuration:
runtime: nodejs18
env: standard
env_variables:
  DATABASE_URL: "your-database-url"
  API_KEY: "your-api-key"
handlers:
- url: /static
  static_dir: public
- url: /.*
  script: auto
automatic_scaling:
  min_instances: 1
  max_instances: 20
  target_cpu_utilization: 0.65
  target_throughput_utilization: 0.75

## Cloud Run

Overview: Run containerized applications without managing servers.
Best For:
Key Characteristics:
Basic CLI Usage:
# Create Dockerfile
cat > Dockerfile << 'EOF'
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF
# Build and push to Container Registry
docker build -t gcr.io/my-project/my-api:v1.0.0 .
docker push gcr.io/my-project/my-api:v1.0.0
# Deploy to Cloud Run
gcloud run deploy my-api \
--image gcr.io/my-project/my-api:v1.0.0 \
--region us-central1 \
--memory 512Mi \
--cpu 1 \
--allow-unauthenticated

Pricing Model:
Example Application:
// server.js for Cloud Run
const express = require('express');
const app = express();

app.use(express.json());

// Must listen on PORT environment variable
const PORT = process.env.PORT || 3000;

// Health check endpoint
app.get('/health', (req, res) => {
  res.json({ status: 'healthy' });
});

// Main endpoint
app.get('/', (req, res) => {
  res.send('Hello from Cloud Run!');
});

// API endpoint
app.post('/process', (req, res) => {
  const data = req.body;
  console.log('Processing:', data);
  res.json({ success: true, id: Math.random() });
});

// Error handling
app.use((err, req, res, next) => {
  console.error(err);
  res.status(500).json({ error: 'Internal server error' });
});
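// Cloud Run sends SIGTERM before it stops a container instance;
// handling it allows a clean exit. Sketch only -- a fuller version
// would also close the HTTP server so in-flight requests can finish.
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down');
  process.exit(0);
});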
app.listen(PORT, () => {
  console.log(`Server listening on port ${PORT}`);
});

Dockerfile for Cloud Run:
# Use lightweight base image
FROM node:18-alpine
WORKDIR /app
# Copy package files
COPY package*.json ./
# Install dependencies
RUN npm ci --omit=dev
# Copy application code
COPY . .
# Expose port (optional for documentation)
EXPOSE 3000
# Handle graceful shutdown
ENTRYPOINT ["node", "server.js"]

## Google Kubernetes Engine (GKE)

Overview: Managed Kubernetes cluster for running containerized applications at scale.
Best For:
Key Characteristics:
Basic CLI Usage:
# Create a Kubernetes cluster
gcloud container clusters create my-cluster \
--zone us-central1-a \
--num-nodes 3 \
--enable-autoscaling \
--min-nodes 1 \
--max-nodes 10
# Deploy application using kubectl
kubectl create deployment my-app \
--image=gcr.io/my-project/my-app:v1.0.0
# Expose deployment as service
kubectl expose deployment my-app \
--type=LoadBalancer \
--port 80 \
--target-port 3000
# View running pods
kubectl get pods
# View service and get external IP
kubectl get services

Kubernetes YAML Example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-app:v1.0.0
        ports:
        - containerPort: 3000
        env:
        - name: PORT
          value: "3000"
        resources:
          requests:
            memory: "256Mi"
            cpu: "100m"
          limits:
            memory: "512Mi"
            cpu: "500m"
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000

## Service Comparison

| Feature | Compute Engine | App Engine | Cloud Run | GKE |
|---|---|---|---|---|
| Abstraction Level | IaaS | PaaS | Serverless | Orchestration |
| Scaling | Manual/Auto Scaling Groups | Automatic | Automatic + Zero | Automatic |
| Idle Cost | Always charged | Low (minimum 1 instance) | No cost | Running nodes |
| Deployment | VM images | Source code | Containers | Kubernetes |
| Best For | Control, legacy apps | Web apps, APIs | Event-driven, variable load | Microservices, complex orchestration |
| Cold Start | Minutes | Seconds | Sub-second to seconds | Seconds |
| Price Model | Per VM hour | Per instance hour | Per 100ms CPU + request | Per node hour |
| Learning Curve | Medium | Medium | Easiest | Hardest |
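The comparison above boils down to a few nested conditionals. As a rough sketch, assuming nothing beyond this lesson (the function and flag names are illustrative, not any official API):

```javascript
// Sketch: choose a GCP compute service from a few yes/no answers.
// Flag names are illustrative, not an official API.
function chooseComputeService(opts) {
  if (!opts.servesRequests) {
    // Background jobs, long-running processes
    return 'Compute Engine or Cloud Run jobs';
  }
  if (opts.statelessContainer) {
    return opts.needsKubernetes ? 'GKE' : 'Cloud Run';
  }
  return opts.traditionalWebApp ? 'App Engine' : 'Compute Engine';
}

console.log(chooseComputeService({
  servesRequests: true,
  statelessContainer: true,
  needsKubernetes: false,
})); // → Cloud Run
```

Treat this as a first-pass heuristic; real choices also weigh team experience, existing tooling, and cost.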
Does your application need to respond to requests?
├── YES
│   ├── Can it be stateless and containerized?
│   │   ├── YES
│   │   │   └── Do you need Kubernetes features?
│   │   │       ├── YES → Use GKE
│   │   │       └── NO → Use Cloud Run
│   │   └── NO
│   │       └── Is it a traditional web app?
│   │           ├── YES → Use App Engine
│   │           └── NO → Use Compute Engine
└── NO (background jobs, long-running processes)
    └── Use Compute Engine or Cloud Run jobs

Requirements:
Solution: Cloud Run
# 1. Create Node.js Express API
mkdir my-api && cd my-api
npm init -y
npm install express
# 2. Create server.js with API endpoints
# 3. Create Dockerfile
# 4. Deploy to Cloud Run (5 minutes total)
gcloud run deploy my-api \
--source . \
--region us-central1 \
--allow-unauthenticated

Requirements:
Solution: App Engine
# 1. Create app.yaml
# 2. Deploy source code
gcloud app deploy
# Application scales automatically
# Built-in monitoring and logging included

Requirements:
Solution: Compute Engine
# 1. Create VM with startup script
# 2. Install video processing tools (FFmpeg, etc.)
# 3. Run processing jobs
# 4. Store results in Cloud Storage

Requirements:
Solution: GKE
# 1. Create Kubernetes cluster
# 2. Deploy multiple services
# 3. Configure networking and service mesh
# 4. Implement monitoring and logging

Why start with Cloud Run? It's the simplest entry point to cloud computing.
# Step 1: Create application
cat > server.js << 'EOF'
const http = require('http');
const server = http.createServer((req, res) => {
  res.end('Hello Cloud Run!');
});
server.listen(process.env.PORT || 3000);
EOF
npm init -y
# Step 2: Create Dockerfile (CMD must use double quotes for exec form)
cat > Dockerfile << 'EOF'
FROM node:18-alpine
WORKDIR /app
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
EOF
# Step 3: Deploy
docker build -t my-app:v1 .
docker tag my-app:v1 gcr.io/my-project/my-app:v1
docker push gcr.io/my-project/my-app:v1
gcloud run deploy my-app \
--image gcr.io/my-project/my-app:v1 \
--region us-central1 \
--allow-unauthenticated

Dive deeper into your chosen compute service with dedicated lessons on Cloud Run Deployment for modern applications, App Engine for traditional web apps, or GKE for enterprise Kubernetes workloads.
Resources
Ojasa Mirai