Ojasa Mirai


📁 GCP Storage & Hosting - Advanced

Introduction

Advanced Cloud Storage patterns support enterprise-grade deployments: multi-region replication, sophisticated caching strategies, large-scale data migration, and compliance controls.

Key Learning Outcomes

By the end of this lesson, you'll understand:

  • Multi-region replication and disaster recovery
  • Advanced caching and performance optimization
  • Data migration strategies at scale
  • Bucket policies and compliance controls
  • Cost optimization for large-scale deployments
  • Integration with streaming analytics
  • Data access logging and audit trails

Multi-Region Replication

Set Up Dual-Region Buckets

# Create dual-region bucket (two paired regions, geo-redundant)
gsutil mb -c STANDARD -l NAM4 gs://my-dual-region-bucket

# Create multi-region bucket (all US, EU, or ASIA regions)
gsutil mb -c STANDARD -l US gs://my-multi-region-bucket

Cross-Region Replication

# Create two buckets in different regions
gsutil mb -l us-central1 gs://source-bucket
gsutil mb -l europe-west1 gs://replica-bucket

# Replication is handled by the event-driven function below

Replication using a Cloud Storage trigger + Cloud Functions:

const functions = require('firebase-functions');
const {Storage} = require('@google-cloud/storage');

const storage = new Storage();
const sourceBucket = storage.bucket('source-bucket');
const replicaBucket = storage.bucket('replica-bucket');

exports.replicateObject = functions.storage
  .bucket('source-bucket')
  .object()
  .onFinalize(async (object) => {
    const file = sourceBucket.file(object.name);
    await file.copy(replicaBucket.file(object.name));
    console.log(`Replicated ${object.name}`);
  });
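In practice you rarely replicate every finalized object; scratch files and very large objects are usually filtered out before the copy. A stdlib-only sketch of that decision logic (the prefixes and size cap here are illustrative, not part of the trigger above):

```python
# Hypothetical filter deciding which finalized objects get replicated.
def should_replicate(name, size_bytes,
                     skip_prefixes=("tmp/", "cache/"),
                     max_bytes=5 * 1024**3):
    """Skip scratch objects and anything too large for a synchronous copy."""
    if any(name.startswith(p) for p in skip_prefixes):
        return False
    return size_bytes <= max_bytes
```

Inside the trigger you would call this with `object.name` and `object.size` and return early when it is false.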

Advanced Caching Strategies

Multi-Tier Caching Architecture

// Tier 1: Browser cache (handled by CDN/HTTP headers)
app.use((req, res, next) => {
  res.set('Cache-Control', 'public, max-age=31536000, immutable');
  next();
});

// Tier 2: CDN cache (Cloud CDN)
// Tier 3: Application cache (Redis)
// Tier 4: Database (Cloud SQL)

const redis = require('redis');
const client = redis.createClient({url: 'redis://redis-host:6379'});
await client.connect();  // node-redis v4+ requires an explicit connect

async function getDataWithCaching(key) {
  // Tier 3: check Redis first
  const cached = await client.get(key);
  if (cached) return JSON.parse(cached);

  // Tier 4: fall through to the database
  const data = await queryDatabase(key);

  // Store in Redis with a 1-hour TTL
  await client.setEx(key, 3600, JSON.stringify(data));

  return data;
}
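The same cache-aside flow can be sketched in Python with an in-memory dict standing in for Redis; `loader` plays the role of the database query and is hypothetical:

```python
import time

# In-memory stand-in for Redis: key -> (expires_at, value)
_cache = {}

def get_with_caching(key, loader, ttl_seconds=3600, now=time.time):
    """Cache-aside read: serve from cache while fresh, else load and store."""
    entry = _cache.get(key)
    if entry and entry[0] > now():
        return entry[1]          # cache hit, not yet expired
    value = loader(key)          # cache miss: hit the backing store
    _cache[key] = (now() + ttl_seconds, value)
    return value
```

The second read of the same key within the TTL never touches the loader, which is exactly the latency and load reduction the tiered architecture is after.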

Cache Invalidation Patterns

// Pattern 1: Time-based (TTL)
await client.setEx('key', 3600, value);

// Pattern 2: Event-based
app.post('/update-data', async (req, res) => {
  await updateDatabase(req.body);
  
  // Invalidate cache
  await client.del('data:all');
  
  res.json({success: true});
});

// Pattern 3: Dependency-based
class CacheManager {
  invalidateRelated(entityType, entityId) {
    const keys = [
      `${entityType}:${entityId}`,
      `${entityType}:all`,
      `related:${entityType}:${entityId}`
    ];
    
    return Promise.all(
      keys.map(key => client.del(key))
    );
  }
}

Data Migration at Scale

Transfer Service for Large Datasets

# One-time transfer from S3
gcloud transfer jobs create s3://source-aws-bucket gs://destination-bucket \
  --source-creds-file=aws-creds.json

# Recurring daily transfers
gcloud transfer jobs create s3://source-aws-bucket gs://destination-bucket \
  --source-creds-file=aws-creds.json \
  --schedule-repeats-every=1d

Gsutil Parallel Transfers

# Parallel composite upload for large files
gsutil -o GSUtil:parallel_composite_upload_threshold=150M cp large-file.txt gs://bucket/

# Parallel recursive copy (gsutil retries transient errors automatically)
gsutil -m cp -r gs://source/* gs://destination/

# Verify what landed in the destination
gsutil ls -r gs://destination/

Enterprise Security and Compliance

Bucket Policies

# Enforce uniform bucket-level access (disables per-object ACLs)
gcloud storage buckets update gs://my-bucket \
  --uniform-bucket-level-access

# Equivalent gsutil command
gsutil uniformbucketlevelaccess set on gs://my-bucket

IAM policy bindings for the bucket (JSON):

{
  "bindings": [
    {
      "role": "roles/storage.admin",
      "members": ["user:admin@company.com"]
    },
    {
      "role": "roles/storage.objectViewer",
      "members": ["serviceAccount:app@project.iam.gserviceaccount.com"]
    }
  ]
}
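Programmatic changes to a policy like this follow a read-modify-write pattern over the `bindings` list. A minimal sketch of the modify step, operating on the plain dict form shown above (`add_member` is a hypothetical helper, not a client-library call):

```python
def add_member(policy, role, member):
    """Add `member` to the binding for `role`, creating the binding if needed."""
    for binding in policy["bindings"]:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    # No binding for this role yet: create one
    policy["bindings"].append({"role": role, "members": [member]})
    return policy
```

In a real flow you would fetch the current policy, apply a change like this, and write it back, so concurrent edits are not silently dropped.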

Object Lifecycle for Compliance

cat > lifecycle.json << 'EOF'
{
  "lifecycle": {
    "rule": [
      {
        "action": {
          "type": "Delete"
        },
        "condition": {
          "age": 2555,
          "matchesPrefix": ["temp/"]
        }
      },
      {
        "action": {
          "type": "SetStorageClass",
          "storageClass": "COLDLINE"
        },
        "condition": {
          "age": 30,
          "isLive": true
        }
      }
    ]
  }
}
EOF

gsutil lifecycle set lifecycle.json gs://compliance-bucket
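The `"age": 2555` above is a 7-year retention window expressed in days (7 × 365). A tiny helper that makes the arithmetic explicit when generating such rules (`retention_delete_rule` is a hypothetical name, not part of any GCP SDK):

```python
def retention_delete_rule(years, prefixes):
    """Build a lifecycle Delete rule; age is in days, using 365-day years."""
    return {
        "action": {"type": "Delete"},
        "condition": {"age": years * 365, "matchesPrefix": list(prefixes)},
    }
```

Generating the JSON from a retention period keeps the policy file and your compliance requirement (e.g. "delete after 7 years") from drifting apart.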

Cost Optimization at Scale

Storage Cost Analyzer

from google.cloud import storage

def analyze_bucket_costs(bucket_name):
    client = storage.Client()
    bucket = client.bucket(bucket_name)
    blobs = bucket.list_blobs()
    
    costs = {
        'STANDARD': 0,
        'NEARLINE': 0,
        'COLDLINE': 0,
        'ARCHIVE': 0
    }
    
    class_size = {}
    
    for blob in blobs:
        storage_class = blob.storage_class
        size_gb = blob.size / (1024**3)
        
        if storage_class not in class_size:
            class_size[storage_class] = 0
        class_size[storage_class] += size_gb
    
    # Approximate price per GB-month (USD)
    pricing = {
        'STANDARD': 0.020,
        'NEARLINE': 0.010,
        'COLDLINE': 0.004,
        'ARCHIVE': 0.0012
    }
    
    for storage_class, size_gb in class_size.items():
        cost = size_gb * pricing.get(storage_class, 0)
        costs[storage_class] = cost
    
    return {
        'class_size': class_size,
        'monthly_costs': costs,
        'total_cost': sum(costs.values())
    }

Streaming Analytics Integration

Batch-Load Objects into BigQuery

const {Storage} = require('@google-cloud/storage');
const {BigQuery} = require('@google-cloud/bigquery');

const storage = new Storage();
const bigquery = new BigQuery();

async function loadCSVToBigQuery(bucketName, fileName, datasetId, tableId) {
  const dataset = bigquery.dataset(datasetId);
  const table = dataset.table(tableId);
  
  const file = storage.bucket(bucketName).file(fileName);
  
  const [job] = await table.load(file, {
    sourceFormat: 'CSV',
    skipLeadingRows: 1,
    autodetect: true
  });

  // table.load resolves once the load job completes; surface any errors
  if (job.status && job.status.errors && job.status.errors.length) {
    throw new Error(JSON.stringify(job.status.errors));
  }
  console.log('Data loaded to BigQuery');
}

Access Logging and Audit Trails

Enable Access Logging

# Create log bucket
gsutil mb -l us-central1 gs://logs-bucket

# Enable logging on data bucket
gsutil logging set on -b gs://logs-bucket gs://my-bucket

Query Logs

from google.cloud import logging

def get_bucket_access_logs(bucket_name):
    client = logging.Client()

    filter_str = (
        'resource.type="gcs_bucket" '
        f'AND resource.labels.bucket_name="{bucket_name}"'
    )
    for entry in client.list_entries(filter_=filter_str):
        print(f"{entry.timestamp}: {entry.payload}")
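Filter strings like the one above grow quickly once you narrow by operation. A stdlib-only sketch of building them from parts (`bucket_access_filter` is a hypothetical helper; the field names follow Cloud Audit Logs conventions):

```python
def bucket_access_filter(bucket_name, methods=None):
    """Build a Cloud Logging filter for one bucket, optionally
    narrowed to specific GCS operations (audit-log methodName values)."""
    parts = [
        'resource.type="gcs_bucket"',
        f'resource.labels.bucket_name="{bucket_name}"',
    ]
    if methods:
        ors = " OR ".join(f'protoPayload.methodName="{m}"' for m in methods)
        parts.append(f"({ors})")
    return " AND ".join(parts)
```

Passing, say, `["storage.objects.get"]` restricts the query to object reads, which keeps audit queries cheap and focused.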

Performance Tuning

Signed URLs for Temporary Access

const {Storage} = require('@google-cloud/storage');

const storage = new Storage();
const bucket = storage.bucket('my-bucket');
const file = bucket.file('my-file.pdf');

async function generateSignedUrl(expirationMinutes = 15) {
  const options = {
    version: 'v4',
    action: 'read',
    expires: Date.now() + expirationMinutes * 60 * 1000,
  };
  
  const [url] = await file.getSignedUrl(options);
  return url;
}

Optimized Transfer Configuration

# Raise parallelism for faster multi-object transfers
gsutil -m -o GSUtil:parallel_thread_count=8 cp -r gs://source/* gs://destination/

# Use sliced downloads for large objects
gsutil -o GSUtil:sliced_object_download_threshold=150M cp gs://bucket/large-file .

Key Takeaways

  • Multi-region buckets provide geo-redundancy and help meet compliance requirements
  • Tiered caching reduces latency and database load
  • Replication strategies ensure data availability
  • Lifecycle policies optimize costs across storage classes
  • Audit logging meets compliance requirements
  • Signed URLs provide temporary secure access
  • Transfer Service simplifies large-scale data migration

Next Steps

Explore Cloud Pub/Sub for streaming ingestion, or learn about BigQuery for analyzing storage access patterns.


