# Multi-Cloud Deployment Guide: AWS, GCP, Azure

**Level:** Advanced · **Pre-reading:** 06 · Multi-Cloud MCP Servers, 00 · Overview

This guide shows, step by step, how to deploy the TaskMaster agent to AWS, GCP, and Azure with identical agent code.
## Architecture: Cloud-Agnostic Agent

```mermaid
graph TD
    Agent["LangGraph Agent<br/>(same code on all clouds)"]
    Agent --> JMCP["JIRA MCP<br/>:8001"]
    Agent --> GMCP["GitHub MCP<br/>:8002"]
    Agent --> RAG["RAG Service<br/>(embeddings)"]
    Agent --> LLM["LLM<br/>(inference)"]
    JMCP --> Secrets1["Secrets Service"]
    GMCP --> Secrets2["Secrets Service"]
    JMCP --> JIRA["JIRA API"]
    GMCP --> GitHub["GitHub API"]
    RAG --> VectorDB["pgvector DB"]
    LLM --> LLMApi["LLM API"]
    Secrets1 --> SecretsImpl["AWS Secrets Manager<br/>or GCP Secret Manager<br/>or Azure Key Vault"]
    Secrets2 --> SecretsImpl
    style Agent fill:#4a90e2
    style JMCP fill:#50e3c2
    style GMCP fill:#50e3c2
    style RAG fill:#b8e986
    style LLM fill:#f5a623
```
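Locally, the "same code on all clouds" property can be exercised with Compose profiles that differ only in environment variables. A sketch (service names and the `Dockerfile.agent` path are assumptions; only the env vars match the deployments below):

```yaml
# docker-compose.yml (sketch) — run with: docker-compose --profile gcp up
services:
  agent-aws:
    profiles: ["aws"]
    build:
      context: .
      dockerfile: Dockerfile.agent
    environment:
      SECRETS_PROVIDER: aws
      LLM_PROVIDER: aws
      EMBEDDINGS_PROVIDER: aws
  agent-gcp:
    profiles: ["gcp"]
    build:
      context: .
      dockerfile: Dockerfile.agent
    environment:
      SECRETS_PROVIDER: gcp
      LLM_PROVIDER: gcp
      EMBEDDINGS_PROVIDER: gcp
```

The agent image is identical in both services; only the profile's environment variables change.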
## Deployment: AWS (Complete Example)

### 1. Setup: Secrets Manager

```bash
# Create secrets
aws secretsmanager create-secret \
  --name taskmaster/jira \
  --secret-string '{"base_url":"https://org.atlassian.net","email":"user@example.com","api_token":"xxx"}' \
  --region us-east-1
aws secretsmanager create-secret \
  --name taskmaster/github \
  --secret-string '{"token":"ghp_xxx","repo_owner":"org","repo_name":"taskmaster"}' \
  --region us-east-1
aws secretsmanager create-secret \
  --name taskmaster/db \
  --secret-string '{"host":"db.rds.amazonaws.com","port":5432,"username":"postgres","password":"xxx"}' \
  --region us-east-1
```
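At runtime an MCP server reads these secrets back and parses the JSON payload. A minimal sketch (the `load_secret` helper is an assumption, not TaskMaster's actual code; `get_secret_value` is the real boto3 call):

```python
import json


def load_secret(client, secret_id: str) -> dict:
    """Fetch a JSON secret and parse it.

    `client` is a boto3 Secrets Manager client, e.g.
    boto3.client("secretsmanager", region_name="us-east-1").
    """
    resp = client.get_secret_value(SecretId=secret_id)
    return json.loads(resp["SecretString"])


# Example usage (requires AWS credentials):
#   jira = load_secret(boto3.client("secretsmanager"), "taskmaster/jira")
#   jira["base_url"], jira["email"], jira["api_token"]
```

Keeping the parsing in one helper means the JIRA and GitHub MCP servers share one code path for all three secrets.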
### 2. Setup: RDS PostgreSQL + pgvector

```bash
# Create RDS instance (pgvector requires PostgreSQL 15.2+ on RDS)
aws rds create-db-instance \
  --db-instance-identifier taskmaster-db \
  --db-instance-class db.t3.micro \
  --engine postgres \
  --master-username postgres \
  --master-user-password yourpassword \
  --allocated-storage 20
# Check status; wait until the instance is "available"
aws rds describe-db-instances \
  --db-instance-identifier taskmaster-db
# Connect and install pgvector
psql -h taskmaster-db.xxx.rds.amazonaws.com -U postgres -d postgres
```

```sql
-- In psql:
CREATE EXTENSION vector;
CREATE TABLE code_chunks (
    id SERIAL PRIMARY KEY,
    repo TEXT,
    file_path TEXT,
    chunk_text TEXT,
    embedding vector(1536),
    module TEXT,
    language TEXT,
    updated_at TIMESTAMPTZ,
    UNIQUE(repo, file_path, chunk_text)
);
CREATE INDEX ON code_chunks USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
```
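The RAG service's retrieval against this table is a cosine-distance top-k query using pgvector's `<=>` operator (which the `ivfflat` index above accelerates). A sketch of the query construction (helper names are assumptions; pgvector expects the query embedding as a literal like `[0.1,0.2,...]`):

```python
def to_vector_literal(embedding: list[float]) -> str:
    """Format an embedding as a pgvector literal, e.g. "[0.1,0.2,0.3]"."""
    return "[" + ",".join(str(x) for x in embedding) + "]"


def topk_query(k: int = 5) -> str:
    """Top-k cosine-distance search over code_chunks.

    <=> is pgvector's cosine distance operator, matching the
    vector_cosine_ops index created above.
    """
    return (
        "SELECT repo, file_path, chunk_text, "
        "embedding <=> %s::vector AS distance "
        "FROM code_chunks ORDER BY distance LIMIT " + str(int(k))
    )


# With psycopg2 (assumed driver):
#   cur.execute(topk_query(5), (to_vector_literal(query_embedding),))
```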
### 3. Setup: ECR (Docker Registry)

```bash
# Create ECR repositories
aws ecr create-repository --repository-name taskmaster-agent --region us-east-1
aws ecr create-repository --repository-name taskmaster-jira-mcp --region us-east-1
aws ecr create-repository --repository-name taskmaster-github-mcp --region us-east-1
# Login to ECR
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789.dkr.ecr.us-east-1.amazonaws.com
# Build and push
docker build -t taskmaster-jira-mcp -f Dockerfile.jira-mcp .
docker tag taskmaster-jira-mcp:latest 123456789.dkr.ecr.us-east-1.amazonaws.com/taskmaster-jira-mcp:latest
docker push 123456789.dkr.ecr.us-east-1.amazonaws.com/taskmaster-jira-mcp:latest
```
### 4. Setup: ECS Fargate Task

Note: this definition omits `executionRoleArn` and `taskRoleArn`. Fargate needs an execution role to pull images from ECR and write logs, and the task role from step 5 to reach Bedrock and Secrets Manager; add both when registering.

```json
{
  "family": "taskmaster-agent",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    {
      "name": "agent",
      "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/taskmaster-agent:latest",
      "portMappings": [{"containerPort": 8000}],
      "environment": [
        {"name": "SECRETS_PROVIDER", "value": "aws"},
        {"name": "AWS_REGION", "value": "us-east-1"},
        {"name": "LLM_PROVIDER", "value": "aws"},
        {"name": "EMBEDDINGS_PROVIDER", "value": "aws"}
      ]
    },
    {
      "name": "jira-mcp",
      "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/taskmaster-jira-mcp:latest",
      "portMappings": [{"containerPort": 8001}],
      "environment": [
        {"name": "SECRETS_PROVIDER", "value": "aws"}
      ]
    },
    {
      "name": "github-mcp",
      "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/taskmaster-github-mcp:latest",
      "portMappings": [{"containerPort": 8002}],
      "environment": [
        {"name": "SECRETS_PROVIDER", "value": "aws"}
      ]
    }
  ]
}
```
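To actually run this definition you register it and create a Fargate service. A sketch (the cluster name, subnet and security-group IDs are placeholders; `task-def.json` is assumed to hold the JSON above):

```shell
# Create a cluster, register the task definition, start a service
aws ecs create-cluster --cluster-name taskmaster
aws ecs register-task-definition --cli-input-json file://task-def.json
aws ecs create-service \
  --cluster taskmaster \
  --service-name taskmaster-agent \
  --task-definition taskmaster-agent \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-xxx],securityGroups=[sg-xxx],assignPublicIp=ENABLED}'
```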
### 5. Setup: IAM Role

```bash
# Create role with Bedrock + Secrets Manager access
cat > trust-policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"Service": "ecs-tasks.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF
aws iam create-role \
  --role-name taskmaster-agent-role \
  --assume-role-policy-document file://trust-policy.json
# Attach policies (broad managed policies for the demo; scope down in production)
aws iam attach-role-policy \
  --role-name taskmaster-agent-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonBedrockFullAccess
aws iam attach-role-policy \
  --role-name taskmaster-agent-role \
  --policy-arn arn:aws:iam::aws:policy/SecretsManagerReadWrite
```
## Deployment: GCP (Complete Example)

### 1. Setup: Secret Manager

```bash
# Create secrets
gcloud secrets create taskmaster-jira \
  --replication-policy="automatic" \
  --data-file=- << 'EOF'
{
  "base_url": "https://org.atlassian.net",
  "email": "user@example.com",
  "api_token": "xxx"
}
EOF
gcloud secrets create taskmaster-github \
  --replication-policy="automatic" \
  --data-file=- << 'EOF'
{
  "token": "ghp_xxx",
  "repo_owner": "org",
  "repo_name": "taskmaster"
}
EOF
gcloud secrets create taskmaster-db \
  --replication-policy="automatic" \
  --data-file=- << 'EOF'
{
  "host": "cloudsql-instance.c.project.internal",
  "port": 5432,
  "username": "postgres",
  "password": "xxx"
}
EOF
# Grant service account access (repeat for taskmaster-github and taskmaster-db)
gcloud secrets add-iam-policy-binding taskmaster-jira \
  --member=serviceAccount:taskmaster@project.iam.gserviceaccount.com \
  --role=roles/secretmanager.secretAccessor
```
### 2. Setup: Cloud SQL (PostgreSQL + pgvector)

```bash
# Create instance
gcloud sql instances create taskmaster-db \
  --database-version=POSTGRES_15 \
  --tier=db-f1-micro \
  --region=us-central1
# Connect and setup
gcloud sql connect taskmaster-db --user=postgres
```

```sql
-- In psql:
CREATE EXTENSION vector;
CREATE TABLE code_chunks (
    id SERIAL PRIMARY KEY,
    repo TEXT,
    file_path TEXT,
    chunk_text TEXT,
    embedding vector(1536),
    module TEXT,
    language TEXT,
    updated_at TIMESTAMPTZ,
    UNIQUE(repo, file_path, chunk_text)
);
CREATE INDEX ON code_chunks USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
```
### 3. Setup: Artifact Registry (Docker)

```bash
# Enable Artifact Registry API
gcloud services enable artifactregistry.googleapis.com
# Create repository
gcloud artifacts repositories create taskmaster \
  --repository-format=docker \
  --location=us-central1
# Configure docker auth
gcloud auth configure-docker us-central1-docker.pkg.dev
# Build and push
docker build -t us-central1-docker.pkg.dev/project/taskmaster/taskmaster-jira-mcp:latest -f Dockerfile.jira-mcp .
docker push us-central1-docker.pkg.dev/project/taskmaster/taskmaster-jira-mcp:latest
```
### 4. Setup: Cloud Run (Serverless Containers)

```bash
# Deploy agent
gcloud run deploy taskmaster-agent \
  --image us-central1-docker.pkg.dev/project/taskmaster/taskmaster-agent:latest \
  --platform managed \
  --region us-central1 \
  --memory 1Gi \
  --set-env-vars SECRETS_PROVIDER=gcp,GCP_PROJECT_ID=project,LLM_PROVIDER=gcp,EMBEDDINGS_PROVIDER=gcp
# Deploy JIRA MCP
gcloud run deploy taskmaster-jira-mcp \
  --image us-central1-docker.pkg.dev/project/taskmaster/taskmaster-jira-mcp:latest \
  --platform managed \
  --region us-central1 \
  --memory 512Mi \
  --set-env-vars SECRETS_PROVIDER=gcp,GCP_PROJECT_ID=project \
  --no-allow-unauthenticated
# Deploy GitHub MCP
gcloud run deploy taskmaster-github-mcp \
  --image us-central1-docker.pkg.dev/project/taskmaster/taskmaster-github-mcp:latest \
  --platform managed \
  --region us-central1 \
  --memory 512Mi \
  --set-env-vars SECRETS_PROVIDER=gcp,GCP_PROJECT_ID=project \
  --no-allow-unauthenticated
```
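Because the MCP services are deployed with `--no-allow-unauthenticated`, the agent must attach a Google-signed ID token whose audience is the target service URL. A sketch (the service URL is a placeholder; `fetch_id_token` is the real `google-auth` helper):

```python
def auth_headers(token: str) -> dict:
    """Bearer header for an authenticated Cloud Run service-to-service call."""
    return {"Authorization": f"Bearer {token}"}


# On GCP, obtain the token with google-auth (requires GCP credentials):
#   from google.auth.transport.requests import Request
#   from google.oauth2 import id_token
#   aud = "https://taskmaster-jira-mcp-xxx-uc.a.run.app"  # placeholder URL
#   headers = auth_headers(id_token.fetch_id_token(Request(), aud))
#   requests.post(aud + "/mcp", headers=headers, json=payload)
```

The caller's service account also needs `roles/run.invoker` on the target service.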
### 5. Setup: Cloud Build (CI/CD)

```yaml
# cloudbuild.yaml — trigger with: gcloud builds submit --config cloudbuild.yaml .
steps:
  # Build and push agent
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/taskmaster/taskmaster-agent:latest', '-f', 'Dockerfile.agent', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/taskmaster/taskmaster-agent:latest']
  # Deploy to Cloud Run (via gcloud, not gke-deploy)
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'taskmaster-agent', '--image=us-central1-docker.pkg.dev/$PROJECT_ID/taskmaster/taskmaster-agent:latest', '--region=us-central1', '--platform=managed']
```
## Deployment: Azure (Complete Example)

### 1. Setup: Key Vault

```bash
# Create Key Vault
az keyvault create \
  --name taskmaster-kv \
  --resource-group myresourcegroup \
  --location eastus
# Store secrets
az keyvault secret set \
  --vault-name taskmaster-kv \
  --name taskmaster-jira \
  --value '{"base_url":"https://org.atlassian.net","email":"user@example.com","api_token":"xxx"}'
az keyvault secret set \
  --vault-name taskmaster-kv \
  --name taskmaster-github \
  --value '{"token":"ghp_xxx","repo_owner":"org","repo_name":"taskmaster"}'
az keyvault secret set \
  --vault-name taskmaster-kv \
  --name taskmaster-db \
  --value '{"host":"taskmaster.postgres.database.azure.com","port":5432,"username":"postgres","password":"xxx"}'
# Grant access (taskmaster-identity is created in step 5 below)
az keyvault set-policy \
  --name taskmaster-kv \
  --object-id $(az identity show -n taskmaster-identity -g myresourcegroup --query principalId -o tsv) \
  --secret-permissions get list
```
### 2. Setup: Azure Database for PostgreSQL

```bash
# Create instance (use Flexible Server — pgvector is not available
# on the retired Single Server offering)
az postgres flexible-server create \
  --name taskmaster-db \
  --resource-group myresourcegroup \
  --admin-user postgres \
  --admin-password yourpassword \
  --tier Burstable \
  --sku-name Standard_B1ms \
  --storage-size 32
# Allow-list the vector extension
az postgres flexible-server parameter set \
  --resource-group myresourcegroup \
  --server-name taskmaster-db \
  --name azure.extensions \
  --value vector
# Connect and setup
psql --host=taskmaster-db.postgres.database.azure.com --user=postgres
```

```sql
-- In psql:
CREATE EXTENSION vector;
CREATE TABLE code_chunks (
    id SERIAL PRIMARY KEY,
    repo TEXT,
    file_path TEXT,
    chunk_text TEXT,
    embedding vector(1536),
    module TEXT,
    language TEXT,
    updated_at TIMESTAMPTZ,
    UNIQUE(repo, file_path, chunk_text)
);
CREATE INDEX ON code_chunks USING ivfflat (embedding vector_cosine_ops) WITH (lists = 100);
```
### 3. Setup: Container Registry

```bash
# Create Container Registry
az acr create \
  --resource-group myresourcegroup \
  --name taskmaster \
  --sku Basic
# Build and push
az acr build \
  --registry taskmaster \
  --image taskmaster-jira-mcp:latest \
  --file Dockerfile.jira-mcp .
az acr build \
  --registry taskmaster \
  --image taskmaster-agent:latest \
  --file Dockerfile.agent .
```
### 4. Setup: Azure Container Instances (ACI)

```bash
# Create container group (quick demo)
az container create \
  --resource-group myresourcegroup \
  --name taskmaster-agent \
  --image taskmaster.azurecr.io/taskmaster-agent:latest \
  --memory 1.0 \
  --cpu 0.5 \
  --environment-variables SECRETS_PROVIDER=azure AZURE_VAULT_URL=https://taskmaster-kv.vault.azure.net/ LLM_PROVIDER=azure \
  --registry-login-server taskmaster.azurecr.io \
  --registry-username taskmaster \
  --registry-password <password>
# For production, use Azure Container Apps
# (requires a Container Apps environment; note --env-vars, not --environment-variables)
az containerapp env create \
  --name taskmaster-env \
  --resource-group myresourcegroup \
  --location eastus
az containerapp create \
  --name taskmaster-agent \
  --resource-group myresourcegroup \
  --environment taskmaster-env \
  --image taskmaster.azurecr.io/taskmaster-agent:latest \
  --env-vars SECRETS_PROVIDER=azure AZURE_VAULT_URL=https://taskmaster-kv.vault.azure.net/ \
  --memory 1.0Gi \
  --cpu 0.5
```
### 5. Setup: Azure Identity (Managed Identity)

```bash
# Create managed identity
az identity create \
  --name taskmaster-identity \
  --resource-group myresourcegroup
# Grant Key Vault access
az keyvault set-policy \
  --name taskmaster-kv \
  --object-id $(az identity show -n taskmaster-identity -g myresourcegroup --query principalId -o tsv) \
  --secret-permissions get list
# Associate with the Container App
az containerapp identity assign \
  --name taskmaster-agent \
  --resource-group myresourcegroup \
  --user-assigned taskmaster-identity
```
## Comparison: Deployment Effort
| Task | AWS | GCP | Azure |
|---|---|---|---|
| Secrets Storage | 3 min | 3 min | 3 min |
| Database Setup | 10 min | 10 min | 10 min |
| Container Registry | 5 min | 5 min | 5 min |
| App Deployment | 10 min (ECS) | 5 min (Cloud Run) | 10 min (Container Apps) |
| IAM/Auth Setup | 10 min | 5 min | 5 min |
| Total | ~38 min | ~28 min | ~33 min |
- **Easiest:** GCP Cloud Run (minimal config)
- **Most mature:** AWS ECS (best documentation)
- **Best enterprise fit:** Azure Container Apps (Azure AD integration)
## Code Changes Between Clouds: Zero ✅

```python
# Same agent code runs everywhere
from llm.provider import get_llm_provider
from embeddings.provider import get_embeddings_provider
from mcp_common.secrets import get_secrets_provider

llm = get_llm_provider()              # AWS/GCP/Azure chosen from env vars
embeddings = get_embeddings_provider()
secrets = get_secrets_provider()
# No if/elif/else cloud detection!
```
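A minimal sketch of how such a factory could dispatch on `SECRETS_PROVIDER` (the class names are assumptions, not TaskMaster's actual implementation; only the env-var name matches the deployments above):

```python
import os


class AwsSecrets: ...      # would wrap AWS Secrets Manager
class GcpSecrets: ...      # would wrap GCP Secret Manager
class AzureSecrets: ...    # would wrap Azure Key Vault


_PROVIDERS = {"aws": AwsSecrets, "gcp": GcpSecrets, "azure": AzureSecrets}


def get_secrets_provider():
    """Pick the secrets backend from SECRETS_PROVIDER (aws | gcp | azure)."""
    name = os.environ.get("SECRETS_PROVIDER", "aws").lower()
    try:
        return _PROVIDERS[name]()
    except KeyError:
        raise ValueError(f"Unknown SECRETS_PROVIDER: {name!r}") from None
```

The LLM and embeddings factories follow the same pattern, keyed on `LLM_PROVIDER` and `EMBEDDINGS_PROVIDER`.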
## Cost Comparison: 1 Month
| Component | AWS | GCP | Azure |
|---|---|---|---|
| LLM (1M tokens) | $5 | $3 | $6 |
| Embeddings (1M) | $1 | $0.30 | $1 |
| Database | $50 | $40 | $70 |
| Compute | $150 | $100 | $150 |
| Storage | $5 | $5 | $5 |
| Total | $211 | $148 | $232 |
Winner: GCP (30% cheaper)
Migration: AWS → GCP
-
Export data from AWS RDS:
-
Import to GCP Cloud SQL:
-
Update environment:
-
Redeploy:
docker-compose --profile gcp up
Total time: ~30 mins (mostly waiting for database import)
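The export/import steps above can be sketched with standard PostgreSQL tooling (hostnames and the GCS bucket name are placeholders; run `CREATE EXTENSION vector` on the target first, as in the GCP setup):

```shell
# 1. Export from AWS RDS as a plain-SQL dump
pg_dump -h taskmaster-db.xxx.rds.amazonaws.com -U postgres \
  --no-owner --no-acl -d postgres -f taskmaster.sql

# 2. Stage the dump in GCS and import into Cloud SQL
gsutil cp taskmaster.sql gs://my-bucket/taskmaster.sql
gcloud sql import sql taskmaster-db gs://my-bucket/taskmaster.sql --database=postgres

# 3. Point the stack at GCP and redeploy
export SECRETS_PROVIDER=gcp LLM_PROVIDER=gcp EMBEDDINGS_PROVIDER=gcp
docker-compose --profile gcp up
```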
## Monitoring Across Clouds

### AWS CloudWatch

```python
import time

import boto3

logs_client = boto3.client('logs')
# Log group and stream must already exist
# (create_log_group / create_log_stream)
logs_client.put_log_events(
    logGroupName='/taskmaster/agent',
    logStreamName='inference',
    logEvents=[{'timestamp': int(time.time() * 1000), 'message': 'Invoked LLM'}]
)
```
### GCP Cloud Logging

```python
from google.cloud import logging as gcp_logging

logger = gcp_logging.Client().logger("taskmaster-agent")
logger.log_struct({"action": "invoke_llm", "latency_ms": 1234})
```
### Azure Monitor

```python
# Requires an exporter to be configured first, e.g.
# azure-monitor-opentelemetry's configure_azure_monitor(connection_string=...)
from opentelemetry import trace

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("invoke_llm"):
    pass  # LLM call
```
## Summary: Multi-Cloud Readiness

- ✅ **MCP servers:** cloud-agnostic (secrets abstraction)
- ✅ **LLM provider:** cloud-agnostic (abstract `LLMProvider`)
- ✅ **Embeddings:** cloud-agnostic (abstract `EmbeddingsProvider`)
- ✅ **Agent code:** zero changes needed
- ✅ **Data migration:** ~30 min (database dump + restore)
- ✅ **Deployment:** 30–40 min per cloud

You can run this demo on AWS, GCP, or Azure with the exact same agent code.
**Which cloud should I pick for this demo?**
GCP for the cheapest and fastest deployment; AWS for the most documentation; Azure if you're already in the Microsoft ecosystem.

**Can I run across multiple clouds simultaneously?**
Yes, but it's complex: the agents would need multi-cloud routing logic. Recommended: pick one cloud for the MVP and add multi-cloud later.

**What happens if one cloud region goes down?**
Data is lost unless replicated. Mitigations: RDS cross-region read replicas (AWS), Cloud SQL cross-region replicas (GCP), geo-redundant backup and storage (Azure).

**How hard is it to move between clouds?**
Easy: the MCP servers plus the LLM/embeddings code (~1 hour). Hard: data migration if you have millions of vectors (~4–8 hours).