Multi-Account AWS Deployment Best Practices
Running everything in a single AWS account works fine until it doesn’t. A misconfigured IAM policy in staging deletes your production database. A developer’s experiment racks up a $40,000 bill that gets lumped in with production costs. An overly broad role lets a CI job touch resources it was never meant to reach. Separate AWS accounts per environment fix all of these problems by putting hard boundaries between workloads.
Why Separate Accounts Matter
AWS accounts are the strongest isolation boundary available. IAM policies, VPCs, and resource tags all help, but they’re soft limits that one bad policy can punch through. An account boundary is absolute: a role in your staging account physically cannot modify resources in your production account unless you explicitly grant cross-account access.
Three things drive most teams to multi-account setups:
Blast radius containment. A Terraform destroy that runs against the wrong environment is annoying in staging. In production, it’s an incident. When staging and production live in different accounts, there’s no way to accidentally target the wrong one with the wrong credentials.
Billing separation. AWS Cost Explorer can filter by tags, but tags are optional and inconsistently applied. Separate accounts give you clean billing per environment with zero tagging discipline required. Your CFO can see exactly what production costs versus what your dev team is experimenting with.
IAM boundaries. The IAM permission model is additive. In a single account, it’s surprisingly easy for policies to interact in ways that grant more access than intended. Separate accounts mean each environment has its own IAM namespace. A developer with admin access in the dev account has zero implicit access to production.
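When you do need to cross the account boundary deliberately, the grant has to be explicit on the resource account's side. A minimal sketch of a trust policy that lets a role from another account be assumed (the account ID, external ID, and statement ID here are illustrative):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCICDAccountToAssume",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "deploy-pipeline" }
      }
    }
  ]
}
```

Without a statement like this in the target account, no amount of permissions in the source account grants access, which is exactly the property that makes the account boundary hard.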
Common Account Structures
The most common pattern is straightforward: one account per environment, plus a dedicated CI/CD account.
┌─────────────────┐
│ CI/CD Account │ ← Artifacts, state files, ECR repos
└────────┬────────┘
│
┌────┴────┬──────────┐
▼ ▼ ▼
┌────────┐ ┌────────┐ ┌────────┐
│ Dev │ │Staging │ │ Prod │
└────────┘ └────────┘ └────────┘
The CI/CD account holds your Docker images, Terraform state, build artifacts, and encryption keys. It never runs application workloads. Target accounts (dev, staging, production) only run what gets deployed to them. This separation means a compromised build step can’t directly access production resources, because it’s operating in a completely different account.
Some teams go further with per-team or per-workload accounts. This makes sense when teams operate independently and you want billing and access isolated at the team level. But for most organizations, environment-based separation covers 90% of the value. Start there and split further if you actually need it.
Cross-Account IAM Without Long-Lived Keys
The biggest pitfall in multi-account setups is how you handle authentication between accounts. Long-lived AWS access keys are the wrong answer. They get committed to repos, leaked in logs, shared in Slack messages, and forgotten in CI environment variables long after someone leaves the team.
OIDC federation eliminates this entirely. Instead of storing credentials, your CI/CD platform requests temporary STS tokens from each target account at deployment time. The tokens expire automatically, and there’s nothing to rotate or leak.
DevRamps sets this up during bootstrap. Running npx @devramps/cli bootstrap creates an OIDC identity provider and a scoped IAM role in each target account. The trust policy restricts access to your specific organization and pipeline:
{
"Effect": "Allow",
"Principal": { "Federated": "arn:aws:iam::oidc-provider/devramps.com" },
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"devramps.com:org": "your-org",
"devramps.com:pipeline": "your-pipeline"
}
}
}
This means a compromised pipeline in another org can’t assume your deployment role. And the IAM policies on that role are scoped to the specific step types your pipeline uses. If you only deploy ECS services, the role doesn’t have Lambda or EKS permissions.
When you need additional permissions beyond the built-in step types, add them explicitly in aws_additional_iam_policies.yaml with specific actions and resource ARNs. For more on least-privilege IAM in pipelines, see CI/CD Pipeline Security.
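As a rough illustration of what an entry in that file might look like, here is a sketch granting read access to SSM parameters; the statement shape shown is illustrative rather than confirmed DevRamps schema, so check the reference docs for the exact field names:

```yaml
# Sketch only — field names are illustrative, not confirmed syntax.
policies:
  - name: read-app-config
    statements:
      - effect: Allow
        actions:
          - ssm:GetParameter
          - ssm:GetParametersByPath
        resources:
          - arn:aws:ssm:us-east-1:222222222222:parameter/myapp/*
```

The important habit is the same regardless of schema: name specific actions and resource ARNs rather than reaching for wildcards.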
Setting Up Multi-Account Pipelines
With DevRamps, multi-account deployment is a matter of specifying account_id on each stage. Each stage targets a specific account and region, and artifacts are automatically mirrored from the CI/CD account to the target account before deployment begins.
stages:
- name: staging
account_id: "111111111111"
region: us-east-1
vars:
env: staging
replicas: 1
- name: prod-us-east-1
account_id: "222222222222"
region: us-east-1
vars:
env: prod
replicas: 3
auto_rollback_alarm_name: api-health-us-east-1
- name: prod-us-west-2
account_id: "222222222222"
region: us-west-2
vars:
env: prod
replicas: 3
auto_rollback_alarm_name: api-health-us-west-2
Each account needs to be bootstrapped once before the first deployment:
# Bootstrap all target accounts referenced in your pipelines
npx @devramps/cli bootstrap
# Preview what will be created (dry-run)
npx @devramps/cli bootstrap --dry-run
The bootstrap creates a CloudFormation stack in each target account with the OIDC provider and deployment role. For accounts outside your AWS Organization, pass a custom role name: npx @devramps/cli bootstrap --target-account-role-name MyCustomRole.
Artifact mirroring happens automatically during stage execution. Docker images built once get pushed to the CI/CD account’s ECR, then mirrored to each target account’s ECR before deployment. You build once; DevRamps distributes. If the source artifact hasn’t changed since the last deployment, the mirror step is skipped entirely.
Terraform variables can reference the current stage’s account using expressions:
variables:
aws_account_id: ${{ stage.account_id }}
cicd_account_id: ${{ organization.cicd_account_id }}
environment: ${{ vars.env }}
For full stage configuration options, see the stages documentation.
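On the Terraform side, each value injected this way needs a matching variable declaration. A minimal sketch, assuming the names map one-to-one onto the expressions above:

```hcl
variable "aws_account_id" {
  type        = string
  description = "Account the current stage deploys into"
}

variable "cicd_account_id" {
  type        = string
  description = "Account that holds build artifacts and ECR repos"
}

variable "environment" {
  type        = string
  description = "Environment name, e.g. staging or prod"
}
```

Because the values come from the pipeline rather than a tfvars file, the same Terraform code deploys unchanged to every account.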
Environment Promotion and Safety
Stages execute sequentially, not in parallel. This is intentional. Your first production region acts as a canary: if the deployment fails or triggers a CloudWatch alarm, auto-rollback kicks in and the remaining regions never see the bad deploy.
A typical promotion flow looks like this:
1. Deploy to staging (auto-deploy on merge)
2. Run integration tests as a gate
3. Bake for 5 minutes, watching metrics
4. Deploy to prod-us-east-1 (first canary region)
5. Bake again, watching the auto-rollback alarm
6. Deploy to prod-us-west-2
If step 4 fails, step 6 never runs. Your us-west-2 region stays on the previous known-good version. This sequential model, combined with per-stage auto-rollback alarms, gives you progressive delivery across accounts and regions without needing a separate feature-flagging system.
You can also restrict when deployments happen. Production stages can use deployment time windows to prevent 2am deploys, while staging stays open for anytime deployment:
stage_defaults:
deployment_time_window: PACIFIC_WORKING_HOURS
stages:
- name: staging
deployment_time_window: NONE # Deploy anytime
# ...
- name: production
# Inherits PACIFIC_WORKING_HOURS from defaults
# ...
For deeper coverage of rollback strategies, see Deployment Rollback Strategies for AWS.
Secrets and Config Across Accounts
Managing configuration across accounts is where multi-account setups get messy without good tooling. You need some values shared across all stages and others scoped to a specific account or environment.
DevRamps handles this with two scopes. Organization-level secrets (like a shared Stripe API key) are available to all stages. Stage-level secrets (like a database password) are only available within that specific stage’s execution:
variables:
stripe_key: ${{ secret("STRIPE_API_KEY") }}
db_password: ${{ stage_secret("DB_PASSWORD") }}
Stage variables in your pipeline definition handle non-secret configuration. Things like replica counts, log levels, and feature flags vary per environment and are defined inline:
stages:
- name: staging
account_id: "111111111111"
vars:
log_level: debug
replicas: 1
- name: production
account_id: "222222222222"
vars:
log_level: info
replicas: 3
These variables flow into Terraform and deployment steps via expressions. The configuration lives in version control alongside your pipeline definition, so changes go through the same review process as code.
For test environments that need complete isolation, ephemeral environments can target a dedicated test account with their own variables and secrets, spun up per pull request and cleaned up automatically.
Wrapping Up
Multi-account AWS deployment comes down to three things: use separate accounts for real isolation, use OIDC instead of long-lived keys for cross-account access, and use sequential promotion with auto-rollback to limit the blast radius of bad deploys. The rest is configuration.
If you’re already running in a single account, start by separating production into its own account and bootstrapping a deployment role. You can add more accounts incrementally as your needs grow.