How to Deploy a Containerized App to AWS ECS
To deploy a containerized application to AWS ECS, you need a Dockerfile, an ECS cluster with a service and task definition, and a deployment pipeline that builds your image, pushes it to ECR, and updates the ECS service. This guide walks through the full setup using Dev Ramps to handle the build-push-deploy cycle automatically on every git push.
What You Need Before Starting
You’ll need a few things in place before configuring the pipeline:
- A Dockerfile in your repository. Dev Ramps builds the image for you, so you don’t need a local Docker build step or manual ECR push.
- An ECS cluster and service in your target AWS account. You can create these with Terraform, CloudFormation, or the AWS console.
- A task definition that defines your container’s CPU, memory, environment variables, and port mappings. Dev Ramps uses this as a reference template and creates new revisions with updated images on each deploy.
- Dev Ramps bootstrapped in your AWS account. Run `npx @devramps/cli bootstrap` to set up the IAM roles that allow Dev Ramps to deploy into your account. Use `--dry-run` first to preview the changes.
If you’re managing your ECS infrastructure with Terraform, the task definition ARN can be passed directly from Terraform outputs into the deploy step. More on that below.
Defining Your Docker Artifact
The artifacts section in your pipeline.yaml tells Dev Ramps how to build your Docker image. Here’s a typical configuration:
```yaml
artifacts:
  API Image:
    type: DEVRAMPS:DOCKER:BUILD
    rebuild_when_changed:
      - /src
      - /Dockerfile
      - /package.json
    params:
      dockerfile: /Dockerfile
      build_root: /
      args:
        - NODE_ENV=production
```
On each push, Dev Ramps checks whether any files in the rebuild_when_changed paths actually changed. If you only edited a README or a Terraform file, the image build is skipped entirely and the previous artifact is reused. This saves a few minutes per deployment when your application code hasn’t changed.
The build runs on an isolated VM (not in a shared container). You can choose the VM size with host_size: small (2 vCPUs, 4GB), medium (4 vCPUs, 8GB), or large (8 vCPUs, 16GB). The built image is automatically pushed to an ECR repository in your CI/CD account, then mirrored to each target account before the deploy step runs.
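Assuming `host_size` sits alongside the other build params (the exact placement is an assumption here), choosing a larger build VM looks like this:

```yaml
artifacts:
  API Image:
    type: DEVRAMPS:DOCKER:BUILD
    params:
      dockerfile: /Dockerfile
      host_size: large   # 8 vCPUs, 16GB; useful when builds compile native dependencies
```

The default size is usually enough for a plain `npm install` and image assembly; bump it only if builds are CPU- or memory-bound.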
The ECS Deploy Step
The DEVRAMPS:ECS:DEPLOY step updates your ECS service with a new container image. It reads your existing task definition, creates a new revision with the updated image URL, updates the service, and waits for the rolling deployment to stabilize.
```yaml
steps:
  - name: Deploy to ECS
    type: DEVRAMPS:ECS:DEPLOY
    params:
      cluster_name: my-cluster
      service_name: api-service
      reference_task_definition: arn:aws:ecs:us-east-1:123456789012:task-definition/api:1
      images:
        - container_name: api
          image: "${{ stage.artifacts[\"API Image\"].image_url }}"
      timeout: 30
```
The reference_task_definition is the ARN of your existing task definition. Dev Ramps doesn’t modify this definition. It creates a new revision, swaps in the fresh image URL, and points the ECS service at the new revision. All the settings you’ve configured in the task definition (memory, CPU, environment variables, log configuration) carry over unchanged.
The images array maps container names in your task definition to the image URLs from your build artifacts. If your task definition has multiple containers (say, an app container and a sidecar), you can map each one separately.
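For a two-container task, the mapping might look like this (the sidecar's container name and artifact name are hypothetical):

```yaml
images:
  - container_name: api
    image: "${{ stage.artifacts[\"API Image\"].image_url }}"
  - container_name: log-router                                 # hypothetical sidecar container
    image: "${{ stage.artifacts[\"Sidecar Image\"].image_url }}"
```

Containers in the task definition that aren't listed here keep the image from the reference revision.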
During the rolling deployment, ECS launches new tasks with the updated image, waits for them to pass health checks through your Application Load Balancer, then drains the old tasks. The step doesn’t complete until the deployment stabilizes. If the timeout expires before that happens, the step fails.
Full Pipeline Example
Here’s a complete pipeline.yaml that builds a Docker image, runs Terraform for infrastructure, deploys to ECS, and includes a bake period before promoting to production:
```yaml
version: "1.0.0"
pipeline:
  cloud_provider: AWS
  pipeline_updates_require_approval: ALWAYS
  stages:
    - name: staging
      account_id: "111111111111"
      region: us-east-1
      vars:
        env: staging
    - name: production
      account_id: "222222222222"
      region: us-east-1
      auto_rollback_alarm_name: api-health-alarm
      vars:
        env: production
steps:
  - name: Synthesize Infrastructure
    id: infra
    type: DEVRAMPS:TERRAFORM:SYNTHESIZE
    params:
      requires_approval: DESTRUCTIVE_CHANGES_ONLY
      source: /infrastructure
      variables:
        environment: ${{ vars.env }}
  - name: Deploy to ECS
    type: DEVRAMPS:ECS:DEPLOY
    goes_after: ["Synthesize Infrastructure"]
    params:
      cluster_name: ${{ steps.infra.ecs_cluster_name }}
      service_name: ${{ steps.infra.ecs_service_name }}
      reference_task_definition: ${{ steps.infra.task_definition_arn }}
      images:
        - container_name: api
          image: ${{ stage.artifacts["API Image"].image_url }}
  - name: 5 Minute Bake
    type: DEVRAMPS:APPROVAL:BAKE
    params:
      duration_minutes: 5
artifacts:
  API Image:
    type: DEVRAMPS:DOCKER:BUILD
    rebuild_when_changed:
      - /src
      - /Dockerfile
      - /package.json
    params:
      dockerfile: /Dockerfile
```
A few things to notice. The goes_after field ensures the ECS deploy waits for Terraform to finish, since the deploy step references Terraform outputs (${{ steps.infra.ecs_cluster_name }}). The bake period pauses for 5 minutes after the deploy, giving you time to spot issues before the pipeline auto-promotes to production.
Steps are shared across stages. The same deploy step runs against staging first, then production, with stage-specific variables and account IDs substituted via expressions.
Health Checks and Auto-Rollback
ECS rolling deployments already use ALB health checks to verify new tasks are healthy before draining old ones. But health checks alone won’t catch problems that surface after the deployment completes, like a slow memory leak or a spike in error rates under real traffic.
For that, add a CloudWatch alarm to your production stage. The auto_rollback_alarm_name field (shown in the full example above) tells Dev Ramps to monitor that alarm during and after deployment. If the alarm fires during a deploy, Dev Ramps cancels the deployment and rolls back to the last successfully deployed revision. If it fires after the deploy has succeeded, it blocks new deployments until the alarm clears.
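The alarm itself can be defined however you manage infrastructure. As one sketch, here is a CloudFormation alarm on ALB 5xx responses; the metric choice and thresholds are illustrative, and the load balancer dimension value is a placeholder for your own ALB:

```yaml
ApiHealthAlarm:
  Type: AWS::CloudWatch::Alarm
  Properties:
    AlarmName: api-health-alarm            # must match auto_rollback_alarm_name
    Namespace: AWS/ApplicationELB
    MetricName: HTTPCode_Target_5XX_Count
    Dimensions:
      - Name: LoadBalancer
        Value: app/my-alb/50dc6c495c0c9188  # your ALB's dimension value
    Statistic: Sum
    Period: 60
    EvaluationPeriods: 3
    Threshold: 10
    ComparisonOperator: GreaterThanThreshold
    TreatMissingData: notBreaching
```

Three consecutive minutes of elevated 5xx counts trips the alarm; `TreatMissingData: notBreaching` keeps quiet periods from triggering a rollback.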
For more on setting up rollback alarms and choosing good metrics, see deployment rollback strategies.
Common Pitfalls
Wrong task definition ARN format. The reference_task_definition needs the full ARN including the revision number, like arn:aws:ecs:us-east-1:123456789012:task-definition/api:1. If you’re using Terraform outputs, the ARN comes back correctly. If you’re hardcoding it, don’t forget the :1 (or whichever revision) at the end.
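To make the difference concrete, compare a complete ARN with the family-only form that will be rejected:

```yaml
# Correct: includes the revision suffix
reference_task_definition: arn:aws:ecs:us-east-1:123456789012:task-definition/api:1

# Wrong: family-only ARN, no revision
# reference_task_definition: arn:aws:ecs:us-east-1:123456789012:task-definition/api
```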
Health check timeouts. If your container takes 30 seconds to start but your ALB health check expects a response within 5 seconds with a 2-check threshold, the deployment will oscillate between starting tasks and failing health checks. Set your health check interval and grace period to account for your application’s actual startup time.
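For a container that needs around 30 seconds to start, the relevant knobs look roughly like this (shown as CloudFormation property names; the values are illustrative):

```yaml
# On the ALB target group: slower cadence, more tolerance
HealthCheckIntervalSeconds: 30
HealthCheckTimeoutSeconds: 10
HealthyThresholdCount: 2
UnhealthyThresholdCount: 5

# On the ECS service: ignore health check failures during startup
HealthCheckGracePeriodSeconds: 60
```

The grace period is the usual fix for slow-starting containers: ECS won't count failed health checks against a task until it expires.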
Missing bootstrap. If you add a new step type (like DEVRAMPS:ECS:DEPLOY) to a pipeline that was previously only running Terraform, you need to re-run npx @devramps/cli bootstrap to update the IAM role with ECS permissions. The deploy will fail with an access denied error otherwise.
Image architecture mismatch. If your ECS tasks run on ARM-based Graviton instances but your Docker build defaults to linux/amd64, the container will fail to start. Set architecture: "linux/arm64" in your artifact configuration to match your ECS compute.
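Assuming `architecture` sits under the artifact's `params` alongside `dockerfile` (the exact placement may differ), the fix looks like:

```yaml
artifacts:
  API Image:
    type: DEVRAMPS:DOCKER:BUILD
    params:
      dockerfile: /Dockerfile
      architecture: "linux/arm64"   # match Graviton-based ECS compute
```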
Conclusion
The core loop is simple: define a Docker artifact, add an ECS deploy step that references your task definition, and push code. Dev Ramps handles the image build, ECR push, cross-account mirroring, task definition revision, and rolling deployment. Add a CloudWatch alarm for automatic rollback, and you have a production-grade ECS deployment pipeline in about 40 lines of YAML. For the full configuration reference, see the ECS deploy step docs.