What Is a Deployment Pipeline?
A deployment pipeline is an automated sequence of steps that takes code from a repository and delivers it to a running environment. It builds your application, runs tests, and deploys the result, all without manual intervention. The pipeline replaces the error-prone ritual of SSHing into a box, pulling the latest commit, and crossing your fingers.
Why You Need One
Without a pipeline, deployments are a manual process. Someone runs a script, maybe copy-pastes some commands from a wiki page that was last updated eight months ago, and hopes everything works. When it doesn’t, nobody knows which step failed or which commit caused the issue.
This gets worse as your team grows. Two engineers deploying at the same time. A staging step that gets skipped because it’s Friday and everyone wants to go home. A production incident at 2am where the on-call engineer can’t tell what changed because there’s no audit trail.
A deployment pipeline fixes this by making the process repeatable and automatic. Every change goes through the same stages. Every deployment is logged. If something breaks, you know exactly which commit triggered it and which step failed.
Anatomy of a Pipeline
Most pipelines follow the same basic shape:
Source. A push to your main branch triggers the pipeline. This is the starting signal. Some teams trigger on every push; others only deploy from specific branches.
Build. The pipeline compiles your code, installs dependencies, and produces a deployable artifact. For containerized applications, this means building a Docker image. For serverless or static apps, it might be bundling files into a zip archive.
Test. Automated tests run against the build output. Unit tests, integration tests, maybe a quick smoke test against a temporary environment. If tests fail, the pipeline stops and nothing gets deployed.
Deploy. The artifact goes out to your target environment. This could be updating an ECS service, pushing a Lambda function, or applying Terraform changes. In a multi-stage pipeline, this happens once for staging, then again for production after the staging deployment proves stable.
Verify. After deployment, the pipeline checks that everything is actually working. This might be a health check endpoint, a bake period where you watch metrics, or an automated test suite running against the live environment. If verification fails, a good pipeline rolls back automatically.
Not every pipeline has all five phases. A simple project might skip the verification step. A Terraform-only pipeline has no build artifact. But the pattern holds: trigger, build, validate, deploy, confirm.
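As a rough sketch, the five phases map naturally onto a pipeline-as-code definition. The keys and step names below are purely illustrative, not a real schema, but they show how each phase becomes an explicit, ordered block in a config file:

```yaml
# Hypothetical sketch only: keys and step names are illustrative.
on: push_to_main            # Source: a push to main starts the run
artifacts:
  app_image:                # Build: produce a deployable artifact
    type: docker_build
    dockerfile: /Dockerfile
steps:
  - name: run_tests         # Test: the pipeline stops here on failure
    command: make test
  - name: deploy_staging    # Deploy: ship the artifact to an environment
    goes_after: [run_tests]
  - name: health_check      # Verify: confirm the deployment actually works
    goes_after: [deploy_staging]
    url: https://staging.example.com/healthz
```

The point is not the exact syntax; it's that every phase is written down, ordered, and visible, rather than living in someone's head.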
Pipeline as Code
Early CI/CD tools had you configure pipelines through a web UI. Click some buttons, fill in some forms, hope the settings match what you think they do. The problem is that the pipeline definition lives outside your codebase. It can’t be versioned, reviewed, or rolled back alongside the code it deploys.
Pipeline-as-code solves this by defining your pipeline in a YAML file that lives in your repository. You review pipeline changes in pull requests. You can see exactly what the pipeline does by reading the file. And if someone makes a bad change, you roll it back with a git revert.
Here’s what a real pipeline definition looks like in Dev Ramps:
```yaml
version: "1.0.0"
pipeline:
  cloud_provider: AWS
  pipeline_updates_require_approval: ALWAYS
  stages:
    - name: staging
      deployment_time_window: NONE
      account_id: "111111111111"
      region: us-east-1
      vars:
        env: staging
    - name: production
      deployment_time_window: PACIFIC_WORKING_HOURS
      account_id: "222222222222"
      region: us-east-1
      vars:
        env: production
  steps:
    - name: Synthesize Infrastructure
      id: infra
      type: DEVRAMPS:TERRAFORM:SYNTHESIZE
      params:
        requires_approval: ALWAYS
        source: /infrastructure
        variables:
          environment: ${{ vars.env }}
          region: ${{ stage.region }}
    - name: Deploy Service
      type: DEVRAMPS:ECS:DEPLOY
      goes_after: ["Synthesize Infrastructure"]
      params:
        cluster_name: ${{ steps.infra.ecs_cluster_name }}
        service_name: ${{ steps.infra.ecs_service_name }}
        reference_task_definition: ${{ steps.infra.task_definition }}
        image_url: ${{ stage.artifacts["API Image"].image_url }}
    - name: Bake Period
      type: DEVRAMPS:APPROVAL:BAKE
      params:
        duration_minutes: 5
  artifacts:
    API Image:
      type: DEVRAMPS:DOCKER:BUILD
      rebuild_when_changed:
        - /services/api
      params:
        dockerfile: /services/api/Dockerfile
```
This pipeline has two stages (staging and production), three steps (infrastructure provisioning, ECS deployment, and a bake period), and one Docker artifact. Staging deploys anytime; production only deploys during business hours. The `${{ }}` expressions pull in dynamic values such as stage variables and step outputs, so the same steps work across environments without duplication.
Everything is in the YAML file. Change it in a PR, get it reviewed, merge it. Dev Ramps picks up the new definition automatically on push.
Types of Pipelines
Single-stage. One environment, one deployment. Good for internal tools, prototypes, or projects where staging and production are the same thing. Simple to set up, simple to reason about.
Multi-stage. The most common pattern. Code deploys to staging first, gets validated (manually or automatically), then promotes to production. Some teams add a dedicated QA or integration environment between staging and production.
Multi-account and multi-region. For production workloads that need isolation or geographic distribution. Each stage targets a different AWS account or region. In Dev Ramps, stages execute sequentially, so your first production region acts as a natural canary. If monitoring shows problems, you stop the pipeline before the deployment reaches other regions.
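For the single-stage case, the definition can stay very small. Here's a minimal sketch reusing the schema from the example above; the account ID, service names, and paths are placeholders, and some required parameters are trimmed for brevity (check the configuration reference for the full field list):

```yaml
version: "1.0.0"
pipeline:
  cloud_provider: AWS
  stages:
    - name: production              # one stage: staging and production are the same
      account_id: "111111111111"    # placeholder account
      region: us-east-1
  steps:
    - name: Deploy Service
      type: DEVRAMPS:ECS:DEPLOY
      params:
        cluster_name: internal-tools       # placeholder names
        service_name: internal-tools-api
        image_url: ${{ stage.artifacts["App Image"].image_url }}
  artifacts:
    App Image:
      type: DEVRAMPS:DOCKER:BUILD
      params:
        dockerfile: /Dockerfile
```

Promoting this to a multi-stage pipeline later is mostly a matter of adding a second entry to `stages`; the steps and artifacts stay the same.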
Best Practices
Keep pipelines fast. A 45-minute pipeline discourages frequent deploys, which means larger changesets, which means more risk per deploy. Optimize your build step first, since that's usually the bottleneck. Use caching, skip unnecessary rebuilds (Dev Ramps does this with `rebuild_when_changed`), and parallelize where you can.
Gate with tests, not hope. Every pipeline should have at least one automated check before code reaches production. It doesn't have to be a full test suite: even a basic health check after deployment catches the obvious failures that slip through manual testing.
Treat pipeline config as production code. Review it in PRs. Don't let anyone push pipeline changes directly to main without review. In Dev Ramps, you can enforce this by setting `pipeline_updates_require_approval: ALWAYS`, which requires manual approval for any pipeline definition change.
Plan for rollback from day one. Your pipeline should make it easy to deploy the previous version. Automatic rollback on CloudWatch alarms, circuit-breaker bake periods, and clear deployment history all help. See our deployment rollback strategies post for the full breakdown.
Don’t deploy on Friday at 5pm. Or rather, use deployment time windows to prevent it. Restricting production deploys to business hours means someone is around to respond if things go wrong.
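Several of these practices correspond directly to fields in the pipeline definition. As one illustrative fragment pulling together the settings mentioned above (the values are examples, not recommendations for every team):

```yaml
pipeline:
  # Require manual approval for any change to the pipeline definition itself
  pipeline_updates_require_approval: ALWAYS
  stages:
    - name: production
      # Only deploy to production while someone is around to respond
      deployment_time_window: PACIFIC_WORKING_HOURS
  steps:
    - name: Bake Period
      type: DEVRAMPS:APPROVAL:BAKE
      params:
        duration_minutes: 5      # watch metrics before promoting further
  artifacts:
    API Image:
      type: DEVRAMPS:DOCKER:BUILD
      # Skip the rebuild entirely when nothing under this path changed
      rebuild_when_changed:
        - /services/api
```

None of these knobs replace good judgment, but encoding them in the definition means the guardrails apply to everyone, every time.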
Where to Go from Here
If you’re starting from scratch, deploying a containerized app to ECS walks through a complete example. For securing what you’ve built, read CI/CD pipeline security. And if you want to spin up temporary environments for testing PRs, check out ephemeral environments.
The full pipeline configuration reference covers every field and option available in Dev Ramps.