Multiple Deployment Pipelines in One Repo
You can run multiple independent deployment pipelines from a single repository. Each pipeline gets its own stages, steps, artifacts, and deploy cadence. A push that only touches your API code won’t trigger a rebuild of your frontend, and vice versa. This is how monorepos stay manageable as you add services.
How Auto-Discovery Works
Dev Ramps looks for pipeline definitions under the .devramps/ directory in your repository. Each subfolder becomes its own pipeline:
my-monorepo/
├── .devramps/
│   ├── api/
│   │   └── pipeline.yaml
│   └── worker/
│       └── pipeline.yaml
├── services/
│   ├── api/
│   │   ├── Dockerfile
│   │   └── src/
│   └── worker/
│       ├── Dockerfile
│       └── src/
└── infrastructure/
    ├── api/
    └── worker/
The folder name (e.g., api, worker) becomes the pipeline slug, which shows up in URLs, the dashboard, and API calls. When you push to a connected repository, Dev Ramps scans for .devramps/*/pipeline.yaml files and creates or updates pipelines automatically. No manual registration, no clicking through a UI.
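Conceptually, the scan is a glob over the repository. The snippet below recreates the example layout and runs the equivalent lookup in Python (an illustration of the discovery rule, not Dev Ramps' actual implementation):

```python
from pathlib import Path

# Recreate the example layout from the tree above.
for name in ("api", "worker"):
    d = Path("my-monorepo/.devramps") / name
    d.mkdir(parents=True, exist_ok=True)
    (d / "pipeline.yaml").touch()

# The same scan Dev Ramps performs: .devramps/*/pipeline.yaml.
# Each matching folder name becomes a pipeline slug.
slugs = sorted(p.parent.name for p in Path("my-monorepo/.devramps").glob("*/pipeline.yaml"))
print(slugs)  # ['api', 'worker']
```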
A Concrete Example
Say you have a monorepo with two services that both run on ECS: an API and a background worker. Each needs its own build, its own infrastructure, and its own deployment flow.
Here’s the API pipeline at .devramps/api/pipeline.yaml:
version: "1.0.0"
pipeline:
  cloud_provider: AWS
  pipeline_updates_require_approval: ALWAYS
  stages:
    - name: staging
      deployment_time_window: NONE
      account_id: "111111111111"
      region: us-east-1
      vars:
        env: staging
        service_name: api
    - name: production
      deployment_time_window: PACIFIC_WORKING_HOURS
      account_id: "222222222222"
      region: us-east-1
      vars:
        env: production
        service_name: api
  steps:
    - name: Synthesize Infrastructure
      id: infra
      type: DEVRAMPS:TERRAFORM:SYNTHESIZE
      params:
        requires_approval: ALWAYS
        source: /infrastructure/api
        variables:
          environment: ${{ vars.env }}
          region: ${{ stage.region }}
          image_url: ${{ stage.artifacts["API Image"].image_url }}
    - name: Deploy Service
      type: DEVRAMPS:ECS:DEPLOY
      goes_after: ["Synthesize Infrastructure"]
      params:
        cluster_name: ${{ steps.infra.ecs_cluster_name }}
        service_name: ${{ steps.infra.ecs_service_name }}
        reference_task_definition: ${{ steps.infra.task_definition }}
        image_url: ${{ stage.artifacts["API Image"].image_url }}
    - name: Bake Period
      type: DEVRAMPS:APPROVAL:BAKE
      params:
        duration_minutes: 5
  artifacts:
    API Image:
      type: DEVRAMPS:DOCKER:BUILD
      rebuild_when_changed:
        - /services/api
      params:
        dockerfile: /services/api/Dockerfile
And the worker pipeline at .devramps/worker/pipeline.yaml:
version: "1.0.0"
pipeline:
  cloud_provider: AWS
  pipeline_updates_require_approval: ALWAYS
  stages:
    - name: staging
      deployment_time_window: NONE
      account_id: "111111111111"
      region: us-east-1
      vars:
        env: staging
        service_name: worker
    - name: production
      deployment_time_window: PACIFIC_WORKING_HOURS
      account_id: "222222222222"
      region: us-east-1
      vars:
        env: production
        service_name: worker
  steps:
    - name: Synthesize Infrastructure
      id: infra
      type: DEVRAMPS:TERRAFORM:SYNTHESIZE
      params:
        requires_approval: ALWAYS
        source: /infrastructure/worker
        variables:
          environment: ${{ vars.env }}
          region: ${{ stage.region }}
          image_url: ${{ stage.artifacts["Worker Image"].image_url }}
    - name: Deploy Service
      type: DEVRAMPS:ECS:DEPLOY
      goes_after: ["Synthesize Infrastructure"]
      params:
        cluster_name: ${{ steps.infra.ecs_cluster_name }}
        service_name: ${{ steps.infra.ecs_service_name }}
        reference_task_definition: ${{ steps.infra.task_definition }}
        image_url: ${{ stage.artifacts["Worker Image"].image_url }}
  artifacts:
    Worker Image:
      type: DEVRAMPS:DOCKER:BUILD
      rebuild_when_changed:
        - /services/worker
      params:
        dockerfile: /services/worker/Dockerfile
Both pipelines deploy to the same AWS accounts and regions, but they build different Docker images, use different Terraform roots, and deploy to different ECS services. A commit that only touches files in /services/api/ will trigger the API pipeline but skip the worker image rebuild entirely.
Selective Rebuilds
The rebuild_when_changed field is what makes multi-pipeline repos practical. Without it, every push would rebuild every artifact in every pipeline, even if the relevant code didn’t change.
artifacts:
  API Image:
    type: DEVRAMPS:DOCKER:BUILD
    rebuild_when_changed:
      - /services/api
      - /shared/lib   # Shared code that the API depends on
    params:
      dockerfile: /services/api/Dockerfile
Dev Ramps compares the paths you specify against the files changed in each push. If nothing in those paths changed, the artifact from the previous successful build is reused. This keeps deploy times short. Changing a README or a file in an unrelated service won’t trigger a 10-minute Docker build you don’t need.
A few things to know about the paths:
- They’re relative to the repository root and start with /
- Directory paths match all files recursively
- Glob patterns aren’t supported yet, so use directory paths
- If you omit rebuild_when_changed entirely, the artifact rebuilds on every push
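The matching rules boil down to prefix comparison. Here is a small Python sketch of that logic (an illustration, not Dev Ramps' actual code):

```python
def needs_rebuild(watch_paths, changed_files):
    """True if any changed file falls under one of the watched paths."""
    for changed in changed_files:
        path = "/" + changed.lstrip("/")       # git reports paths without a leading slash
        for watch in watch_paths:
            root = "/" + watch.strip("/")      # config paths start with "/"
            if path == root or path.startswith(root + "/"):
                return True                    # directory match is recursive
    return False

# A push touching only the API rebuilds the API image but not the worker's:
needs_rebuild(["/services/api"], ["services/api/src/main.py"])     # True
needs_rebuild(["/services/worker"], ["services/api/src/main.py"])  # False
```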
If your services share common libraries (a /shared/ or /packages/ directory), include those paths in the rebuild_when_changed for each service that depends on them. Otherwise you’ll get stale builds that miss shared code changes.
Branch Tracking
By default, every pipeline tracks the main branch. But each pipeline can track a different branch with the tracks field:
# .devramps/api/pipeline.yaml
pipeline:
  cloud_provider: AWS
  pipeline_updates_require_approval: ALWAYS
  tracks: main
  # ...

# .devramps/api-experimental/pipeline.yaml
pipeline:
  cloud_provider: AWS
  pipeline_updates_require_approval: NEVER
  tracks: experimental
  # ...
Only pushes to the tracked branch trigger that pipeline. This is useful for a few patterns: running an experimental branch through a separate pipeline with different stages, or having a release branch that triggers production deploys while main only deploys to staging.
Things to Watch Out For
Keep pipeline names descriptive. The folder name becomes the slug that shows up everywhere. .devramps/api/ is better than .devramps/service-1/. Your on-call engineer at 2am will thank you.
Shared infrastructure gets tricky. If both services share a VPC, database, or load balancer, you need to decide which pipeline owns that Terraform. The cleanest approach: put shared infrastructure in its own pipeline (e.g., .devramps/platform/) that runs Terraform only, and have the service pipelines reference the outputs via remote state or SSM parameters.
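A sketch of that split, assuming the same config shape as the examples above (the platform pipeline and its stage layout are illustrative, not from the Dev Ramps reference):

```yaml
# .devramps/platform/pipeline.yaml -- owns the shared VPC, database, etc.
version: "1.0.0"
pipeline:
  cloud_provider: AWS
  pipeline_updates_require_approval: ALWAYS
  stages:
    - name: staging
      deployment_time_window: NONE
      account_id: "111111111111"
      region: us-east-1
  steps:
    - name: Synthesize Infrastructure
      type: DEVRAMPS:TERRAFORM:SYNTHESIZE
      params:
        requires_approval: ALWAYS
        source: /infrastructure/platform
```

The platform Terraform then publishes its outputs (VPC ID, subnet IDs, and so on) to remote state or SSM parameters, and the api and worker Terraform roots read those values instead of defining the resources themselves.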
Watch for cross-service dependencies. If your worker consumes messages that the API produces, deploying them independently means they can briefly be on different versions. This is usually fine, but design your message schemas to be backwards-compatible. Don’t rename a field in the API and the worker in the same commit and expect both pipelines to deploy atomically.
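One way to keep that safe is a tolerant consumer: the worker ignores unknown fields and defaults missing ones, so it handles payloads from either API version. A minimal Python sketch (the `job_id` and `priority` field names are hypothetical):

```python
import json

def parse_job(message: str) -> dict:
    """Tolerant consumer: survives messages from older and newer producers."""
    raw = json.loads(message)
    return {
        "job_id": raw["job_id"],                    # required in every schema version
        "priority": raw.get("priority", "normal"),  # added later; default covers old producers
    }

# An old producer (no "priority") and a new one (extra field) both parse cleanly:
parse_job('{"job_id": "42"}')
parse_job('{"job_id": "42", "priority": "high", "trace_id": "abc"}')
```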
Don’t duplicate stage configuration unnecessarily. If all your pipelines deploy to the same accounts and regions, the stage blocks will look similar across pipelines. That’s fine. Resist the urge to build a meta-abstraction on top. Each pipeline being self-contained and readable is more valuable than DRY stage configs.
Further Reading
For the full configuration reference, see the pipeline overview and artifacts documentation. If you’re new to deployment pipelines in general, start with what is a deployment pipeline.