Adding Another Microservice to Your Pipeline
You have one service deploying through Dev Ramps. Now you need a second. The pattern is straightforward: create a new folder under .devramps/ with its own pipeline.yaml, scope the artifact builds so each service only rebuilds when its own code changes, and re-bootstrap if you’re targeting a new AWS account or using step types you haven’t used before. The whole process takes about 10 minutes.
Folder Structure
Each pipeline in Dev Ramps lives in its own folder under .devramps/. The folder name becomes the pipeline slug. If your existing service is an API and you’re adding a worker, your repo might look like this:
my-project/
├── .devramps/
│   ├── api/
│   │   └── pipeline.yaml      # existing pipeline
│   └── worker/
│       └── pipeline.yaml      # new pipeline
├── services/
│   ├── api/
│   │   ├── src/
│   │   ├── Dockerfile
│   │   └── package.json
│   └── worker/
│       ├── src/
│       ├── Dockerfile
│       └── package.json
├── infrastructure/
│   └── main.tf                # shared Terraform
└── shared/
    └── schemas/               # shared code
Dev Ramps auto-discovers pipelines by scanning for .devramps/*/pipeline.yaml on each push. You don’t need to register the new pipeline anywhere. Just commit the folder and push.
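Discovery is just a glob over the repo. Here's a minimal sketch of that same scan in Python, purely to illustrate the convention — `discover_pipelines` is a hypothetical helper, not part of the Dev Ramps CLI:

```python
from pathlib import Path

def discover_pipelines(repo_root: str) -> dict[str, Path]:
    """Map pipeline slug -> pipeline.yaml path, mirroring the
    .devramps/*/pipeline.yaml scan described above."""
    root = Path(repo_root) / ".devramps"
    return {
        p.parent.name: p  # the folder name becomes the pipeline slug
        for p in sorted(root.glob("*/pipeline.yaml"))
    }
```

With the layout above, this would find two slugs: `api` and `worker`.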
Creating the New Pipeline
The new service’s pipeline.yaml follows the same structure as your first one. Here’s a minimal example for the worker service:
version: "1.0.0"
pipeline:
  cloud_provider: AWS
  pipeline_updates_require_approval: ALWAYS
  stages:
    - name: staging
      account_id: "111111111111"
      region: us-east-1
      vars:
        env: staging
    - name: production
      account_id: "222222222222"
      region: us-east-1
      vars:
        env: production
  steps:
    - name: Synthesize Infrastructure
      id: infra
      type: DEVRAMPS:TERRAFORM:SYNTHESIZE
      params:
        requires_approval: DESTRUCTIVE_CHANGES_ONLY
        source: /infrastructure
        variables:
          environment: ${{ vars.env }}
          service_name: worker
    - name: Deploy Worker
      type: DEVRAMPS:ECS:DEPLOY
      goes_after: ["Synthesize Infrastructure"]
      params:
        cluster_name: ${{ steps.infra.ecs_cluster_name }}
        service_name: ${{ steps.infra.worker_service_name }}
        reference_task_definition: ${{ steps.infra.worker_task_definition_arn }}
        images:
          - container_name: worker
            image: ${{ stage.artifacts["Worker Image"].image_url }}
  artifacts:
    Worker Image:
      type: DEVRAMPS:DOCKER:BUILD
      rebuild_when_changed:
        - /services/worker
        - /shared
      params:
        dockerfile: /services/worker/Dockerfile
        build_root: /
The important detail here is rebuild_when_changed. It’s scoped to /services/worker and /shared (if your worker depends on shared code). When someone pushes a change that only touches /services/api, the worker image doesn’t rebuild. The previous artifact gets reused and the deploy step runs with the existing image. This keeps deployment times down in a monorepo where services change independently.
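The path check itself amounts to prefix matching on the changed file paths. A sketch of that decision, assuming simple directory-prefix semantics — `needs_rebuild` is a hypothetical helper, not the CLI's actual implementation:

```python
def needs_rebuild(changed_files: list[str], watch_paths: list[str]) -> bool:
    """True if any changed file falls under one of the artifact's
    rebuild_when_changed paths (directory-prefix match)."""
    prefixes = [p.rstrip("/") + "/" for p in watch_paths]
    return any(f.startswith(pre) for f in changed_files for pre in prefixes)
```

Under this sketch, a commit touching only `/services/api/src/index.ts` leaves the worker image alone, while any change under `/shared/` rebuilds it.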
When to Re-Bootstrap
If your new service deploys to the same AWS accounts using the same step types as your existing pipeline, you don’t need to re-bootstrap. The IAM roles already have the right permissions.
You do need to re-bootstrap when:
- The new pipeline targets an AWS account that hasn’t been bootstrapped yet. Each account needs its own IAM role and OIDC provider.
- The new pipeline uses step types you haven't used before. If your API uses DEVRAMPS:ECS:DEPLOY and the new worker uses DEVRAMPS:EC2:DEPLOY, the IAM role needs additional permissions.
- You've added custom IAM policies in aws_additional_iam_policies.yaml for the new pipeline.
To re-bootstrap:
# Preview changes first
npx @devramps/cli bootstrap --dry-run
# Apply
npx @devramps/cli bootstrap
You can also scope the bootstrap to just the new pipeline:
npx @devramps/cli bootstrap --pipeline-slugs worker
Sharing Terraform Across Services
Both pipelines can reference the same /infrastructure directory for Terraform. Each pipeline passes different variables, so Terraform knows which resources to create or update.
Your Terraform module might use the service_name variable to create per-service resources:
variable "service_name" {
type = string
}
resource "aws_ecs_service" "main" {
name = var.service_name
cluster = aws_ecs_cluster.main.id
task_definition = aws_ecs_task_definition.main.arn
# ...
}
output "ecs_cluster_name" {
value = aws_ecs_cluster.main.name
}
# Terraform output names must be static identifiers (interpolation like
# "${var.service_name}_service_name" is not allowed), so declare one
# output per service:
output "worker_service_name" {
  value = aws_ecs_service.main.name
}
output "worker_task_definition_arn" {
  value = aws_ecs_task_definition.main.arn
}
Dev Ramps manages Terraform state automatically. Each pipeline gets its own state file per stage, so the API pipeline’s Terraform runs won’t interfere with the worker pipeline’s state.
One thing to watch: if both pipelines modify the same Terraform module and you push changes to both at once, the two Terraform applies could conflict. In practice this is rare because separate pushes trigger separate deployments, and Dev Ramps locks each stage during execution. But if you find yourself in this situation, consider splitting shared infrastructure (like the VPC and cluster) into its own pipeline that deploys first, and keep per-service resources in each service’s Terraform.
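If you do split shared infrastructure out, the extra pipeline is just another folder under .devramps/. A hypothetical .devramps/infra/pipeline.yaml might look like this — the step and param names follow the examples above, and the /infrastructure/shared source path is an assumption about how you'd organize the split:

```yaml
version: "1.0.0"
pipeline:
  cloud_provider: AWS
  stages:
    - name: staging
      account_id: "111111111111"
      region: us-east-1
  steps:
    - name: Synthesize Shared Infrastructure
      type: DEVRAMPS:TERRAFORM:SYNTHESIZE
      params:
        source: /infrastructure/shared   # VPC, ECS cluster, etc.
```

The service pipelines then read the shared outputs (cluster name, subnets) as Terraform variables rather than creating those resources themselves.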
Independent Deploy Cycles
Each pipeline runs independently. A push that changes files in /services/api triggers the API pipeline. A push that changes /services/worker triggers the worker pipeline. If a commit touches both directories, both pipelines trigger simultaneously but deploy independently.
This means services can move at different speeds. The API might be on its 50th revision while the worker is on its 10th. Staging and production promotions happen per-pipeline, so a failure in the worker’s staging deploy doesn’t block the API’s production promotion.
If you need both services to deploy atomically (same commit, same order, guaranteed), that requires putting both artifacts and deploy steps in a single pipeline. But for most microservice setups, independent pipelines are simpler and give each team more control over their own deploy cadence.
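Sketched in pipeline terms, the atomic variant puts both deploy steps in one pipeline.yaml and orders them with goes_after — step names here are illustrative:

```yaml
steps:
  - name: Deploy API
    type: DEVRAMPS:ECS:DEPLOY
    # params as in the single-service example ...
  - name: Deploy Worker
    type: DEVRAMPS:ECS:DEPLOY
    goes_after: ["Deploy API"]   # guarantees ordering within one run
    # params ...
```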
Conclusion
Adding a service is mostly copy-and-configure. Create a new folder under .devramps/, write a pipeline.yaml with scoped rebuild_when_changed paths, and push. Re-bootstrap only if you’re hitting a new account or step type. For a deeper walkthrough of the ECS deploy step itself, see deploying a containerized app to ECS. For the full pipeline YAML reference, check the pipeline configuration docs.