Pipelines
Pipelines let you deploy complex, multi-resource environments by chaining deployments together in a defined order. The key feature: outputs from one deployment automatically feed as inputs to the next.
Why pipelines?
Real-world infrastructure rarely consists of a single resource. A production environment might need:
- A VPC/VNet (networking foundation)
- Subnets and security groups (network segmentation)
- EC2/VM instances (compute — must reference the VPC)
- A database (must be in the same network)
- A load balancer (must reference the instances)
Output chaining
The core of pipelines is output chaining — outputs from a completed deployment become available as inputs for subsequent deployments.
How it works
- Each template defines outputs in its Terraform code (e.g., vpc_id, subnet_ids, security_group_id)
- When a deployment completes, Amnify captures these outputs
- The next deployment in the pipeline can reference these outputs as input variables
- This creates a dependency chain where infrastructure builds on itself
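A network template might expose its IDs as standard Terraform outputs, which Amnify captures when the step completes. A minimal sketch (the resource and output names here are illustrative, not Amnify requirements):

```hcl
# outputs.tf in the network template — these values are captured
# by Amnify when the deployment completes
output "vpc_id" {
  value = aws_vpc.main.id
}

output "subnet_ids" {
  value = aws_subnet.private[*].id
}

output "security_group_id" {
  value = aws_security_group.default.id
}
```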
Example: VPC + EC2 + RDS Pipeline
- Step 1 creates the network foundation and outputs the VPC ID and subnet IDs
- Step 2 receives the VPC ID and subnet ID from Step 1 to deploy the EC2 instance inside the correct network
- Step 3 receives the VPC ID and subnet IDs from Step 1 to deploy the database in the same network
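On the consuming side, the downstream template declares ordinary input variables, and the pipeline fills them from the previous step's outputs. A sketch of Step 2's EC2 template (variable names and AMI are assumptions for illustration):

```hcl
# variables.tf in the EC2 template — the pipeline supplies this
# value from the subnet_ids output of the completed VPC step
variable "subnet_id" {
  type        = string
  description = "Subnet from Step 1 to place the instance in"
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI
  instance_type = "t3.micro"
  subnet_id     = var.subnet_id # lands inside the Step 1 network
}
```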
Creating a pipeline
- Navigate to Deploy in the sidebar
- Go to Pipelines
- Click “Create Pipeline”
- Configure:
- Name — Descriptive name for the pipeline
- Cloud provider — AWS, Azure, or GCP
- Description — What the pipeline deploys
- Deployment order — Select and order the deployments that make up the pipeline
The deployments referenced in a pipeline must already exist. Create your individual deployments first, then assemble them into a pipeline.
Executing a pipeline
- Navigate to the pipeline and click “Execute”
- Amnify runs each deployment sequentially in the defined order
- Each step goes through the full plan → approve → apply cycle
- Outputs from completed steps are passed to subsequent steps
- Track progress across the entire pipeline in the Pipeline Run detail view
Execution flow
Pipeline statuses
| Status | Description |
|---|---|
| Queued | Pipeline execution is queued |
| Planning | Current step is running terraform plan |
| Awaiting Approval | Current step needs approval to proceed |
| Applying | Current step is provisioning infrastructure |
| Completed | All steps completed successfully |
| Failed | A step failed — the pipeline stops |
Referencing existing resources
Not every pipeline needs to create everything from scratch. You can also:
- Reference existing VPCs, subnets, and security groups using the resource discovery dropdowns when configuring deployment variables
- Mix new and existing resources — create a new EC2 instance in an existing VPC
- Build incrementally — add new deployments to a project over time
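From the template's point of view, an existing resource arrives the same way a chained output does: as an input variable, which the resource discovery dropdown fills with the ID of something already running. A sketch, assuming a `vpc_id` variable and Terraform's `aws_subnets` data source to look up the VPC's subnets:

```hcl
# The resource discovery dropdown fills this with an existing VPC ID
# instead of an output from an earlier pipeline step
variable "vpc_id" {
  type        = string
  description = "Existing VPC to deploy into"
}

# Look up the subnets already present in that VPC
data "aws_subnets" "existing" {
  filter {
    name   = "vpc-id"
    values = [var.vpc_id]
  }
}
```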
Common pipeline patterns
Network + Compute
Deploy a VPC first, then compute instances inside it:
- VPC → EC2 instances
- VNet → Azure VMs
- VPC → GCE instances
Full application stack
Deploy the complete infrastructure for an application:
- VPC → Subnets → Security Groups → EC2 → RDS → Load Balancer
Database + Application
Deploy a database first, then the application that connects to it:
- RDS → Application (with database endpoint as input)
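The endpoint hand-off in this pattern is just one output feeding one variable. A sketch with illustrative names on both sides:

```hcl
# Step 1 (RDS template) exposes the connection endpoint
output "db_endpoint" {
  value = aws_db_instance.main.endpoint
}

# Step 2 (application template) receives it as an input variable,
# filled by the pipeline from Step 1's output
variable "db_endpoint" {
  type        = string
  description = "Database endpoint from the RDS step"
}
```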
Multi-tier architecture
Deploy frontend and backend tiers in the same network:
- VPC → Frontend instances → Backend instances → Database