Preview Environments with Terraform: Per-PR Infrastructure on AWS with Bunnyshell
Guide · March 20, 2026 · 13 min read


Why Preview Environments with Terraform?

Most preview environment setups deploy application containers — your API, frontend, worker processes — into Kubernetes. The database? Usually a containerized PostgreSQL or MySQL running inside the same cluster. That works for many teams, but it breaks down when your application depends on managed cloud services that can't be containerized:

  • Amazon RDS with specific parameter groups, read replicas, or IAM authentication
  • ElastiCache (Redis/Memcached) with cluster mode or encryption at rest
  • S3 buckets with lifecycle policies and CORS configurations
  • SQS queues, SNS topics, DynamoDB tables, or any other AWS service
  • Custom VPC networking, security groups, or IAM roles

When your production stack uses these services, testing against containerized substitutes means you're not really testing your application — you're testing a simulation of it. Terraform preview environments solve this by provisioning real cloud infrastructure per pull request, giving every reviewer access to a fully production-like stack.

Bunnyshell's kind: Terraform component type lets you run Terraform modules as part of your environment lifecycle. Infrastructure is created on deploy, outputs are passed to application components, and everything is destroyed when the PR is closed.

With Bunnyshell + Terraform, you get:

  • Real cloud services per PR — Each pull request gets its own RDS instance, ElastiCache cluster, S3 bucket
  • Automatic provisioning — Terraform runs on environment create, no manual steps
  • Output sharing — Terraform outputs (endpoints, ARNs, credentials) are automatically available to application components
  • Automatic teardown — terraform destroy runs when the environment is deleted, cleaning up all cloud resources
  • State isolation — Each environment gets its own Terraform workspace, preventing state conflicts

How Bunnyshell Integrates with Terraform

Bunnyshell treats Terraform as a first-class component type. When you define a kind: Terraform component in your bunnyshell.yaml, Bunnyshell:

  1. Clones your Git repository containing the Terraform modules
  2. Runs terraform init with the configured backend
  3. Runs terraform apply with variables injected from the environment
  4. Captures outputs and makes them available to other components via {{ components.<name>.exported.<output> }}
  5. Runs terraform destroy when the environment is deleted (if destroyOnDelete: true)

This means your Terraform modules run inside the Bunnyshell pipeline — no separate CI/CD job, no manual triggers, no state drift between what's deployed and what's in Git.

The Component Lifecycle

Text
Environment Create/Deploy
  └── Terraform Component
        ├── terraform init (backend config)
        ├── terraform plan
        ├── terraform apply -auto-approve
        └── Export outputs → available to other components

Environment Delete
  └── Terraform Component (destroyOnDelete: true)
        ├── terraform init
        └── terraform destroy -auto-approve

Terraform components run before application components that depend on them. Bunnyshell resolves the dependency graph automatically — if your app component references {{ components.terraform-rds.exported.endpoint }}, Bunnyshell knows to provision infrastructure first.


Prerequisites

Before setting up Terraform preview environments, ensure you have:

  • A Bunnyshell account with a connected Kubernetes cluster — sign up free
  • An AWS account with programmatic access (Access Key ID + Secret Access Key)
  • Terraform modules in a Git repository (GitHub, GitLab, or Bitbucket)
  • An S3 bucket for Terraform remote state (we'll configure this below)
  • IAM permissions for the resources you want to provision (RDS, ElastiCache, S3, etc.)

Required IAM Policy

Your AWS credentials need permissions to create and destroy the target resources. Here's a minimal policy for the RDS + ElastiCache + S3 example in this guide:

JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "rds:CreateDBInstance",
        "rds:DeleteDBInstance",
        "rds:DescribeDBInstances",
        "rds:ModifyDBInstance",
        "rds:CreateDBSubnetGroup",
        "rds:DeleteDBSubnetGroup",
        "rds:DescribeDBSubnetGroups",
        "rds:AddTagsToResource",
        "rds:ListTagsForResource",
        "elasticache:CreateCacheCluster",
        "elasticache:DeleteCacheCluster",
        "elasticache:DescribeCacheClusters",
        "elasticache:CreateCacheSubnetGroup",
        "elasticache:DeleteCacheSubnetGroup",
        "elasticache:DescribeCacheSubnetGroups",
        "elasticache:AddTagsToResource",
        "elasticache:ListTagsForResource",
        "s3:CreateBucket",
        "s3:DeleteBucket",
        "s3:PutBucketPolicy",
        "s3:GetBucketPolicy",
        "s3:PutBucketCORS",
        "s3:GetBucketCORS",
        "s3:PutLifecycleConfiguration",
        "s3:GetLifecycleConfiguration",
        "s3:ListBucket",
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "ec2:DescribeVpcs",
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:CreateSecurityGroup",
        "ec2:DeleteSecurityGroup",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:RevokeSecurityGroupIngress"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::your-terraform-state-bucket",
        "arn:aws:s3:::your-terraform-state-bucket/*"
      ]
    }
  ]
}

For production use, scope the Resource fields to specific ARN patterns (e.g., arn:aws:rds:*:*:db:preview-*) and use IAM roles with OIDC instead of long-lived access keys.
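As a sketch of that scoping, the destructive RDS actions could be limited to instances matching the preview-* naming convention used by the modules in this guide (the ARN pattern here is illustrative, not a complete policy):

```json
{
  "Effect": "Allow",
  "Action": [
    "rds:DeleteDBInstance",
    "rds:ModifyDBInstance"
  ],
  "Resource": "arn:aws:rds:*:*:db:preview-*"
}
```

Actions that operate before a resource name exists (such as rds:CreateDBInstance with a subnet group) may still need broader resource scope; test the policy against a throwaway environment before tightening it everywhere.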


Step-by-Step: Adding Terraform Components to bunnyshell.yaml

Let's build a complete environment that provisions AWS infrastructure with Terraform and deploys an application on top of it.

Repository Structure

Your Git repository should contain Terraform modules alongside your application code:

Text
your-repo/
├── app/                          # Application source code
│   ├── Dockerfile
│   └── ...
├── terraform/
│   ├── rds/                      # RDS module
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   ├── elasticache/              # ElastiCache module
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── outputs.tf
│   └── storage/                  # S3 module
│       ├── main.tf
│       ├── variables.tf
│       └── outputs.tf
└── bunnyshell.yaml

The bunnyshell.yaml Configuration

Here's the complete environment configuration with Terraform and application components:

YAML
kind: Environment
name: preview-with-terraform
type: primary

environmentVariables:
  AWS_ACCESS_KEY_ID: SECRET["your-aws-access-key"]
  AWS_SECRET_ACCESS_KEY: SECRET["your-aws-secret-key"]
  AWS_REGION: us-east-1
  DB_USERNAME: appuser
  DB_PASSWORD: SECRET["your-db-password"]
  VPC_ID: vpc-0abc123def456
  SUBNET_IDS: '["subnet-aaa111", "subnet-bbb222"]'

components:
  # ── Terraform: RDS Database ──
  - kind: Terraform
    name: terraform-rds
    gitRepo: 'https://github.com/your-org/your-repo.git'
    gitBranch: main
    gitApplicationPath: terraform/rds
    runnerImage: 'hashicorp/terraform:1.7'
    deploy:
      - 'cd /bns/repo/terraform/rds'
      - |
        cat << TFVARS > terraform.tfvars
        environment_name = "{{ env.unique }}"
        vpc_id           = "{{ env.vars.VPC_ID }}"
        subnet_ids       = {{ env.vars.SUBNET_IDS }}
        db_name          = "appdb"
        db_username      = "{{ env.vars.DB_USERNAME }}"
        db_password      = "{{ env.vars.DB_PASSWORD }}"
        instance_class   = "db.t3.micro"
        TFVARS
      - 'terraform init
          -backend-config="bucket=your-terraform-state-bucket"
          -backend-config="key=preview/{{ env.unique }}/rds/terraform.tfstate"
          -backend-config="region={{ env.vars.AWS_REGION }}"'
      - 'terraform apply -auto-approve'
      - |
        RDS_ENDPOINT=$(terraform output -raw endpoint)
        RDS_PORT=$(terraform output -raw port)
    destroy:
      - 'cd /bns/repo/terraform/rds'
      - 'terraform init
          -backend-config="bucket=your-terraform-state-bucket"
          -backend-config="key=preview/{{ env.unique }}/rds/terraform.tfstate"
          -backend-config="region={{ env.vars.AWS_REGION }}"'
      - 'terraform destroy -auto-approve'
    exportVariables:
      - RDS_ENDPOINT
      - RDS_PORT
    environment:
      AWS_ACCESS_KEY_ID: '{{ env.vars.AWS_ACCESS_KEY_ID }}'
      AWS_SECRET_ACCESS_KEY: '{{ env.vars.AWS_SECRET_ACCESS_KEY }}'
      AWS_DEFAULT_REGION: '{{ env.vars.AWS_REGION }}'

  # ── Terraform: ElastiCache Redis ──
  - kind: Terraform
    name: terraform-redis
    gitRepo: 'https://github.com/your-org/your-repo.git'
    gitBranch: main
    gitApplicationPath: terraform/elasticache
    runnerImage: 'hashicorp/terraform:1.7'
    deploy:
      - 'cd /bns/repo/terraform/elasticache'
      - |
        cat << TFVARS > terraform.tfvars
        environment_name = "{{ env.unique }}"
        vpc_id           = "{{ env.vars.VPC_ID }}"
        subnet_ids       = {{ env.vars.SUBNET_IDS }}
        node_type        = "cache.t3.micro"
        TFVARS
      - 'terraform init
          -backend-config="bucket=your-terraform-state-bucket"
          -backend-config="key=preview/{{ env.unique }}/elasticache/terraform.tfstate"
          -backend-config="region={{ env.vars.AWS_REGION }}"'
      - 'terraform apply -auto-approve'
      - |
        REDIS_ENDPOINT=$(terraform output -raw endpoint)
        REDIS_PORT=$(terraform output -raw port)
    destroy:
      - 'cd /bns/repo/terraform/elasticache'
      - 'terraform init
          -backend-config="bucket=your-terraform-state-bucket"
          -backend-config="key=preview/{{ env.unique }}/elasticache/terraform.tfstate"
          -backend-config="region={{ env.vars.AWS_REGION }}"'
      - 'terraform destroy -auto-approve'
    exportVariables:
      - REDIS_ENDPOINT
      - REDIS_PORT
    environment:
      AWS_ACCESS_KEY_ID: '{{ env.vars.AWS_ACCESS_KEY_ID }}'
      AWS_SECRET_ACCESS_KEY: '{{ env.vars.AWS_SECRET_ACCESS_KEY }}'
      AWS_DEFAULT_REGION: '{{ env.vars.AWS_REGION }}'

  # ── Terraform: S3 Bucket ──
  - kind: Terraform
    name: terraform-s3
    gitRepo: 'https://github.com/your-org/your-repo.git'
    gitBranch: main
    gitApplicationPath: terraform/storage
    runnerImage: 'hashicorp/terraform:1.7'
    deploy:
      - 'cd /bns/repo/terraform/storage'
      - |
        cat << TFVARS > terraform.tfvars
        environment_name = "{{ env.unique }}"
        bucket_prefix    = "preview-uploads"
        TFVARS
      - 'terraform init
          -backend-config="bucket=your-terraform-state-bucket"
          -backend-config="key=preview/{{ env.unique }}/s3/terraform.tfstate"
          -backend-config="region={{ env.vars.AWS_REGION }}"'
      - 'terraform apply -auto-approve'
      - |
        S3_BUCKET=$(terraform output -raw bucket_name)
        S3_REGION=$(terraform output -raw bucket_region)
    destroy:
      - 'cd /bns/repo/terraform/storage'
      - 'terraform init
          -backend-config="bucket=your-terraform-state-bucket"
          -backend-config="key=preview/{{ env.unique }}/s3/terraform.tfstate"
          -backend-config="region={{ env.vars.AWS_REGION }}"'
      - 'terraform destroy -auto-approve'
    exportVariables:
      - S3_BUCKET
      - S3_REGION
    environment:
      AWS_ACCESS_KEY_ID: '{{ env.vars.AWS_ACCESS_KEY_ID }}'
      AWS_SECRET_ACCESS_KEY: '{{ env.vars.AWS_SECRET_ACCESS_KEY }}'
      AWS_DEFAULT_REGION: '{{ env.vars.AWS_REGION }}'

  # ── Application ──
  - kind: Application
    name: app
    gitRepo: 'https://github.com/your-org/your-repo.git'
    gitBranch: main
    gitApplicationPath: /app
    dockerCompose:
      build:
        context: ./app
        dockerfile: Dockerfile
      environment:
        DATABASE_URL: 'postgres://{{ env.vars.DB_USERNAME }}:{{ env.vars.DB_PASSWORD }}@{{ components.terraform-rds.exported.RDS_ENDPOINT }}:{{ components.terraform-rds.exported.RDS_PORT }}/appdb'
        REDIS_URL: 'redis://{{ components.terraform-redis.exported.REDIS_ENDPOINT }}:{{ components.terraform-redis.exported.REDIS_PORT }}'
        AWS_S3_BUCKET: '{{ components.terraform-s3.exported.S3_BUCKET }}'
        AWS_S3_REGION: '{{ components.terraform-s3.exported.S3_REGION }}'
        AWS_ACCESS_KEY_ID: '{{ env.vars.AWS_ACCESS_KEY_ID }}'
        AWS_SECRET_ACCESS_KEY: '{{ env.vars.AWS_SECRET_ACCESS_KEY }}'
        APP_URL: 'https://{{ components.app.ingress.hosts[0] }}'
      ports:
        - '3000:3000'
    dependsOn:
      - terraform-rds
      - terraform-redis
      - terraform-s3
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 3000

Always store AWS credentials using Bunnyshell's SECRET["..."] syntax. Never hardcode access keys in your configuration. Bunnyshell encrypts secrets at rest and injects them only during deployment.


Terraform for Infrastructure: RDS, ElastiCache, S3

Let's look at the Terraform modules that the configuration above references. Each module is self-contained with its own main.tf, variables.tf, and outputs.tf.

RDS Module

HCL
# terraform/rds/main.tf

terraform {
  required_version = ">= 1.5"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  backend "s3" {}
}

provider "aws" {}

resource "aws_security_group" "rds" {
  name_prefix = "preview-rds-${var.environment_name}-"
  vpc_id      = var.vpc_id

  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "preview-rds-${var.environment_name}"
    Environment = var.environment_name
    ManagedBy   = "bunnyshell-terraform"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_db_subnet_group" "rds" {
  name       = "preview-${var.environment_name}"
  subnet_ids = var.subnet_ids

  tags = {
    Name        = "preview-${var.environment_name}"
    Environment = var.environment_name
    ManagedBy   = "bunnyshell-terraform"
  }
}

resource "aws_db_instance" "main" {
  identifier     = "preview-${var.environment_name}"
  engine         = "postgres"
  engine_version = "15.4"
  instance_class = var.instance_class

  allocated_storage     = 20
  max_allocated_storage = 50
  storage_type          = "gp3"

  db_name  = var.db_name
  username = var.db_username
  password = var.db_password

  db_subnet_group_name   = aws_db_subnet_group.rds.name
  vpc_security_group_ids = [aws_security_group.rds.id]

  # Preview environment settings — optimize for cost, not durability
  multi_az                = false
  backup_retention_period = 0
  skip_final_snapshot     = true
  deletion_protection     = false

  # Apply changes immediately in preview envs
  apply_immediately = true

  tags = {
    Name        = "preview-${var.environment_name}"
    Environment = var.environment_name
    ManagedBy   = "bunnyshell-terraform"
  }
}
HCL
# terraform/rds/variables.tf

variable "environment_name" {
  description = "Unique environment identifier from Bunnyshell"
  type        = string
}

variable "vpc_id" {
  description = "VPC ID where RDS will be provisioned"
  type        = string
}

variable "subnet_ids" {
  description = "Subnet IDs for the DB subnet group"
  type        = list(string)
}

variable "db_name" {
  description = "Database name to create"
  type        = string
  default     = "appdb"
}

variable "db_username" {
  description = "Master database username"
  type        = string
}

variable "db_password" {
  description = "Master database password"
  type        = string
  sensitive   = true
}

variable "instance_class" {
  description = "RDS instance class"
  type        = string
  default     = "db.t3.micro"
}
HCL
# terraform/rds/outputs.tf

output "endpoint" {
  description = "RDS instance endpoint (hostname)"
  value       = aws_db_instance.main.address
}

output "port" {
  description = "RDS instance port"
  value       = aws_db_instance.main.port
}

output "db_name" {
  description = "Database name"
  value       = aws_db_instance.main.db_name
}

ElastiCache Module

HCL
# terraform/elasticache/main.tf

terraform {
  required_version = ">= 1.5"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  backend "s3" {}
}

provider "aws" {}

resource "aws_security_group" "redis" {
  name_prefix = "preview-redis-${var.environment_name}-"
  vpc_id      = var.vpc_id

  ingress {
    from_port   = 6379
    to_port     = 6379
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name        = "preview-redis-${var.environment_name}"
    Environment = var.environment_name
    ManagedBy   = "bunnyshell-terraform"
  }

  lifecycle {
    create_before_destroy = true
  }
}

resource "aws_elasticache_subnet_group" "redis" {
  name       = "preview-${var.environment_name}"
  subnet_ids = var.subnet_ids
}

resource "aws_elasticache_cluster" "main" {
  cluster_id           = "prev-${substr(var.environment_name, 0, 16)}"
  engine               = "redis"
  engine_version       = "7.0"
  node_type            = var.node_type
  num_cache_nodes      = 1
  port                 = 6379
  parameter_group_name = "default.redis7"

  subnet_group_name  = aws_elasticache_subnet_group.redis.name
  security_group_ids = [aws_security_group.redis.id]

  # Preview environment settings
  snapshot_retention_limit = 0

  tags = {
    Name        = "preview-${var.environment_name}"
    Environment = var.environment_name
    ManagedBy   = "bunnyshell-terraform"
  }
}
HCL
# terraform/elasticache/variables.tf

variable "environment_name" {
  description = "Unique environment identifier from Bunnyshell"
  type        = string
}

variable "vpc_id" {
  description = "VPC ID where ElastiCache will be provisioned"
  type        = string
}

variable "subnet_ids" {
  description = "Subnet IDs for the cache subnet group"
  type        = list(string)
}

variable "node_type" {
  description = "ElastiCache node type"
  type        = string
  default     = "cache.t3.micro"
}
HCL
# terraform/elasticache/outputs.tf

output "endpoint" {
  description = "ElastiCache primary endpoint"
  value       = aws_elasticache_cluster.main.cache_nodes[0].address
}

output "port" {
  description = "ElastiCache port"
  value       = aws_elasticache_cluster.main.cache_nodes[0].port
}

S3 Storage Module

HCL
# terraform/storage/main.tf

terraform {
  required_version = ">= 1.5"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
  backend "s3" {}
}

provider "aws" {}

resource "aws_s3_bucket" "uploads" {
  bucket = "${var.bucket_prefix}-${var.environment_name}"

  force_destroy = true  # Allow deletion even with objects — critical for preview envs

  tags = {
    Name        = "${var.bucket_prefix}-${var.environment_name}"
    Environment = var.environment_name
    ManagedBy   = "bunnyshell-terraform"
  }
}

resource "aws_s3_bucket_lifecycle_configuration" "uploads" {
  bucket = aws_s3_bucket.uploads.id

  rule {
    id     = "expire-preview-objects"
    status = "Enabled"

    expiration {
      days = 7
    }
  }
}

resource "aws_s3_bucket_cors_configuration" "uploads" {
  bucket = aws_s3_bucket.uploads.id

  cors_rule {
    allowed_headers = ["*"]
    allowed_methods = ["GET", "PUT", "POST"]
    allowed_origins = ["*"]
    max_age_seconds = 3600
  }
}
HCL
# terraform/storage/variables.tf

variable "environment_name" {
  description = "Unique environment identifier from Bunnyshell"
  type        = string
}

variable "bucket_prefix" {
  description = "Prefix for the S3 bucket name"
  type        = string
  default     = "preview-uploads"
}
HCL
# terraform/storage/outputs.tf

output "bucket_name" {
  description = "S3 bucket name"
  value       = aws_s3_bucket.uploads.id
}

output "bucket_region" {
  description = "S3 bucket region"
  value       = aws_s3_bucket.uploads.region
}

output "bucket_arn" {
  description = "S3 bucket ARN"
  value       = aws_s3_bucket.uploads.arn
}

Combining Terraform + Application Components

The key to making Terraform and application components work together is output sharing. Terraform outputs are captured by Bunnyshell and made available through the interpolation syntax.

How Output Sharing Works

In the bunnyshell.yaml, Terraform components export variables using exportVariables:

YAML
- kind: Terraform
  name: terraform-rds
  # ...
  deploy:
    - # ... terraform apply ...
    - |
      RDS_ENDPOINT=$(terraform output -raw endpoint)
      RDS_PORT=$(terraform output -raw port)
  exportVariables:
    - RDS_ENDPOINT
    - RDS_PORT

Application components consume them with the {{ components.<name>.exported.<var> }} syntax:

YAML
- kind: Application
  name: app
  dockerCompose:
    environment:
      DATABASE_URL: 'postgres://{{ env.vars.DB_USERNAME }}:{{ env.vars.DB_PASSWORD }}@{{ components.terraform-rds.exported.RDS_ENDPOINT }}:{{ components.terraform-rds.exported.RDS_PORT }}/appdb'
  dependsOn:
    - terraform-rds

The dependsOn field is critical. It tells Bunnyshell to provision the Terraform component before deploying the application. Without it, the app might start before the database exists, and the interpolation variables would be empty.

Dependency Graph

Bunnyshell builds a directed acyclic graph (DAG) of component dependencies:

Text
terraform-rds ──────┐
terraform-redis ────┼──► app (waits for all three)
terraform-s3 ───────┘

Components without dependencies (like the three Terraform modules) run in parallel, reducing total deployment time. The application component only starts once all its dependencies have completed.


State Management

Terraform state is the source of truth for what infrastructure exists. In preview environments, you need isolated state per environment to prevent conflicts.

S3 Backend with Per-Environment Keys

The recommended approach is using S3 backend with unique state keys per environment:

YAML
deploy:
  - 'terraform init
      -backend-config="bucket=your-terraform-state-bucket"
      -backend-config="key=preview/{{ env.unique }}/rds/terraform.tfstate"
      -backend-config="region=us-east-1"'

The {{ env.unique }} interpolation generates a unique identifier for each Bunnyshell environment. This means:

  • Primary environment state: preview/env-abc123/rds/terraform.tfstate
  • PR #42 environment state: preview/env-def456/rds/terraform.tfstate
  • PR #87 environment state: preview/env-ghi789/rds/terraform.tfstate

No state conflicts. No locking issues. Each environment is fully independent.
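The key layout is simple enough to express as a one-line helper — a sketch, assuming the key is built from the environment id and module name exactly as in the init commands above:

```shell
# Sketch: per-environment, per-module state key, matching the
# -backend-config="key=..." values used in the deploy steps above.
state_key() {
  printf 'preview/%s/%s/terraform.tfstate' "$1" "$2"
}

state_key env-def456 rds
# → preview/env-def456/rds/terraform.tfstate
```

Because the environment id appears first, all state files for one environment share a prefix, which makes per-environment cleanup (or lifecycle policies) straightforward.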

State Bucket Setup

Create a dedicated S3 bucket for Terraform state with versioning enabled:

HCL
# This is a one-time setup — run manually or in a bootstrap Terraform config

resource "aws_s3_bucket" "terraform_state" {
  bucket = "your-org-terraform-state"

  tags = {
    Purpose = "Terraform remote state for preview environments"
  }
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

Enable versioning on your state bucket. If a terraform destroy goes wrong, you can recover the previous state file and retry. Without versioning, a corrupted state file means orphaned resources you'll have to clean up manually.

Alternative: Terraform Workspaces

If you prefer Terraform workspaces over key-per-environment:

YAML
deploy:
  - 'terraform init
      -backend-config="bucket=your-terraform-state-bucket"
      -backend-config="key=preview/rds/terraform.tfstate"
      -backend-config="region=us-east-1"'
  - 'terraform workspace select {{ env.unique }} || terraform workspace new {{ env.unique }}'
  - 'terraform apply -auto-approve'
destroy:
  - 'terraform init ...'
  - 'terraform workspace select {{ env.unique }}'
  - 'terraform destroy -auto-approve'
  - 'terraform workspace select default'
  - 'terraform workspace delete {{ env.unique }}'

Both approaches work. Per-environment keys are simpler; workspaces are idiomatic Terraform. Choose based on your team's preference.


Variables and Secrets

Bunnyshell provides multiple ways to pass configuration into Terraform components.

Environment Variables in bunnyshell.yaml

Top-level environment variables are available to all components:

YAML
environmentVariables:
  AWS_ACCESS_KEY_ID: SECRET["AKIAIOSFODNN7EXAMPLE"]
  AWS_SECRET_ACCESS_KEY: SECRET["wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"]
  AWS_REGION: us-east-1
  VPC_ID: vpc-0abc123def456
  DB_PASSWORD: SECRET["super-secure-preview-password"]

These are injected into the component's shell environment during deploy and destroy steps.
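Since the variables arrive as ordinary shell environment variables inside the runner, a deploy step can fail fast when one is missing instead of letting terraform stop later with a less obvious provider error. A hypothetical pre-flight check (the function name is mine, not a Bunnyshell built-in):

```shell
# Hypothetical pre-flight check for a deploy step: report every
# required variable that was not injected, and fail if any is missing.
require_vars() {
  missing=0
  for v in "$@"; do
    if [ -z "$(printenv "$v")" ]; then
      echo "missing required variable: $v" >&2
      missing=1
    fi
  done
  return "$missing"
}

# usage (as a deploy step, before terraform init):
#   require_vars AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION || exit 1
```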

Terraform Variables via tfvars

The deploy script can generate a terraform.tfvars file using Bunnyshell interpolation:

YAML
deploy:
  - |
    cat << TFVARS > terraform.tfvars
    environment_name = "{{ env.unique }}"
    vpc_id           = "{{ env.vars.VPC_ID }}"
    db_password      = "{{ env.vars.DB_PASSWORD }}"
    TFVARS
  - 'terraform init ...'
  - 'terraform apply -auto-approve'

Terraform Variables via -var Flags

For simpler cases, pass variables directly:

YAML
deploy:
  - 'terraform init ...'
  - 'terraform apply -auto-approve
      -var="environment_name={{ env.unique }}"
      -var="vpc_id={{ env.vars.VPC_ID }}"
      -var="db_password={{ env.vars.DB_PASSWORD }}"'

Use SECRET["..."] for any sensitive value — AWS credentials, database passwords, API keys. Bunnyshell encrypts these at rest and masks them in logs. Never put raw secrets in the YAML.


Destroy Lifecycle

When a preview environment is deleted (manually or on PR merge/close), Bunnyshell runs the destroy steps for each Terraform component. This is the mechanism that prevents orphaned cloud resources.

The Destroy Sequence

YAML
destroy:
  - 'cd /bns/repo/terraform/rds'
  - 'terraform init
      -backend-config="bucket=your-terraform-state-bucket"
      -backend-config="key=preview/{{ env.unique }}/rds/terraform.tfstate"
      -backend-config="region={{ env.vars.AWS_REGION }}"'
  - 'terraform destroy -auto-approve'

Bunnyshell destroys components in reverse dependency order:

Text
1. app (destroyed first — no longer needs infra)
2. terraform-s3, terraform-redis, terraform-rds (destroyed in parallel)

Handling Destroy Failures

If terraform destroy fails (e.g., network timeout, AWS API error), Bunnyshell marks the component as failed. You can:

  1. Retry — Click "Destroy" again in the Bunnyshell UI
  2. Fix and retry — SSH into the runner, fix the state, and retry
  3. Manual cleanup — Use the AWS Console or CLI to delete orphaned resources

To make destroy more resilient, add retry logic:

YAML
destroy:
  - 'cd /bns/repo/terraform/rds'
  - 'terraform init ...'
  - 'terraform destroy -auto-approve || (sleep 30 && terraform destroy -auto-approve)'
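The one-shot retry above can be generalized into a small helper — a sketch in plain POSIX shell (not a Bunnyshell feature), useful when transient AWS API errors need more than a single retry:

```shell
# Sketch: retry a command up to N times, sleeping between attempts.
# retry <attempts> <delay-seconds> <command...>
retry() {
  attempts="$1"; delay="$2"; shift 2
  i=1
  until "$@"; do
    if [ "$i" -ge "$attempts" ]; then
      echo "command failed after $attempts attempts" >&2
      return 1
    fi
    i=$((i + 1))
    sleep "$delay"
  done
}

# usage in a destroy step:
#   retry 3 30 terraform destroy -auto-approve
```

Keep the attempt count low: if destroy still fails after a few tries, the problem is usually state corruption or a dependency AWS refuses to delete, and blind retries only delay the manual fix.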

Force Destroy for S3 Buckets

S3 buckets can't be deleted if they contain objects. The force_destroy = true flag in the Terraform module handles this:

HCL
resource "aws_s3_bucket" "uploads" {
  bucket        = "${var.bucket_prefix}-${var.environment_name}"
  force_destroy = true  # Delete all objects before destroying bucket
}

Without force_destroy, terraform destroy will fail on non-empty buckets, leaving orphaned resources.


Enabling Preview Environments with Terraform

Once your primary environment is deployed and working, enable automatic preview environments:

  1. Ensure your primary environment status shows Running or Stopped
  2. Go to Settings in your environment
  3. Toggle "Create ephemeral environments on pull request" to ON
  4. Toggle "Destroy environment after merge or close pull request" to ON
  5. Select the target Kubernetes cluster

What happens when a developer opens a PR:

  1. Bunnyshell receives the webhook from your Git provider
  2. Creates a new environment cloned from the primary, with the PR's branch
  3. Each {{ env.unique }} resolves to a new unique identifier
  4. Terraform provisions fresh RDS, ElastiCache, and S3 resources
  5. The application deploys with connection strings pointing to the new infrastructure
  6. Bunnyshell posts a comment on the PR with the live URL

When the PR is merged or closed:

  1. Bunnyshell triggers environment deletion
  2. Application components are removed from Kubernetes
  3. Terraform runs destroy for each infrastructure component
  4. RDS instance, ElastiCache cluster, and S3 bucket are deleted
  5. Terraform state files remain in S3 (for audit/debugging) but can be cleaned up with a lifecycle policy

Set an S3 lifecycle policy on your state bucket to automatically delete state files older than 30 days. This prevents unbounded state file accumulation from long-running projects with many PRs.
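As a sketch, that lifecycle policy can live in the same bootstrap configuration as the state bucket itself (the resource and rule names here are placeholders; it assumes the aws_s3_bucket.terraform_state resource from the State Bucket Setup section):

```hcl
# Sketch: expire preview state files 30 days after creation.
resource "aws_s3_bucket_lifecycle_configuration" "state_cleanup" {
  bucket = aws_s3_bucket.terraform_state.id

  rule {
    id     = "expire-preview-state"
    status = "Enabled"

    filter {
      prefix = "preview/"
    }

    expiration {
      days = 30
    }

    # The state bucket is versioned, so also expire old object versions;
    # plain expiration on a versioned bucket only adds delete markers.
    noncurrent_version_expiration {
      noncurrent_days = 30
    }
  }
}
```

Scoping the rule to the preview/ prefix keeps any non-preview state stored in the same bucket untouched.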


Cost Considerations

Ephemeral infrastructure incurs real cloud costs. Here's how to keep them under control.

Cost Per Preview Environment (Estimate)

Resource | Instance Type | Hourly Cost | Daily Cost (8h)
--- | --- | --- | ---
RDS PostgreSQL | db.t3.micro | ~$0.018 | ~$0.14
ElastiCache Redis | cache.t3.micro | ~$0.017 | ~$0.14
S3 Bucket | Standard | ~$0.001 | ~$0.001
Total per env | | ~$0.036/hr | ~$0.28/day

For a team of 10 developers with an average of 5 open PRs at any time, that's roughly $1.40/day or $42/month in infrastructure costs.
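The arithmetic behind that estimate, as a quick sanity check using the figures from the table above:

```shell
# Back-of-envelope check: 5 concurrent preview environments at
# ~$0.28/env/day (8 active hours), over a 30-day month.
awk 'BEGIN {
  envs = 5; per_env_daily = 0.28
  printf "daily:   $%.2f\n", envs * per_env_daily
  printf "monthly: $%.0f\n", envs * per_env_daily * 30
}'
# → daily:   $1.40
# → monthly: $42
```

Note this counts only the hours environments are actually running; PRs left open overnight with auto-stop disabled will cost roughly 3× the 8-hour figure.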

Cost Optimization Strategies

Use the smallest instance classes:

HCL
instance_class = "db.t3.micro"    # RDS: cheapest option
node_type      = "cache.t3.micro" # ElastiCache: cheapest option

Disable expensive features for preview environments:

HCL
multi_az                = false   # No multi-AZ — it's a preview
backup_retention_period = 0       # No backups — data is ephemeral
skip_final_snapshot     = true    # No snapshot on delete
deletion_protection     = false   # Allow immediate deletion

Use Bunnyshell's auto-stop:

Configure environments to automatically stop after a period of inactivity. Stopped environments don't run application pods, but Terraform resources remain (RDS, ElastiCache still incur costs). To fully eliminate costs, use auto-delete instead of auto-stop.

Set up AWS billing alerts:

HCL
resource "aws_budgets_budget" "preview_envs" {
  name         = "preview-environments"
  budget_type  = "COST"
  limit_amount = "100"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator = "GREATER_THAN"
    threshold           = 80
    threshold_type      = "PERCENTAGE"
    notification_type   = "ACTUAL"
    subscriber_email_addresses = ["devops@yourcompany.com"]
  }
}

Remember: Terraform-managed cloud resources incur costs even when the Bunnyshell environment is stopped (only Kubernetes pods are scaled down). Always use destroy (not stop) for environments you're done with, or rely on the auto-destroy on PR merge/close setting.


Troubleshooting

Issue | Solution
--- | ---
Terraform init fails with "backend configuration changed" | Delete the .terraform directory in the deploy step before terraform init: rm -rf .terraform && terraform init ...
"Error acquiring the state lock" | Another deploy/destroy is running against the same state. Wait for it to finish, or force-unlock: terraform force-unlock LOCK_ID
Terraform apply times out | RDS instances take 5-10 minutes to provision. Increase the Bunnyshell component timeout in the environment settings.
"Error: creating DB Instance: DBInstanceAlreadyExists" | The environment identifier collided with an existing instance. Ensure {{ env.unique }} is used in resource names.
Application can't connect to RDS | Check that security group rules allow traffic from the Kubernetes cluster VPC CIDR. Verify the exported endpoint variable is populated.
ElastiCache cluster ID too long | ElastiCache cluster IDs are limited to 20 characters. Use substr() to truncate: cluster_id = "prev-${substr(var.environment_name, 0, 16)}"
S3 bucket destroy fails | Ensure force_destroy = true is set on the bucket resource. Without it, non-empty buckets can't be deleted.
Terraform destroy leaves orphaned resources | Check that the state file is intact. If it's corrupted, re-import the resources into state and destroy again: terraform import aws_db_instance.main preview-env-abc123
"No valid credential sources found" | AWS credentials not passed to the Terraform component. Verify the environment block includes AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
State file conflicts between PRs | Ensure each environment uses a unique state key with {{ env.unique }}. Don't hardcode the key path.

What's Next?

  • Add more AWS services — Extend with SQS, SNS, DynamoDB, or any Terraform-supported resource
  • Use Terraform modules registry — Reference published modules from the Terraform Registry for battle-tested configurations
  • Implement cost tagging — Add ManagedBy: bunnyshell-terraform and Environment: {{ env.unique }} tags to all resources for cost tracking
  • Set up a cleanup Lambda — Create a scheduled Lambda function that finds and deletes orphaned preview resources (tagged but not in any active environment)
  • Use OIDC instead of access keys — Configure IAM roles with OIDC federation for keyless authentication from the Bunnyshell runner

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.