Preview Environments with PostgreSQL: Per-PR Database Isolation with Bunnyshell
Guide · March 20, 2026 · 12 min read

Why Preview Environments Need Their Own Database

Every team that shares a staging database eventually runs into the same problem: one developer's migration drops a column that another developer's feature branch depends on. Or seed data from PR #42 pollutes the test results for PR #43. Or someone runs TRUNCATE on a table during a demo.

The fix is isolation. Each preview environment gets its own PostgreSQL instance — its own schema, its own data, its own lifecycle. When the PR is merged, the database is destroyed along with everything else. No cleanup scripts, no orphaned test data, no conflicts.

With Bunnyshell, every preview environment automatically provisions:

  • A dedicated PostgreSQL instance — Running in the same Kubernetes namespace as your app
  • Isolated data — Each PR starts with a clean database seeded from your baseline
  • Automatic cleanup — The database is destroyed when the PR is merged or closed
  • Connection strings injected automatically — Your app connects without manual configuration

This guide covers three approaches to running PostgreSQL in Bunnyshell preview environments, from the simplest built-in component to production-grade Terraform-managed instances.


The Challenge: Database Per Environment at Scale

Running a separate database for every open pull request sounds expensive and complex. Here's why it's actually practical with Bunnyshell:

Resource efficiency: Preview databases are small. They run with minimal resources (256Mi RAM, 1Gi storage) and only exist while the PR is open. A team with 10 open PRs might use 2.5Gi of RAM total for all database instances — less than a single staging database.

Lifecycle management: Bunnyshell handles creation and destruction automatically. When a PR opens, a new PostgreSQL container starts. When the PR merges or closes, the container and its persistent volume are deleted. No orphaned databases accumulating over months.

Configuration consistency: Every preview environment uses the same bunnyshell.yaml configuration. The database version, extensions, init scripts, and seed data are defined once and reproduced exactly for every PR.

| Concern | Shared Staging DB | Per-PR Database (Bunnyshell) |
| --- | --- | --- |
| Migration conflicts | Frequent — developers overwrite each other | None — each PR has its own schema |
| Test data isolation | Impossible — all PRs share the same rows | Complete — each PR starts clean |
| Cleanup | Manual, error-prone | Automatic on PR merge/close |
| Cost | One instance, always running | Many small instances, only while PRs are open |
| Production parity | Drift over time | Fresh from config every time |

Bunnyshell's Approach to Database Components

Bunnyshell offers three ways to provision PostgreSQL in preview environments. Choose based on your team's needs:

| Approach | Best for | Complexity | Production parity |
| --- | --- | --- | --- |
| Approach A: Built-in Database Component | Most teams — fast setup, minimal config | Easiest | Good — same engine, lightweight instance |
| Approach B: Helm Chart | Teams with existing Helm infrastructure | Moderate | Better — Bitnami chart with replication options |
| Approach C: Terraform-Managed | Teams needing managed databases (RDS, Cloud SQL) | Advanced | Best — actual managed database instances |

All three approaches work with Bunnyshell's automatic preview environment lifecycle. When a PR opens, the database is provisioned. When it closes, the database is destroyed.


Approach A: Built-in Database Component

The simplest way to add PostgreSQL to a preview environment. Use kind: Database in your bunnyshell.yaml and Bunnyshell handles the rest.

Minimal Configuration

```yaml
kind: Environment
name: myapp-preview
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-secure-password"]
  DB_USER: appuser
  DB_NAME: appdb

components:
  # ── Your Application ──
  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        DATABASE_URL: 'postgresql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:5432/{{ env.vars.DB_NAME }}'
        DB_HOST: db
        DB_PORT: '5432'
        DB_NAME: '{{ env.vars.DB_NAME }}'
        DB_USER: '{{ env.vars.DB_USER }}'
        DB_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '3000:3000'
    dependsOn:
      - db
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 3000

  # ── PostgreSQL Database ──
  - kind: Database
    name: db
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: '{{ env.vars.DB_NAME }}'
        POSTGRES_USER: '{{ env.vars.DB_USER }}'
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

volumes:
  - name: pg-data
    mount:
      component: db
      containerPath: /var/lib/postgresql/data
    size: 1Gi
```

Internal networking only. The PostgreSQL component does not need an ingress or public hostname. Your application connects to it via the Kubernetes service name (db in this example) on port 5432. The {{ components.db.ingress.hosts[0] }} interpolation is NOT used for databases — that's for HTTP services only.

Key Configuration Details

Image choice: postgres:16-alpine is the recommended image. Alpine variants are smaller (80MB vs 400MB), start faster, and use less disk in preview environments. Use the major version tag (16) to get automatic patch updates.

Environment variables: PostgreSQL's official Docker image reads three variables on first start:

| Variable | Purpose | Example |
| --- | --- | --- |
| POSTGRES_DB | Database name to create | appdb |
| POSTGRES_USER | Superuser username | appuser |
| POSTGRES_PASSWORD | Superuser password | Via SECRET["..."] |

Volume mount: The pg-data volume at /var/lib/postgresql/data persists data across container restarts within the same environment. Set size: 1Gi for preview environments — this is plenty for test data and keeps costs low.

Connection string format:

```text
postgresql://appuser:password@db:5432/appdb
```

Your application references the database by its component name (db), which resolves to the Kubernetes service. No IP addresses, no external DNS.
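If your application assembles the URI itself from the separate DB_* variables, the assembly is plain string concatenation. A minimal shell sketch (build_db_url is a hypothetical helper, not part of Bunnyshell):

```shell
# build_db_url: assemble a PostgreSQL URI from the same five values
# used in the bunnyshell.yaml above. Hypothetical helper for illustration.
build_db_url() {
  # usage: build_db_url USER PASSWORD HOST PORT DBNAME
  printf 'postgresql://%s:%s@%s:%s/%s' "$1" "$2" "$3" "$4" "$5"
}

DATABASE_URL=$(build_db_url appuser secret db 5432 appdb)
echo "$DATABASE_URL"
# → postgresql://appuser:secret@db:5432/appdb
```

Note the host is just the component name; Kubernetes DNS does the rest.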

Multiple App Components Sharing One Database

If your architecture has multiple services that connect to the same PostgreSQL instance, reference the same component name:

```yaml
components:
  - kind: Application
    name: api
    dockerCompose:
      environment:
        DATABASE_URL: 'postgresql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:5432/{{ env.vars.DB_NAME }}'
    dependsOn:
      - db

  - kind: Application
    name: worker
    dockerCompose:
      environment:
        DATABASE_URL: 'postgresql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:5432/{{ env.vars.DB_NAME }}'
    dependsOn:
      - db

  - kind: Service
    name: scheduler
    dockerCompose:
      environment:
        DATABASE_URL: 'postgresql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:5432/{{ env.vars.DB_NAME }}'
    dependsOn:
      - db

  - kind: Database
    name: db
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: '{{ env.vars.DB_NAME }}'
        POSTGRES_USER: '{{ env.vars.DB_USER }}'
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'
```

All three components (api, worker, scheduler) connect to the same db service. The dependsOn ensures PostgreSQL starts before any application component.
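A variation on this layout: if you would rather give each service its own logical database inside the single instance, an extra init script can create them on first start. A sketch only; the database names here are illustrative, not part of the configuration above:

```sql
-- docker/postgres/initdb.d/00-databases.sql (hypothetical)
-- Runs once, on first container start, as the superuser.
CREATE DATABASE api_db;
CREATE DATABASE worker_db;
GRANT ALL PRIVILEGES ON DATABASE api_db TO appuser;
GRANT ALL PRIVILEGES ON DATABASE worker_db TO appuser;
```

Each service then gets its own DATABASE_URL pointing at the same host but a different database name.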

Adding PostgreSQL Extensions

Many applications require extensions like PostGIS (geospatial), pgvector (embeddings), or pg_trgm (fuzzy search). There are two ways to add them:

Option 1: Init script (for bundled extensions)

Extensions that ship with PostgreSQL (like pg_trgm, uuid-ossp, hstore) can be enabled via an init script. Create docker/postgres/init-extensions.sql in your repo:

```sql
-- Enable extensions on database creation
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm";
CREATE EXTENSION IF NOT EXISTS "hstore";
```

Then mount it in the component:

```yaml
  - kind: Database
    name: db
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /docker/postgres
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: '{{ env.vars.DB_NAME }}'
        POSTGRES_USER: '{{ env.vars.DB_USER }}'
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'
```

For the script to run automatically on first container start, it must end up inside the container at /docker-entrypoint-initdb.d/. With a plain image that means mounting it there as a volume, or copying it in with a small custom Dockerfile as shown in Option 2 below.

Option 2: Custom Docker image (for external extensions)

Extensions like PostGIS or pgvector require additional system libraries. Create docker/postgres/Dockerfile:

```dockerfile
FROM postgres:16-alpine

# Install PostGIS
RUN apk add --no-cache postgis

# Install pgvector
RUN apk add --no-cache --repository=https://dl-cdn.alpinelinux.org/alpine/edge/community pgvector

# Copy init scripts
COPY init-extensions.sql /docker-entrypoint-initdb.d/
```

And docker/postgres/init-extensions.sql:

```sql
CREATE EXTENSION IF NOT EXISTS postgis;
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
```

Update the component to build from your Dockerfile:

```yaml
  - kind: Database
    name: db
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /docker/postgres
    dockerCompose:
      build:
        context: docker/postgres
        dockerfile: Dockerfile
      environment:
        POSTGRES_DB: '{{ env.vars.DB_NAME }}'
        POSTGRES_USER: '{{ env.vars.DB_USER }}'
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'
```

Extension compatibility matters. If your production runs PostgreSQL 16 with PostGIS 3.4, use the same versions in preview environments. Extension version mismatches can cause subtle query differences, especially with geospatial functions.
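One way to catch a mismatch early is to compare installed extension versions between the preview database and production with the same catalog query:

```sql
-- List every installed extension and its version; diff this output
-- between production and a preview environment.
SELECT extname, extversion FROM pg_extension ORDER BY extname;

-- PostGIS reports its full build details separately (only if installed):
SELECT postgis_full_version();
```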


Approach B: Helm Chart for PostgreSQL

For teams with existing Helm infrastructure or those who need more control over PostgreSQL configuration (replication, custom postgresql.conf, monitoring), the Bitnami PostgreSQL Helm chart is an excellent option.

Bunnyshell Configuration with Bitnami Helm Chart

```yaml
kind: Environment
name: myapp-helm
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-secure-password"]
  DB_USER: appuser
  DB_NAME: appdb

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: api-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /

  # ── PostgreSQL via Bitnami Helm Chart ──
  - kind: Helm
    name: postgresql
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > pg_values.yaml
          global:
            storageClass: bns-network-sc
          auth:
            postgresPassword: {{ env.vars.DB_PASSWORD }}
            username: {{ env.vars.DB_USER }}
            password: {{ env.vars.DB_PASSWORD }}
            database: {{ env.vars.DB_NAME }}
          primary:
            persistence:
              size: 1Gi
            resources:
              requests:
                memory: 256Mi
                cpu: 100m
              limits:
                memory: 512Mi
                cpu: 500m
            extendedConfiguration: |
              max_connections = 50
              shared_buffers = 64MB
              effective_cache_size = 128MB
              work_mem = 4MB
              maintenance_work_mem = 32MB
              log_min_duration_statement = 500
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f pg_values.yaml postgresql bitnami/postgresql --version 15.5.23'
      - |
        PG_HOST="postgresql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall postgresql --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/postgresql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/postgresql'
    exportVariables:
      - PG_HOST

  # ── Application via Helm ──
  - kind: Helm
    name: api
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > api_values.yaml
          replicaCount: 1
          image:
            repository: {{ components.api-image.image }}
          service:
            port: 3000
          ingress:
            enabled: true
            className: bns-nginx
            host: api-{{ env.base_domain }}
          env:
            DATABASE_URL: 'postgresql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@{{ components.postgresql.exported.PG_HOST }}:5432/{{ env.vars.DB_NAME }}'
            DB_HOST: '{{ components.postgresql.exported.PG_HOST }}'
            DB_PORT: '5432'
            DB_NAME: '{{ env.vars.DB_NAME }}'
            DB_USER: '{{ env.vars.DB_USER }}'
            DB_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f api_values.yaml api-{{ env.unique }} ./helm/api'
    destroy:
      - 'helm uninstall api-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 api-{{ env.unique }} ./helm/api'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 api-{{ env.unique }} ./helm/api'
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /helm/api
    dependsOn:
      - postgresql
```

Always include --post-renderer /bns/helpers/helm/bns_post_renderer in your Helm commands. This adds Bunnyshell labels to all Kubernetes resources so the platform can track them, show logs, and manage lifecycle.

Helm Chart Configuration Explained

The Bitnami chart exposes many configuration options. Here are the most relevant for preview environments:

primary.persistence.size: 1Gi — Keep storage small for preview environments. 1Gi is sufficient for most test datasets. This saves costs when you have many concurrent PRs.

primary.extendedConfiguration — PostgreSQL tuning parameters injected into postgresql.conf. The values above are tuned for a small preview instance:

| Parameter | Value | Why |
| --- | --- | --- |
| max_connections | 50 | Preview envs don't need 100+ connections |
| shared_buffers | 64MB | 25% of available memory (256Mi limit) |
| effective_cache_size | 128MB | Conservative estimate for preview |
| work_mem | 4MB | Per-operation memory, keep low |
| log_min_duration_statement | 500 | Log slow queries > 500ms for debugging |

auth.postgresPassword — Sets the postgres superuser password. The chart also creates the application user (auth.username / auth.password) and database (auth.database) automatically.


Approach C: Terraform-Managed PostgreSQL

This approach is for teams that need production-like managed databases (AWS RDS, GCP Cloud SQL, Azure Database for PostgreSQL) in their preview environments. It creates real managed instances and destroys them when the PR closes.

Cost consideration: Managed database instances (even the smallest tiers) cost more than in-cluster containers. Use this approach only when you need production parity that an in-cluster PostgreSQL cannot provide — for example, testing against specific RDS parameter groups, IAM authentication, or read replicas.

Terraform Configuration

Create terraform/preview-db/main.tf in your repo:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

variable "env_id" {
  description = "Bunnyshell environment unique ID"
  type        = string
}

variable "db_password" {
  description = "Database password"
  type        = string
  sensitive   = true
}

resource "aws_db_instance" "preview" {
  identifier        = "preview-${var.env_id}"
  engine            = "postgres"
  engine_version    = "16.3"
  instance_class    = "db.t4g.micro"
  allocated_storage = 20

  db_name  = "appdb"
  username = "appuser"
  password = var.db_password

  # Preview environment settings — cost optimization
  skip_final_snapshot     = true
  deletion_protection     = false
  backup_retention_period = 0
  multi_az                = false
  publicly_accessible     = false

  vpc_security_group_ids = [aws_security_group.preview_db.id]
  db_subnet_group_name   = aws_db_subnet_group.preview.name

  tags = {
    Environment = "preview"
    ManagedBy   = "bunnyshell-terraform"
    EnvID       = var.env_id
  }
}

output "db_host" {
  value = aws_db_instance.preview.address
}

output "db_port" {
  value = aws_db_instance.preview.port
}
```
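One practical note on state: if the deploy and destroy steps run in separate ephemeral runner containers (typical for CI-style executors), locally stored Terraform state is lost between them and destroy has nothing to act on. A remote backend avoids this. A sketch using S3, where the bucket, lock table, and region are placeholders you would replace with your own:

```hcl
# Remote state so a later destroy can find the resources created at deploy.
# Bucket, lock table, and region below are placeholders.
terraform {
  backend "s3" {
    bucket         = "your-org-terraform-state"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    # Give each environment its own state key at init time, e.g.:
    #   terraform init -backend-config="key=preview-db/${ENV_ID}.tfstate"
  }
}
```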

Bunnyshell Configuration with Terraform

```yaml
kind: Environment
name: myapp-terraform
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-secure-password"]
  AWS_ACCESS_KEY_ID: SECRET["your-aws-key"]
  AWS_SECRET_ACCESS_KEY: SECRET["your-aws-secret"]

components:
  # ── Terraform-Managed PostgreSQL ──
  - kind: Terraform
    name: postgresql
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /terraform/preview-db
    runnerImage: 'hashicorp/terraform:1.7'
    deploy:
      - 'cd /bns/repo/terraform/preview-db'
      - 'terraform init'
      - 'terraform apply -auto-approve
        -var="env_id={{ env.unique }}"
        -var="db_password={{ env.vars.DB_PASSWORD }}"'
      - |
        PG_HOST=$(terraform output -raw db_host)
        PG_PORT=$(terraform output -raw db_port)
    destroy:
      - 'cd /bns/repo/terraform/preview-db'
      - 'terraform init'
      - 'terraform destroy -auto-approve
        -var="env_id={{ env.unique }}"
        -var="db_password={{ env.vars.DB_PASSWORD }}"'
    exportVariables:
      - PG_HOST
      - PG_PORT

  # ── Application ──
  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        DATABASE_URL: 'postgresql://appuser:{{ env.vars.DB_PASSWORD }}@{{ components.postgresql.exported.PG_HOST }}:{{ components.postgresql.exported.PG_PORT }}/appdb'
      ports:
        - '3000:3000'
    dependsOn:
      - postgresql
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 3000
```

Data Seeding and Migrations

Every preview environment needs a consistent starting state. There are three strategies for populating PostgreSQL in preview environments:

Strategy 1: Application-Level Migrations

Most frameworks (Django, Rails, Laravel, Prisma, Alembic) have built-in migration tools. Run them post-deploy:

```bash
# Django
bns exec COMPONENT_ID -- python manage.py migrate

# Rails
bns exec COMPONENT_ID -- rails db:migrate db:seed

# Node.js with Prisma
bns exec COMPONENT_ID -- npx prisma migrate deploy
bns exec COMPONENT_ID -- npx prisma db seed

# Laravel
bns exec COMPONENT_ID -- php artisan migrate --force
bns exec COMPONENT_ID -- php artisan db:seed
```

Strategy 2: pg_dump / pg_restore Seed File

For larger datasets or when you need production-like data, create a seed dump from your reference database:

```bash
# Create a seed dump from your reference database
pg_dump --format=custom \
  --no-owner \
  --no-privileges \
  --exclude-table-data='audit_logs' \
  --exclude-table-data='sessions' \
  -h prod-replica.example.com \
  -U readonly \
  appdb > seed.dump
```

Add a restore script at docker/postgres/seed.sh:

```bash
#!/bin/bash
set -e

# Wait for PostgreSQL to be ready
until pg_isready -U "$POSTGRES_USER" -d "$POSTGRES_DB"; do
  echo "Waiting for PostgreSQL..."
  sleep 2
done

# Restore seed data if the database is empty
TABLE_COUNT=$(psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -t -c \
  "SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public'")

if [ "$TABLE_COUNT" -lt 2 ]; then
  echo "Seeding database from dump..."
  pg_restore --no-owner --no-privileges \
    -U "$POSTGRES_USER" -d "$POSTGRES_DB" /seed/seed.dump
  echo "Seed complete."
else
  echo "Database already has tables, skipping seed."
fi
```
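The guide leaves wiring the script into the container up to you. One way, sketched here under the assumption that seed.dump and seed.sh sit in the docker/postgres build context, is to bake both into a custom image so the entrypoint runs the script on first init:

```dockerfile
# Sketch: bundle the seed dump and restore script into the image.
# Scripts in /docker-entrypoint-initdb.d/ run once, on first init.
FROM postgres:16-alpine
COPY seed.dump /seed/seed.dump
COPY seed.sh /docker-entrypoint-initdb.d/90-seed.sh
```

The 90- prefix makes it run after any numbered schema scripts in the same directory.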

Strategy 3: SQL Init Scripts

For simpler setups, place .sql files in /docker-entrypoint-initdb.d/:

```sql
-- docker/postgres/initdb.d/01-schema.sql
CREATE TABLE IF NOT EXISTS users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT NOW()
);

CREATE TABLE IF NOT EXISTS projects (
    id SERIAL PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    owner_id INTEGER REFERENCES users(id),
    created_at TIMESTAMP DEFAULT NOW()
);

-- docker/postgres/initdb.d/02-seed.sql
INSERT INTO users (email, name) VALUES
    ('alice@example.com', 'Alice Dev'),
    ('bob@example.com', 'Bob Tester')
ON CONFLICT (email) DO NOTHING;
```

Init scripts run only once. PostgreSQL's /docker-entrypoint-initdb.d/ scripts execute only when the data directory is empty (first container start). If you update your seed data, you need to destroy and recreate the environment — or use the pg_restore approach instead.


Connection Strings and Secrets

Connection String Formats

PostgreSQL supports several connection string formats. Here's how to use them with Bunnyshell interpolation:

```yaml
# URI format (most common)
DATABASE_URL: 'postgresql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:5432/{{ env.vars.DB_NAME }}'

# URI with SSL mode (for Terraform-managed instances)
DATABASE_URL: 'postgresql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@{{ components.postgresql.exported.PG_HOST }}:5432/{{ env.vars.DB_NAME }}?sslmode=require'

# Separate parameters (for frameworks that prefer individual vars)
DB_HOST: db
DB_PORT: '5432'
DB_NAME: '{{ env.vars.DB_NAME }}'
DB_USER: '{{ env.vars.DB_USER }}'
DB_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
```
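Going the other direction, when a tool wants the separate parameters but you only have DATABASE_URL, plain shell parameter expansion is enough. A sketch that handles the URI shapes shown above (not a general-purpose URI parser):

```shell
# Split a postgresql:// URI into its parts with parameter expansion.
url='postgresql://appuser:s3cret@db:5432/appdb?sslmode=require'
rest="${url#postgresql://}"            # appuser:s3cret@db:5432/appdb?sslmode=require
creds="${rest%%@*}"                    # appuser:s3cret
hostpart="${rest#*@}"                  # db:5432/appdb?sslmode=require

DB_USER="${creds%%:*}"
DB_PASSWORD="${creds#*:}"
DB_HOST="${hostpart%%:*}"
DB_PORT="${hostpart#*:}"; DB_PORT="${DB_PORT%%/*}"
DB_NAME="${hostpart#*/}";  DB_NAME="${DB_NAME%%\?*}"

echo "$DB_USER@$DB_HOST:$DB_PORT/$DB_NAME"
# → appuser@db:5432/appdb
```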

Secret Management

Always use Bunnyshell's SECRET["..."] syntax for passwords:

```yaml
environmentVariables:
  DB_PASSWORD: SECRET["your-password-here"]
```

Secrets are encrypted at rest and never exposed in logs or the Bunnyshell UI. They are injected into containers as environment variables at runtime.

Never hardcode passwords in your bunnyshell.yaml. Even for preview environments, use the SECRET["..."] syntax. Hardcoded passwords end up in Git history, Bunnyshell audit logs, and container inspect output.


Persistent Storage and Backup Considerations

Volume Configuration

For the built-in Database component, attach a persistent volume:

```yaml
volumes:
  - name: pg-data
    mount:
      component: db
      containerPath: /var/lib/postgresql/data
    size: 1Gi
```

Size guidelines for preview environments:

| Dataset size | Recommended volume | Notes |
| --- | --- | --- |
| Small (< 100MB seed) | 1Gi | Default for most projects |
| Medium (100MB - 1GB seed) | 2Gi | Large seed data or file-heavy apps |
| Large (> 1GB seed) | 5Gi | Consider Approach C (Terraform) instead |
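To see how much of the volume the seed data actually uses (and which tables are worth excluding from a dump with --exclude-table-data), query the catalog:

```sql
-- Total size of the current database:
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Five largest tables, including indexes and TOAST data:
SELECT relname, pg_size_pretty(pg_total_relation_size(relid))
FROM pg_catalog.pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 5;
```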

Backup Strategy for Preview Environments

Preview environments are ephemeral — they're destroyed when the PR closes. In most cases, you don't need backups for preview databases. The seed data and migrations are reproducible from your repository.

However, if your team needs to preserve preview database state (e.g., for debugging a complex issue), you can take a manual dump before destroying:

```bash
# Dump before environment destruction
bns exec DB_COMPONENT_ID -- pg_dump -U appuser -d appdb --format=custom > pr-123-debug.dump

# Or port-forward and dump locally
bns port-forward 15432:5432 --component DB_COMPONENT_ID
pg_dump -h 127.0.0.1 -p 15432 -U appuser -d appdb > pr-123-debug.dump
```

Performance Tuning for Preview Environments

Preview databases don't need production-level performance, but they should be fast enough that developers aren't waiting on queries during testing.

```ini
# Connection settings
max_connections = 50

# Memory — tuned for 256Mi-512Mi container limit
shared_buffers = 64MB
effective_cache_size = 128MB
work_mem = 4MB
maintenance_work_mem = 32MB

# WAL — relaxed for preview (faster writes, less durability)
wal_level = minimal
max_wal_senders = 0
fsync = off
synchronous_commit = off
full_page_writes = off

# Logging — more verbose for debugging
log_min_duration_statement = 200
log_statement = 'ddl'
log_line_prefix = '%t [%p] %u@%d '
```

Do NOT use fsync = off in production. These settings trade durability for speed. They're safe for preview environments because the data is ephemeral — if the container crashes, you just redeploy. Never apply these settings to production or staging databases.

Applying Custom Configuration

With the built-in component, create a custom postgresql.conf and mount it:

```dockerfile
# docker/postgres/Dockerfile
FROM postgres:16-alpine
COPY postgresql.conf /etc/postgresql/postgresql.conf
CMD ["postgres", "-c", "config_file=/etc/postgresql/postgresql.conf"]
```

With the Helm chart, use primary.extendedConfiguration:

```yaml
primary:
  extendedConfiguration: |
    max_connections = 50
    shared_buffers = 64MB
    fsync = off
    synchronous_commit = off
```

Troubleshooting

| Issue | Solution |
| --- | --- |
| Connection refused on port 5432 | PostgreSQL container not ready. Check dependsOn ensures app waits for db. Add retry logic in your app's startup. |
| FATAL: role "appuser" does not exist | POSTGRES_USER must be set on first container start. If you changed it, delete the volume and redeploy. |
| FATAL: database "appdb" does not exist | Same as above — POSTGRES_DB only works on first init. Delete volume and redeploy. |
| Could not open extension control file | Extension not installed in the image. Use a custom Dockerfile with the required packages (see Extensions section). |
| Disk full / no space left on device | Volume too small. Increase size in the volumes section. For preview envs, 1Gi-2Gi is usually sufficient. |
| Slow queries in preview | Apply the performance tuning settings above. Preview DBs with fsync=off and synchronous_commit=off are significantly faster. |
| Migrations fail with lock timeout | Another migration is running concurrently. Ensure only one component runs migrations. Use dependsOn ordering. |
| pg_restore errors on seed | Version mismatch between dump source and target. Ensure pg_dump and pg_restore versions match the PostgreSQL server version. |
| SSL connection required | Terraform-managed instances (RDS, Cloud SQL) require SSL by default. Add ?sslmode=require to the connection string. |
| Init scripts not running | Scripts in /docker-entrypoint-initdb.d/ only run when the data directory is empty. Delete the volume to re-trigger. |
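For the "connection refused" row, the retry logic can live in a small entrypoint helper rather than the application code. A generic sketch (wait_for is a hypothetical helper, not a Bunnyshell or PostgreSQL command):

```shell
# wait_for: retry a command until it succeeds or the attempt
# budget is exhausted, sleeping 1s between attempts.
wait_for() {
  attempts="$1"; shift
  i=1
  while ! "$@" >/dev/null 2>&1; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 1
  done
  return 0
}

# In a container entrypoint you would gate app startup on the DB, e.g.:
#   wait_for 30 pg_isready -h "$DB_HOST" -p "$DB_PORT" -U "$DB_USER"
#   exec node server.js
wait_for 3 true && echo "database ready"
# → database ready
```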

What's Next?

  • Add connection pooling — Use PgBouncer as a sidecar for apps with many short-lived connections
  • Enable PostGIS — Build a custom image with geospatial extensions for location-based features
  • Add pgAdmin — Include a dpage/pgadmin4 Service component for visual database management in preview environments
  • Monitor with pg_stat_statements — Enable query performance tracking for debugging slow endpoints
  • Automate migrations — Add a Kubernetes Job or init container that runs migrations before the app starts

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.