Preview Environments for FastAPI: Automated Per-PR Deployments with Bunnyshell
Guide · March 20, 2026 · 12 min read

Why Preview Environments for FastAPI?

Every FastAPI team has been there: the PR looks clean and tests are green in CI, but the moment it touches staging, something breaks. Maybe an Alembic migration conflicts with another branch, or a new async endpoint behaves differently against a real PostgreSQL connection than against the in-memory test database.

Preview environments solve this. Every pull request gets its own isolated deployment — FastAPI app, PostgreSQL database, Redis, the works — running in Kubernetes with production-like configuration. Reviewers click a link and interact with the actual running API (or the frontend that calls it), not just the diff.

With Bunnyshell, you get:

  • Automatic deployment — A new environment spins up for every PR
  • Production parity — Same Docker images, same database engine, same infrastructure
  • Isolation — Each PR environment is fully independent, no shared staging conflicts
  • Automatic cleanup — Environments are destroyed when the PR is merged or closed

Choose Your Approach

Bunnyshell supports three ways to set up preview environments for FastAPI. Pick the one that fits your workflow:

| Approach | Best for | Complexity | CI/CD maintenance |
|---|---|---|---|
| Approach A: Bunnyshell UI | Teams that want the fastest setup with zero pipeline maintenance | Easiest | None — Bunnyshell manages webhooks automatically |
| Approach B: Docker Compose Import | Teams already using docker-compose.yml for local development | Easy | None — import converts to Bunnyshell config automatically |
| Approach C: Helm Charts | Teams with existing Helm infrastructure or complex K8s needs | Advanced | Optional — can use CLI or Bunnyshell UI |

All three approaches end the same way: a toggle in Bunnyshell Settings that enables automatic preview environments for every PR. No GitHub Actions, no GitLab CI pipelines to maintain — Bunnyshell adds webhooks to your Git provider and listens for PR events.

Prerequisites: Prepare Your FastAPI App

Regardless of which approach you choose, your FastAPI app needs two things: a Dockerfile and the right configuration.

1. Create a Production-Ready Dockerfile

If your FastAPI project doesn't already have a Dockerfile:

Dockerfile
FROM python:3.12-slim AS base

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq-dev gcc && \
    rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY . .

EXPOSE 8000

# Run with Uvicorn — replace "app.main:app" with your module path
CMD ["uvicorn", "app.main:app", "--host", "0.0.0.0", "--port", "8000", "--workers", "2"]

For production workloads you may prefer Gunicorn managing Uvicorn workers:

Dockerfile
CMD ["gunicorn", "app.main:app", \
     "-k", "uvicorn.workers.UvicornWorker", \
     "--bind", "0.0.0.0:8000", \
     "--workers", "2"]

The app must listen on 0.0.0.0, not 127.0.0.1 or localhost. In Kubernetes, traffic from the ingress and other pods arrives on the pod's network interface, so a server bound only to loopback is unreachable. Uvicorn's --host 0.0.0.0 handles this.

2. Configure FastAPI for Kubernetes

TLS terminates at the Kubernetes ingress, so your FastAPI app receives plain HTTP along with X-Forwarded-* headers describing the original request. Uvicorn's ProxyHeadersMiddleware makes the app respect those headers. Add it in your app factory:

Python
# app/main.py
from fastapi import FastAPI
from uvicorn.middleware.proxy_headers import ProxyHeadersMiddleware
from app.core.config import settings
from app.db.session import engine
from app.db.base import Base

app = FastAPI(title=settings.PROJECT_NAME)

# Trust X-Forwarded-Proto / X-Forwarded-For from the Kubernetes ingress
app.add_middleware(ProxyHeadersMiddleware, trusted_hosts="*")

# Note: recent FastAPI versions prefer lifespan handlers; on_event still works
@app.on_event("startup")
async def startup_event():
    # Run Alembic migrations on startup (optional — see migration step below)
    pass

@app.get("/health")
async def health():
    return {"status": "ok"}

Store all secrets and connection strings in environment variables:

Python
# app/core/config.py
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    PROJECT_NAME: str = "my-fastapi-app"
    SECRET_KEY: str = "change-me-in-production"
    DATABASE_URL: str = "postgresql+asyncpg://user:password@localhost/dbname"
    REDIS_URL: str = "redis://localhost:6379/0"
    ALLOWED_HOSTS: str = "*"

    class Config:
        env_file = ".env"
        case_sensitive = True

settings = Settings()

pydantic-settings reads values from environment variables automatically. Matching is case-insensitive by default, but the config above sets case_sensitive = True, so variable names must match exactly. When deployed on Bunnyshell, the values you set in the environment definition override anything in .env.
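
For example, a variable exported into the process environment beats both the class default and any .env entry. A quick local check, using a hypothetical URL:

Python
# Quick sanity check; the URL below is a hypothetical value
import os

# Simulate what Bunnyshell injects into the container
os.environ["DATABASE_URL"] = "postgresql+asyncpg://fastapi:pw@postgres:5432/fastapi_db"

from app.core.config import Settings

# The environment variable wins over the default and any .env entry
print(Settings().DATABASE_URL)
# postgresql+asyncpg://fastapi:pw@postgres:5432/fastapi_db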

FastAPI Deployment Checklist

  • App listens on 0.0.0.0:8000 (not localhost)
  • ProxyHeadersMiddleware added for TLS termination at ingress
  • SECRET_KEY loaded from environment variable
  • DATABASE_URL constructed from environment variables
  • REDIS_URL loaded from environment variable
  • Alembic configured and alembic.ini present in repo
  • Health check endpoint available (/health or /) — a DB-aware variant is sketched below
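
If you want the health check to verify database connectivity as well (useful as a readiness probe), here is a minimal sketch that reuses the engine already imported in app/main.py; the /health/db path is illustrative:

Python
# app/main.py addition: a DB-aware health check (illustrative, not required)
from sqlalchemy import text

@app.get("/health/db")
async def health_db():
    # Round-trips a trivial query; an unreachable PostgreSQL raises, returning a 500
    async with engine.connect() as conn:
        await conn.execute(text("SELECT 1"))
    return {"status": "ok", "database": "ok"}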

Approach A: Bunnyshell UI — Zero CI/CD Maintenance

This is the easiest approach. You connect your repo, paste a YAML config, deploy, and flip a toggle. No CI/CD pipelines to write or maintain — Bunnyshell automatically adds webhooks to your Git provider and creates/destroys preview environments when PRs are opened/closed.

Step 1: Create a Project and Environment

  1. Log into Bunnyshell
  2. Click Create project and name it (e.g., "FastAPI App")
  3. Inside the project, click Create environment and name it (e.g., "fastapi-main")

Step 2: Define the Environment Configuration

Click Configuration in your environment view and paste this bunnyshell.yaml:

YAML
kind: Environment
name: fastapi-preview
type: primary

environmentVariables:
  SECRET_KEY: SECRET["your-secret-key-here"]
  DB_PASSWORD: SECRET["your-db-password"]

components:
  # ── FastAPI Application ──
  - kind: Application
    name: fastapi-app
    gitRepo: 'https://github.com/your-org/your-fastapi-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        SECRET_KEY: '{{ env.vars.SECRET_KEY }}'
        DATABASE_URL: 'postgresql+asyncpg://fastapi:{{ env.vars.DB_PASSWORD }}@postgres:5432/fastapi_db'
        REDIS_URL: 'redis://redis:6379/0'
        ALLOWED_HOSTS: '{{ components.fastapi-app.ingress.hosts[0] }}'
      ports:
        - '8000:8000'
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 8000
    dependsOn:
      - postgres
      - redis

  # ── PostgreSQL Database ──
  - kind: Database
    name: postgres
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: fastapi_db
        POSTGRES_USER: fastapi
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

  # ── Redis Cache ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'

volumes:
  - name: postgres-data
    mount:
      component: postgres
      containerPath: /var/lib/postgresql/data
    size: 1Gi

Replace your-org/your-fastapi-repo with your actual repository. Save the configuration.

Step 3: Deploy

Click the Deploy button, select your Kubernetes cluster, and click Deploy Environment. Bunnyshell will:

  1. Build your FastAPI Docker image from the Dockerfile
  2. Pull PostgreSQL and Redis images
  3. Deploy everything into an isolated Kubernetes namespace
  4. Generate HTTPS URLs automatically with DNS

Monitor the deployment in the environment detail page. When status shows Running, click Endpoints to access your live FastAPI app.

Step 4: Run Alembic Migrations

After deployment, run Alembic migrations via the component's terminal in the Bunnyshell UI, or via CLI:

Bash
export BUNNYSHELL_TOKEN=your-api-token
bns components list --environment ENV_ID --output json | jq '._embedded.item[] | {id, name}'
bns exec COMPONENT_ID -- alembic upgrade head
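
Either way, migrations only pick up the per-environment database if Alembic reads the URL from the environment rather than from a hardcoded sqlalchemy.url in alembic.ini. A minimal env.py excerpt, assuming the default migrations layout:

Python
# migrations/env.py (excerpt): read the connection URL from the environment
import os

from alembic import context

config = context.config
# Overrides the sqlalchemy.url value from alembic.ini at runtime.
# The default template expects a sync driver; if your URL uses +asyncpg,
# generate your migrations folder with the async template: alembic init -t async
config.set_main_option("sqlalchemy.url", os.environ["DATABASE_URL"])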

You can also run migrations automatically on container startup by adding this to your Dockerfile or an entrypoint script:

Bash
#!/bin/sh
# entrypoint.sh
set -e
echo "Running Alembic migrations..."
alembic upgrade head
echo "Starting Uvicorn..."
exec uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 2
Dockerfile
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]

If you run migrations at container startup, make sure your FastAPI app retries the database connection. PostgreSQL might not be ready when the app container starts. Use tenacity or a similar library for retry logic, or add a wait loop in your entrypoint script.
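
A minimal sketch of that retry logic with tenacity, assuming an async SQLAlchemy engine lives in app/db/session.py (the module and function names are illustrative):

Python
# app/db/wait.py: block until PostgreSQL accepts connections (illustrative names)
from sqlalchemy import text
from tenacity import retry, stop_after_attempt, wait_fixed

from app.db.session import engine  # your async SQLAlchemy engine

@retry(stop=stop_after_attempt(30), wait=wait_fixed(2))
async def wait_for_db() -> None:
    # Retries every 2 seconds, up to 30 attempts, then re-raises the last error
    async with engine.connect() as conn:
        await conn.execute(text("SELECT 1"))

Call it before alembic upgrade head in your startup path, for example with python -c "import asyncio; from app.db.wait import wait_for_db; asyncio.run(wait_for_db())" in the entrypoint script.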

Step 5: Enable Automatic Preview Environments

This is the magic step — no CI/CD configuration needed:

  1. In your environment, go to Settings
  2. Find the Ephemeral environments section
  3. Toggle "Create ephemeral environments on pull request" to ON
  4. Toggle "Destroy environment after merge or close pull request" to ON
  5. Select the Kubernetes cluster for ephemeral environments

That's it. Bunnyshell automatically adds a webhook to your Git provider (GitHub, GitLab, or Bitbucket). From now on:

  • Open a PR → Bunnyshell creates an ephemeral environment with the PR's branch
  • Push to PR → The environment redeploys with the latest changes
  • Bunnyshell posts a comment on the PR with a link to the live deployment
  • Merge or close the PR → The ephemeral environment is automatically destroyed

The primary environment must be in Running or Stopped status before ephemeral environments can be created from it.


Approach B: Docker Compose Import

Already have a docker-compose.yml for local development? Bunnyshell can import it directly and convert it to its environment format. No manual YAML writing required.

Step 1: Add a docker-compose.yml to Your Repo

If you don't already have one, create docker-compose.yml in your repo root:

YAML
version: '3.8'

services:
  fastapi-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8000:8000'
    environment:
      SECRET_KEY: 'dev-secret-key'
      DATABASE_URL: 'postgresql+asyncpg://fastapi:fastapi@postgres:5432/fastapi_db'
      REDIS_URL: 'redis://redis:6379/0'
      ALLOWED_HOSTS: '*'
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: fastapi_db
      POSTGRES_USER: fastapi
      POSTGRES_PASSWORD: fastapi
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  postgres-data:

Step 2: Import into Bunnyshell

  1. Create a Project and Environment in Bunnyshell (same as Approach A, Step 1)
  2. Click Define environment
  3. Select your Git account and repository
  4. Set the branch (e.g., main) and the path to docker-compose.yml (use / if it's in the root)
  5. Click Continue — Bunnyshell parses and validates your Docker Compose file

Bunnyshell automatically detects:

  • All services (fastapi-app, postgres, redis)
  • Exposed ports
  • Build configurations (Dockerfiles)
  • Volumes
  • Environment variables

It converts everything into a bunnyshell.yaml environment definition.

The docker-compose.yml is only read during the initial import. Subsequent changes to the file won't auto-propagate — edit the environment configuration in Bunnyshell instead.

Step 3: Adjust the Configuration

After import, go to Configuration in the environment view and update:

  • Replace hardcoded secrets with SECRET["..."] syntax
  • Update DATABASE_URL and ALLOWED_HOSTS using Bunnyshell interpolation:
YAML
DATABASE_URL: 'postgresql+asyncpg://fastapi:{{ env.vars.DB_PASSWORD }}@postgres:5432/fastapi_db'
ALLOWED_HOSTS: '{{ components.fastapi-app.ingress.hosts[0] }}'

Step 4: Deploy and Enable Preview Environments

Same as Approach A — click Deploy, then go to Settings and toggle on ephemeral environments.

Best Practices for Docker Compose with Bunnyshell

  • Use separate env files — Keep .env for local dev; override sensitive values in Bunnyshell's environment config
  • Design for startup resilience — Kubernetes doesn't guarantee depends_on ordering. Use tenacity or an entrypoint wait loop for DB connection retries (see the retry sketch in Approach A, Step 4)
  • Use Bunnyshell interpolation for dynamic values like the public URL:
YAML
# Local docker-compose.yml
BACKEND_URL: http://localhost:8000

# Bunnyshell environment config (after import)
BACKEND_URL: 'https://{{ components.fastapi-app.ingress.hosts[0] }}'

Approach C: Helm Charts

For teams with existing Helm infrastructure or complex Kubernetes requirements (custom ingress, service mesh, advanced scaling). Helm gives you full control over every Kubernetes resource.

Step 1: Create a Helm Chart

Structure your FastAPI Helm chart in your repo:

Text
helm/fastapi/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── configmap.yaml

A minimal values.yaml:

YAML
replicaCount: 1
image:
  repository: ""
  tag: latest
service:
  port: 8000
ingress:
  enabled: true
  className: bns-nginx
  host: ""
env:
  SECRET_KEY: ""
  DATABASE_URL: ""
  REDIS_URL: ""
  ALLOWED_HOSTS: ""

Step 2: Define the Bunnyshell Configuration

Create a bunnyshell.yaml using Helm components:

YAML
kind: Environment
name: fastapi-helm
type: primary

environmentVariables:
  SECRET_KEY: SECRET["your-secret-key"]
  DB_PASSWORD: SECRET["your-db-password"]
  POSTGRES_DB: fastapi_db
  POSTGRES_USER: fastapi

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: fastapi-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-fastapi-repo.git'
    gitBranch: main
    gitApplicationPath: /

  # ── PostgreSQL via Helm ──
  - kind: Helm
    name: postgres
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > pg_values.yaml
          global:
            storageClass: bns-network-sc
          auth:
            postgresPassword: {{ env.vars.DB_PASSWORD }}
            database: {{ env.vars.POSTGRES_DB }}
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f pg_values.yaml postgres bitnami/postgresql --version 11.9.11'
      - |
        POSTGRES_HOST="postgres-postgresql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall postgres --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    exportVariables:
      - POSTGRES_HOST

  # ── FastAPI App via Helm ──
  - kind: Helm
    name: fastapi-app
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > fastapi_values.yaml
          replicaCount: 1
          image:
            repository: {{ components.fastapi-image.image }}
          service:
            port: 8000
          ingress:
            enabled: true
            className: bns-nginx
            host: app-{{ env.base_domain }}
          env:
            SECRET_KEY: '{{ env.vars.SECRET_KEY }}'
            DATABASE_URL: 'postgresql+asyncpg://{{ env.vars.POSTGRES_USER }}:{{ env.vars.DB_PASSWORD }}@{{ components.postgres.exported.POSTGRES_HOST }}/{{ env.vars.POSTGRES_DB }}'
            REDIS_URL: 'redis://redis:6379/0'
            ALLOWED_HOSTS: 'app-{{ env.base_domain }}'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f fastapi_values.yaml fastapi-{{ env.unique }} ./helm/fastapi'
    destroy:
      - 'helm uninstall fastapi-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 fastapi-{{ env.unique }} ./helm/fastapi'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 fastapi-{{ env.unique }} ./helm/fastapi'
    gitRepo: 'https://github.com/your-org/your-fastapi-repo.git'
    gitBranch: main
    gitApplicationPath: /helm/fastapi

  # ── Redis ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'
Always include --post-renderer /bns/helpers/helm/bns_post_renderer in your helm commands. This adds labels so Bunnyshell can track resources, show logs, and manage component lifecycle.

Step 3: Deploy and Enable Preview Environments

Same flow: paste the config in Configuration, hit Deploy, then enable ephemeral environments in Settings.


Enabling Preview Environments (All Approaches)

Regardless of which approach you used, enabling automatic preview environments is the same:

  1. Ensure your primary environment has been deployed at least once (Running or Stopped status)
  2. Go to Settings in your environment
  3. Toggle "Create ephemeral environments on pull request" → ON
  4. Toggle "Destroy environment after merge or close pull request" → ON
  5. Select the target Kubernetes cluster

What happens next:

  • Bunnyshell adds a webhook to your Git provider automatically
  • When a developer opens a PR, Bunnyshell creates an ephemeral environment cloned from the primary, using the PR's branch
  • Bunnyshell posts a comment on the PR with a direct link to the running deployment
  • When the PR is merged or closed, the ephemeral environment is automatically destroyed

No GitHub Actions. No GitLab CI pipelines. No maintenance. It just works.

Optional: CI/CD Integration via CLI

If you prefer to control preview environments from your CI/CD pipeline (e.g., for custom migration or seed scripts), you can use the Bunnyshell CLI:

Bash
# Install
brew install bunnyshell/tap/bunnyshell-cli

# Authenticate
export BUNNYSHELL_TOKEN=your-api-token

# Create, deploy, and run migrations in one flow
bns environments create --from-path bunnyshell.yaml --name "pr-123" --project PROJECT_ID --k8s CLUSTER_ID
bns environments deploy --id ENV_ID --wait
bns exec COMPONENT_ID -- alembic upgrade head

Remote Development and Debugging

Bunnyshell makes it easy to develop and debug directly against any environment — primary or ephemeral:

Port Forwarding

Connect your local tools to the remote database or Redis:

Bash
# Forward PostgreSQL to local port 15432
bns port-forward 15432:5432 --component POSTGRES_COMPONENT_ID

# Connect with psql or any DB tool
psql -h localhost -p 15432 -U fastapi fastapi_db

# Forward Redis to local port 16379
bns port-forward 16379:6379 --component REDIS_COMPONENT_ID

Execute Commands in the Container

Bash
# Run Alembic migrations
bns exec COMPONENT_ID -- alembic upgrade head

# Roll back one migration
bns exec COMPONENT_ID -- alembic downgrade -1

# Open a Python shell
bns exec COMPONENT_ID -- python -c "from app.main import app; print(app.routes)"

# Inspect environment variables
bns exec COMPONENT_ID -- env | grep -i database

Live Logs

Bash
# Stream logs in real time
bns logs --component COMPONENT_ID -f

# Last 200 lines
bns logs --component COMPONENT_ID --tail 200

# Logs from the last 5 minutes
bns logs --component COMPONENT_ID --since 5m

Live Code Sync

For active development, sync your local code changes to the remote container in real time:

Bash
bns remote-development up --component COMPONENT_ID
# Edit files locally — changes sync automatically to the running container
# When done:
bns remote-development down

Troubleshooting

| Issue | Solution |
|---|---|
| 502 Bad Gateway | FastAPI isn't listening on 0.0.0.0:8000. Check --host 0.0.0.0 in your Uvicorn CMD. |
| HTTPS URLs returned as HTTP | Add ProxyHeadersMiddleware with trusted_hosts="*" to trust X-Forwarded-Proto from the ingress. |
| asyncpg connection refused | DATABASE_URL host must be postgres (the component name), not localhost. |
| Alembic: "Can't locate revision" | Run alembic upgrade head after deployment. Ensure alembic.ini points to the correct DB URL env var. |
| Connection refused to Redis | Verify REDIS_URL uses redis as hostname (the Bunnyshell component name). |
| Container exits at startup | Check startup logs with bns logs --component ID --tail 100. Often a missing env var or DB not yet ready. |
| Service startup order issues | Kubernetes doesn't guarantee depends_on ordering. Add retry logic with tenacity or a wait loop in your entrypoint. |
| 522 Connection timed out | Cluster may be behind a firewall. Verify Cloudflare IPs are whitelisted on the ingress controller. |

What's Next?

  • Add background tasks — Run a Celery or ARQ worker as a separate Bunnyshell component
  • Seed test data — Run bns exec <ID> -- python seed.py post-deploy
  • Add async DB sessions — Use asyncpg with SQLAlchemy's async engine for fully async database access (a sketch follows this list)
  • Monitor with Sentry — Pass SENTRY_DSN as an environment variable and install sentry-sdk[fastapi]
  • Add OpenAPI docs link — Your Bunnyshell preview URL + /docs gives reviewers interactive API docs out of the box
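
For the async-sessions item, a minimal app/db/session.py sketch, assuming SQLAlchemy 2.x and the DATABASE_URL format used throughout this guide:

Python
# app/db/session.py: async engine plus a per-request session dependency (a sketch)
from collections.abc import AsyncIterator

from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine

from app.core.config import settings

# DATABASE_URL must use the asyncpg driver: postgresql+asyncpg://...
engine = create_async_engine(settings.DATABASE_URL, pool_pre_ping=True)
SessionLocal = async_sessionmaker(engine, expire_on_commit=False)

async def get_db() -> AsyncIterator[AsyncSession]:
    # FastAPI dependency: one session per request, closed automatically
    async with SessionLocal() as session:
        yield session

Route handlers can then declare db: AsyncSession = Depends(get_db).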

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.