Preview Environments for Django: Automated Per-PR Deployments with Bunnyshell
Guide · June 1, 2025 · 15 min read

Why Preview Environments for Django?

Every Django team has been here: a PR looks good in code review, the tests pass, but when it hits staging — something breaks. Maybe a migration conflicts with another branch, or the new Redis cache layer doesn't behave like it did locally.

Preview environments solve this. Every pull request gets its own isolated deployment — Django app, PostgreSQL database, Redis, the works — running in Kubernetes with production-like configuration. Reviewers click a link and see the actual running app, not just the diff.

With Bunnyshell, you get:

  • Automatic deployment — A new environment spins up for every PR
  • Production parity — Same Docker images, same database engine, same infrastructure
  • Isolation — Each PR environment is fully independent, no shared staging conflicts
  • Automatic cleanup — Environments are destroyed when the PR is merged or closed

Choose Your Approach

Bunnyshell supports three ways to set up preview environments for Django. Pick the one that fits your workflow:

  • Approach A: Bunnyshell UI. Best for: teams that want the fastest setup with zero pipeline maintenance. Complexity: easiest. CI/CD maintenance: none — Bunnyshell manages webhooks automatically.
  • Approach B: Docker Compose Import. Best for: teams already using docker-compose.yml for local development. Complexity: easy. CI/CD maintenance: none — the import converts to Bunnyshell config automatically.
  • Approach C: Helm Charts. Best for: teams with existing Helm infrastructure or complex Kubernetes needs. Complexity: advanced. CI/CD maintenance: optional — can use the CLI or the Bunnyshell UI.

All three approaches end the same way: a toggle in Bunnyshell Settings that enables automatic preview environments for every PR. No GitHub Actions, no GitLab CI pipelines to maintain — Bunnyshell adds webhooks to your Git provider and listens for PR events.

Prerequisites: Prepare Your Django App

Regardless of which approach you choose, your Django app needs two things: a Dockerfile and the right settings.

1. Create a Production-Ready Dockerfile

If your Django project doesn't already have a Dockerfile:

Dockerfile
FROM python:3.12-slim AS base

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq-dev gcc && \
    rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt gunicorn

# Copy application code
COPY . .

# Collect static files
RUN python manage.py collectstatic --noinput

EXPOSE 8000
CMD ["gunicorn", "myproject.wsgi:application", "--bind", "0.0.0.0:8000", "--workers", "3"]

Important: Replace myproject with your actual Django project name. The app must listen on 0.0.0.0, not localhost — this is required for container networking in Kubernetes.

2. Configure Django for Kubernetes

Django needs these settings to work correctly behind Kubernetes ingress (which terminates TLS):

Python
# settings.py
import os

# Kubernetes ingress terminates TLS — Django sees HTTP.
# This tells Django to trust X-Forwarded-Proto from the ingress.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

# Allow the Bunnyshell-generated domain
ALLOWED_HOSTS = os.environ.get('ALLOWED_HOSTS', '*').split(',')

# Trust CSRF from the Bunnyshell domain
CSRF_TRUSTED_ORIGINS = [
    origin.strip()
    for origin in os.environ.get('CSRF_TRUSTED_ORIGINS', '').split(',')
    if origin.strip()
]

# Database from environment variables
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': os.environ.get('DB_NAME', 'django_db'),
        'USER': os.environ.get('DB_USER', 'django'),
        'PASSWORD': os.environ.get('DB_PASSWORD', 'django'),
        'HOST': os.environ.get('DB_HOST', 'localhost'),
        'PORT': os.environ.get('DB_PORT', '5432'),
    }
}

# Redis cache (optional)
if os.environ.get('REDIS_URL'):
    CACHES = {
        'default': {
            'BACKEND': 'django.core.cache.backends.redis.RedisCache',
            'LOCATION': os.environ.get('REDIS_URL'),
        }
    }

SECRET_KEY = os.environ.get('DJANGO_SECRET_KEY', 'change-me-in-production')
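The comma-splitting above is easy to get wrong (trailing commas, stray whitespace), so it helps to see how the two variables resolve. A standalone sketch of the same parsing logic — the domain is a made-up placeholder, not a real Bunnyshell host:

```python
import os

# Hypothetical values, as a platform might inject them; note the
# stray trailing ", " in CSRF_TRUSTED_ORIGINS.
os.environ["ALLOWED_HOSTS"] = "app-env-123.bunnyenv.com,localhost"
os.environ["CSRF_TRUSTED_ORIGINS"] = "https://app-env-123.bunnyenv.com, "

# Same parsing logic as in settings.py above
allowed_hosts = os.environ.get("ALLOWED_HOSTS", "*").split(",")
csrf_trusted_origins = [
    origin.strip()
    for origin in os.environ.get("CSRF_TRUSTED_ORIGINS", "").split(",")
    if origin.strip()
]

print(allowed_hosts)         # ['app-env-123.bunnyenv.com', 'localhost']
print(csrf_trusted_origins)  # ['https://app-env-123.bunnyenv.com']
```

The strip-and-filter step is what saves you here: without it, the trailing ", " would leave an empty origin in the list, which Django rejects at startup.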

Django Deployment Checklist

  • SECURE_PROXY_SSL_HEADER set for TLS behind ingress
  • ALLOWED_HOSTS includes the Bunnyshell domain
  • CSRF_TRUSTED_ORIGINS includes https://<your-bunnyshell-domain>
  • SECRET_KEY loaded from environment variable
  • Database connection via environment variables
  • Static files collected in Dockerfile (collectstatic)
  • App listens on 0.0.0.0:8000 (not localhost)
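For the SECRET_KEY item, you need a strong random value to store as a Bunnyshell secret. One common recipe (not a Bunnyshell requirement) uses Python's standard library:

```python
import secrets

# 50 random bytes, URL-safe base64-encoded: comfortably longer than
# Django's default key and safe to paste into an environment variable.
secret_key = secrets.token_urlsafe(50)
print(secret_key)
```

Paste the output into the SECRET["..."] value in your Bunnyshell configuration rather than committing it to the repo.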

Approach A: Bunnyshell UI — Zero CI/CD Maintenance

This is the easiest approach. You connect your repo, paste a YAML config, deploy, and flip a toggle. No CI/CD pipelines to write or maintain — Bunnyshell automatically adds webhooks to your Git provider and creates/destroys preview environments when PRs are opened/closed.

Step 1: Create a Project and Environment

  1. Log into Bunnyshell
  2. Click Create project and name it (e.g., "Django App")
  3. Inside the project, click Create environment and name it (e.g., "django-main")

Step 2: Define the Environment Configuration

Click Configuration in your environment view and paste this bunnyshell.yaml:

YAML
kind: Environment
name: django-preview
type: primary

environmentVariables:
  DJANGO_SECRET_KEY: SECRET["your-secret-key-here"]
  DB_PASSWORD: SECRET["your-db-password"]

components:
  # ── Django Application ──
  - kind: Application
    name: django-app
    gitRepo: 'https://github.com/your-org/your-django-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        DJANGO_SECRET_KEY: '{{ env.vars.DJANGO_SECRET_KEY }}'
        DB_NAME: django_db
        DB_USER: django
        DB_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
        DB_HOST: postgres
        DB_PORT: '5432'
        REDIS_URL: 'redis://redis:6379/0'
        ALLOWED_HOSTS: '{{ components.django-app.ingress.hosts[0] }}'
        CSRF_TRUSTED_ORIGINS: 'https://{{ components.django-app.ingress.hosts[0] }}'
      ports:
        - '8000:8000'
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 8000
    dependsOn:
      - postgres
      - redis

  # ── PostgreSQL Database ──
  - kind: Database
    name: postgres
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: django_db
        POSTGRES_USER: django
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

  # ── Redis Cache ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'

volumes:
  - name: postgres-data
    mount:
      component: postgres
      containerPath: /var/lib/postgresql/data
    size: 1Gi

Replace your-org/your-django-repo with your actual repository. Save the configuration.

Step 3: Deploy

Click the Deploy button, select your Kubernetes cluster, and click Deploy Environment. Bunnyshell will:

  1. Build your Django Docker image from the Dockerfile
  2. Pull PostgreSQL and Redis images
  3. Deploy everything into an isolated Kubernetes namespace
  4. Generate HTTPS URLs with automatic DNS

Monitor the deployment in the environment detail page. When status shows Running, click Endpoints to access your live Django app.

Step 4: Run Migrations

After deployment, run Django migrations via the component's terminal in the Bunnyshell UI, or via CLI:

Bash
export BUNNYSHELL_TOKEN=your-api-token
bns components list --environment ENV_ID --output json | jq '._embedded.item[] | {id, name}'
bns exec COMPONENT_ID -- python manage.py migrate --noinput

Step 5: Enable Automatic Preview Environments

This is the magic step — no CI/CD configuration needed:

  1. In your environment, go to Settings
  2. Find the Ephemeral environments section
  3. Toggle "Create ephemeral environments on pull request" to ON
  4. Toggle "Destroy environment after merge or close pull request" to ON
  5. Select the Kubernetes cluster for ephemeral environments

That's it. Bunnyshell automatically adds a webhook to your Git provider (GitHub, GitLab, or Bitbucket). From now on:

  • Open a PR → Bunnyshell creates an ephemeral environment with the PR's branch
  • Push to PR → The environment redeploys with the latest changes
  • Bunnyshell posts a comment on the PR with a link to the live deployment
  • Merge or close the PR → The ephemeral environment is automatically destroyed

Note: The primary environment must be in Running or Stopped status before ephemeral environments can be created from it.


Approach B: Docker Compose Import

Already have a docker-compose.yml for local development? Bunnyshell can import it directly and convert it to its environment format. No manual YAML writing required.

Step 1: Add a docker-compose.yml to Your Repo

If you don't already have one, create docker-compose.yml in your repo root:

YAML
version: '3.8'

services:
  django-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8000:8000'
    environment:
      DJANGO_SECRET_KEY: 'dev-secret-key'
      DB_NAME: django_db
      DB_USER: django
      DB_PASSWORD: django
      DB_HOST: postgres
      DB_PORT: '5432'
      REDIS_URL: 'redis://redis:6379/0'
      ALLOWED_HOSTS: '*'
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: django_db
      POSTGRES_USER: django
      POSTGRES_PASSWORD: django
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  postgres-data:

Step 2: Import into Bunnyshell

  1. Create a Project and Environment in Bunnyshell (same as Approach A, Step 1)
  2. Click Define environment
  3. Select your Git account and repository
  4. Set the branch (e.g., main) and the path to docker-compose.yml (use / if it's in the root)
  5. Click Continue — Bunnyshell parses and validates your Docker Compose file

Bunnyshell automatically detects:

  • All services (django-app, postgres, redis)
  • Exposed ports
  • Build configurations (Dockerfiles)
  • Volumes
  • Environment variables

It converts everything into a bunnyshell.yaml environment definition.

Important: The docker-compose.yml is only read during the initial import. Subsequent changes to the file won't auto-propagate — edit the environment configuration in Bunnyshell instead.

Step 3: Adjust the Configuration

After import, go to Configuration in the environment view and update:

  • Replace hardcoded secrets with SECRET["..."] syntax
  • Add CSRF_TRUSTED_ORIGINS and ALLOWED_HOSTS using Bunnyshell interpolation:
YAML
ALLOWED_HOSTS: '{{ components.django-app.ingress.hosts[0] }}'
CSRF_TRUSTED_ORIGINS: 'https://{{ components.django-app.ingress.hosts[0] }}'

Step 4: Deploy and Enable Preview Environments

Same as Approach A — click Deploy, then go to Settings and toggle on ephemeral environments.

Best Practices for Docker Compose with Bunnyshell

  • Use separate env files — Keep .env for local dev and .env.bunnyshell for Bunnyshell-specific config
  • Design for startup resilience — Kubernetes doesn't guarantee depends_on ordering. Make your Django app retry database connections on startup (an entrypoint wait script or a small retry loop works well)
  • Use Bunnyshell interpolation for dynamic values like URLs:
YAML
# Local docker-compose.yml
BACKEND_URL: http://localhost:8000

# Bunnyshell environment config (after import)
BACKEND_URL: 'https://{{ components.django-app.ingress.hosts[0] }}'
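The startup-resilience point deserves a concrete shape. Below is a minimal retry helper, assuming you call it from a management command or container entrypoint before running migrations; the function name and defaults are illustrative, not part of any library:

```python
import time


def wait_for(connect, attempts=10, delay=1.0):
    """Call `connect` until it succeeds, sleeping between failures.

    `connect` is any zero-argument callable that raises while the
    dependency is unavailable. In Django that could be, for example:
        lambda: connections["default"].ensure_connection()
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return connect()
        except Exception as exc:  # broad on purpose: driver errors vary
            last_error = exc
            if attempt < attempts:
                time.sleep(delay)
    raise RuntimeError(f"dependency not ready after {attempts} attempts") from last_error
```

Running this before `python manage.py migrate` keeps the app from crash-looping while PostgreSQL is still coming up in a freshly created environment.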

Approach C: Helm Charts

For teams with existing Helm infrastructure or complex Kubernetes requirements (custom ingress, service mesh, advanced scaling). Helm gives you full control over every Kubernetes resource.

Step 1: Create a Helm Chart

Structure your Django Helm chart in your repo:

Text
helm/django/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── configmap.yaml

A minimal values.yaml:

YAML
replicaCount: 1
image:
  repository: ""
  tag: latest
service:
  port: 8000
ingress:
  enabled: true
  className: bns-nginx
  host: ""
env:
  DJANGO_SECRET_KEY: ""
  DB_HOST: ""
  DB_NAME: django_db
  DB_USER: django
  DB_PASSWORD: ""
  REDIS_URL: ""

Step 2: Define the Bunnyshell Configuration

Create a bunnyshell.yaml using Helm components:

YAML
kind: Environment
name: django-helm
type: primary

environmentVariables:
  DJANGO_SECRET_KEY: SECRET["your-secret-key"]
  DB_PASSWORD: SECRET["your-db-password"]
  POSTGRES_DB: django_db
  POSTGRES_USER: django

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: django-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-django-repo.git'
    gitBranch: main
    gitApplicationPath: /

  # ── PostgreSQL via Helm ──
  - kind: Helm
    name: postgres
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > pg_values.yaml
          global:
            storageClass: bns-network-sc
          auth:
            postgresPassword: {{ env.vars.DB_PASSWORD }}
            database: {{ env.vars.POSTGRES_DB }}
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f pg_values.yaml postgres bitnami/postgresql --version 11.9.11'
      - |
        POSTGRES_HOST="postgres-postgresql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall postgres --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    exportVariables:
      - POSTGRES_HOST

  # ── Django App via Helm ──
  - kind: Helm
    name: django-app
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > django_values.yaml
          replicaCount: 1
          image:
            repository: {{ components.django-image.image }}
          service:
            port: 8000
          ingress:
            enabled: true
            className: bns-nginx
            host: app-{{ env.base_domain }}
          env:
            DJANGO_SECRET_KEY: '{{ env.vars.DJANGO_SECRET_KEY }}'
            DB_HOST: '{{ components.postgres.exported.POSTGRES_HOST }}'
            DB_NAME: '{{ env.vars.POSTGRES_DB }}'
            DB_USER: '{{ env.vars.POSTGRES_USER }}'
            DB_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
            REDIS_URL: 'redis://redis:6379/0'
            ALLOWED_HOSTS: 'app-{{ env.base_domain }}'
            CSRF_TRUSTED_ORIGINS: 'https://app-{{ env.base_domain }}'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f django_values.yaml django-{{ env.unique }} ./helm/django'
    destroy:
      - 'helm uninstall django-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 django-{{ env.unique }} ./helm/django'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 django-{{ env.unique }} ./helm/django'
    gitRepo: 'https://github.com/your-org/your-django-repo.git'
    gitBranch: main
    gitApplicationPath: /helm/django

  # ── Redis ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'

Key: Always include --post-renderer /bns/helpers/helm/bns_post_renderer in your helm commands. This adds labels so Bunnyshell can track resources, show logs, and manage component lifecycle.

Step 3: Deploy and Enable Preview Environments

Same flow: paste the config in Configuration, hit Deploy, then enable ephemeral environments in Settings.


Enabling Preview Environments (All Approaches)

Regardless of which approach you used, enabling automatic preview environments is the same:

  1. Ensure your primary environment has been deployed at least once (Running or Stopped status)
  2. Go to Settings in your environment
  3. Toggle "Create ephemeral environments on pull request" → ON
  4. Toggle "Destroy environment after merge or close pull request" → ON
  5. Select the target Kubernetes cluster

What happens next:

  • Bunnyshell adds a webhook to your Git provider automatically
  • When a developer opens a PR, Bunnyshell creates an ephemeral environment cloned from the primary, using the PR's branch
  • Bunnyshell posts a comment on the PR with a direct link to the running deployment
  • When the PR is merged or closed, the ephemeral environment is automatically destroyed

No GitHub Actions. No GitLab CI pipelines. No maintenance. It just works.

Optional: CI/CD Integration via CLI

If you prefer to control preview environments from your CI/CD pipeline (e.g., for custom migration or seed scripts), you can use the Bunnyshell CLI:

Bash
# Install
brew install bunnyshell/tap/bunnyshell-cli

# Authenticate
export BUNNYSHELL_TOKEN=your-api-token

# Create, deploy, and run migrations in one flow
bns environments create --from-path bunnyshell.yaml --name "pr-123" --project PROJECT_ID --k8s CLUSTER_ID
bns environments deploy --id ENV_ID --wait
bns exec COMPONENT_ID -- python manage.py migrate --noinput

Remote Development and Debugging

Bunnyshell makes it easy to develop and debug directly against any environment — primary or ephemeral:

Port Forwarding

Connect your local tools to the remote database:

Bash
# Forward PostgreSQL to local port 15432
bns port-forward 15432:5432 --component POSTGRES_COMPONENT_ID

# Connect with psql, pgcli, or any DB tool
psql -h localhost -p 15432 -U django django_db

Execute Django Commands

Bash
bns exec COMPONENT_ID -- python manage.py migrate --noinput
bns exec COMPONENT_ID -- python manage.py shell
bns exec COMPONENT_ID -- python manage.py showmigrations
bns exec COMPONENT_ID -- python manage.py dbshell
bns exec COMPONENT_ID -- python manage.py loaddata fixtures/demo.json

Live Logs

Bash
# Stream logs in real time
bns logs --component COMPONENT_ID -f

# Last 200 lines
bns logs --component COMPONENT_ID --tail 200

# Logs from the last 5 minutes
bns logs --component COMPONENT_ID --since 5m

Live Code Sync

For active development, sync your local code changes to the remote container in real time:

Bash
bns remote-development up --component COMPONENT_ID
# Edit files locally — changes sync automatically
# When done:
bns remote-development down

Troubleshooting

  • 502 Bad Gateway — Django isn't listening on 0.0.0.0:8000. Check the CMD in your Dockerfile and the gunicorn bind address.
  • Mixed content / HTTPS errors — Add SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https') to settings.py.
  • CSRF verification failed — Add the Bunnyshell domain to CSRF_TRUSTED_ORIGINS.
  • Static files not loading — Ensure collectstatic runs in the Dockerfile. Consider WhiteNoise for serving static files.
  • Migrations fail — Check that DB_HOST points to postgres (the component name), not localhost.
  • Connection refused to Redis — Verify REDIS_URL uses redis as the hostname (the component name).
  • Service startup order issues — Kubernetes doesn't guarantee depends_on ordering. Make your Django app retry DB connections on startup.
  • 522 Connection timed out — The cluster may be behind a firewall. Verify Cloudflare IPs are whitelisted on the ingress controller.

What's Next?

  • Add Celery workers — Add another component for async task processing
  • Seed test data — Run bns exec <ID> -- python manage.py loaddata fixtures/demo.json post-deploy
  • Add Nginx sidecar — For production-like static file serving
  • Monitor with Sentry — Pass SENTRY_DSN as an environment variable