Preview Environments for Flask: Automated Per-PR Deployments with Bunnyshell
Why Preview Environments for Flask?
Every Flask team has been here: a PR adds a new route, an endpoint change, or a schema migration — tests pass, code review looks clean — but when it lands on the shared staging server, it conflicts with another branch that's already there. Or the FLASK_ENV variable is wrong, or the database hasn't been migrated, or the Redis cache is stale from a previous deploy.
Preview environments solve this. Every pull request gets its own isolated deployment — Flask app, PostgreSQL database, Redis cache — all running in Kubernetes with production-like configuration. Reviewers click a link and test the actual running application, not just read the diff.
With Bunnyshell, you get:
- Automatic deployment — A new environment spins up for every PR, no manual steps
- Production parity — Same Gunicorn configuration, same database engine, same infrastructure as prod
- Isolation — Each PR environment is fully independent; no shared staging conflicts
- Automatic cleanup — Environments are destroyed when the PR is merged or closed
Flask's lightweight nature makes it particularly well-suited for preview environments: the images are small, startup is fast (Gunicorn is up in seconds), and there's no complicated build pipeline. The result is preview environments that spin up in under two minutes.
Choose Your Approach
Bunnyshell supports three ways to set up preview environments for Flask. Pick the one that fits your workflow:
| Approach | Best for | Complexity | CI/CD maintenance |
|---|---|---|---|
| Approach A: Bunnyshell UI | Teams that want the fastest setup with zero pipeline maintenance | Easiest | None — Bunnyshell manages webhooks automatically |
| Approach B: Docker Compose Import | Teams already using docker-compose.yml for local development | Easy | None — import converts to Bunnyshell config automatically |
| Approach C: Helm Charts | Teams with existing Helm infrastructure or complex K8s needs | Advanced | Optional — can use CLI or Bunnyshell UI |
All three approaches end the same way: a toggle in Bunnyshell Settings that enables automatic preview environments for every PR. No GitHub Actions, no GitLab CI pipelines to maintain — Bunnyshell adds webhooks to your Git provider and listens for PR events.
Prerequisites: Prepare Your Flask App
Regardless of which approach you choose, your Flask app needs two things: a Dockerfile and the right configuration for running behind a Kubernetes ingress.
1. Create a Production-Ready Dockerfile
If your Flask project doesn't already have a Dockerfile:
```dockerfile
FROM python:3.12-slim AS base

ENV PYTHONDONTWRITEBYTECODE=1 \
    PYTHONUNBUFFERED=1 \
    PIP_NO_CACHE_DIR=1

WORKDIR /app

# Install system dependencies (for psycopg2)
RUN apt-get update && apt-get install -y --no-install-recommends \
    libpq-dev gcc && \
    rm -rf /var/lib/apt/lists/*

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt gunicorn

# Copy application code
COPY . .

EXPOSE 8000
CMD ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "app:app"]
```

Important: The app must listen on `0.0.0.0`, not `127.0.0.1` or `localhost`. This is required for container networking in Kubernetes. Replace `app:app` with your module and application object if they differ (e.g., `wsgi:application`).
Your requirements.txt should include at minimum:
```text
Flask>=3.0
Flask-SQLAlchemy
Flask-Migrate
psycopg2-binary
Flask-Caching
redis
celery
```

2. Configure Flask for Kubernetes
Flask running behind a Kubernetes ingress (which terminates TLS) needs `ProxyFix` middleware so it correctly handles `X-Forwarded-Proto` headers. Without this, `url_for` generates `http://` URLs even when the user is on HTTPS, and cookie security flags may misbehave.
```python
# app.py (or your application factory)
import os
from flask import Flask
from werkzeug.middleware.proxy_fix import ProxyFix

def create_app():
    app = Flask(__name__)

    # Trust X-Forwarded-Proto and X-Forwarded-Host from the ingress
    # This makes url_for() generate https:// URLs correctly
    app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1)

    # Flask config from environment variables
    app.config['SECRET_KEY'] = os.environ.get('SECRET_KEY', 'change-me-in-production')
    app.config['FLASK_ENV'] = os.environ.get('FLASK_ENV', 'production')

    # SQLAlchemy database URL
    app.config['SQLALCHEMY_DATABASE_URI'] = os.environ.get(
        'DATABASE_URL',
        'postgresql://flask:flask@localhost:5432/flask_db'
    )
    app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False

    # Redis-backed caching (Flask-Caching)
    redis_url = os.environ.get('REDIS_URL', 'redis://localhost:6379/0')
    app.config['CACHE_TYPE'] = 'RedisCache'
    app.config['CACHE_REDIS_URL'] = redis_url

    # Celery broker
    app.config['CELERY_BROKER_URL'] = redis_url
    app.config['CELERY_RESULT_BACKEND'] = redis_url

    from .extensions import db, migrate, cache
    db.init_app(app)
    migrate.init_app(app, db)
    cache.init_app(app)

    from .routes import main_bp
    app.register_blueprint(main_bp)

    return app

app = create_app()
```

If you use an application factory pattern (`create_app()`), make sure the CMD in your Dockerfile references the created app object: `gunicorn -w 4 -b 0.0.0.0:8000 "app:create_app()"`. The quotes are required when using the factory pattern with Gunicorn.
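The effect of ProxyFix can be illustrated without Flask at all. The helper below is a simplified, hypothetical stand-in for what `ProxyFix(x_proto=1, x_host=1)` does to the WSGI environ (it is not the real Werkzeug implementation, and the hostname is made up):

```python
# Simplified illustration of ProxyFix(x_proto=1, x_host=1) -- NOT the real
# werkzeug implementation, just the idea: trust one proxy hop's headers.
def apply_proxy_headers(environ):
    """Rewrite a WSGI environ the way ProxyFix would."""
    forwarded_proto = environ.get("HTTP_X_FORWARDED_PROTO")
    if forwarded_proto:  # x_proto=1: trust one X-Forwarded-Proto value
        environ["wsgi.url_scheme"] = forwarded_proto
    forwarded_host = environ.get("HTTP_X_FORWARDED_HOST")
    if forwarded_host:   # x_host=1: trust one X-Forwarded-Host value
        environ["HTTP_HOST"] = forwarded_host
    return environ

# Without the rewrite, URL building sees the scheme the pod received (http);
# with it, generated URLs use the scheme the user actually used (https).
environ = {
    "wsgi.url_scheme": "http",       # ingress terminated TLS, spoke plain HTTP
    "HTTP_HOST": "10.0.3.17:8000",   # pod-internal address
    "HTTP_X_FORWARDED_PROTO": "https",
    "HTTP_X_FORWARDED_HOST": "app.preview.example.com",
}
apply_proxy_headers(environ)
print(environ["wsgi.url_scheme"], environ["HTTP_HOST"])
# → https app.preview.example.com
```

This is why skipping ProxyFix produces `http://` links and mixed-content warnings: the pod itself never sees HTTPS, only the ingress does.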
Flask Deployment Checklist
- `ProxyFix` middleware applied (`x_proto=1, x_host=1`)
- `SECRET_KEY` loaded from environment variable
- `DATABASE_URL` or individual `DB_*` vars configured
- `REDIS_URL` configured for caching and Celery broker
- App listens on `0.0.0.0:8000` (not localhost)
- `FLASK_ENV=production` set in Kubernetes
- `flask db upgrade` runs on container startup (or as a lifecycle hook)
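One note on the `DB_*` item: if you assemble `DATABASE_URL` yourself rather than receiving it whole, URL-encode the password, since generated secrets often contain `@`, `/`, or `:`. A standard-library sketch (the helper name and `DB_*` defaults are illustrative, not a Bunnyshell or Flask convention):

```python
import os
from urllib.parse import quote

def build_database_url(env=None):
    """Prefer DATABASE_URL; otherwise assemble it from DB_* parts."""
    env = os.environ if env is None else env
    if env.get("DATABASE_URL"):
        return env["DATABASE_URL"]
    # URL-encode the password so @, /, : etc. don't break the URL
    password = quote(env.get("DB_PASSWORD", ""), safe="")
    return (
        f"postgresql://{env.get('DB_USER', 'flask')}:{password}"
        f"@{env.get('DB_HOST', 'postgres')}:{env.get('DB_PORT', '5432')}"
        f"/{env.get('DB_NAME', 'flask_db')}"
    )

print(build_database_url({"DB_PASSWORD": "p@ss/word"}))
# → postgresql://flask:p%40ss%2Fword@postgres:5432/flask_db
```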
Approach A: Bunnyshell UI — Zero CI/CD Maintenance
This is the easiest approach. You connect your repo, paste a YAML config, deploy, and flip a toggle. No CI/CD pipelines to write or maintain — Bunnyshell automatically adds webhooks to your Git provider and creates/destroys preview environments when PRs are opened/closed.
Step 1: Create a Project and Environment
- Log into Bunnyshell
- Click Create project and name it (e.g., "Flask App")
- Inside the project, click Create environment and name it (e.g., "flask-main")
Step 2: Define the Environment Configuration
Click Configuration in your environment view and paste this bunnyshell.yaml:
```yaml
kind: Environment
name: flask-preview
type: primary

environmentVariables:
  SECRET_KEY: SECRET["your-flask-secret-key"]
  DB_PASSWORD: SECRET["your-db-password"]

components:
  # ── Flask Application ──
  - kind: Application
    name: flask-app
    gitRepo: 'https://github.com/your-org/your-flask-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        FLASK_ENV: production
        SECRET_KEY: '{{ env.vars.SECRET_KEY }}'
        DATABASE_URL: 'postgresql://flask:{{ env.vars.DB_PASSWORD }}@postgres:5432/flask_db'
        REDIS_URL: 'redis://redis:6379/0'
      ports:
        - '8000:8000'
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 8000
    dependsOn:
      - postgres
      - redis

  # ── PostgreSQL Database ──
  - kind: Database
    name: postgres
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: flask_db
        POSTGRES_USER: flask
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

  # ── Redis Cache / Celery Broker ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'

volumes:
  - name: postgres-data
    mount:
      component: postgres
      containerPath: /var/lib/postgresql/data
    size: 1Gi
```

Replace `your-org/your-flask-repo` with your actual repository. Save the configuration.
Step 3: Deploy
Click the Deploy button, select your Kubernetes cluster, and click Deploy Environment. Bunnyshell will:
- Build your Flask Docker image from the Dockerfile
- Pull PostgreSQL and Redis images
- Deploy everything into an isolated Kubernetes namespace
- Generate HTTPS URLs automatically with DNS
Monitor the deployment in the environment detail page. When status shows Running, click Endpoints to access your live Flask app.
Step 4: Run Database Migrations
After deployment, run Flask-Migrate (Alembic) migrations via the component's terminal in the Bunnyshell UI, or via CLI:
```bash
export BUNNYSHELL_TOKEN=your-api-token

# Get component IDs
bns components list --environment ENV_ID --output json | jq '._embedded.item[] | {id, name}'

# Run migrations
bns exec COMPONENT_ID -- flask db upgrade

# Optionally seed initial data
bns exec COMPONENT_ID -- flask seed-data
```

If you want migrations to run automatically on every deploy, add an initContainers step or a startup command in your Dockerfile ENTRYPOINT script. A common pattern is an `entrypoint.sh` that runs `flask db upgrade && exec gunicorn ...`.
Step 5: Enable Automatic Preview Environments
This is the magic step — no CI/CD configuration needed:
- In your environment, go to Settings
- Find the Ephemeral environments section
- Toggle "Create ephemeral environments on pull request" to ON
- Toggle "Destroy environment after merge or close pull request" to ON
- Select the Kubernetes cluster for ephemeral environments
That's it. Bunnyshell automatically adds a webhook to your Git provider (GitHub, GitLab, or Bitbucket). From now on:
- Open a PR → Bunnyshell creates an ephemeral environment with the PR's branch
- Push to PR → The environment redeploys with the latest changes
- Bunnyshell posts a comment on the PR with a link to the live deployment
- Merge or close the PR → The ephemeral environment is automatically destroyed
Note: The primary environment must be in Running or Stopped status before ephemeral environments can be created from it.
Approach B: Docker Compose Import
Already have a docker-compose.yml for local development? Bunnyshell can import it directly and convert it to its environment format. No manual YAML writing required.
Step 1: Add a docker-compose.yml to Your Repo
If you don't already have one, create docker-compose.yml in your repo root:
```yaml
version: '3.8'

services:
  flask-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8000:8000'
    environment:
      FLASK_ENV: development
      SECRET_KEY: 'dev-secret-key'
      DATABASE_URL: 'postgresql://flask:flask@postgres:5432/flask_db'
      REDIS_URL: 'redis://redis:6379/0'
    depends_on:
      - postgres
      - redis
    volumes:
      - .:/app

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: flask_db
      POSTGRES_USER: flask
      POSTGRES_PASSWORD: flask
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  postgres-data:
```

Step 2: Import into Bunnyshell
- Create a Project and Environment in Bunnyshell (same as Approach A, Step 1)
- Click Define environment
- Select your Git account and repository
- Set the branch (e.g.,
main) and the path todocker-compose.yml(use/if it's in the root) - Click Continue — Bunnyshell parses and validates your Docker Compose file
Bunnyshell automatically detects:
- All services (flask-app, postgres, redis)
- Exposed ports
- Build configurations (Dockerfiles)
- Volumes and volume mounts
- Environment variables
It converts everything into a bunnyshell.yaml environment definition.
Important: The `docker-compose.yml` is only read during the initial import. Subsequent changes to the file won't auto-propagate — edit the environment configuration in Bunnyshell instead.
Step 3: Adjust the Configuration
After import, go to Configuration in the environment view and update:
- Replace hardcoded secrets with `SECRET["..."]` syntax
- Remove the `volumes` bind mount (`.:/app`) — this is for local dev only, not Kubernetes
- Update `DATABASE_URL` using Bunnyshell interpolation:

```yaml
DATABASE_URL: 'postgresql://flask:{{ env.vars.DB_PASSWORD }}@postgres:5432/flask_db'
REDIS_URL: 'redis://redis:6379/0'
FLASK_ENV: production
```

- Add the `hosts` block so Bunnyshell generates the ingress URL:

```yaml
hosts:
  - hostname: 'app-{{ env.base_domain }}'
    path: /
    servicePort: 8000
```

Step 4: Deploy and Enable Preview Environments
Same as Approach A — click Deploy, then go to Settings and toggle on ephemeral environments.
Best Practices for Docker Compose with Bunnyshell
- Use separate env files — Keep `.env` for local dev and configure production values in Bunnyshell's environment variables
- Design for startup resilience — Kubernetes doesn't guarantee `depends_on` ordering. Make your Flask app retry database connections on startup. A simple retry loop in your entrypoint script handles this:

```bash
#!/bin/sh
# entrypoint.sh
set -e

echo "Waiting for postgres..."
until flask db upgrade 2>/dev/null; do
  echo "DB not ready yet, retrying in 2s..."
  sleep 2
done

exec gunicorn -w 4 -b 0.0.0.0:8000 app:app
```

- Use Bunnyshell interpolation for dynamic values like URLs:

```yaml
# Local docker-compose.yml
BACKEND_URL: http://localhost:8000

# Bunnyshell environment config (after import)
BACKEND_URL: 'https://{{ components.flask-app.ingress.hosts[0] }}'
```

Approach C: Helm Charts
For teams with existing Helm infrastructure or complex Kubernetes requirements (custom ingress, service mesh, advanced HPA). Helm gives you full control over every Kubernetes resource.
Step 1: Create a Helm Chart
Structure your Flask Helm chart in your repo:
```text
helm/flask/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── configmap.yaml
```

A minimal `values.yaml`:
```yaml
replicaCount: 1
image:
  repository: ""
  tag: latest
service:
  port: 8000
ingress:
  enabled: true
  className: bns-nginx
  host: ""
env:
  FLASK_ENV: production
  SECRET_KEY: ""
  DATABASE_URL: ""
  REDIS_URL: ""
```

Step 2: Define the Bunnyshell Configuration
Create a bunnyshell.yaml using Helm components:
```yaml
kind: Environment
name: flask-helm
type: primary

environmentVariables:
  SECRET_KEY: SECRET["your-flask-secret-key"]
  DB_PASSWORD: SECRET["your-db-password"]
  POSTGRES_DB: flask_db
  POSTGRES_USER: flask

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: flask-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-flask-repo.git'
    gitBranch: main
    gitApplicationPath: /

  # ── PostgreSQL via Helm ──
  - kind: Helm
    name: postgres
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > pg_values.yaml
        global:
          storageClass: bns-network-sc
        auth:
          postgresPassword: {{ env.vars.DB_PASSWORD }}
          database: {{ env.vars.POSTGRES_DB }}
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f pg_values.yaml postgres bitnami/postgresql --version 11.9.11'
      - |
        POSTGRES_HOST="postgres-postgresql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall postgres --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    exportVariables:
      - POSTGRES_HOST

  # ── Flask App via Helm ──
  - kind: Helm
    name: flask-app
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > flask_values.yaml
        replicaCount: 1
        image:
          repository: {{ components.flask-image.image }}
        service:
          port: 8000
        ingress:
          enabled: true
          className: bns-nginx
          host: app-{{ env.base_domain }}
        env:
          FLASK_ENV: production
          SECRET_KEY: '{{ env.vars.SECRET_KEY }}'
          DATABASE_URL: 'postgresql://{{ env.vars.POSTGRES_USER }}:{{ env.vars.DB_PASSWORD }}@{{ components.postgres.exported.POSTGRES_HOST }}:5432/{{ env.vars.POSTGRES_DB }}'
          REDIS_URL: 'redis://redis:6379/0'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f flask_values.yaml flask-{{ env.unique }} ./helm/flask'
    destroy:
      - 'helm uninstall flask-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 flask-{{ env.unique }} ./helm/flask'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 flask-{{ env.unique }} ./helm/flask'
    gitRepo: 'https://github.com/your-org/your-flask-repo.git'
    gitBranch: main
    gitApplicationPath: /helm/flask

  # ── Redis ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'
```

Key: Always include `--post-renderer /bns/helpers/helm/bns_post_renderer` in your helm commands. This adds labels so Bunnyshell can track resources, show logs, and manage component lifecycle.
Step 3: Deploy and Enable Preview Environments
Same flow: paste the config in Configuration, hit Deploy, then enable ephemeral environments in Settings.
Enabling Preview Environments (All Approaches)
Regardless of which approach you used, enabling automatic preview environments is the same:
- Ensure your primary environment has been deployed at least once (Running or Stopped status)
- Go to Settings in your environment
- Toggle "Create ephemeral environments on pull request" → ON
- Toggle "Destroy environment after merge or close pull request" → ON
- Select the target Kubernetes cluster
What happens next:
- Bunnyshell adds a webhook to your Git provider automatically
- When a developer opens a PR, Bunnyshell creates an ephemeral environment cloned from the primary, using the PR's branch
- Bunnyshell posts a comment on the PR with a direct link to the running deployment
- When the PR is merged or closed, the ephemeral environment is automatically destroyed
No GitHub Actions. No GitLab CI pipelines. No maintenance. It just works.
Optional: CI/CD Integration via CLI
If you prefer to control preview environments from your CI/CD pipeline (e.g., to run `flask db upgrade` or load fixtures after deploy), use the Bunnyshell CLI:
```bash
# Install
brew install bunnyshell/tap/bunnyshell-cli

# Authenticate
export BUNNYSHELL_TOKEN=your-api-token

# Create, deploy, and run migrations in one flow
bns environments create --from-path bunnyshell.yaml --name "pr-123" --project PROJECT_ID --k8s CLUSTER_ID
bns environments deploy --id ENV_ID --wait
bns exec COMPONENT_ID -- flask db upgrade
bns exec COMPONENT_ID -- flask seed-fixtures
```

Remote Development and Debugging
Bunnyshell makes it easy to develop and debug directly against any environment — primary or ephemeral.
Port Forwarding
Connect your local tools to the remote database or Redis:
```bash
# Forward PostgreSQL to local port 15432
bns port-forward 15432:5432 --component POSTGRES_COMPONENT_ID

# Connect with psql or any DB tool
psql -h localhost -p 15432 -U flask flask_db

# Forward Redis to local port 16379
bns port-forward 16379:6379 --component REDIS_COMPONENT_ID

# Connect with redis-cli
redis-cli -p 16379
```

Execute Flask Commands
```bash
# Run database migrations
bns exec COMPONENT_ID -- flask db upgrade

# Show current migration state
bns exec COMPONENT_ID -- flask db current

# Open an interactive shell
bns exec COMPONENT_ID -- flask shell

# Create a superuser (if you have a CLI command for it)
bns exec COMPONENT_ID -- flask create-admin --email admin@example.com

# Load fixture data
bns exec COMPONENT_ID -- flask seed-fixtures
```

Live Logs
```bash
# Stream logs in real time
bns logs --component COMPONENT_ID -f

# Last 200 lines
bns logs --component COMPONENT_ID --tail 200

# Logs from the last 5 minutes
bns logs --component COMPONENT_ID --since 5m
```

Live Code Sync
For active development, sync your local code changes to the remote container in real time:
```bash
bns remote-development up --component COMPONENT_ID
# Edit files locally — changes sync automatically to the running container
# Gunicorn will detect changes if you add the --reload flag
# When done:
bns remote-development down
```

For live code sync to trigger a Gunicorn reload, add `--reload` to your Gunicorn command during development: `gunicorn -w 1 --reload -b 0.0.0.0:8000 app:app`. Use a single worker (`-w 1`) with `--reload` to avoid race conditions.
Troubleshooting
| Issue | Solution |
|---|---|
| 502 Bad Gateway | Flask/Gunicorn isn't listening on 0.0.0.0:8000. Check your CMD in Dockerfile — must use -b 0.0.0.0:8000, not 127.0.0.1. |
| Mixed content / HTTPS errors | Apply ProxyFix middleware: app.wsgi_app = ProxyFix(app.wsgi_app, x_proto=1, x_host=1) |
| url_for() generates http:// URLs | Same fix — ProxyFix with x_proto=1 is required behind Kubernetes ingress |
| flask db upgrade fails | Ensure DB_HOST points to postgres (the Bunnyshell component name), not localhost |
| OperationalError: could not connect to server | PostgreSQL isn't ready yet. Add a retry loop in your entrypoint script |
| Redis connection refused | Verify REDIS_URL uses redis as hostname (the component name), not localhost |
| Celery workers can't reach broker | Pass REDIS_URL to your Celery worker component as CELERY_BROKER_URL |
| Service startup order issues | Kubernetes doesn't guarantee depends_on ordering. Add a startup retry in your entrypoint |
| SECRET_KEY not set | Always use SECRET["name"] syntax in bunnyshell.yaml to avoid committing secrets |
| 522 Connection timed out | Cluster may be behind a firewall. Verify Cloudflare IPs are whitelisted on the ingress controller. |
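Several of the rows above reduce to "a dependency wasn't ready yet." Besides the shell retry loop shown earlier, you can wait from Python before running migrations. A standard-library sketch (the helper name and defaults are ours, not a Bunnyshell or Flask API):

```python
import socket
import time

def wait_for_tcp(host, port, timeout=60.0, interval=2.0):
    """Block until host:port accepts TCP connections; raise on timeout."""
    deadline = time.monotonic() + timeout
    while True:
        try:
            # A successful connect means the service is at least listening
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not reachable after {timeout}s")
            time.sleep(interval)

# e.g. at the top of your startup code, before flask db upgrade:
# wait_for_tcp("postgres", 5432)
# wait_for_tcp("redis", 6379)
```

Note this only confirms the port is open; PostgreSQL can accept TCP connections slightly before it is ready for queries, so keep the migration retry as well.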
What's Next?
- Add Celery workers — Add another Application component that runs `celery -A app.celery worker`, sharing the same Redis component as the broker
- Seed test data — Run `bns exec <ID> -- flask seed-fixtures` after deploy for realistic preview data
- Add background beat scheduler — Add a Celery beat component for periodic tasks
- Monitor with Sentry — Pass `SENTRY_DSN` as an environment variable to capture errors in preview environments
- Add Nginx sidecar — For production-like static file serving alongside Gunicorn
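For the Celery worker item, a sketch of what the extra component might look like in the Approach A `bunnyshell.yaml`. The component name, command, and env wiring are illustrative assumptions; it reuses the same Dockerfile build as `flask-app` and points the broker at the existing `redis` component:

```yaml
# Hypothetical worker component -- same repo and env as flask-app,
# but the container runs a Celery worker instead of Gunicorn
- kind: Application
  name: celery-worker
  gitRepo: 'https://github.com/your-org/your-flask-repo.git'
  gitBranch: main
  gitApplicationPath: /
  dockerCompose:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'celery -A app.celery worker --loglevel=info'
    environment:
      FLASK_ENV: production
      SECRET_KEY: '{{ env.vars.SECRET_KEY }}'
      DATABASE_URL: 'postgresql://flask:{{ env.vars.DB_PASSWORD }}@postgres:5432/flask_db'
      CELERY_BROKER_URL: 'redis://redis:6379/0'
  dependsOn:
    - redis
    - postgres
```

No `hosts` block is needed since the worker serves no HTTP traffic; it scales up and down with the ephemeral environment like every other component.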
Related Resources
- Bunnyshell Quickstart Guide
- Docker Compose with Bunnyshell
- Helm with Bunnyshell
- Bunnyshell CLI Reference
- Ephemeral Environments — Learn more about the concept
- Preview Environments for Django — Django-specific guide
- All Guides — More technical guides
Ship faster starting today.
14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.