Preview Environments with Docker Compose: Import, Convert, and Deploy with Bunnyshell
Why Docker Compose for Preview Environments?
Most development teams already have a docker-compose.yml. It defines your frontend, API, database, cache, and any other services your application needs. It works great locally -- `docker compose up` and you're running.
But local-only means no one else can see your work. QA can't test your feature branch. Designers can't review the UI. Product managers can't verify the acceptance criteria. You end up deploying to a shared staging server and immediately running into the problems that shared environments always cause: merge conflicts, stale data, and "it works on my machine."
What if your Docker Compose setup could power automatic preview environments? Every pull request gets its own isolated deployment -- same services, same architecture -- but running in Kubernetes with a unique URL that anyone on your team can access.
That's exactly what Bunnyshell does. You import your docker-compose.yml, Bunnyshell converts it to a Kubernetes-native environment configuration, and from that point forward every PR automatically gets a full deployment.
You don't need to know Kubernetes. Bunnyshell handles the conversion from Docker Compose services to Kubernetes deployments, ingresses, and persistent volumes. You keep thinking in Compose terms.
How Bunnyshell Imports Docker Compose
When you point Bunnyshell at a docker-compose.yml in your repository, it reads every service definition and translates it into a bunnyshell.yaml environment configuration. Here's what happens under the hood:
- Service discovery -- Each `services:` entry becomes a Bunnyshell component (`Application`, `Service`, or `Database` kind)
- Build detection -- Services with `build:` get linked to your Git repository so Bunnyshell can build Docker images from your Dockerfiles
- Image passthrough -- Services using `image:` (like `mysql:8.0` or `redis:7-alpine`) are pulled directly from the registry
- Port mapping -- Exposed ports are converted to Kubernetes service ports, and the first HTTP port gets an automatic ingress with a generated HTTPS URL
- Volume conversion -- Named volumes become Kubernetes persistent volume claims (PVCs), and bind mounts are flagged for review
- Environment variables -- Passed through directly, with an opportunity to convert secrets to Bunnyshell's `SECRET["..."]` syntax
- Dependencies -- `depends_on` becomes `dependsOn` in the Bunnyshell config, controlling deployment order
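As a rough sketch of that mapping (simplified -- field names abbreviated, and the exact generated output may differ), a single Compose service with a `build:` section ends up as an `Application` component:

```yaml
# docker-compose.yml (input)
services:
  api:
    build:
      context: ./api
    ports:
      - '4000:4000'
---
# bunnyshell.yaml (simplified sketch of what the importer generates)
components:
  - kind: Application
    name: api
    gitApplicationPath: /api
    dockerCompose:
      build:
        context: ./api
      ports:
        - '4000:4000'
```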
The import is a one-time operation. After Bunnyshell generates the bunnyshell.yaml, you work with that configuration going forward. Changes to your docker-compose.yml won't auto-propagate -- but that's actually a good thing, because the Bunnyshell config will diverge from your local Compose file as you optimize for Kubernetes.
Prerequisites: Your docker-compose.yml
You need a working docker-compose.yml committed to your Git repository. Here's a real-world example -- a typical web application with a React frontend, Node.js API, PostgreSQL database, and Redis cache:
```yaml
version: '3.8'

services:
  frontend:
    build:
      context: ./frontend
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      REACT_APP_API_URL: 'http://localhost:4000'
    depends_on:
      - api

  api:
    build:
      context: ./api
      dockerfile: Dockerfile
    ports:
      - '4000:4000'
    environment:
      NODE_ENV: production
      DATABASE_URL: 'postgresql://appuser:secretpass@db:5432/myapp'
      REDIS_URL: 'redis://redis:6379'
      JWT_SECRET: 'my-jwt-secret'
      CORS_ORIGIN: 'http://localhost:3000'
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secretpass
    volumes:
      - pg-data:/var/lib/postgresql/data
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql
    ports:
      - '5432:5432'
    healthcheck:
      test: ['CMD-SHELL', 'pg_isready -U appuser -d myapp']
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
    ports:
      - '6379:6379'

volumes:
  pg-data:
```

This is a standard four-service stack. The frontend talks to the API, the API talks to PostgreSQL and Redis. Nothing exotic -- but it represents the kind of setup that teams actually use in production.
Your Dockerfiles must produce production-ready images (not just dev servers with hot reload). Bunnyshell builds the images and deploys them to Kubernetes -- there's no live-mounted source code by default.
Step-by-Step: Import Docker Compose into Bunnyshell
Step 1: Create a Project and Environment
- Log into Bunnyshell
- Click Create project and name it (e.g., "My Web App")
- Inside the project, click Create environment and name it (e.g., "main-branch")
Step 2: Import Your Docker Compose File
- In the new environment, click Define environment
- Select Import from Docker Compose
- Connect your Git account (GitHub, GitLab, or Bitbucket) if you haven't already
- Select your repository and branch (e.g., `main`)
- Set the path to your `docker-compose.yml` (use `/` if it's in the repository root, or specify a subdirectory like `/infra/`)
- Click Continue
Bunnyshell parses the file and shows you a preview of the detected services:
- frontend -- build from `./frontend/Dockerfile`, exposes port 3000
- api -- build from `./api/Dockerfile`, exposes port 4000
- db -- image `postgres:16-alpine`, exposes port 5432
- redis -- image `redis:7-alpine`, exposes port 6379
Review the detection and click Import.
Step 3: Review the Generated Configuration
After import, navigate to Configuration in the environment view. Bunnyshell has generated a bunnyshell.yaml that looks approximately like this:
```yaml
kind: Environment
name: my-web-app
type: primary

components:
  - kind: Application
    name: frontend
    gitRepo: 'https://github.com/your-org/your-repo.git'
    gitBranch: main
    gitApplicationPath: /frontend
    dockerCompose:
      build:
        context: ./frontend
        dockerfile: Dockerfile
      ports:
        - '3000:3000'
      environment:
        REACT_APP_API_URL: 'http://localhost:4000'
    hosts:
      - hostname: 'frontend-{{ env.base_domain }}'
        path: /
        servicePort: 3000

  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/your-repo.git'
    gitBranch: main
    gitApplicationPath: /api
    dockerCompose:
      build:
        context: ./api
        dockerfile: Dockerfile
      ports:
        - '4000:4000'
      environment:
        NODE_ENV: production
        DATABASE_URL: 'postgresql://appuser:secretpass@db:5432/myapp'
        REDIS_URL: 'redis://redis:6379'
        JWT_SECRET: 'my-jwt-secret'
        CORS_ORIGIN: 'http://localhost:3000'
    dependsOn:
      - db
      - redis
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 4000

  - kind: Database
    name: db
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: myapp
        POSTGRES_USER: appuser
        POSTGRES_PASSWORD: secretpass
      ports:
        - '5432:5432'

  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
      ports:
        - '6379:6379'

volumes:
  - name: pg-data
    mount:
      component: db
      containerPath: /var/lib/postgresql/data
    size: 1Gi
```

This is a working starting point, but it needs adjustments. The next section covers exactly what to change.
Step 4: Adjust the Configuration
Several things from your local Compose file don't translate directly to a Kubernetes deployment. Here are the mandatory changes:
1. Replace hardcoded secrets with Bunnyshell secrets:
```yaml
environmentVariables:
  DB_PASSWORD: SECRET["your-db-password"]
  JWT_SECRET: SECRET["your-jwt-secret"]
```

Then reference them in components:
```yaml
environment:
  DATABASE_URL: 'postgresql://appuser:{{ env.vars.DB_PASSWORD }}@db:5432/myapp'
  JWT_SECRET: '{{ env.vars.JWT_SECRET }}'
```

2. Replace localhost URLs with Bunnyshell interpolation:
```yaml
# Before (won't work -- each service gets its own hostname)
REACT_APP_API_URL: 'http://localhost:4000'
CORS_ORIGIN: 'http://localhost:3000'

# After (dynamic URLs from Bunnyshell ingress)
REACT_APP_API_URL: 'https://{{ components.api.ingress.hosts[0] }}'
CORS_ORIGIN: 'https://{{ components.frontend.ingress.hosts[0] }}'
```

3. Update database connection strings:
```yaml
# Before
DATABASE_URL: 'postgresql://appuser:secretpass@db:5432/myapp'

# After (with secret interpolation -- "db" hostname still works)
DATABASE_URL: 'postgresql://appuser:{{ env.vars.DB_PASSWORD }}@db:5432/myapp'
```

Service names from Docker Compose (like `db`, `redis`, `api`) still work as hostnames in Bunnyshell. Internally, Bunnyshell creates Kubernetes services with matching names, so `db:5432` resolves correctly within the environment's namespace.
4. Remove local bind mounts:

```yaml
# Remove this -- it's for local development only
volumes:
  - ./init.sql:/docker-entrypoint-initdb.d/init.sql
```

If you need init scripts, bake them into your Docker image instead:

```dockerfile
# e.g. a custom database image that carries the init script
FROM postgres:16-alpine
COPY init.sql /docker-entrypoint-initdb.d/init.sql
```

Step 5: Deploy
Click Deploy, select your Kubernetes cluster, and click Deploy Environment. Bunnyshell will:
- Build Docker images for `frontend` and `api` from your Dockerfiles
- Pull `postgres:16-alpine` and `redis:7-alpine` from Docker Hub
- Create a Kubernetes namespace and deploy all four services
- Provision persistent storage for the PostgreSQL data volume
- Generate HTTPS URLs for `frontend` and `api` with automatic TLS certificates
When the status shows Running, click Endpoints to see your live URLs.
What Gets Converted
Here's a detailed breakdown of how Docker Compose concepts map to Bunnyshell/Kubernetes:
| Docker Compose | Bunnyshell / Kubernetes | Notes |
|---|---|---|
| `services:` with `build:` | `kind: Application` component | Linked to your Git repo for automatic image builds |
| `services:` with `image:` (stateless) | `kind: Service` component | Pulls image directly from registry |
| `services:` with `image:` (database) | `kind: Database` component | Same as Service but categorized for clarity |
| `ports:` | Kubernetes Service + Ingress | First HTTP port automatically gets a public HTTPS URL |
| `volumes:` (named) | PersistentVolumeClaim (PVC) | Configurable size (default 1Gi) |
| `volumes:` (bind mount) | Not converted | Bind mounts are local-only; bake files into images |
| `environment:` | Pod environment variables | Passed through; add `SECRET["..."]` for sensitive values |
| `depends_on:` | `dependsOn:` | Controls deployment order |
| `networks:` | Kubernetes namespace networking | All components share a namespace; service names resolve automatically |
| `healthcheck:` | Kubernetes readiness/liveness probes | May need manual configuration |
| `command:` / `entrypoint:` | Pod `command:` | Passed through directly |
| `build.args:` | `build.args:` | Passed through directly |
Adapting Your Compose File for Bunnyshell
Environment Variables and Interpolation
Docker Compose uses `${VARIABLE}` syntax and `.env` files. Bunnyshell uses its own interpolation engine with `{{ }}` syntax:

```yaml
# Docker Compose style (local)
environment:
  API_URL: ${API_URL:-http://localhost:4000}

# Bunnyshell style (cloud)
environment:
  API_URL: 'https://{{ components.api.ingress.hosts[0] }}'
```

Common interpolation patterns:
```yaml
# Reference another component's URL
'https://{{ components.frontend.ingress.hosts[0] }}'

# Reference an environment-level variable
'{{ env.vars.DB_PASSWORD }}'

# Reference the environment's base domain
'app-{{ env.base_domain }}'

# Reference an exported variable from a Helm component
'{{ components.mysql.exported.MYSQL_HOST }}'
```

Exposed Ports and Ingress
In Docker Compose, `ports: ['3000:3000']` maps a container port to your localhost. In Bunnyshell, exposed ports become Kubernetes services, and you define `hosts:` to create an ingress (public HTTPS URL):

```yaml
hosts:
  - hostname: 'frontend-{{ env.base_domain }}'
    path: /
    servicePort: 3000
```

Only services that need external access need `hosts:`. Internal services (databases, caches) communicate via their component name as hostname -- no ingress required.
If your frontend and API run on different hostnames (which they will in Bunnyshell), you need proper CORS configuration. Update your API's CORS origin to use the Bunnyshell-interpolated frontend URL.
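As a minimal illustration (not Bunnyshell-specific), here's how an API might validate request origins against the injected `CORS_ORIGIN` variable -- the fallback URL and the exact header handling are assumptions for this sketch:

```javascript
// Sketch: compute CORS response headers from the CORS_ORIGIN env var.
// CORS_ORIGIN is assumed to be set via Bunnyshell interpolation, e.g.
// 'https://{{ components.frontend.ingress.hosts[0] }}'.
function corsHeaders(requestOrigin, allowedOrigin) {
  // Only echo the origin back if it matches the configured frontend URL.
  if (requestOrigin !== allowedOrigin) {
    return {}; // no CORS headers -- the browser blocks the cross-origin call
  }
  return {
    'Access-Control-Allow-Origin': allowedOrigin,
    'Access-Control-Allow-Credentials': 'true',
  };
}

// Hypothetical fallback for local runs; in a preview environment the
// variable comes from the Bunnyshell configuration.
const allowed = process.env.CORS_ORIGIN || 'https://frontend-abc123.example.com';
console.log(corsHeaders('https://frontend-abc123.example.com', allowed));
```

The same check can be plugged into whatever CORS middleware your framework uses; the point is that the allowed origin is configuration, not a hardcoded localhost URL.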
Build Context
Docker Compose `build.context` is relative to the Compose file. Bunnyshell's `gitApplicationPath` serves the same purpose but is relative to the repository root:

```yaml
# Docker Compose
build:
  context: ./frontend
  dockerfile: Dockerfile

# Bunnyshell (equivalent)
gitApplicationPath: /frontend
dockerCompose:
  build:
    context: ./frontend
    dockerfile: Dockerfile
```

Advanced Patterns
Multi-Stage Builds
Multi-stage Dockerfiles work exactly as expected. Bunnyshell builds the full Dockerfile and uses the final stage:
```dockerfile
# Stage 1: Build
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: Production
FROM node:20-alpine AS production
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/package.json ./
EXPOSE 4000
CMD ["node", "dist/main.js"]
```

If you need to target a specific stage, use `build.target`:
```yaml
dockerCompose:
  build:
    context: ./api
    dockerfile: Dockerfile
    target: production
```

depends_on and Deployment Order
Docker Compose `depends_on` with conditions (`service_healthy`, `service_started`) translates to Bunnyshell's `dependsOn`, but Kubernetes doesn't provide the same startup-ordering guarantees that Compose does. Your application should handle connection retries gracefully:
```yaml
# Bunnyshell config
dependsOn:
  - db
  - redis
```

Design for resilience: add connection retry logic to your application startup. Most frameworks (Express, Django, Rails, Spring) support this natively or via middleware. Don't rely on deployment order for correctness.
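A minimal retry sketch in Node.js -- illustrative only; the fake connect function stands in for whatever database or Redis client your app actually uses:

```javascript
// Retry an async connection attempt with a fixed delay between tries.
// connectFn is any function returning a Promise (e.g. a pg or redis connect).
async function connectWithRetry(connectFn, retries = 10, delayMs = 2000) {
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await connectFn();
    } catch (err) {
      if (attempt === retries) throw err; // out of attempts -- fail startup
      console.log(`Connection attempt ${attempt} failed, retrying...`);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}

// Example: a fake connect that fails twice, then succeeds.
let tries = 0;
const fakeConnect = async () => {
  tries += 1;
  if (tries < 3) throw new Error('not ready');
  return 'connected';
};

connectWithRetry(fakeConnect, 5, 10).then((result) => console.log(result));
```

With this in place, the `dependsOn` ordering becomes an optimization rather than a correctness requirement.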
Healthchecks
Docker Compose healthchecks map conceptually to Kubernetes readiness and liveness probes, but Bunnyshell doesn't auto-convert them -- configure probes manually in the component's advanced settings within the Bunnyshell UI. Other `deploy:` settings, such as resource limits, can still be expressed in the `dockerCompose` block:

```yaml
dockerCompose:
  image: 'postgres:16-alpine'
  deploy:
    resources:
      limits:
        memory: 512M
```
Shared Volumes Between Services
In Docker Compose, two services can share a named volume. In Bunnyshell, you achieve this with `shared_paths` on sidecar containers, or by using a shared PVC:

```yaml
volumes:
  - name: shared-uploads
    mount:
      component: api
      containerPath: /app/uploads
    size: 5Gi
```

If another service needs the same data, consider using an object storage service (like MinIO) instead of shared volumes -- it's more reliable in Kubernetes.
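A hypothetical MinIO component sketch -- the image tag, credentials, and the `MINIO_PASSWORD` variable are assumptions, not part of the imported configuration:

```yaml
- kind: Service
  name: minio
  dockerCompose:
    image: 'minio/minio:latest'
    command: server /data --console-address ':9001'
    environment:
      MINIO_ROOT_USER: appuser
      MINIO_ROOT_PASSWORD: '{{ env.vars.MINIO_PASSWORD }}'
    ports:
      - '9000:9000'
```

Services would then read and write uploads through the S3 API at `http://minio:9000` instead of a shared filesystem path.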
Init Containers and Startup Scripts
Docker Compose doesn't have a native init container concept. In Bunnyshell, you can run initialization commands using deploy hooks or by wrapping your entrypoint:
```shell
#!/bin/sh
# entrypoint.sh
echo "Running migrations..."
npx prisma migrate deploy
echo "Starting server..."
exec node dist/main.js
```

```yaml
dockerCompose:
  build:
    context: ./api
    dockerfile: Dockerfile
  command: ['sh', '/app/entrypoint.sh']
```

Environment Files (.env)
Docker Compose supports `env_file:` directives. Bunnyshell doesn't read `.env` files directly. Instead, define all variables explicitly in the configuration:
```yaml
# Don't use this in Bunnyshell
env_file:
  - .env.production

# Do this instead
environment:
  NODE_ENV: production
  DATABASE_URL: 'postgresql://appuser:{{ env.vars.DB_PASSWORD }}@db:5432/myapp'
  REDIS_URL: 'redis://redis:6379'
```

This is actually better for preview environments because every variable is visible in the configuration, making debugging easier.
Enabling Preview Environments
Once your primary environment is deployed and running, enabling automatic preview environments takes 30 seconds:
- In your environment, go to Settings
- Find Ephemeral environments
- Toggle "Create ephemeral environments on pull request" to ON
- Toggle "Destroy environment after merge or close pull request" to ON
- Select the Kubernetes cluster for preview deployments
What happens next:
- Bunnyshell automatically adds a webhook to your Git provider
- When a developer opens a PR, Bunnyshell clones your primary environment configuration, swaps the branch to the PR's branch, and deploys a fully isolated environment
- A comment is posted on the PR with links to every service endpoint
- When the PR is merged or closed, the environment is destroyed automatically
No GitHub Actions. No GitLab CI pipelines. No Jenkinsfiles. The Git provider webhook triggers Bunnyshell directly.
The primary environment must be in Running or Stopped status before ephemeral environments can be created. Deploy at least once before enabling the toggle.
Optional: CLI-Driven Preview Environments
If you want to trigger preview environments from your CI/CD pipeline (for custom post-deploy scripts, database seeding, etc.):
```shell
# Install the Bunnyshell CLI
brew install bunnyshell/tap/bunnyshell-cli

# Authenticate
export BUNNYSHELL_TOKEN=your-api-token

# Create an environment from configuration
bns environments create \
  --from-path bunnyshell.yaml \
  --name "pr-${PR_NUMBER}" \
  --project PROJECT_ID \
  --k8s CLUSTER_ID

# Deploy and wait
bns environments deploy --id ENV_ID --wait

# Run post-deploy scripts
bns exec COMPONENT_ID -- npx prisma migrate deploy
bns exec COMPONENT_ID -- node scripts/seed.js
```

Common Pitfalls and Solutions
| Pitfall | Why it happens | Solution |
|---|---|---|
| `localhost` references between services | In Compose, services share a Docker network. In K8s, each service has its own hostname. | Use `{{ components.X.ingress.hosts[0] }}` for public URLs, or the component name (e.g., `api`, `db`) for internal communication. |
| Bind mounts (`./src:/app/src`) | Bind mounts reference your local filesystem. There's no local filesystem in Kubernetes. | Remove bind mounts. Your Dockerfile should `COPY` all necessary files into the image. |
| `.env` files not loaded | Bunnyshell doesn't support `env_file:` directives. | Define all environment variables explicitly in the `environment:` block. |
| Build context outside repo root | Compose allows `context: ../shared`. Bunnyshell builds from the Git repository root. | Restructure so all build contexts are within the repo, or use a monorepo approach. |
| Hardcoded passwords in config | Secrets in plain text are visible to anyone with environment access. | Use `SECRET["..."]` and reference via `{{ env.vars.X }}`. |
| Large images / slow builds | No layer caching between builds by default. | Use multi-stage builds, `.dockerignore`, and minimize layers. Bunnyshell supports build caching -- enable it in component settings. |
| Port conflicts | Two services claiming the same port. | Each component gets its own pod in K8s -- port conflicts between services are impossible. Only within a single pod (sidecars) do ports need to be unique. |
| Healthcheck not converted | Bunnyshell doesn't auto-convert Compose healthchecks. | Add Kubernetes readiness/liveness probes manually in the component's advanced settings. |
| Named volumes too small | Default PVC size is 1Gi. | Adjust the `size:` field in the `volumes:` section of your bunnyshell.yaml. |
| Database data lost on redeploy | PVC not configured for the database component. | Ensure a volume is mounted at the database's data directory (e.g., `/var/lib/postgresql/data`). |
What's Next?
- Add more services -- Need Elasticsearch, RabbitMQ, or a worker process? Add them as `kind: Service` components with their Docker Hub images
- Set up remote development -- Use `bns remote-development up` to sync local code changes to a running preview environment in real time
- Configure auto-seeding -- Add post-deploy scripts to populate preview environments with test data
- Explore Helm charts -- For advanced Kubernetes needs (custom ingress rules, HPA, service mesh), check out the Helm approach
- Read about monorepos -- If your Docker Compose spans a monorepo, see Preview Environments for Monorepos
Related Resources
- Bunnyshell Docker Compose Quickstart
- Bunnyshell CLI Reference
- Ephemeral Environments -- Learn more about the concept
- Who Broke Staging? -- Why shared staging environments fail
- All Guides -- More technical guides
Ship faster starting today.
14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.