Preview Environments for Express/Node.js: Automated Per-PR Deployments with Bunnyshell
Guide · March 20, 2026 · 12 min read


Why Preview Environments for Express/Node.js?

Every Node.js team has lived through it: you add a new Prisma migration, test it locally, push to the shared staging server — and the migration fails because another engineer pushed a half-finished schema change an hour before. Or someone's background worker is running on staging and causing cascading errors that have nothing to do with your PR. Or the integration tests pass locally but fail in CI because the staging database has stale seed data from last week.

Preview environments solve this. Every pull request gets its own isolated deployment — Express app, PostgreSQL database, Redis for caching — running in Kubernetes with production-like configuration. Reviewers click a link and see the actual running app, not just the diff.

With Bunnyshell, you get:

  • Automatic deployment — A new environment spins up for every PR
  • Production parity — Same Docker images, same database engine, same environment variables
  • Isolation — Each PR environment is fully independent, no shared staging conflicts
  • Automatic cleanup — Environments are destroyed when the PR is merged or closed

Choose Your Approach

Bunnyshell supports three ways to set up preview environments for Express/Node.js. Pick the one that fits your workflow:

| Approach | Best for | Complexity | CI/CD maintenance |
| --- | --- | --- | --- |
| Approach A: Bunnyshell UI | Teams that want the fastest setup with zero pipeline maintenance | Easiest | None — Bunnyshell manages webhooks automatically |
| Approach B: Docker Compose Import | Teams already using docker-compose.yml for local development | Easy | None — import converts to Bunnyshell config automatically |
| Approach C: Helm Charts | Teams with existing Helm infrastructure or complex K8s needs | Advanced | Optional — can use CLI or Bunnyshell UI |

All three approaches end the same way: a toggle in Bunnyshell Settings that enables automatic preview environments for every PR. No GitHub Actions, no GitLab CI pipelines to maintain — Bunnyshell adds webhooks to your Git provider and listens for PR events.

Prerequisites: Prepare Your Express App

Regardless of which approach you choose, your Express app needs a proper Docker setup and the right configuration for running behind a Kubernetes ingress.

1. Create a Production-Ready Dockerfile

Use a multi-stage build to keep the production image lean — build dependencies (TypeScript compiler, dev tools) are excluded from the final image:

Dockerfile
# ── Stage 1: Build ──
FROM node:20-alpine AS builder

WORKDIR /app

# Install dependencies (including devDependencies for build)
COPY package.json package-lock.json ./
RUN npm ci

# Copy source and build
COPY . .
RUN npm run build

# Prune dev dependencies
RUN npm prune --omit=dev

# ── Stage 2: Production ──
FROM node:20-alpine AS production

# Add non-root user for security
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nodejs -u 1001

WORKDIR /app

# Copy built artifacts and production dependencies
COPY --from=builder --chown=nodejs:nodejs /app/dist ./dist
COPY --from=builder --chown=nodejs:nodejs /app/node_modules ./node_modules
COPY --from=builder --chown=nodejs:nodejs /app/package.json ./

USER nodejs

EXPOSE 3000

# Docker-level healthcheck. Note: Kubernetes ignores Docker HEALTHCHECK and uses
# its own liveness/readiness probes, configured against the same /health endpoint
HEALTHCHECK --interval=30s --timeout=3s --start-period=10s --retries=3 \
  CMD node -e "require('http').get('http://localhost:3000/health', (r) => process.exit(r.statusCode === 200 ? 0 : 1)).on('error', () => process.exit(1))"

CMD ["node", "dist/server.js"]

Dockerfile notes: The multi-stage build keeps the final image under 150MB for a typical Express app. npm ci is used instead of npm install for deterministic, reproducible installs. The non-root user follows container security best practices for Kubernetes.

2. Configure Express for Kubernetes

Express needs specific settings to work correctly behind a Kubernetes ingress (which terminates TLS):

TypeScript
// src/app.ts
import express from 'express';
import helmet from 'helmet';
import apiRouter from './routes/api';

const app = express();

// Trust the ingress proxy — required for correct IP, protocol, and host headers
// In K8s, TLS is terminated at the ingress before reaching your app
app.set('trust proxy', true);

// Security headers (works correctly with trust proxy)
app.use(helmet({
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      scriptSrc: ["'self'"],
    },
  },
}));

app.use(express.json({ limit: '10mb' }));
app.use(express.urlencoded({ extended: true }));

// Health check endpoint — required for K8s liveness/readiness probes
app.get('/health', (req, res) => {
  res.status(200).json({
    status: 'ok',
    timestamp: new Date().toISOString(),
    uptime: process.uptime(),
  });
});

// Your routes here
app.use('/api', apiRouter);

export default app;
TypeScript
// src/server.ts
import app from './app';

const PORT = parseInt(process.env.PORT || '3000', 10);
const HOST = '0.0.0.0'; // Bind to all interfaces — required for K8s

const server = app.listen(PORT, HOST, () => {
  console.log(`Server running on ${HOST}:${PORT}`);
  console.log(`Environment: ${process.env.NODE_ENV}`);
});

// Graceful shutdown — important for Kubernetes pod termination
process.on('SIGTERM', () => {
  console.log('SIGTERM received. Closing server...');
  server.close(() => {
    console.log('Server closed');
    process.exit(0);
  });
});

process.on('SIGINT', () => {
  server.close(() => process.exit(0));
});

app.set('trust proxy', true) is required for Kubernetes deployments. Without it, Express sees every request as coming from the ingress pod's cluster-internal IP, which breaks rate limiters that rely on the client IP and can generate incorrect redirect URLs (http:// instead of https://). The trust proxy setting tells Express to read the real client IP and protocol from the X-Forwarded-* headers set by the ingress.
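To make this concrete, here is a simplified, hypothetical sketch of the resolution that trust proxy enables (the observable behavior, not Express's actual implementation): the leftmost X-Forwarded-For entry is treated as the client address, and X-Forwarded-Proto overrides the socket-level protocol.

```typescript
// Hypothetical sketch of what `trust proxy: true` gives you.
// Not Express internals — just the observable behavior.
function resolveClientIp(
  headers: Record<string, string | undefined>,
  socketAddress: string,
): string {
  const forwarded = headers['x-forwarded-for'];
  if (!forwarded) return socketAddress; // no proxy in front
  // Each hop appends its predecessor; the leftmost entry is the original client.
  return forwarded.split(',')[0].trim();
}

function resolveProtocol(
  headers: Record<string, string | undefined>,
  socketProtocol: 'http' | 'https',
): string {
  // TLS terminates at the ingress, so the socket reports http even for HTTPS requests.
  return headers['x-forwarded-proto'] ?? socketProtocol;
}
```

With trust proxy on, req.ip and req.protocol behave like these helpers; with it off, Express falls back to the raw socket values.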

3. Add a package.json with the Right Scripts

JSON
{
  "name": "my-express-app",
  "version": "1.0.0",
  "scripts": {
    "build": "tsc",
    "start": "node dist/server.js",
    "dev": "ts-node-dev --respawn --transpile-only src/server.ts",
    "migrate": "npx prisma migrate deploy",
    "migrate:dev": "npx prisma migrate dev",
    "db:seed": "npx ts-node prisma/seed.ts"
  },
  "dependencies": {
    "express": "^4.18.2",
    "helmet": "^7.1.0",
    "pg": "^8.11.3",
    "redis": "^4.6.12",
    "@prisma/client": "^5.8.0"
  },
  "devDependencies": {
    "typescript": "^5.3.3",
    "prisma": "^5.8.0",
    "@types/express": "^4.17.21",
    "@types/node": "^20.11.0",
    "ts-node-dev": "^2.0.0"
  }
}

4. Environment Variables

Update your .env.example to include Bunnyshell-friendly defaults:

.env
NODE_ENV=production
PORT=3000

DATABASE_URL=postgresql://express:password@postgres:5432/express_production

REDIS_URL=redis://redis:6379/0

# Optional: individual DB vars (if not using DATABASE_URL)
DB_HOST=postgres
DB_PORT=5432
DB_NAME=express_production
DB_USER=express
DB_PASSWORD=

# App secrets
SESSION_SECRET=
JWT_SECRET=
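Preview environments surface missing secrets quickly, so it helps to fail fast at boot rather than at the first request. A minimal validation sketch (hypothetical helper; the variable names match the .env above):

```typescript
// Return the names of required variables that are unset or empty.
function missingEnvVars(
  env: Record<string, string | undefined>,
  required: string[],
): string[] {
  return required.filter((key) => !env[key]);
}

// At startup (e.g. at the top of src/server.ts):
//   const missing = missingEnvVars(process.env, ['DATABASE_URL', 'SESSION_SECRET', 'JWT_SECRET']);
//   if (missing.length > 0) {
//     console.error(`Missing required environment variables: ${missing.join(', ')}`);
//     process.exit(1);
//   }
```

A pod that exits immediately with a clear message is much easier to diagnose from bns logs than one that limps along and 500s.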

Express Deployment Checklist

  • Multi-stage Dockerfile with Node 20 Alpine — builder + production stages
  • npm ci for deterministic installs
  • npm run build compiles TypeScript in builder stage
  • Production image only contains dist/ + production node_modules
  • app.set('trust proxy', true) for K8s ingress TLS termination
  • Server listens on 0.0.0.0 (not 127.0.0.1)
  • PORT read from environment variable (default 3000)
  • /health endpoint returns 200 OK for K8s probes
  • SIGTERM handler for graceful Kubernetes pod shutdown
  • DATABASE_URL or individual DB vars configured
  • npx prisma migrate deploy will be run post-deploy (if using Prisma)

Approach A: Bunnyshell UI — Zero CI/CD Maintenance

This is the easiest approach. You connect your repo, paste a YAML config, deploy, and flip a toggle. No CI/CD pipelines to write or maintain — Bunnyshell automatically adds webhooks to your Git provider and creates/destroys preview environments when PRs are opened/closed.

Step 1: Create a Project and Environment

  1. Log into Bunnyshell
  2. Click Create project and name it (e.g., "Express App")
  3. Inside the project, click Create environment and name it (e.g., "express-main")

Step 2: Define the Environment Configuration

Click Configuration in your environment view and paste this bunnyshell.yaml:

YAML
kind: Environment
name: express-preview
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-db-password"]
  SESSION_SECRET: SECRET["your-session-secret"]
  JWT_SECRET: SECRET["your-jwt-secret"]

components:
  # ── Express Application ──
  - kind: Application
    name: express-app
    gitRepo: 'https://github.com/your-org/your-express-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        NODE_ENV: production
        PORT: '3000'
        DATABASE_URL: 'postgresql://express:{{ env.vars.DB_PASSWORD }}@postgres:5432/express_production'
        REDIS_URL: 'redis://redis:6379/0'
        SESSION_SECRET: '{{ env.vars.SESSION_SECRET }}'
        JWT_SECRET: '{{ env.vars.JWT_SECRET }}'
      ports:
        - '3000:3000'
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 3000
    dependsOn:
      - postgres
      - redis

  # ── PostgreSQL Database ──
  - kind: Database
    name: postgres
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_USER: express
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
        POSTGRES_DB: express_production
      ports:
        - '5432:5432'

  # ── Redis (Cache + Sessions) ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      command: redis-server --appendonly yes
      ports:
        - '6379:6379'

volumes:
  - name: postgres-data
    mount:
      component: postgres
      containerPath: /var/lib/postgresql/data
    size: 2Gi
  - name: redis-data
    mount:
      component: redis
      containerPath: /data
    size: 512Mi

Key architecture notes:

  • DATABASE_URL — Prisma, Sequelize, TypeORM, and Knex all support DATABASE_URL natively. It's simpler than configuring individual host/port/name vars
  • dependsOn — Ensures Bunnyshell starts PostgreSQL and Redis before the Express app. In Kubernetes this means the pods start in order (though you should still handle connection retries in your app)
  • Volumes — PostgreSQL and Redis data are persisted on persistent volumes (PVCs) in Kubernetes, so data survives pod restarts
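If a tool does need the individual parts, a single DATABASE_URL can be decomposed with the standard WHATWG URL API instead of maintaining parallel DB_* variables. A sketch (the connection string shown is a placeholder):

```typescript
// Decompose a Postgres connection string into its individual parts.
function parseDatabaseUrl(databaseUrl: string) {
  const url = new URL(databaseUrl);
  return {
    host: url.hostname,
    port: url.port || '5432', // Postgres default when the port is omitted
    user: decodeURIComponent(url.username),
    password: decodeURIComponent(url.password),
    database: url.pathname.replace(/^\//, ''),
  };
}
```

For example, parseDatabaseUrl('postgresql://express:secret@postgres:5432/express_production').host yields 'postgres'. Keeping DATABASE_URL as the single source of truth avoids the two sets of variables drifting apart between environments.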

Replace your-org/your-express-repo with your actual repository. Save the configuration.

Step 3: Deploy

Click the Deploy button, select your Kubernetes cluster, and click Deploy Environment. Bunnyshell will:

  1. Build your Express Docker image from the multi-stage Dockerfile
  2. Pull PostgreSQL and Redis images
  3. Deploy everything into an isolated Kubernetes namespace
  4. Generate HTTPS URLs automatically with DNS

Monitor the deployment in the environment detail page. When status shows Running, click Endpoints to access your live Express app.

Step 4: Run Post-Deploy Commands

After deployment, run database migrations via the component's terminal in the Bunnyshell UI, or via CLI:

Bash
export BUNNYSHELL_TOKEN=your-api-token
bns components list --environment ENV_ID --output json | jq '._embedded.item[] | {id, name}'

# Run Prisma migrations
bns exec COMPONENT_ID -- npx prisma migrate deploy

# Seed initial data (if applicable)
bns exec COMPONENT_ID -- node dist/prisma/seed.js

# Verify the app is healthy
bns exec COMPONENT_ID -- node -e "require('http').get('http://localhost:3000/health', r => { let d=''; r.on('data',c=>d+=c); r.on('end',()=>console.log(d)); })"

For ephemeral environments, consider automating migrations in your Docker entrypoint or as a Kubernetes init container. For simpler setups, running npx prisma migrate deploy in the Dockerfile CMD (before starting the server) also works well for preview environments where startup time is acceptable.
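A sketch of that Dockerfile CMD variant, assuming Prisma (prisma migrate deploy is idempotent, so re-running it on every container start is safe). Note that the prisma CLI is a devDependency in the example package.json and gets pruned in the builder stage, so you would need to move it to dependencies, or copy it into the image explicitly, for this to work:

```dockerfile
# Replace the plain CMD in the production stage with a migrate-then-start form.
# Shell form is needed so the two commands can be chained; `exec` replaces the
# shell so SIGTERM reaches the Node process directly for graceful shutdown.
CMD ["sh", "-c", "npx prisma migrate deploy && exec node dist/server.js"]
```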

Step 5: Enable Automatic Preview Environments

This is the magic step — no CI/CD configuration needed:

  1. In your environment, go to Settings
  2. Find the Ephemeral environments section
  3. Toggle "Create ephemeral environments on pull request" to ON
  4. Toggle "Destroy environment after merge or close pull request" to ON
  5. Select the Kubernetes cluster for ephemeral environments

That's it. Bunnyshell automatically adds a webhook to your Git provider (GitHub, GitLab, or Bitbucket). From now on:

  • Open a PR → Bunnyshell creates an ephemeral environment with the PR's branch
  • Push to PR → The environment redeploys with the latest changes
  • Bunnyshell posts a comment on the PR with a link to the live deployment
  • Merge or close the PR → The ephemeral environment is automatically destroyed

Note: The primary environment must be in Running or Stopped status before ephemeral environments can be created from it.


Approach B: Docker Compose Import

Already have a docker-compose.yml for local development? Most Express projects do. Bunnyshell can import it directly and convert it to its environment format. No manual YAML writing required.

Step 1: Add a docker-compose.yml to Your Repo

If you don't already have one, create docker-compose.yml in your repo root:

YAML
version: '3.8'

services:
  express-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '3000:3000'
    environment:
      NODE_ENV: development
      PORT: '3000'
      DATABASE_URL: postgresql://express:secret@postgres:5432/express_development
      REDIS_URL: redis://redis:6379/0
      SESSION_SECRET: dev-session-secret-not-for-production
      JWT_SECRET: dev-jwt-secret-not-for-production
    volumes:
      - .:/app
      - /app/node_modules
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
    command: npm run dev

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: express
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: express_development
    volumes:
      - postgres-data:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    healthcheck:
      test: ['CMD', 'pg_isready', '-U', 'express']
      interval: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    command: redis-server --appendonly yes
    volumes:
      - redis-data:/data
    ports:
      - '6379:6379'

  # Optional: Adminer for database inspection in dev
  adminer:
    image: adminer:4
    ports:
      - '8080:8080'
    depends_on:
      - postgres

volumes:
  postgres-data:
  redis-data:

Step 2: Import into Bunnyshell

  1. Create a Project and Environment in Bunnyshell (same as Approach A, Step 1)
  2. Click Define environment
  3. Select your Git account and repository
  4. Set the branch (e.g., main) and the path to docker-compose.yml (use / if it's in the root)
  5. Click Continue — Bunnyshell parses and validates your Docker Compose file

Bunnyshell automatically detects:

  • All services (express-app, postgres, redis, adminer)
  • Exposed ports
  • Build configurations (Dockerfiles)
  • Volumes
  • Environment variables

It converts everything into a bunnyshell.yaml environment definition.

Important: The docker-compose.yml is only read during the initial import. Subsequent changes to the file won't auto-propagate — edit the environment configuration in Bunnyshell instead.

Step 3: Adjust the Configuration

After import, go to Configuration in the environment view and update:

Replace hardcoded secrets with SECRET["..."] syntax:

YAML
environmentVariables:
  DB_PASSWORD: SECRET["your-db-password"]
  SESSION_SECRET: SECRET["your-session-secret"]
  JWT_SECRET: SECRET["your-jwt-secret"]

Add dynamic URLs using Bunnyshell interpolation:

YAML
environment:
  APP_URL: 'https://{{ components.express-app.ingress.hosts[0] }}'
  ALLOWED_ORIGINS: 'https://{{ components.express-app.ingress.hosts[0] }}'
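On the application side, that interpolated value can then be split into a list for your CORS middleware. A sketch, assuming ALLOWED_ORIGINS is a comma-separated string:

```typescript
// Parse ALLOWED_ORIGINS into an array usable by a CORS middleware.
function parseAllowedOrigins(raw: string | undefined): string[] {
  return (raw ?? '')
    .split(',')
    .map((origin) => origin.trim())
    .filter((origin) => origin.length > 0);
}

// e.g. with the `cors` package (hypothetical wiring):
//   app.use(cors({ origin: parseAllowedOrigins(process.env.ALLOWED_ORIGINS) }));
```

Because each preview environment gets its own hostname, deriving the allowed origins from the environment variable means no code changes are needed per PR.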

Switch to production mode:

YAML
environment:
  NODE_ENV: production
  DATABASE_URL: 'postgresql://express:{{ env.vars.DB_PASSWORD }}@postgres:5432/express_production'

Remove dev volumes and tools — Remove the adminer service (not needed in preview environments) and the volumes: ['.:/app'] bind mount (the Docker image contains the built code):

YAML
# Remove:
# volumes:
#   - .:/app
#   - /app/node_modules
# Also remove the adminer service component

Switch from dev command to production:

YAML
# The import will have captured 'npm run dev' from docker-compose.yml.
# Either update the command for production:
# command: npm start
# ...or remove command: entirely and fall back to the Dockerfile's CMD

Step 4: Deploy and Enable Preview Environments

Same as Approach A — click Deploy, then go to Settings and toggle on ephemeral environments.

Best Practices for Docker Compose with Bunnyshell

  • Remove dev bind mounts — volumes: ['.:/app'] is for live code reload. In Bunnyshell, the built image contains the code
  • Remove dev-only services — Adminer, Mailhog, and similar tools add unnecessary complexity to preview environments (unless you deliberately want them)
  • Use Bunnyshell interpolation for dynamic values like URLs and CORS origins
  • Switch NODE_ENV from development to production after import
  • Design for startup resilience — Kubernetes doesn't guarantee depends_on ordering. Add connection retry logic in your database initialization:
TypeScript
// src/db.ts
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

export async function connectWithRetry(retries = 5, delay = 2000): Promise<void> {
  for (let i = 0; i < retries; i++) {
    try {
      await prisma.$connect();
      console.log('Database connected');
      return;
    } catch (error) {
      console.log(`DB connection attempt ${i + 1}/${retries} failed. Retrying in ${delay}ms...`);
      if (i < retries - 1) await new Promise(r => setTimeout(r, delay));
    }
  }
  throw new Error('Failed to connect to database after multiple retries');
}
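The same pattern generalizes beyond Prisma. A hypothetical helper that retries any async connect function (Redis, message brokers, and so on):

```typescript
// Retry an arbitrary async operation with a fixed delay between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 5,
  delayMs = 2000,
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      lastError = error;
      // Wait before the next attempt, except after the final failure.
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

// e.g. await withRetry(() => redisClient.connect(), 5, 2000);
```

This keeps startup resilient no matter which dependency comes up last in the namespace.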

Approach C: Helm Charts

For teams with existing Helm infrastructure or complex Kubernetes requirements (custom ingress, service mesh, multiple replicas, advanced scaling). Helm gives you full control over every Kubernetes resource.

Step 1: Create a Helm Chart

Structure your Express Helm chart in your repo:

Text
helm/express/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    ├── configmap.yaml
    ├── secret.yaml
    └── migration-job.yaml

A minimal values.yaml:

YAML
replicaCount: 1

image:
  repository: ""
  tag: latest
  pullPolicy: IfNotPresent

service:
  port: 3000

ingress:
  enabled: true
  className: bns-nginx
  host: ""

env:
  NODE_ENV: production
  PORT: "3000"
  DATABASE_URL: ""
  REDIS_URL: ""
  SESSION_SECRET: ""
  JWT_SECRET: ""

resources:
  requests:
    memory: "128Mi"
    cpu: "100m"
  limits:
    memory: "512Mi"
    cpu: "500m"

livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 15
  periodSeconds: 20

readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 10

Step 2: Define the Bunnyshell Configuration

Create a bunnyshell.yaml using Helm components:

YAML
kind: Environment
name: express-helm
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-db-password"]
  SESSION_SECRET: SECRET["your-session-secret"]
  JWT_SECRET: SECRET["your-jwt-secret"]

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: express-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-express-repo.git'
    gitBranch: main
    gitApplicationPath: /

  # ── PostgreSQL via Helm (Bitnami) ──
  - kind: Helm
    name: postgres
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > pg_values.yaml
          global:
            storageClass: bns-network-sc
          auth:
            username: express
            password: {{ env.vars.DB_PASSWORD }}
            database: express_production
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f pg_values.yaml postgres bitnami/postgresql --version 12.12.10'
      - |
        PG_HOST="postgres-postgresql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall postgres --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    exportVariables:
      - PG_HOST

  # ── Express App via Helm ──
  - kind: Helm
    name: express-app
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > express_values.yaml
          replicaCount: 1
          image:
            repository: {{ components.express-image.image }}
          service:
            port: 3000
          ingress:
            enabled: true
            className: bns-nginx
            host: app-{{ env.base_domain }}
          env:
            NODE_ENV: production
            PORT: '3000'
            DATABASE_URL: 'postgresql://express:{{ env.vars.DB_PASSWORD }}@{{ components.postgres.exported.PG_HOST }}/express_production'
            REDIS_URL: 'redis://redis:6379/0'
            SESSION_SECRET: '{{ env.vars.SESSION_SECRET }}'
            JWT_SECRET: '{{ env.vars.JWT_SECRET }}'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f express_values.yaml express-{{ env.unique }} ./helm/express'
    destroy:
      - 'helm uninstall express-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 express-{{ env.unique }} ./helm/express'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 express-{{ env.unique }} ./helm/express'
    gitRepo: 'https://github.com/your-org/your-express-repo.git'
    gitBranch: main
    gitApplicationPath: /helm/express

  # ── Redis ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      command: redis-server --appendonly yes
      ports:
        - '6379:6379'

Key: Always include --post-renderer /bns/helpers/helm/bns_post_renderer in your helm commands. This adds labels so Bunnyshell can track resources, show logs, and manage component lifecycle.

Step 3: Deploy and Enable Preview Environments

Same flow: paste the config in Configuration, hit Deploy, then enable ephemeral environments in Settings.


Enabling Preview Environments (All Approaches)

Regardless of which approach you used, enabling automatic preview environments is the same:

  1. Ensure your primary environment has been deployed at least once (Running or Stopped status)
  2. Go to Settings in your environment
  3. Toggle "Create ephemeral environments on pull request" → ON
  4. Toggle "Destroy environment after merge or close pull request" → ON
  5. Select the target Kubernetes cluster

What happens next:

  • Bunnyshell adds a webhook to your Git provider automatically
  • When a developer opens a PR, Bunnyshell creates an ephemeral environment cloned from the primary, using the PR's branch
  • Bunnyshell posts a comment on the PR with a direct link to the running deployment
  • When the PR is merged or closed, the ephemeral environment is automatically destroyed

No GitHub Actions. No GitLab CI pipelines. No maintenance. It just works.

Optional: CI/CD Integration via CLI

If you prefer to control preview environments from your CI/CD pipeline (e.g., for running migrations or populating test fixtures before notifying reviewers), you can use the Bunnyshell CLI:

Bash
# Install
brew install bunnyshell/tap/bunnyshell-cli

# Authenticate
export BUNNYSHELL_TOKEN=your-api-token

# Create, deploy, and run migrations in one flow
bns environments create --from-path bunnyshell.yaml --name "pr-123" --project PROJECT_ID --k8s CLUSTER_ID
bns environments deploy --id ENV_ID --wait
bns exec COMPONENT_ID -- npx prisma migrate deploy
bns exec COMPONENT_ID -- node dist/prisma/seed.js

Remote Development and Debugging

Bunnyshell makes it easy to develop and debug directly against any environment — primary or ephemeral:

Port Forwarding

Connect your local tools to the remote database:

Bash
# Forward PostgreSQL to local port 15432
bns port-forward 15432:5432 --component POSTGRES_COMPONENT_ID

# Connect with psql, TablePlus, or any DB tool
psql -h 127.0.0.1 -p 15432 -U express express_production

# Forward Redis to local port 16379
bns port-forward 16379:6379 --component REDIS_COMPONENT_ID
redis-cli -p 16379

# Forward the Express app itself for local proxying
bns port-forward 13000:3000 --component EXPRESS_COMPONENT_ID
curl http://localhost:13000/health

Execute Node Commands

Bash
# Run Prisma migrations
bns exec COMPONENT_ID -- npx prisma migrate deploy
bns exec COMPONENT_ID -- npx prisma migrate status

# Introspect the database schema
bns exec COMPONENT_ID -- npx prisma db pull

# Open a Node.js REPL with app context
bns exec COMPONENT_ID -- node

# Run a one-off script
bns exec COMPONENT_ID -- node dist/scripts/cleanup-old-sessions.js

# Check environment variables
bns exec COMPONENT_ID -- node -e "console.log(JSON.stringify(process.env, null, 2))"

# Verify database connection
bns exec COMPONENT_ID -- node -e "
const { PrismaClient } = require('@prisma/client');
const prisma = new PrismaClient();
prisma.\$queryRaw\`SELECT 1 AS ok\`.then(r => { console.log('DB OK:', r); prisma.\$disconnect(); });
"

The Node.js REPL via bns exec COMPONENT_ID -- node runs inside the production container connected to the real preview environment database. This is useful for debugging data issues or running one-off administrative scripts without writing a full maintenance endpoint.

Live Logs

Bash
# Stream logs in real time
bns logs --component COMPONENT_ID -f

# Last 200 lines
bns logs --component COMPONENT_ID --tail 200

# Logs from the last 5 minutes
bns logs --component COMPONENT_ID --since 5m

Live Code Sync

For active development, sync your local code changes to the remote container in real time:

Bash
bns remote-development up --component COMPONENT_ID
# Edit files locally — changes sync automatically
# When done:
bns remote-development down

This is especially useful for debugging issues that only reproduce in the Kubernetes environment — you get the fast feedback loop of local development with the infrastructure of production.


Advanced: Worker Processes with BullMQ

For background job processing, add a worker component to your bunnyshell.yaml:

YAML
  # ── BullMQ Worker ──
  - kind: Service
    name: worker
    gitRepo: 'https://github.com/your-org/your-express-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      command: ['node', 'dist/worker.js']
      environment:
        NODE_ENV: production
        DATABASE_URL: 'postgresql://express:{{ env.vars.DB_PASSWORD }}@postgres:5432/express_production'
        REDIS_URL: 'redis://redis:6379/0'
      ports: []
    dependsOn:
      - postgres
      - redis

Your worker file (src/worker.ts) would look like:

TypeScript
import { Worker } from 'bullmq';
import Redis from 'ioredis';

const connection = new Redis(process.env.REDIS_URL!, { maxRetriesPerRequest: null });

const worker = new Worker('email-queue', async job => {
  console.log(`Processing job ${job.id}: ${job.name}`);
  // ... job processing logic
}, { connection });

worker.on('completed', job => console.log(`Job ${job.id} completed`));
worker.on('failed', (job, err) => console.error(`Job ${job?.id} failed:`, err));

process.on('SIGTERM', async () => {
  await worker.close();
  process.exit(0);
});

Advanced: Session Management with Redis

For session-based Express apps, configure express-session with a Redis store:

TypeScript
import session from 'express-session';
import RedisStore from 'connect-redis';
import { createClient } from 'redis';

const redisClient = createClient({ url: process.env.REDIS_URL });
redisClient.connect().catch(console.error);

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET!,
  resave: false,
  saveUninitialized: false,
  cookie: {
    secure: process.env.NODE_ENV === 'production', // HTTPS in production
    httpOnly: true,
    maxAge: 1000 * 60 * 60 * 24 * 7, // 7 days
    sameSite: 'lax',
  },
}));

The REDIS_URL env var set in your Bunnyshell config will be automatically picked up.


Troubleshooting

| Issue | Solution |
| --- | --- |
| App not reachable / 502 | Check container logs: bns logs --component COMPONENT_ID. Verify the app is listening on 0.0.0.0, not 127.0.0.1. Check the PORT env var matches the servicePort in hosts. |
| ECONNREFUSED to PostgreSQL | DATABASE_URL likely points to localhost instead of the postgres component. Check the URL format: postgresql://user:pass@postgres:5432/dbname. |
| Trust proxy / wrong IP in logs | Add app.set('trust proxy', true) in Express. Without it, all requests appear to come from the ingress pod IP instead of the real client. |
| CORS errors | Update ALLOWED_ORIGINS to include the Bunnyshell preview URL. Use 'https://{{ components.express-app.ingress.hosts[0] }}' interpolation. |
| Prisma migration fails | Check DATABASE_URL is set and reachable from the container. Run bns exec COMPONENT_ID -- npx prisma migrate status to see pending migrations. |
| Cannot find module 'dist/server.js' | TypeScript build didn't run. Ensure RUN npm run build is in the Dockerfile builder stage. Check tsconfig.json outDir is ./dist. |
| Health check failing | Ensure the /health route returns 200 before the DB connects (use a simple res.json({status:'ok'}) that doesn't depend on the DB). Otherwise the K8s liveness probe may restart the container before it's ready. |
| Session not persisting | If using in-memory sessions, they reset on every pod restart. Switch to Redis-backed sessions with connect-redis. |
| BullMQ jobs not processing | Check REDIS_URL is accessible from the worker component. Verify the worker subscribes to the correct queue name. |
| MODULE_NOT_FOUND errors | Dev dependencies not installed in the production image. Move required packages from devDependencies to dependencies in package.json. |
| OOM / container restart | Node.js doesn't respect container memory limits by default. Set --max-old-space-size=512 in CMD: CMD ["node", "--max-old-space-size=512", "dist/server.js"]. |
| 522 Connection timed out | Cluster may be behind a firewall. Verify Cloudflare IPs are whitelisted on the ingress controller. |
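On the "Health check failing" row: one robust shape is to keep the liveness endpoint dependency-free and expose dependency state on a separate readiness path, so Kubernetes holds traffic back without restarting the pod. A framework-agnostic sketch of the status logic (names are illustrative):

```typescript
// Readiness state is flipped once startup work (DB connect, etc.) completes.
let dependenciesReady = false;

function markReady(): void {
  dependenciesReady = true;
}

// Liveness: 200 whenever the process can answer at all — never touches the DB.
function livenessStatusCode(): number {
  return 200;
}

// Readiness: 503 until dependencies are up, which keeps the pod out of the
// Service endpoints without triggering a restart.
function readinessStatusCode(): number {
  return dependenciesReady ? 200 : 503;
}
```

In Express terms, /health would return livenessStatusCode() and a separate /ready route would return readinessStatusCode(), with markReady() called after connectWithRetry() succeeds.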

What's Next?

  • Add BullMQ Board — Monitor your background jobs with a web UI (@bull-board/express as middleware)
  • Add Mailhog — Test email sending in preview environments with mailhog/mailhog as a Service component
  • Add Swagger UI — Expose swagger-ui-express at /api-docs — reviewers can test endpoints directly from the preview URL
  • Add MinIO — S3-compatible object storage for file uploads (minio/minio as a Service component)
  • Add pgAdmin — Database inspection UI (dpage/pgadmin4 as a Service component)

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.