Preview Environments for Fiber (Go): Automated Per-PR Deployments with Bunnyshell
Why Preview Environments for Fiber?
Every Go team has been here: a PR looks clean, go test ./... passes, but when the new endpoint hits staging something breaks. Maybe the JSON serialization doesn't match what the frontend expects, or a goroutine leak only shows under concurrent load, or the golang-migrate migration on this branch conflicts with another open PR.
Preview environments solve this. Every pull request gets its own isolated deployment — Fiber application, PostgreSQL database, the works — running in Kubernetes with production-like configuration. Reviewers click a link and test the actual running service against real data, not just read a diff.
With Bunnyshell, you get:
- Automatic deployment — A new environment spins up for every PR
- Production parity — Same multi-stage Docker image, same database engine, same infrastructure
- Isolation — Each PR environment lives in its own Kubernetes namespace, no shared staging conflicts
- Automatic cleanup — Environments are destroyed when the PR is merged or closed
Go Fiber is built on Fasthttp, which is famously fast and low-overhead. Combined with the sub-20 MB Docker images you get from an Alpine-based multi-stage build, ephemeral environments start in seconds — making the preview-on-PR workflow feel nearly instant.
Fiber v2 is inspired by Express.js, so the API will feel familiar to JavaScript developers on your team. Handlers receive a `*fiber.Ctx` where Gin handlers receive a `*gin.Context`, and middleware is registered with `app.Use()`. If you are migrating from Gin, the Fiber migration guide covers the differences.
Choose Your Approach
Bunnyshell supports three ways to set up preview environments for Fiber. Pick the one that fits your workflow:
| Approach | Best for | Complexity | CI/CD maintenance |
|---|---|---|---|
| Approach A: Bunnyshell UI | Teams that want the fastest setup with zero pipeline maintenance | Easiest | None — Bunnyshell manages webhooks automatically |
| Approach B: Docker Compose Import | Teams already using docker-compose.yml for local development | Easy | None — import converts to Bunnyshell config automatically |
| Approach C: Helm Charts | Teams with existing Helm infrastructure or complex K8s needs | Advanced | Optional — can use CLI or Bunnyshell UI |
All three approaches end the same way: a toggle in Bunnyshell Settings that enables automatic preview environments for every PR. No GitHub Actions, no GitLab CI pipelines to maintain — Bunnyshell adds webhooks to your Git provider and listens for PR events.
Prerequisites: Prepare Your Fiber App
Regardless of which approach you choose, your Fiber app needs two things: a multi-stage Dockerfile and the right configuration.
1. Create a Production-Ready Multi-Stage Dockerfile
```dockerfile
# ── Stage 1: Build ──────────────────────────────────────────────────────────
FROM golang:1.22-alpine AS builder

WORKDIR /app

# Install git (needed for some go modules that use git tags)
RUN apk add --no-cache git

# Cache Go module dependencies
COPY go.mod go.sum ./
RUN go mod download

# Copy source and build a statically-linked binary
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -ldflags="-w -s" -o server ./cmd/server

# ── Stage 2: Runtime ─────────────────────────────────────────────────────────
FROM alpine:3.19 AS runtime

# Install ca-certificates for HTTPS calls from the app, and wget for health probes
RUN apk add --no-cache ca-certificates wget

WORKDIR /app

# Copy only the compiled binary from the builder stage
COPY --from=builder /app/server /app/server

EXPOSE 8080
ENV PORT=8080
CMD ["/app/server"]
```

Important: Replace `./cmd/server` with the path to your `main` package. The app must listen on `0.0.0.0`, not `localhost` — this is required for container networking in Kubernetes.
CGO_ENABLED=0 produces a fully statically-linked binary that runs on the minimal alpine:3.19 base without any C runtime. The resulting image is typically 15–20 MB total — fast to pull on every PR deployment.
2. Configure Fiber for Kubernetes
Fiber v2 needs a few settings to work correctly behind Kubernetes ingress (which terminates TLS and forwards X-Forwarded-* headers):
```go
// cmd/server/main.go
package main

import (
	"context"
	"fmt"
	"log"
	"os"

	"github.com/gofiber/fiber/v2"
	"github.com/gofiber/fiber/v2/middleware/logger"
	"github.com/gofiber/fiber/v2/middleware/recover"
	"github.com/jackc/pgx/v5/pgxpool"

	"github.com/your-org/your-fiber-repo/internal/routes"
)

func main() {
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	databaseURL := os.Getenv("DATABASE_URL")
	if databaseURL == "" {
		log.Fatal("DATABASE_URL environment variable is required")
	}

	// Build connection pool
	pool, err := pgxpool.New(context.Background(), databaseURL)
	if err != nil {
		log.Fatalf("Unable to connect to database: %v\n", err)
	}
	defer pool.Close()

	// Configure Fiber to trust proxy headers from Kubernetes ingress.
	// ProxyHeader tells Fiber which header carries the real client IP.
	app := fiber.New(fiber.Config{
		ProxyHeader:           fiber.HeaderXForwardedFor,
		DisableStartupMessage: false,
		Prefork:               false, // Prefork doesn't work well in containers
	})

	// Recovery middleware — turns panics into 500 responses instead of crashing
	app.Use(recover.New())

	// Logger middleware — logs method, path, status, latency
	app.Use(logger.New())

	// Health check — used by Kubernetes liveness and readiness probes
	app.Get("/health", func(c *fiber.Ctx) error {
		return c.JSON(fiber.Map{"status": "ok"})
	})

	// Your application routes
	api := app.Group("/api")
	routes.Setup(api, pool)

	addr := fmt.Sprintf(":%s", port)
	log.Printf("Starting Fiber server on %s", addr)
	if err := app.Listen(addr); err != nil {
		log.Fatalf("Server error: %v", err)
	}
}
```

```go
// internal/routes/routes.go
package routes

import (
	"github.com/gofiber/fiber/v2"
	"github.com/jackc/pgx/v5/pgxpool"
)

func Setup(api fiber.Router, pool *pgxpool.Pool) {
	api.Get("/users", func(c *fiber.Ctx) error {
		// Example handler
		rows, err := pool.Query(c.Context(), "SELECT id, name FROM users LIMIT 20")
		if err != nil {
			return fiber.NewError(fiber.StatusInternalServerError, err.Error())
		}
		defer rows.Close()
		// ... scan rows and return JSON
		return c.JSON(fiber.Map{"users": []interface{}{}})
	})
}
```

For database migrations, use golang-migrate:
```bash
# Install golang-migrate (add to your Dockerfile builder stage if running at startup)
go install -tags 'postgres' github.com/golang-migrate/migrate/v4/cmd/migrate@latest
```

```go
// Run migrations at startup (before app.Listen)
import (
	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

m, err := migrate.New("file://migrations", databaseURL)
if err != nil {
	log.Fatalf("Migration init failed: %v", err)
}
if err := m.Up(); err != nil && err != migrate.ErrNoChange {
	log.Fatalf("Migration failed: %v", err)
}
log.Println("Migrations applied successfully")
```

Fiber Deployment Checklist
- App listens on `0.0.0.0:$PORT` (not `localhost`)
- `ProxyHeader: fiber.HeaderXForwardedFor` set in `fiber.Config`
- `recover.New()` middleware registered (prevents crashes on panics)
- `DATABASE_URL` loaded from environment variable
- `APP_SECRET` and other secrets loaded from environment variables
- `PORT` env var respected (defaults to `8080`)
- Health check endpoint at `/health` for Kubernetes probes
- Multi-stage Dockerfile with `alpine:3.19` runtime stage
- `Prefork: false` — prefork mode does not work in containers
Approach A: Bunnyshell UI — Zero CI/CD Maintenance
This is the easiest approach. You connect your repo, paste a YAML config, deploy, and flip a toggle. No CI/CD pipelines to write or maintain — Bunnyshell automatically adds webhooks to your Git provider and creates/destroys preview environments when PRs are opened/closed.
Step 1: Create a Project and Environment
- Log into Bunnyshell
- Click Create project and name it (e.g., "Fiber App")
- Inside the project, click Create environment and name it (e.g., "fiber-main")
Step 2: Define the Environment Configuration
Click Configuration in your environment view and paste this bunnyshell.yaml:
```yaml
kind: Environment
name: fiber-preview
type: primary

environmentVariables:
  APP_SECRET: SECRET["your-app-secret-here"]
  DB_PASSWORD: SECRET["your-db-password"]

components:
  # ── Fiber Application ──
  - kind: Application
    name: fiber-app
    gitRepo: 'https://github.com/your-org/your-fiber-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        DATABASE_URL: 'postgres://fiber:{{ env.vars.DB_PASSWORD }}@postgres:5432/fiber_db?sslmode=disable'
        APP_SECRET: '{{ env.vars.APP_SECRET }}'
        PORT: '8080'
      ports:
        - '8080:8080'
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 8080
    dependsOn:
      - postgres

  # ── PostgreSQL Database ──
  - kind: Database
    name: postgres
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: fiber_db
        POSTGRES_USER: fiber
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

volumes:
  - name: postgres-data
    mount:
      component: postgres
      containerPath: /var/lib/postgresql/data
    size: 1Gi
```

Replace `your-org/your-fiber-repo` with your actual repository. Save the configuration.
Note the sslmode=disable in the DATABASE_URL. Inside the Kubernetes namespace, the connection between your Fiber app and the PostgreSQL component is on a private cluster network — no TLS is required for that hop. TLS is terminated at the ingress for external traffic.
Step 3: Deploy
Click the Deploy button, select your Kubernetes cluster, and click Deploy Environment. Bunnyshell will:
- Build your Fiber Docker image from the multi-stage Dockerfile (Go builds are quick — typically 1–2 minutes)
- Pull the PostgreSQL image
- Deploy everything into an isolated Kubernetes namespace
- Generate HTTPS URLs automatically with DNS
Monitor the deployment in the environment detail page. When status shows Running, click Endpoints to access your live Fiber service.
Step 4: Run Migrations
If you handle migrations as a separate step rather than at application startup:
```bash
export BUNNYSHELL_TOKEN=your-api-token
bns components list --environment ENV_ID --output json | jq '._embedded.item[] | {id, name}'
# Run golang-migrate
bns exec COMPONENT_ID -- migrate -path ./migrations -database "$DATABASE_URL" up
```

Step 5: Enable Automatic Preview Environments
This is the magic step — no CI/CD configuration needed:
- In your environment, go to Settings
- Find the Ephemeral environments section
- Toggle "Create ephemeral environments on pull request" to ON
- Toggle "Destroy environment after merge or close pull request" to ON
- Select the Kubernetes cluster for ephemeral environments
That's it. Bunnyshell automatically adds a webhook to your Git provider (GitHub, GitLab, or Bitbucket). From now on:
- Open a PR → Bunnyshell creates an ephemeral environment with the PR's branch
- Push to PR → The environment redeploys with the latest changes
- Bunnyshell posts a comment on the PR with a link to the live deployment
- Merge or close the PR → The ephemeral environment is automatically destroyed
Note: The primary environment must be in Running or Stopped status before ephemeral environments can be created from it.
Approach B: Docker Compose Import
Already have a docker-compose.yml for local development? Bunnyshell can import it directly and convert it to its environment format. No manual YAML writing required.
Step 1: Add a docker-compose.yml to Your Repo
If you don't already have one, create docker-compose.yml in your repo root:
```yaml
version: '3.8'

services:
  fiber-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
    environment:
      DATABASE_URL: 'postgres://fiber:fiber@postgres:5432/fiber_db?sslmode=disable'
      APP_SECRET: 'dev-secret-key-change-in-prod'
      PORT: '8080'
    depends_on:
      postgres:
        condition: service_healthy

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: fiber_db
      POSTGRES_USER: fiber
      POSTGRES_PASSWORD: fiber
    volumes:
      - postgres-data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U fiber -d fiber_db"]
      interval: 5s
      timeout: 5s
      retries: 5

volumes:
  postgres-data:
```

Step 2: Import into Bunnyshell
- Create a Project and Environment in Bunnyshell (same as Approach A, Step 1)
- Click Define environment
- Select your Git account and repository
- Set the branch (e.g., `main`) and the path to `docker-compose.yml` (use `/` if it's in the root)
- Click Continue — Bunnyshell parses and validates your Docker Compose file
Bunnyshell automatically detects:
- All services (`fiber-app`, `postgres`)
- Exposed ports
- Build configurations (multi-stage Dockerfile)
- Volumes
- Environment variables
It converts everything into a bunnyshell.yaml environment definition.
Important: The `docker-compose.yml` is only read during the initial import. Subsequent changes to the file won't auto-propagate — edit the environment configuration in Bunnyshell instead.
Step 3: Adjust the Configuration
After import, go to Configuration in the environment view and update:
- Replace hardcoded secrets with `SECRET["..."]` syntax
- Replace the hardcoded `DATABASE_URL` with Bunnyshell interpolation so it uses the correct password secret:

```yaml
DATABASE_URL: 'postgres://fiber:{{ env.vars.DB_PASSWORD }}@postgres:5432/fiber_db?sslmode=disable'
APP_SECRET: '{{ env.vars.APP_SECRET }}'
```

- Add a `hosts` block so your app gets an HTTPS URL:

```yaml
hosts:
  - hostname: 'app-{{ env.base_domain }}'
    path: /
    servicePort: 8080
```

Step 4: Deploy and Enable Preview Environments
Same as Approach A — click Deploy, then go to Settings and toggle on ephemeral environments.
Best Practices for Docker Compose with Bunnyshell
- Design for startup resilience — Kubernetes doesn't guarantee `depends_on` ordering like Docker Compose does. Make your Fiber app retry database connections on startup:
```go
// Retry the database connection with exponential backoff (2s, 4s, 8s, ...)
var pool *pgxpool.Pool
connected := false
delay := 2 * time.Second
for attempt := 1; attempt <= 5; attempt++ {
	p, err := pgxpool.New(context.Background(), databaseURL)
	if err == nil {
		// Verify the connection is actually alive
		if pingErr := p.Ping(context.Background()); pingErr == nil {
			pool = p
			connected = true
			break
		}
		p.Close()
	}
	log.Printf("DB not ready (attempt %d/5), retrying in %s...", attempt, delay)
	time.Sleep(delay)
	delay *= 2
}
if !connected {
	log.Fatal("Could not connect to database after 5 attempts")
}
```

- Use Bunnyshell interpolation for dynamic values like URLs:
```yaml
# Bunnyshell environment config (after import)
DATABASE_URL: 'postgres://fiber:{{ env.vars.DB_PASSWORD }}@postgres:5432/fiber_db?sslmode=disable'
```

Approach C: Helm Charts
For teams with existing Helm infrastructure or complex Kubernetes requirements (custom ingress, service mesh, advanced scaling). Helm gives you full control over every Kubernetes resource.
Step 1: Create a Helm Chart
Structure your Fiber Helm chart in your repo:
```
helm/fiber/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── configmap.yaml
```

A minimal values.yaml:
```yaml
replicaCount: 1
image:
  repository: ""
  tag: latest
service:
  port: 8080
ingress:
  enabled: true
  className: bns-nginx
  host: ""
env:
  DATABASE_URL: ""
  APP_SECRET: ""
  PORT: "8080"
```

A minimal templates/deployment.yaml:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-fiber
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-fiber
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-fiber
    spec:
      containers:
        - name: fiber
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 8080
          env:
            {{- range $key, $val := .Values.env }}
            - name: {{ $key }}
              value: {{ $val | quote }}
            {{- end }}
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
```

Step 2: Define the Bunnyshell Configuration
Create a bunnyshell.yaml using Helm components:
```yaml
kind: Environment
name: fiber-helm
type: primary

environmentVariables:
  APP_SECRET: SECRET["your-app-secret"]
  DB_PASSWORD: SECRET["your-db-password"]
  POSTGRES_DB: fiber_db
  POSTGRES_USER: fiber

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: fiber-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-fiber-repo.git'
    gitBranch: main
    gitApplicationPath: /

  # ── PostgreSQL via Helm ──
  - kind: Helm
    name: postgres
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > pg_values.yaml
        global:
          storageClass: bns-network-sc
        auth:
          postgresPassword: {{ env.vars.DB_PASSWORD }}
          database: {{ env.vars.POSTGRES_DB }}
          username: {{ env.vars.POSTGRES_USER }}
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f pg_values.yaml postgres bitnami/postgresql --version 11.9.11'
      - |
        POSTGRES_HOST="postgres-postgresql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall postgres --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    exportVariables:
      - POSTGRES_HOST

  # ── Fiber App via Helm ──
  - kind: Helm
    name: fiber-app
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > fiber_values.yaml
        replicaCount: 1
        image:
          repository: {{ components.fiber-image.image }}
        service:
          port: 8080
        ingress:
          enabled: true
          className: bns-nginx
          host: app-{{ env.base_domain }}
        env:
          DATABASE_URL: 'postgres://{{ env.vars.POSTGRES_USER }}:{{ env.vars.DB_PASSWORD }}@{{ components.postgres.exported.POSTGRES_HOST }}:5432/{{ env.vars.POSTGRES_DB }}?sslmode=disable'
          APP_SECRET: '{{ env.vars.APP_SECRET }}'
          PORT: '8080'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f fiber_values.yaml fiber-{{ env.unique }} ./helm/fiber'
    destroy:
      - 'helm uninstall fiber-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 fiber-{{ env.unique }} ./helm/fiber'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 fiber-{{ env.unique }} ./helm/fiber'
    gitRepo: 'https://github.com/your-org/your-fiber-repo.git'
    gitBranch: main
    gitApplicationPath: /helm/fiber
    dependsOn:
      - postgres
      - fiber-image
```

Key: Always include `--post-renderer /bns/helpers/helm/bns_post_renderer` in your helm commands. This adds labels so Bunnyshell can track resources, show logs, and manage component lifecycle.
Step 3: Deploy and Enable Preview Environments
Same flow: paste the config in Configuration, hit Deploy, then enable ephemeral environments in Settings.
Enabling Preview Environments (All Approaches)
Regardless of which approach you used, enabling automatic preview environments is the same:
- Ensure your primary environment has been deployed at least once (Running or Stopped status)
- Go to Settings in your environment
- Toggle "Create ephemeral environments on pull request" → ON
- Toggle "Destroy environment after merge or close pull request" → ON
- Select the target Kubernetes cluster
What happens next:
- Bunnyshell adds a webhook to your Git provider automatically
- When a developer opens a PR, Bunnyshell creates an ephemeral environment cloned from the primary, using the PR's branch
- Bunnyshell posts a comment on the PR with a direct link to the running deployment
- When the PR is merged or closed, the ephemeral environment is automatically destroyed
No GitHub Actions. No GitLab CI pipelines. No maintenance. It just works.
Optional: CI/CD Integration via CLI
If you prefer to control preview environments from your CI/CD pipeline (e.g., for custom migration scripts or seeding), you can use the Bunnyshell CLI:
```bash
# Install
brew install bunnyshell/tap/bunnyshell-cli

# Authenticate
export BUNNYSHELL_TOKEN=your-api-token

# Create, deploy, and run migrations in one flow
bns environments create --from-path bunnyshell.yaml --name "pr-123" --project PROJECT_ID --k8s CLUSTER_ID
bns environments deploy --id ENV_ID --wait
bns exec COMPONENT_ID -- migrate -path ./migrations -database "$DATABASE_URL" up
```

Remote Development and Debugging
Bunnyshell makes it easy to develop and debug directly against any environment — primary or ephemeral:
Port Forwarding
Connect your local tools to the remote database:
```bash
# Forward PostgreSQL to local port 15432
bns port-forward 15432:5432 --component POSTGRES_COMPONENT_ID

# Connect with psql or any DB tool
psql -h localhost -p 15432 -U fiber fiber_db
```

Execute Commands in the Container
```bash
# Run golang-migrate manually
bns exec COMPONENT_ID -- migrate -path ./migrations -database "$DATABASE_URL" up

# Check migration version
bns exec COMPONENT_ID -- migrate -path ./migrations -database "$DATABASE_URL" version

# Open psql (if installed in the image — alpine has it via `apk add postgresql-client`)
bns exec COMPONENT_ID -- psql "$DATABASE_URL"
```

The Alpine runtime image is minimal by design. If you need additional tools for debugging (e.g., curl, psql), install them in the running container via `bns exec COMPONENT_ID -- apk add --no-cache postgresql-client`, or add them to a separate debug image tag.
Live Logs
```bash
# Stream logs in real time
bns logs --component COMPONENT_ID -f

# Last 200 lines
bns logs --component COMPONENT_ID --tail 200

# Logs from the last 5 minutes
bns logs --component COMPONENT_ID --since 5m
```

Fiber's built-in logger middleware emits lines like:

```
10:04:55 | 200 | 234µs | 10.244.0.1 | GET  | /api/users
10:04:56 | 201 | 891µs | 10.244.0.1 | POST | /api/users
10:04:57 | 404 |  45µs | 10.244.0.1 | GET  | /api/missing
```

For more verbose output, configure the logger middleware with a custom format:
```go
app.Use(logger.New(logger.Config{
	Format: "[${time}] ${status} - ${latency} ${method} ${path} | ${ip} | ${error}\n",
}))
```

Live Code Sync
For active development, sync your local code changes to the remote container in real time:
```bash
bns remote-development up --component COMPONENT_ID
# Edit files locally — changes sync automatically
# When done:
bns remote-development down
```

Go requires a recompile for code changes to take effect. For live-reload development inside the container, consider using a development image with Air (cosmtrek/air) instead of the production Alpine image. You can maintain two Dockerfile targets — builder (with Air) and runtime (without) — and switch via a build argument.
Troubleshooting
| Issue | Solution |
|---|---|
| 502 Bad Gateway | Fiber isn't listening on 0.0.0.0:8080. Check your app.Listen(":8080") — the colon prefix means all interfaces, which is correct. Verify PORT env var is being read. |
| X-Forwarded-For not trusted | Add ProxyHeader: fiber.HeaderXForwardedFor to fiber.New(fiber.Config{...}). Without this, c.IP() returns the cluster-internal IP, not the real client IP. |
| Panic in production | Add app.Use(recover.New()) as the first middleware. Without it, a single handler panic crashes the entire process. |
| Migration fails on startup | Ensure postgres component is in dependsOn. Add a connection retry loop before calling m.Up(). |
| Connection refused to PostgreSQL | Verify DATABASE_URL uses postgres (the component name) as the host, not localhost. Also verify sslmode=disable for in-cluster connections. |
| `go mod download` fails during build | Private modules may need `GOPRIVATE` (plus credentials) or vendoring via `GOFLAGS=-mod=vendor`. Add the required env vars to the builder stage in your Dockerfile. |
| 522 Connection timed out | Cluster may be behind a firewall. Verify Cloudflare IPs are whitelisted on the ingress controller. |
| Prefork mode crashes | Remove Prefork: true from fiber.Config. Prefork uses SO_REUSEPORT which requires special kernel capabilities not available in standard Kubernetes pods. |
| Memory spike under load | Fasthttp reuses buffers aggressively. If you store c.Body() or c.Params() after the handler returns, copy the bytes first: body := make([]byte, len(c.Body())); copy(body, c.Body()). |
What's Next?
- Add background jobs — Use a goroutine pool or a queue library like `asynq` (Redis-backed) for async processing; add a separate worker component in your `bunnyshell.yaml`
- Seed test data — Run `bns exec <ID> -- go run ./cmd/seed` post-deploy to populate the database with demo data
- Add Redis for caching — Add a `redis:7-alpine` Service component and pass `REDIS_URL`; use `go-redis` in your Fiber handlers
- Monitor with Sentry — The `sentry-go` SDK has a Fiber middleware; pass `SENTRY_DSN` as an environment variable
- Rate limiting — Add `github.com/gofiber/fiber/v2/middleware/limiter` for per-IP rate limiting in preview environments where you don't want automated scanners hitting your database
Related Resources
- Bunnyshell Quickstart Guide
- Docker Compose with Bunnyshell
- Helm with Bunnyshell
- Bunnyshell CLI Reference
- Ephemeral Environments — Learn more about the concept
- Preview Environments for Django — Similar guide for Python/Django
- All Guides — More technical guides
Ship faster starting today.
14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.