Preview Environments with MongoDB: Per-PR Document Database Isolation with Bunnyshell
Guide · March 20, 2026 · 12 min read


Why Preview Environments Need Their Own Database

Every team that shares a staging MongoDB ends up with the same headaches: one developer's migration renames a field that another developer's aggregation pipeline depends on. Or test documents from PR #42 pollute the query results in PR #43. Or someone drops an entire collection during debugging and takes down the shared staging environment for everyone.

The fix is isolation. Each preview environment gets its own MongoDB instance — its own collections, its own documents, its own lifecycle. When the PR is merged, the database is destroyed along with everything else. No cleanup scripts, no orphaned test data, no conflicts.

With Bunnyshell, every preview environment automatically provisions:

  • A dedicated MongoDB instance — Running in the same Kubernetes namespace as your app
  • Isolated data — Each PR starts with a clean database seeded from your baseline
  • Automatic cleanup — The database is destroyed when the PR is merged or closed
  • Connection strings injected automatically — Your app connects without manual configuration

This guide covers three approaches to running MongoDB in Bunnyshell preview environments, from the simplest built-in component to production-grade Terraform-managed instances.


The Challenge: Database Per Environment at Scale

Running a separate MongoDB for every open pull request sounds expensive and complex. Here's why it's actually practical with Bunnyshell:

Resource efficiency: Preview databases are small. They run with minimal resources (256Mi RAM, 1Gi storage) and only exist while the PR is open. A team with 10 open PRs might use 2.5Gi of RAM total for all database instances — less than a single staging database.

Lifecycle management: Bunnyshell handles creation and destruction automatically. When a PR opens, a new MongoDB container starts. When the PR merges or closes, the container and its persistent volume are deleted. No orphaned databases accumulating over months.

Schema flexibility: MongoDB's schemaless nature means you don't need to coordinate migrations across PRs. Each PR can evolve its document structure independently. But you still need index management and seed data — this guide covers both.

| Concern | Shared Staging DB | Per-PR Database (Bunnyshell) |
| --- | --- | --- |
| Schema conflicts | Frequent — field renames break other PRs | None — each PR has its own collections |
| Test data isolation | Impossible — all PRs share the same documents | Complete — each PR starts clean |
| Cleanup | Manual, error-prone | Automatic on PR merge/close |
| Cost | One instance, always running | Many small instances, only while PRs are open |
| Index testing | Dangerous — index changes affect everyone | Safe — test index strategies per PR |

Bunnyshell's Approach to Database Components

Bunnyshell offers three ways to provision MongoDB in preview environments. Choose based on your team's needs:

| Approach | Best for | Complexity | Production parity |
| --- | --- | --- | --- |
| Approach A: Built-in Database Component | Most teams — fast setup, minimal config | Easiest | Good — same engine, standalone instance |
| Approach B: Helm Chart | Teams with existing Helm infrastructure | Moderate | Better — Bitnami chart with replica set options |
| Approach C: Terraform-Managed | Teams needing managed databases (Atlas, DocumentDB) | Advanced | Best — actual managed database instances |

All three approaches work with Bunnyshell's automatic preview environment lifecycle. When a PR opens, the database is provisioned. When it closes, the database is destroyed.


Approach A: Built-in Database Component

The simplest way to add MongoDB to a preview environment. Use kind: Database in your bunnyshell.yaml and Bunnyshell handles the rest.

Minimal Configuration

```yaml
kind: Environment
name: myapp-preview
type: primary

environmentVariables:
  MONGO_PASSWORD: SECRET["your-secure-password"]
  MONGO_USER: appuser
  MONGO_DB: appdb

components:
  # ── Your Application ──
  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        MONGODB_URI: 'mongodb://{{ env.vars.MONGO_USER }}:{{ env.vars.MONGO_PASSWORD }}@db:27017/{{ env.vars.MONGO_DB }}?authSource=admin'
        DB_HOST: db
        DB_PORT: '27017'
        DB_NAME: '{{ env.vars.MONGO_DB }}'
        DB_USER: '{{ env.vars.MONGO_USER }}'
        DB_PASSWORD: '{{ env.vars.MONGO_PASSWORD }}'
      ports:
        - '3000:3000'
    dependsOn:
      - db
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 3000

  # ── MongoDB Database ──
  - kind: Database
    name: db
    dockerCompose:
      image: 'mongo:7-jammy'
      environment:
        MONGO_INITDB_ROOT_USERNAME: '{{ env.vars.MONGO_USER }}'
        MONGO_INITDB_ROOT_PASSWORD: '{{ env.vars.MONGO_PASSWORD }}'
        MONGO_INITDB_DATABASE: '{{ env.vars.MONGO_DB }}'
      ports:
        - '27017:27017'

volumes:
  - name: mongo-data
    mount:
      component: db
      containerPath: /data/db
    size: 1Gi
```

Internal networking only. The MongoDB component does not need an ingress or public hostname. Your application connects to it via the Kubernetes service name (db in this example) on port 27017. The {{ components.db.ingress.hosts[0] }} interpolation is NOT used for databases — that's for HTTP services only.

Key Configuration Details

Image choice: mongo:7-jammy is the recommended image. The jammy variant is based on Ubuntu 22.04 and includes the mongosh shell. Use the major version tag (7) to get automatic minor and patch updates.

Environment variables: MongoDB's official Docker image reads three variables on first start:

| Variable | Purpose | Example |
| --- | --- | --- |
| MONGO_INITDB_ROOT_USERNAME | Admin username (created in the admin db) | appuser |
| MONGO_INITDB_ROOT_PASSWORD | Admin password | Via SECRET["..."] |
| MONGO_INITDB_DATABASE | Database to create and run init scripts against | appdb |

Volume mount: The mongo-data volume at /data/db persists data across container restarts within the same environment. Set size: 1Gi for preview environments.

Connection string format:

```text
mongodb://appuser:password@db:27017/appdb?authSource=admin
```

The authSource=admin parameter is critical. MONGO_INITDB_ROOT_USERNAME creates the user in the admin database, not the application database. Without authSource=admin in the connection string, MongoDB looks for the user in the application database and rejects the login with an "Authentication failed" error.
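Because the credentials come from environment variables, it helps to assemble the URI in one place and URL-encode the credentials, since characters like @ or : in a password would otherwise break parsing. A minimal Node.js sketch (buildMongoUri is a hypothetical helper, not part of any driver):

```javascript
// Build a MongoDB connection string from parts, URL-encoding the
// credentials so special characters (@, :, /) don't break parsing.
// authSource defaults to 'admin' because the init user lives there.
function buildMongoUri({ user, password, host, port = 27017, db, authSource = 'admin' }) {
  const creds = `${encodeURIComponent(user)}:${encodeURIComponent(password)}`;
  return `mongodb://${creds}@${host}:${port}/${db}?authSource=${authSource}`;
}

console.log(buildMongoUri({ user: 'appuser', password: 'p@ss:word', host: 'db', db: 'appdb' }));
// mongodb://appuser:p%40ss%3Aword@db:27017/appdb?authSource=admin
```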

Replica Set vs. Standalone for Preview Environments

MongoDB can run in standalone mode (single instance) or as a replica set (one or more members with oplog). For preview environments, we recommend standalone for most teams:

| Feature | Standalone | Replica Set |
| --- | --- | --- |
| Resource usage | Lower — single process | Higher — even a single-member RS uses more memory |
| Transactions | Not supported | Supported (multi-document ACID) |
| Change streams | Not supported | Supported |
| Startup time | Faster | Slower (election process) |
| Complexity | Minimal | Requires additional configuration |

Use standalone unless your application requires transactions or change streams. Most CRUD applications work perfectly with standalone MongoDB.

If you need transactions, configure a single-member replica set:

```yaml
  - kind: Database
    name: db
    dockerCompose:
      image: 'mongo:7-jammy'
      command: ['mongod', '--replSet', 'rs0', '--bind_ip_all']
      environment:
        MONGO_INITDB_ROOT_USERNAME: '{{ env.vars.MONGO_USER }}'
        MONGO_INITDB_ROOT_PASSWORD: '{{ env.vars.MONGO_PASSWORD }}'
        MONGO_INITDB_DATABASE: '{{ env.vars.MONGO_DB }}'
      ports:
        - '27017:27017'
```

Then initiate the replica set post-deploy. Note that mongod refuses to combine --replSet with authentication unless a security.keyFile is also configured, so you may need a custom image with a baked-in keyfile (or run without root credentials and rely on the namespace's network isolation):

```bash
bns exec DB_COMPONENT_ID -- mongosh -u appuser -p password --authenticationDatabase admin --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "localhost:27017"}]})'
```

Update the connection string to include replicaSet:

```text
mongodb://appuser:password@db:27017/appdb?authSource=admin&replicaSet=rs0
```

Multiple App Components Sharing One Database

If your architecture has multiple services that connect to the same MongoDB instance, reference the same component name:

```yaml
components:
  - kind: Application
    name: api
    dockerCompose:
      environment:
        MONGODB_URI: 'mongodb://{{ env.vars.MONGO_USER }}:{{ env.vars.MONGO_PASSWORD }}@db:27017/{{ env.vars.MONGO_DB }}?authSource=admin'
    dependsOn:
      - db

  - kind: Service
    name: worker
    dockerCompose:
      environment:
        MONGODB_URI: 'mongodb://{{ env.vars.MONGO_USER }}:{{ env.vars.MONGO_PASSWORD }}@db:27017/{{ env.vars.MONGO_DB }}?authSource=admin'
    dependsOn:
      - db

  - kind: Service
    name: event-processor
    dockerCompose:
      environment:
        MONGODB_URI: 'mongodb://{{ env.vars.MONGO_USER }}:{{ env.vars.MONGO_PASSWORD }}@db:27017/{{ env.vars.MONGO_DB }}?authSource=admin'
    dependsOn:
      - db

  - kind: Database
    name: db
    dockerCompose:
      image: 'mongo:7-jammy'
      environment:
        MONGO_INITDB_ROOT_USERNAME: '{{ env.vars.MONGO_USER }}'
        MONGO_INITDB_ROOT_PASSWORD: '{{ env.vars.MONGO_PASSWORD }}'
        MONGO_INITDB_DATABASE: '{{ env.vars.MONGO_DB }}'
      ports:
        - '27017:27017'
```

All three components (api, worker, event-processor) connect to the same db service. The dependsOn ensures MongoDB starts before any application component.
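One caveat: dependsOn orders container startup, but it does not guarantee mongod is already accepting connections when a service boots, so the first connection attempt should be retried rather than treated as fatal. A generic retry sketch (withRetry is a hypothetical helper; the MongoClient usage in the comment assumes the official Node driver):

```javascript
// Retry an async operation with a fixed delay between attempts.
// dependsOn only orders container startup; MongoDB may still be
// warming up, so the initial connect should be retried, not fatal.
async function withRetry(fn, { attempts = 10, delayMs = 2000 } = {}) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastErr; // all attempts exhausted
}

// Usage sketch (assumed driver API):
// const client = await withRetry(() => new MongoClient(uri).connect());
```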

Index Creation in Init Scripts

MongoDB doesn't run .sql files, but it does run .js and .sh files placed in /docker-entrypoint-initdb.d/. Create docker/mongo/init-indexes.js:

```javascript
// docker/mongo/init-indexes.js
// This script runs against the MONGO_INITDB_DATABASE on first start

// Users collection
db.createCollection('users');
db.users.createIndex({ email: 1 }, { unique: true });
db.users.createIndex({ createdAt: -1 });
db.users.createIndex({ 'profile.location': '2dsphere' });

// Projects collection
db.createCollection('projects');
db.projects.createIndex({ ownerId: 1 });
db.projects.createIndex({ name: 'text', description: 'text' });
db.projects.createIndex({ createdAt: -1 });
db.projects.createIndex({ status: 1, updatedAt: -1 });

// Events collection (TTL index for auto-expiry)
db.createCollection('events');
db.events.createIndex({ timestamp: 1 }, { expireAfterSeconds: 604800 }); // 7 days
db.events.createIndex({ userId: 1, timestamp: -1 });
db.events.createIndex({ type: 1, timestamp: -1 });

// Seed data
db.users.insertMany([
  {
    email: 'alice@example.com',
    name: 'Alice Dev',
    role: 'admin',
    createdAt: new Date(),
    profile: { location: { type: 'Point', coordinates: [-73.97, 40.77] } }
  },
  {
    email: 'bob@example.com',
    name: 'Bob Tester',
    role: 'user',
    createdAt: new Date(),
    profile: { location: { type: 'Point', coordinates: [-0.12, 51.51] } }
  }
]);

print('Indexes created and seed data inserted.');
```

To use this init script, build a custom image or mount the script directory:

```dockerfile
# docker/mongo/Dockerfile
FROM mongo:7-jammy
COPY init-indexes.js /docker-entrypoint-initdb.d/
```

Then update the component:

```yaml
  - kind: Database
    name: db
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /docker/mongo
    dockerCompose:
      build:
        context: docker/mongo
        dockerfile: Dockerfile
      environment:
        MONGO_INITDB_ROOT_USERNAME: '{{ env.vars.MONGO_USER }}'
        MONGO_INITDB_ROOT_PASSWORD: '{{ env.vars.MONGO_PASSWORD }}'
        MONGO_INITDB_DATABASE: '{{ env.vars.MONGO_DB }}'
      ports:
        - '27017:27017'
```

Init scripts run against MONGO_INITDB_DATABASE. The db variable in init scripts refers to the database specified by MONGO_INITDB_DATABASE. This runs only once, when the data directory is empty (first container start). To re-run, delete the volume and redeploy.
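Because init scripts run only once, any seed that might be executed again later (for example via `bns exec` after a reset) is safer written as upserts keyed on a unique field, so re-running it never duplicates documents. A sketch of that pattern (toUpsertOps is a hypothetical helper around the standard bulkWrite operation format):

```javascript
// Convert seed documents into idempotent bulkWrite upsert operations,
// keyed on a unique field (here: email). Running the seed twice leaves
// exactly one copy of each document instead of inserting duplicates.
function toUpsertOps(docs, key) {
  return docs.map((doc) => ({
    updateOne: {
      filter: { [key]: doc[key] },       // match on the unique key
      update: { $setOnInsert: doc },     // only write fields on first insert
      upsert: true,
    },
  }));
}

// Usage sketch in a mongosh seed script:
// db.users.bulkWrite(toUpsertOps(seedUsers, 'email'));
```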


Approach B: Helm Chart for MongoDB

For teams with existing Helm infrastructure or those who need more control over MongoDB configuration (authentication mechanisms, storage engine settings, monitoring), the Bitnami MongoDB Helm chart is an excellent option.

Bunnyshell Configuration with Bitnami Helm Chart

```yaml
kind: Environment
name: myapp-helm
type: primary

environmentVariables:
  MONGO_PASSWORD: SECRET["your-secure-password"]
  MONGO_USER: appuser
  MONGO_DB: appdb

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: api-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /

  # ── MongoDB via Bitnami Helm Chart ──
  - kind: Helm
    name: mongodb
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > mongo_values.yaml
          global:
            storageClass: bns-network-sc
          architecture: standalone
          auth:
            enabled: true
            rootUser: root
            rootPassword: {{ env.vars.MONGO_PASSWORD }}
            usernames:
              - {{ env.vars.MONGO_USER }}
            passwords:
              - {{ env.vars.MONGO_PASSWORD }}
            databases:
              - {{ env.vars.MONGO_DB }}
          persistence:
            size: 1Gi
          resources:
            requests:
              memory: 256Mi
              cpu: 100m
            limits:
              memory: 512Mi
              cpu: 500m
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f mongo_values.yaml mongodb bitnami/mongodb --version 15.6.17'
      - |
        MONGO_HOST="mongodb.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall mongodb --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/mongodb'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/mongodb'
    exportVariables:
      - MONGO_HOST

  # ── Application via Helm ──
  - kind: Helm
    name: api
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > api_values.yaml
          replicaCount: 1
          image:
            repository: {{ components.api-image.image }}
          service:
            port: 3000
          ingress:
            enabled: true
            className: bns-nginx
            host: api-{{ env.base_domain }}
          env:
            MONGODB_URI: 'mongodb://{{ env.vars.MONGO_USER }}:{{ env.vars.MONGO_PASSWORD }}@{{ components.mongodb.exported.MONGO_HOST }}:27017/{{ env.vars.MONGO_DB }}?authSource=admin'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f api_values.yaml api-{{ env.unique }} ./helm/api'
    destroy:
      - 'helm uninstall api-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 api-{{ env.unique }} ./helm/api'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 api-{{ env.unique }} ./helm/api'
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /helm/api
    dependsOn:
      - mongodb
```

Always include --post-renderer /bns/helpers/helm/bns_post_renderer in your Helm commands. This adds Bunnyshell labels to all Kubernetes resources so the platform can track them, show logs, and manage lifecycle.

Helm Chart Configuration Explained

architecture: standalone — Use standalone for preview environments. The Bitnami chart also supports replicaset architecture, but standalone uses fewer resources and starts faster. Only use replicaset if your application requires transactions or change streams.

auth.usernames / auth.passwords / auth.databases — The chart creates application-level users in addition to the root user. These arrays are positionally matched: usernames[0] gets passwords[0] with access to databases[0].
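The index-based pairing can be made concrete with a small sketch (pairChartUsers is purely illustrative, not chart code), mirroring how the chart matches the three arrays:

```javascript
// The Bitnami chart pairs auth.usernames / auth.passwords / auth.databases
// by array index: usernames[i] gets passwords[i] on databases[i].
// This helper reproduces that pairing for illustration only.
function pairChartUsers(usernames, passwords, databases) {
  return usernames.map((user, i) => ({
    user,
    password: passwords[i],
    database: databases[i],
  }));
}

console.log(pairChartUsers(['appuser'], ['s3cret'], ['appdb']));
```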

Resource limits for preview environments:

| Resource | Request | Limit | Rationale |
| --- | --- | --- | --- |
| Memory | 256Mi | 512Mi | MongoDB's WiredTiger cache defaults to 50% of available memory |
| CPU | 100m | 500m | Sufficient for light read/write workloads |
| Storage | 1Gi | — | Enough for test datasets |
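The memory limit matters because of how the default cache is derived. Per MongoDB's documentation, the default WiredTiger internal cache is the larger of 50% of (RAM minus 1 GB) and 256 MB, so in a 512Mi container the 256 MB floor applies, leaving almost nothing for the rest of mongod. A quick calculation sketch:

```javascript
// Default WiredTiger internal cache size, per the documented formula:
// the larger of 50% of (RAM - 1 GB) and the 256 MB floor.
function defaultWiredTigerCacheMB(ramMB) {
  return Math.max(0.5 * (ramMB - 1024), 256);
}

console.log(defaultWiredTigerCacheMB(512));   // 256 -> the floor applies in a 512Mi container
console.log(defaultWiredTigerCacheMB(16384)); // 7680 on a 16 GiB host
```

This is why the Helm values (or `--wiredTigerCacheSizeGB=0.25`) pin the cache explicitly instead of trusting the default.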

Approach C: Terraform-Managed MongoDB

For teams that need production-like managed MongoDB (MongoDB Atlas, AWS DocumentDB, Azure CosmosDB) in their preview environments. This approach creates real managed instances and destroys them when the PR closes.

Cost consideration: Managed MongoDB instances (even the smallest tiers like Atlas M0/M2 or DocumentDB t3.medium) cost more than in-cluster containers. Use this approach only when you need production parity that an in-cluster MongoDB cannot provide — for example, testing against Atlas Search, DocumentDB compatibility, or specific version features.

Terraform Configuration for MongoDB Atlas

Create terraform/preview-db/main.tf in your repo:

```hcl
terraform {
  required_providers {
    mongodbatlas = {
      source  = "mongodb/mongodbatlas"
      version = "~> 1.15"
    }
  }
}

variable "env_id" {
  description = "Bunnyshell environment unique ID"
  type        = string
}

variable "atlas_project_id" {
  description = "MongoDB Atlas project ID"
  type        = string
}

variable "db_password" {
  description = "Database password"
  type        = string
  sensitive   = true
}

resource "mongodbatlas_cluster" "preview" {
  project_id = var.atlas_project_id
  name       = "preview-${var.env_id}"

  # Free/Shared tier for preview environments
  provider_name               = "TENANT"
  backing_provider_name       = "AWS"
  provider_region_name        = "US_EAST_1"
  provider_instance_size_name = "M0"
}

resource "mongodbatlas_database_user" "app" {
  project_id         = var.atlas_project_id
  auth_database_name = "admin"
  username           = "appuser"
  password           = var.db_password

  roles {
    role_name     = "readWrite"
    database_name = "appdb"
  }

  scopes {
    name = mongodbatlas_cluster.preview.name
    type = "CLUSTER"
  }
}

output "connection_string" {
  value     = mongodbatlas_cluster.preview.connection_strings[0].standard_srv
  sensitive = true
}
```

Bunnyshell Configuration with Terraform

```yaml
kind: Environment
name: myapp-terraform
type: primary

environmentVariables:
  MONGO_PASSWORD: SECRET["your-secure-password"]
  ATLAS_PROJECT_ID: SECRET["your-atlas-project-id"]
  MONGODB_ATLAS_PUBLIC_KEY: SECRET["your-atlas-public-key"]
  MONGODB_ATLAS_PRIVATE_KEY: SECRET["your-atlas-private-key"]

components:
  # ── Terraform-Managed MongoDB ──
  - kind: Terraform
    name: mongodb
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /terraform/preview-db
    runnerImage: 'hashicorp/terraform:1.7'
    deploy:
      - 'cd /bns/repo/terraform/preview-db'
      - 'terraform init'
      - 'terraform apply -auto-approve
        -var="env_id={{ env.unique }}"
        -var="atlas_project_id={{ env.vars.ATLAS_PROJECT_ID }}"
        -var="db_password={{ env.vars.MONGO_PASSWORD }}"'
      - |
        MONGO_CONNECTION=$(terraform output -raw connection_string)
    destroy:
      - 'cd /bns/repo/terraform/preview-db'
      - 'terraform init'
      - 'terraform destroy -auto-approve
        -var="env_id={{ env.unique }}"
        -var="atlas_project_id={{ env.vars.ATLAS_PROJECT_ID }}"
        -var="db_password={{ env.vars.MONGO_PASSWORD }}"'
    exportVariables:
      - MONGO_CONNECTION

  # ── Application ──
  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        MONGODB_URI: '{{ components.mongodb.exported.MONGO_CONNECTION }}/appdb?retryWrites=true&w=majority'
      ports:
        - '3000:3000'
    dependsOn:
      - mongodb
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 3000
```

Data Seeding and Migrations

Every preview environment needs a consistent starting state. MongoDB's schemaless nature means you don't need DDL migrations, but you do need seed data and index creation.

Strategy 1: Application-Level Seeding

Most application frameworks have seeding tools. Run them post-deploy:

```bash
# Node.js / Mongoose
bns exec COMPONENT_ID -- node scripts/seed.js

# Python / Motor or PyMongo
bns exec COMPONENT_ID -- python manage.py seed

# Go
bns exec COMPONENT_ID -- go run cmd/seed/main.go

# Generic — run a mongosh script
bns exec DB_COMPONENT_ID -- mongosh -u appuser -p password \
  --authenticationDatabase admin appdb /docker-entrypoint-initdb.d/seed.js
```

Strategy 2: mongodump / mongorestore Seed File

For larger datasets or when you need production-like data, create a seed dump:

```bash
# Create a seed dump from your reference database
mongodump \
  --uri="mongodb://readonly:password@prod-replica.example.com:27017/appdb?authSource=admin" \
  --excludeCollection=audit_logs \
  --excludeCollection=sessions \
  --gzip \
  --archive=seed.archive
```

Create a restore script at docker/mongo/restore-seed.sh:

```bash
#!/bin/bash
set -e

# Wait for MongoDB to be ready
until mongosh --quiet --eval "db.adminCommand('ping')" \
  -u "$MONGO_INITDB_ROOT_USERNAME" \
  -p "$MONGO_INITDB_ROOT_PASSWORD" \
  --authenticationDatabase admin; do
  echo "Waiting for MongoDB..."
  sleep 2
done

# Check if database has collections (skip if already seeded)
COLL_COUNT=$(mongosh --quiet --eval "db.getCollectionNames().length" \
  -u "$MONGO_INITDB_ROOT_USERNAME" \
  -p "$MONGO_INITDB_ROOT_PASSWORD" \
  --authenticationDatabase admin \
  "$MONGO_INITDB_DATABASE")

if [ "$COLL_COUNT" -lt 2 ]; then
  echo "Restoring seed data..."
  mongorestore \
    --uri="mongodb://$MONGO_INITDB_ROOT_USERNAME:$MONGO_INITDB_ROOT_PASSWORD@localhost:27017/?authSource=admin" \
    --nsFrom="appdb.*" \
    --nsTo="$MONGO_INITDB_DATABASE.*" \
    --gzip \
    --archive=/seed/seed.archive \
    --drop
  echo "Seed restore complete."
else
  echo "Database already has collections, skipping seed."
fi
```

Strategy 3: JavaScript Init Scripts

Place .js files in /docker-entrypoint-initdb.d/ for automatic execution:

```javascript
// docker/mongo/initdb.d/01-indexes.js
db.users.createIndex({ email: 1 }, { unique: true });
db.users.createIndex({ createdAt: -1 });
db.projects.createIndex({ ownerId: 1, status: 1 });
db.projects.createIndex({ name: 'text' });

// docker/mongo/initdb.d/02-seed.js
db.users.insertMany([
  {
    email: 'alice@example.com',
    name: 'Alice Dev',
    role: 'admin',
    settings: { theme: 'dark', notifications: true },
    createdAt: new Date()
  },
  {
    email: 'bob@example.com',
    name: 'Bob Tester',
    role: 'user',
    settings: { theme: 'light', notifications: false },
    createdAt: new Date()
  }
]);

db.projects.insertMany([
  {
    name: 'Demo Project',
    ownerId: db.users.findOne({ email: 'alice@example.com' })._id,
    status: 'active',
    tags: ['demo', 'preview'],
    createdAt: new Date()
  }
]);

print('Indexes and seed data created successfully.');
```

Init scripts run only once. MongoDB's /docker-entrypoint-initdb.d/ scripts execute only when the data directory is empty (first container start). If you update your seed data, you need to destroy and recreate the environment — or use the mongorestore approach instead.


Connection Strings and Secrets

Connection String Formats

MongoDB connection strings vary based on the deployment type. Here's how to use them with Bunnyshell interpolation:

```yaml
# Standard format (in-cluster standalone)
MONGODB_URI: 'mongodb://{{ env.vars.MONGO_USER }}:{{ env.vars.MONGO_PASSWORD }}@db:27017/{{ env.vars.MONGO_DB }}?authSource=admin'

# Standard format with replica set
MONGODB_URI: 'mongodb://{{ env.vars.MONGO_USER }}:{{ env.vars.MONGO_PASSWORD }}@db:27017/{{ env.vars.MONGO_DB }}?authSource=admin&replicaSet=rs0'

# SRV format (for Atlas / managed services)
MONGODB_URI: '{{ components.mongodb.exported.MONGO_CONNECTION }}/{{ env.vars.MONGO_DB }}?retryWrites=true&w=majority'

# Separate parameters (for frameworks that prefer individual vars)
DB_HOST: db
DB_PORT: '27017'
DB_NAME: '{{ env.vars.MONGO_DB }}'
DB_USER: '{{ env.vars.MONGO_USER }}'
DB_PASSWORD: '{{ env.vars.MONGO_PASSWORD }}'
DB_AUTH_SOURCE: admin
```

Secret Management

Always use Bunnyshell's SECRET["..."] syntax for passwords:

```yaml
environmentVariables:
  MONGO_PASSWORD: SECRET["your-password-here"]
```

Secrets are encrypted at rest and never exposed in logs or the Bunnyshell UI. They are injected into containers as environment variables at runtime.
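Since secrets only materialize as environment variables at runtime, it is worth failing fast at startup when one is missing, because an undefined variable silently produces a malformed connection string that is much harder to debug. A minimal sketch (requireEnv is a hypothetical helper):

```javascript
// Fail fast at startup if a required secret/env var is missing.
// An undefined variable interpolated into a URI fails later and
// more confusingly than an explicit startup error.
function requireEnv(name, env = process.env) {
  const value = env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch at app boot:
// const password = requireEnv('MONGO_PASSWORD');
// const user = requireEnv('MONGO_USER');
```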

Never hardcode passwords in your bunnyshell.yaml. Even for preview environments, use the SECRET["..."] syntax. Hardcoded passwords end up in Git history, Bunnyshell audit logs, and container inspect output.


Persistent Storage and Backup Considerations

Volume Configuration

For the built-in Database component, attach a persistent volume:

```yaml
volumes:
  - name: mongo-data
    mount:
      component: db
      containerPath: /data/db
    size: 1Gi
```

Size guidelines for preview environments:

| Dataset size | Recommended volume | Notes |
| --- | --- | --- |
| Small (< 100MB seed) | 1Gi | Default for most projects |
| Medium (100MB - 1GB seed) | 2Gi | Large seed data or file-heavy apps |
| Large (> 1GB seed) | 5Gi | Consider Approach C (Terraform) instead |

Backup Strategy for Preview Environments

Preview environments are ephemeral — they're destroyed when the PR closes. In most cases, you don't need backups for preview databases. The seed data and init scripts are reproducible from your repository.

However, if your team needs to preserve preview database state (e.g., for debugging a complex issue), you can take a manual dump before destroying:

```bash
# Dump before environment destruction
bns exec DB_COMPONENT_ID -- mongodump \
  -u appuser -p password --authenticationDatabase admin \
  -d appdb --gzip --archive=/tmp/pr-123-debug.archive

# Copy the dump locally
bns cp DB_COMPONENT_ID:/tmp/pr-123-debug.archive ./pr-123-debug.archive

# Or port-forward and dump locally
bns port-forward 17017:27017 --component DB_COMPONENT_ID
mongodump --host 127.0.0.1 --port 17017 \
  -u appuser -p password --authenticationDatabase admin \
  -d appdb --gzip --archive=pr-123-debug.archive
```

Performance Tuning for Preview Environments

Preview databases don't need production-level performance, but they should be fast enough that developers aren't waiting during testing.

Create a custom mongod.conf for preview instances:

```yaml
# docker/mongo/mongod.conf
# Note: the old storage.journal.enabled option was removed in MongoDB 6.1+
# (journaling is always on); including it prevents mongod 7.x from starting.
storage:
  dbPath: /data/db
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.25
      journalCompressor: snappy
    collectionConfig:
      blockCompressor: snappy

net:
  port: 27017
  bindIp: 0.0.0.0

operationProfiling:
  slowOpThresholdMs: 200
  mode: slowOp

setParameter:
  diagnosticDataCollectionEnabled: false
```

Key settings for preview environments:

| Parameter | Value | Why |
| --- | --- | --- |
| cacheSizeGB | 0.25 | WiredTiger defaults to 50% of RAM; set explicitly to prevent OOM in small containers |
| journalCompressor | snappy | Reduces disk I/O with minimal CPU overhead |
| blockCompressor | snappy | Smaller storage footprint for preview data |
| slowOpThresholdMs | 200 | Log operations slower than 200ms for debugging |
| diagnosticDataCollectionEnabled | false | Saves disk space — FTDC not needed in preview |

Applying Custom Configuration

With the built-in component, build a custom image:

```dockerfile
# docker/mongo/Dockerfile
FROM mongo:7-jammy
COPY mongod.conf /etc/mongod.conf
CMD ["mongod", "--config", "/etc/mongod.conf"]
```

With the Helm chart, use the extraFlags value:

```yaml
extraFlags:
  - --wiredTigerCacheSizeGB=0.25
  - --setParameter=diagnosticDataCollectionEnabled=false
```

Troubleshooting

| Issue | Solution |
| --- | --- |
| Connection refused on port 27017 | MongoDB container not ready. Check that dependsOn makes the app wait for db. Add retry logic or a readiness check in your app startup. |
| Authentication failed | Missing authSource=admin in connection string. The init user is created in the admin database, not the application database. |
| Database "appdb" not found | MONGO_INITDB_DATABASE creates the database only when init scripts insert data into it. MongoDB creates databases lazily on first write. |
| Transactions not supported | Running in standalone mode. Switch to a single-member replica set if your app requires multi-document transactions (see Replica Set section). |
| Init scripts not running | Scripts in /docker-entrypoint-initdb.d/ only run when /data/db is empty. Delete the volume and redeploy. Also check file permissions and the .js extension. |
| WiredTiger cache too large / OOM killed | Set --wiredTigerCacheSizeGB=0.25 explicitly. Without this, MongoDB sizes its cache from total system memory, which can exceed container limits. |
| Slow queries in preview | Enable profiling with slowOpThresholdMs: 200 to identify slow operations. Check that indexes from init scripts are created correctly. |
| mongorestore version mismatch | Use mongodump and mongorestore from the same MongoDB version. Version 7.x tools may not restore dumps from 4.x correctly. |
| Disk full / no space left | Volume too small. Increase size in the volumes section. For preview envs, 1Gi-2Gi is usually sufficient. |
| Change streams not working | Requires a replica set. Standalone mode does not support change streams. Configure a single-member replica set. |

What's Next?

  • Add Mongo Express — Include a mongo-express Service component for visual database management in preview environments
  • Enable Atlas Search — Use Approach C (Terraform) with Atlas to test full-text search features
  • Add Redis for caching — Reduce MongoDB load by caching frequently accessed documents
  • Automate index creation — Use init scripts or application startup hooks to ensure indexes exist before traffic
  • Monitor with mongostat — Run bns exec DB_COMPONENT_ID -- mongostat --uri="..." 5 to monitor live query patterns
