Preview Environments with MySQL: Per-PR Database Isolation with Bunnyshell
Guide · March 20, 2026 · 12 min read


Why Preview Environments Need Their Own Database

Every team that shares a staging MySQL database eventually runs into the same problem: one developer's migration drops a column that another developer's feature branch depends on. Or seed data from PR #42 pollutes the test results for PR #43. Or someone runs TRUNCATE on a table during a demo and takes down the shared staging environment for everyone.

The fix is isolation. Each preview environment gets its own MySQL instance — its own schema, its own data, its own lifecycle. When the PR is merged, the database is destroyed along with everything else. No cleanup scripts, no orphaned test data, no conflicts.

With Bunnyshell, every preview environment automatically provisions:

  • A dedicated MySQL instance — Running in the same Kubernetes namespace as your app
  • Isolated data — Each PR starts with a clean database seeded from your baseline
  • Automatic cleanup — The database is destroyed when the PR is merged or closed
  • Connection strings injected automatically — Your app connects without manual configuration

This guide covers three approaches to running MySQL in Bunnyshell preview environments, from the simplest built-in component to production-grade Terraform-managed instances.


The Challenge: Database Per Environment at Scale

Running a separate MySQL for every open pull request sounds expensive and complex. Here's why it's actually practical with Bunnyshell:

Resource efficiency: Preview databases are small. They run with minimal resources (256Mi RAM, 1Gi storage) and only exist while the PR is open. A team with 10 open PRs might use 2.5Gi of RAM total for all database instances — less than a single staging database.

Lifecycle management: Bunnyshell handles creation and destruction automatically. When a PR opens, a new MySQL container starts. When the PR merges or closes, the container and its persistent volume are deleted. No orphaned databases accumulating over months.

Configuration consistency: Every preview environment uses the same bunnyshell.yaml configuration. The MySQL version, character set, collation, init scripts, and seed data are defined once and reproduced exactly for every PR.

| Concern | Shared Staging DB | Per-PR Database (Bunnyshell) |
| --- | --- | --- |
| Migration conflicts | Frequent — developers overwrite each other | None — each PR has its own schema |
| Test data isolation | Impossible — all PRs share the same rows | Complete — each PR starts clean |
| Cleanup | Manual, error-prone | Automatic on PR merge/close |
| Cost | One instance, always running | Many small instances, only while PRs are open |
| Character set drift | Cumulative over time | Fresh from config every time |

Bunnyshell's Approach to Database Components

Bunnyshell offers three ways to provision MySQL in preview environments. Choose based on your team's needs:

| Approach | Best for | Complexity | Production parity |
| --- | --- | --- | --- |
| Approach A: Built-in Database Component | Most teams — fast setup, minimal config | Easiest | Good — same engine, lightweight instance |
| Approach B: Helm Chart | Teams with existing Helm infrastructure | Moderate | Better — Bitnami chart with replication options |
| Approach C: Terraform-Managed | Teams needing managed databases (RDS, Cloud SQL) | Advanced | Best — actual managed database instances |

All three approaches work with Bunnyshell's automatic preview environment lifecycle. When a PR opens, the database is provisioned. When it closes, the database is destroyed.


Approach A: Built-in Database Component

The simplest way to add MySQL to a preview environment. Use kind: Database in your bunnyshell.yaml and Bunnyshell handles the rest.

Minimal Configuration

YAML
kind: Environment
name: myapp-preview
type: primary

environmentVariables:
  DB_ROOT_PASSWORD: SECRET["your-root-password"]
  DB_PASSWORD: SECRET["your-app-password"]
  DB_USER: appuser
  DB_NAME: appdb

components:
  # ── Your Application ──
  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        DATABASE_URL: 'mysql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:3306/{{ env.vars.DB_NAME }}'
        DB_HOST: db
        DB_PORT: '3306'
        DB_NAME: '{{ env.vars.DB_NAME }}'
        DB_USER: '{{ env.vars.DB_USER }}'
        DB_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '3000:3000'
    dependsOn:
      - db
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 3000

  # ── MySQL Database ──
  - kind: Database
    name: db
    dockerCompose:
      image: 'mysql:8.0'
      environment:
        MYSQL_ROOT_PASSWORD: '{{ env.vars.DB_ROOT_PASSWORD }}'
        MYSQL_DATABASE: '{{ env.vars.DB_NAME }}'
        MYSQL_USER: '{{ env.vars.DB_USER }}'
        MYSQL_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '3306:3306'

volumes:
  - name: mysql-data
    mount:
      component: db
      containerPath: /var/lib/mysql
    size: 1Gi

Internal networking only. The MySQL component does not need an ingress or public hostname. Your application connects to it via the Kubernetes service name (db in this example) on port 3306. The {{ components.db.ingress.hosts[0] }} interpolation is NOT used for databases — that's for HTTP services only.

Key Configuration Details

Image choice: mysql:8.0 is the recommended image for most applications. MySQL 8.0 uses utf8mb4 as its default character set and adds improved JSON support and window functions. If you need MySQL 8.4 (the latest LTS), use mysql:8.4 — but verify your application's ORM and driver compatibility first.

Environment variables: MySQL's official Docker image reads four variables on first start:

| Variable | Purpose | Example |
| --- | --- | --- |
| MYSQL_ROOT_PASSWORD | Root user password (required) | Via SECRET["..."] |
| MYSQL_DATABASE | Database to create on init | appdb |
| MYSQL_USER | Application user to create | appuser |
| MYSQL_PASSWORD | Application user password | Via SECRET["..."] |

Volume mount: The mysql-data volume at /var/lib/mysql persists data across container restarts within the same environment. Set size: 1Gi for preview environments — this is plenty for test data and keeps costs low.

Connection string format:

Text
mysql://appuser:password@db:3306/appdb

Your application references the database by its component name (db), which resolves to the Kubernetes service. No IP addresses, no external DNS.
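On the application side, the same wiring can be assembled in code. A minimal sketch, assuming the variables above are injected into the container; the `build_dsn` helper is hypothetical, not part of Bunnyshell or any MySQL driver:

```python
import os
from urllib.parse import quote

def build_dsn(host: str = "db", port: int = 3306) -> str:
    """Assemble a MySQL URI from the env vars Bunnyshell injects."""
    user = os.environ["DB_USER"]
    # Percent-encode the password: generated secrets often contain @ or /
    password = quote(os.environ["DB_PASSWORD"], safe="")
    name = os.environ["DB_NAME"]
    return f"mysql://{user}:{password}@{host}:{port}/{name}"
```

With DB_USER=appuser, DB_PASSWORD=p@ss, and DB_NAME=appdb this yields mysql://appuser:p%40ss@db:3306/appdb, which a URI-parsing driver accepts even when the raw password would have broken the URL.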

Character Set and Collation Configuration

MySQL 8.0 defaults to utf8mb4 with utf8mb4_0900_ai_ci collation, which handles most use cases correctly. However, if your production database uses a different collation (common for older MySQL installations), you should match it in preview environments to avoid subtle sorting and comparison differences.

To explicitly set character set and collation, pass MySQL startup arguments:

YAML
  - kind: Database
    name: db
    dockerCompose:
      image: 'mysql:8.0'
      command: ['--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci', '--default-authentication-plugin=mysql_native_password']
      environment:
        MYSQL_ROOT_PASSWORD: '{{ env.vars.DB_ROOT_PASSWORD }}'
        MYSQL_DATABASE: '{{ env.vars.DB_NAME }}'
        MYSQL_USER: '{{ env.vars.DB_USER }}'
        MYSQL_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '3306:3306'

Or use a custom my.cnf file:

INI
# docker/mysql/my.cnf
[mysqld]
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
default-authentication-plugin = mysql_native_password

# Explicitly set for all new tables
default-storage-engine = InnoDB

[client]
default-character-set = utf8mb4

Collation mismatches cause real bugs. If your production uses utf8mb4_unicode_ci but your preview environment defaults to utf8mb4_0900_ai_ci, string comparisons and JOIN conditions on text columns can behave differently. Always match production's collation in your preview configuration.
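One way to catch drift early is a startup check that compares the server's reported variables against the values production uses. A sketch; in practice you would populate the dict from the rows returned by `SHOW VARIABLES LIKE 'c%'` via your driver, and the expected values below are the ones this guide configures:

```python
# Expected values: keep these in sync with your production my.cnf
EXPECTED = {
    "character_set_server": "utf8mb4",
    "collation_server": "utf8mb4_unicode_ci",
}

def collation_drift(server_vars: dict) -> dict:
    """Return {variable: (expected, actual)} for every mismatch."""
    return {
        name: (want, server_vars.get(name))
        for name, want in EXPECTED.items()
        if server_vars.get(name) != want
    }
```

Run it once at boot and fail loudly on a non-empty result; a preview environment that refuses to start is cheaper than a sorting bug found in review.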

Multiple App Components Sharing One Database

If your architecture has multiple services that connect to the same MySQL instance, reference the same component name:

YAML
components:
  - kind: Application
    name: api
    dockerCompose:
      environment:
        DATABASE_URL: 'mysql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:3306/{{ env.vars.DB_NAME }}'
    dependsOn:
      - db

  - kind: Service
    name: worker
    dockerCompose:
      environment:
        DATABASE_URL: 'mysql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:3306/{{ env.vars.DB_NAME }}'
    dependsOn:
      - db

  - kind: Service
    name: scheduler
    dockerCompose:
      environment:
        DATABASE_URL: 'mysql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:3306/{{ env.vars.DB_NAME }}'
    dependsOn:
      - db

  - kind: Database
    name: db
    dockerCompose:
      image: 'mysql:8.0'
      environment:
        MYSQL_ROOT_PASSWORD: '{{ env.vars.DB_ROOT_PASSWORD }}'
        MYSQL_DATABASE: '{{ env.vars.DB_NAME }}'
        MYSQL_USER: '{{ env.vars.DB_USER }}'
        MYSQL_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '3306:3306'

All three components (api, worker, scheduler) connect to the same db service. The dependsOn ensures MySQL starts before any application component.

Custom Init Scripts

MySQL runs .sql, .sh, and .sql.gz files from /docker-entrypoint-initdb.d/ on first start. Create docker/mysql/initdb.d/01-schema.sql:

SQL
-- docker/mysql/initdb.d/01-schema.sql
-- Uses the database specified by MYSQL_DATABASE

CREATE TABLE IF NOT EXISTS users (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    name VARCHAR(255) NOT NULL,
    password_hash VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    UNIQUE KEY idx_users_email (email),
    KEY idx_users_created (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

CREATE TABLE IF NOT EXISTS projects (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    owner_id BIGINT UNSIGNED NOT NULL,
    status ENUM('active', 'archived', 'deleted') DEFAULT 'active',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    KEY idx_projects_owner (owner_id),
    KEY idx_projects_status_updated (status, updated_at),
    FULLTEXT KEY idx_projects_name (name),
    CONSTRAINT fk_projects_owner FOREIGN KEY (owner_id) REFERENCES users(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

And docker/mysql/initdb.d/02-seed.sql:

SQL
-- docker/mysql/initdb.d/02-seed.sql
INSERT INTO users (email, name, password_hash) VALUES
    ('alice@example.com', 'Alice Dev', '$2y$10$placeholder_hash_alice'),
    ('bob@example.com', 'Bob Tester', '$2y$10$placeholder_hash_bob')
ON DUPLICATE KEY UPDATE name = VALUES(name);

INSERT INTO projects (name, owner_id, status) VALUES
    ('Demo Project', 1, 'active'),
    ('Test Project', 2, 'active')
ON DUPLICATE KEY UPDATE name = VALUES(name);

Build a custom image to include these scripts:

Dockerfile
# docker/mysql/Dockerfile
FROM mysql:8.0
COPY initdb.d/ /docker-entrypoint-initdb.d/
COPY my.cnf /etc/mysql/conf.d/custom.cnf

Update the component:

YAML
  - kind: Database
    name: db
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /docker/mysql
    dockerCompose:
      build:
        context: docker/mysql
        dockerfile: Dockerfile
      environment:
        MYSQL_ROOT_PASSWORD: '{{ env.vars.DB_ROOT_PASSWORD }}'
        MYSQL_DATABASE: '{{ env.vars.DB_NAME }}'
        MYSQL_USER: '{{ env.vars.DB_USER }}'
        MYSQL_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '3306:3306'

Init scripts run only once. MySQL's /docker-entrypoint-initdb.d/ scripts execute only when the data directory is empty (first container start). If you update your seed data, you need to destroy and recreate the environment — or use the mysqldump import approach instead.


Approach B: Helm Chart for MySQL

For teams with existing Helm infrastructure or those who need more control over MySQL configuration (replication, custom my.cnf, monitoring), the Bitnami MySQL Helm chart is an excellent option.

Bunnyshell Configuration with Bitnami Helm Chart

YAML
kind: Environment
name: myapp-helm
type: primary

environmentVariables:
  DB_ROOT_PASSWORD: SECRET["your-root-password"]
  DB_PASSWORD: SECRET["your-app-password"]
  DB_USER: appuser
  DB_NAME: appdb

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: api-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /

  # ── MySQL via Bitnami Helm Chart ──
  - kind: Helm
    name: mysql
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > mysql_values.yaml
          global:
            storageClass: bns-network-sc
          auth:
            rootPassword: {{ env.vars.DB_ROOT_PASSWORD }}
            database: {{ env.vars.DB_NAME }}
            username: {{ env.vars.DB_USER }}
            password: {{ env.vars.DB_PASSWORD }}
          primary:
            persistence:
              size: 1Gi
            resources:
              requests:
                memory: 256Mi
                cpu: 100m
              limits:
                memory: 512Mi
                cpu: 500m
            configuration: |
              [mysqld]
              character-set-server = utf8mb4
              collation-server = utf8mb4_unicode_ci
              default-authentication-plugin = mysql_native_password
              max_connections = 50
              innodb_buffer_pool_size = 64M
              innodb_log_file_size = 16M
              innodb_flush_method = O_DIRECT
              innodb_flush_log_at_trx_commit = 2
              slow_query_log = 1
              long_query_time = 0.5
              [client]
              default-character-set = utf8mb4
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f mysql_values.yaml mysql bitnami/mysql --version 9.14.4'
      - |
        MYSQL_HOST="mysql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall mysql --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/mysql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/mysql'
    exportVariables:
      - MYSQL_HOST

  # ── Application via Helm ──
  - kind: Helm
    name: api
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > api_values.yaml
          replicaCount: 1
          image:
            repository: {{ components.api-image.image }}
          service:
            port: 3000
          ingress:
            enabled: true
            className: bns-nginx
            host: api-{{ env.base_domain }}
          env:
            DATABASE_URL: 'mysql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@{{ components.mysql.exported.MYSQL_HOST }}:3306/{{ env.vars.DB_NAME }}'
            DB_HOST: '{{ components.mysql.exported.MYSQL_HOST }}'
            DB_PORT: '3306'
            DB_NAME: '{{ env.vars.DB_NAME }}'
            DB_USER: '{{ env.vars.DB_USER }}'
            DB_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f api_values.yaml api-{{ env.unique }} ./helm/api'
    destroy:
      - 'helm uninstall api-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 api-{{ env.unique }} ./helm/api'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 api-{{ env.unique }} ./helm/api'
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /helm/api
    dependsOn:
      - mysql

Always include --post-renderer /bns/helpers/helm/bns_post_renderer in your Helm commands. This adds Bunnyshell labels to all Kubernetes resources so the platform can track them, show logs, and manage lifecycle.

Helm Chart Configuration Explained

The Bitnami chart exposes many configuration options. Here are the most relevant for preview environments:

primary.persistence.size: 1Gi — Keep storage small for preview environments. 1Gi is sufficient for most test datasets. This saves costs when you have many concurrent PRs.

primary.configuration — MySQL server configuration injected as my.cnf. The values above are tuned for a small preview instance:

| Parameter | Value | Why |
| --- | --- | --- |
| max_connections | 50 | Preview envs don't need 151 (the default) |
| innodb_buffer_pool_size | 64M | ~25% of available memory (256Mi limit); the 128M default may OOM |
| innodb_log_file_size | 16M | Smaller redo logs, faster restart |
| innodb_flush_log_at_trx_commit | 2 | Flush once per second, not per commit — faster writes, safe for preview |
| innodb_flush_method | O_DIRECT | Bypass OS cache, let InnoDB manage its buffer pool |
| slow_query_log | 1 | Enable slow query logging for debugging |
| long_query_time | 0.5 | Log queries slower than 500ms |

auth.rootPassword — Sets the root password. The chart also creates the application user (auth.username / auth.password) and database (auth.database) automatically.


Approach C: Terraform-Managed MySQL

For teams that need production-like managed databases (AWS RDS, GCP Cloud SQL, Azure Database for MySQL) in their preview environments. This approach creates real managed instances and destroys them when the PR closes.

Cost consideration: Managed database instances (even the smallest tiers) cost more than in-cluster containers. Use this approach only when you need production parity that an in-cluster MySQL cannot provide — for example, testing against specific RDS parameter groups, IAM authentication, or read replicas.

Terraform Configuration

Create terraform/preview-db/main.tf in your repo:

HCL
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

variable "env_id" {
  description = "Bunnyshell environment unique ID"
  type        = string
}

variable "db_password" {
  description = "Database password"
  type        = string
  sensitive   = true
}

resource "aws_db_instance" "preview" {
  identifier        = "preview-${var.env_id}"
  engine            = "mysql"
  engine_version    = "8.0"
  instance_class    = "db.t4g.micro"
  allocated_storage = 20

  db_name  = "appdb"
  username = "appuser"
  password = var.db_password

  # Preview environment settings — cost optimization
  skip_final_snapshot     = true
  deletion_protection     = false
  backup_retention_period = 0
  multi_az                = false
  publicly_accessible     = false

  # Character set configuration
  parameter_group_name = aws_db_parameter_group.preview.name

  vpc_security_group_ids = [aws_security_group.preview_db.id]
  db_subnet_group_name   = aws_db_subnet_group.preview.name

  tags = {
    Environment = "preview"
    ManagedBy   = "bunnyshell-terraform"
    EnvID       = var.env_id
  }
}

resource "aws_db_parameter_group" "preview" {
  name   = "preview-${var.env_id}"
  family = "mysql8.0"

  parameter {
    name  = "character_set_server"
    value = "utf8mb4"
  }

  parameter {
    name  = "collation_server"
    value = "utf8mb4_unicode_ci"
  }

  parameter {
    name  = "max_connections"
    value = "50"
  }
}

output "db_host" {
  value = aws_db_instance.preview.address
}

output "db_port" {
  value = aws_db_instance.preview.port
}

Bunnyshell Configuration with Terraform

YAML
kind: Environment
name: myapp-terraform
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-secure-password"]
  AWS_ACCESS_KEY_ID: SECRET["your-aws-key"]
  AWS_SECRET_ACCESS_KEY: SECRET["your-aws-secret"]

components:
  # ── Terraform-Managed MySQL ──
  - kind: Terraform
    name: mysql
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /terraform/preview-db
    runnerImage: 'hashicorp/terraform:1.7'
    deploy:
      - 'cd /bns/repo/terraform/preview-db'
      - 'terraform init'
      - 'terraform apply -auto-approve
        -var="env_id={{ env.unique }}"
        -var="db_password={{ env.vars.DB_PASSWORD }}"'
      - |
        MYSQL_HOST=$(terraform output -raw db_host)
        MYSQL_PORT=$(terraform output -raw db_port)
    destroy:
      - 'cd /bns/repo/terraform/preview-db'
      - 'terraform init'
      - 'terraform destroy -auto-approve
        -var="env_id={{ env.unique }}"
        -var="db_password={{ env.vars.DB_PASSWORD }}"'
    exportVariables:
      - MYSQL_HOST
      - MYSQL_PORT

  # ── Application ──
  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/your-app.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        DATABASE_URL: 'mysql://appuser:{{ env.vars.DB_PASSWORD }}@{{ components.mysql.exported.MYSQL_HOST }}:{{ components.mysql.exported.MYSQL_PORT }}/appdb'
      ports:
        - '3000:3000'
    dependsOn:
      - mysql
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 3000

Data Seeding and Migrations

Every preview environment needs a consistent starting state. There are three strategies for populating MySQL in preview environments.

Strategy 1: Application-Level Migrations

Most frameworks (Laravel, Rails, Django, Prisma, Sequelize, Flyway) have built-in migration tools. Run them post-deploy:

Bash
1# Laravel
2bns exec COMPONENT_ID -- php artisan migrate --force
3bns exec COMPONENT_ID -- php artisan db:seed
4
5# Rails
6bns exec COMPONENT_ID -- rails db:migrate db:seed
7
8# Django
9bns exec COMPONENT_ID -- python manage.py migrate
10bns exec COMPONENT_ID -- python manage.py loaddata fixtures/seed.json
11
12# Node.js with Prisma
13bns exec COMPONENT_ID -- npx prisma migrate deploy
14bns exec COMPONENT_ID -- npx prisma db seed
15
16# Node.js with Sequelize
17bns exec COMPONENT_ID -- npx sequelize-cli db:migrate
18bns exec COMPONENT_ID -- npx sequelize-cli db:seed:all
19
20# Java with Flyway
21bns exec COMPONENT_ID -- flyway migrate

Strategy 2: mysqldump / mysql Seed File

For larger datasets or when you need production-like data, create a seed dump from your reference database:

Bash
# Create a seed dump from your reference database
mysqldump \
  --host=prod-replica.example.com \
  --user=readonly \
  --single-transaction \
  --routines \
  --triggers \
  --set-gtid-purged=OFF \
  --ignore-table=appdb.audit_logs \
  --ignore-table=appdb.sessions \
  appdb > seed.sql

# Compress for faster transfers
gzip seed.sql

Create a restore script at docker/mysql/restore-seed.sh:

Bash
#!/bin/bash
set -e

# Wait for MySQL to be ready
until mysqladmin ping -h localhost -u root -p"$MYSQL_ROOT_PASSWORD" --silent; do
  echo "Waiting for MySQL..."
  sleep 2
done

# Check if database has tables (skip if already seeded)
TABLE_COUNT=$(mysql -u root -p"$MYSQL_ROOT_PASSWORD" -N -e \
  "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='$MYSQL_DATABASE'")

if [ "$TABLE_COUNT" -lt 2 ]; then
  echo "Seeding database from dump..."
  if [ -f /seed/seed.sql.gz ]; then
    gunzip -c /seed/seed.sql.gz | mysql -u root -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE"
  elif [ -f /seed/seed.sql ]; then
    mysql -u root -p"$MYSQL_ROOT_PASSWORD" "$MYSQL_DATABASE" < /seed/seed.sql
  fi
  echo "Seed complete."
else
  echo "Database already has tables, skipping seed."
fi

Strategy 3: SQL Init Scripts

For simpler setups, place .sql files in /docker-entrypoint-initdb.d/:

SQL
-- docker/mysql/initdb.d/01-schema.sql
CREATE TABLE IF NOT EXISTS users (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    email VARCHAR(255) NOT NULL,
    name VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
    UNIQUE KEY idx_users_email (email),
    KEY idx_users_created (created_at)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

CREATE TABLE IF NOT EXISTS projects (
    id BIGINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
    name VARCHAR(255) NOT NULL,
    owner_id BIGINT UNSIGNED NOT NULL,
    status ENUM('active', 'archived', 'deleted') DEFAULT 'active',
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    FULLTEXT KEY idx_projects_name (name),
    KEY idx_projects_owner (owner_id),
    CONSTRAINT fk_projects_owner FOREIGN KEY (owner_id)
        REFERENCES users(id) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

-- docker/mysql/initdb.d/02-seed.sql
INSERT INTO users (email, name) VALUES
    ('alice@example.com', 'Alice Dev'),
    ('bob@example.com', 'Bob Tester')
ON DUPLICATE KEY UPDATE name = VALUES(name);

INSERT INTO projects (name, owner_id, status) VALUES
    ('Demo Project', 1, 'active'),
    ('Test Project', 2, 'active')
ON DUPLICATE KEY UPDATE name = VALUES(name);

Init scripts run only once. MySQL's /docker-entrypoint-initdb.d/ scripts execute only when the data directory is empty (first container start). If you update your seed data, you need to destroy and recreate the environment — or use the mysql import approach instead. Scripts run in alphabetical order, so prefix with numbers (01-, 02-) to control execution order.
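The selection and ordering rules can be modeled in a few lines. This is a simplified sketch of the entrypoint's behavior, not the official script (newer images also accept some additional compressed extensions):

```python
def init_script_order(filenames: list) -> list:
    """Return the files the MySQL entrypoint would execute, in order."""
    # Only .sh, .sql, and .sql.gz files run; everything else is ignored.
    runnable = [f for f in filenames if f.endswith((".sh", ".sql", ".sql.gz"))]
    # Execution is lexicographic, which is why numeric prefixes matter.
    return sorted(runnable)
```

Note that lexicographic order means `10-extra.sql` would run before `2-seed.sql`; zero-padded prefixes (01-, 02-, 10-) keep the order you expect.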


Connection Strings and Secrets

Connection String Formats

MySQL supports several connection string formats. Here's how to use them with Bunnyshell interpolation:

YAML
# URI format (common in Node.js, Go, Python ORMs)
DATABASE_URL: 'mysql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:3306/{{ env.vars.DB_NAME }}'

# URI with charset (explicit utf8mb4)
DATABASE_URL: 'mysql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@db:3306/{{ env.vars.DB_NAME }}?charset=utf8mb4'

# URI with SSL (for Terraform-managed instances)
DATABASE_URL: 'mysql://{{ env.vars.DB_USER }}:{{ env.vars.DB_PASSWORD }}@{{ components.mysql.exported.MYSQL_HOST }}:3306/{{ env.vars.DB_NAME }}?ssl-mode=REQUIRED'

# Separate parameters (for frameworks that prefer individual vars, e.g., Laravel)
DB_CONNECTION: mysql
DB_HOST: db
DB_PORT: '3306'
DB_DATABASE: '{{ env.vars.DB_NAME }}'
DB_USERNAME: '{{ env.vars.DB_USER }}'
DB_PASSWORD: '{{ env.vars.DB_PASSWORD }}'

# JDBC format (for Java/Spring Boot)
SPRING_DATASOURCE_URL: 'jdbc:mysql://db:3306/{{ env.vars.DB_NAME }}?useSSL=false&characterEncoding=UTF-8'
SPRING_DATASOURCE_USERNAME: '{{ env.vars.DB_USER }}'
SPRING_DATASOURCE_PASSWORD: '{{ env.vars.DB_PASSWORD }}'

Secret Management

Always use Bunnyshell's SECRET["..."] syntax for passwords:

YAML
environmentVariables:
  DB_ROOT_PASSWORD: SECRET["your-root-password"]
  DB_PASSWORD: SECRET["your-app-password"]

Secrets are encrypted at rest and never exposed in logs or the Bunnyshell UI. They are injected into containers as environment variables at runtime.
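On the application side, failing fast when a secret was not injected turns a confusing auth error into an obvious startup message. An illustrative check, not a Bunnyshell API:

```python
import os

REQUIRED_VARS = ("DB_HOST", "DB_NAME", "DB_USER", "DB_PASSWORD")

def check_db_env(env=os.environ) -> None:
    """Raise at startup if any required database variable is missing or empty."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(f"Missing database env vars: {', '.join(missing)}")
```

Call it before the first connection attempt so a misconfigured preview environment fails in seconds with a clear message rather than minutes into a deploy.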

Never hardcode passwords in your bunnyshell.yaml. Even for preview environments, use the SECRET["..."] syntax. Hardcoded passwords end up in Git history, Bunnyshell audit logs, and container inspect output.


Persistent Storage and Backup Considerations

Volume Configuration

For the built-in Database component, attach a persistent volume:

YAML
volumes:
  - name: mysql-data
    mount:
      component: db
      containerPath: /var/lib/mysql
    size: 1Gi

Size guidelines for preview environments:

| Dataset size | Recommended volume | Notes |
| --- | --- | --- |
| Small (< 100MB seed) | 1Gi | Default for most projects |
| Medium (100MB - 1GB seed) | 2Gi | Large seed data or binary-heavy tables |
| Large (> 1GB seed) | 5Gi | Consider Approach C (Terraform) instead |

Backup Strategy for Preview Environments

Preview environments are ephemeral — they're destroyed when the PR closes. In most cases, you don't need backups for preview databases. The seed data and migrations are reproducible from your repository.

However, if your team needs to preserve preview database state (e.g., for debugging a complex issue), you can take a manual dump before destroying:

Bash
# Dump before environment destruction
bns exec DB_COMPONENT_ID -- mysqldump -u root -p"$MYSQL_ROOT_PASSWORD" \
  --single-transaction --routines --triggers appdb > pr-123-debug.sql

# Or port-forward and dump locally
bns port-forward 13306:3306 --component DB_COMPONENT_ID
mysqldump -h 127.0.0.1 -P 13306 -u appuser -p \
  --single-transaction --routines --triggers appdb > pr-123-debug.sql

Performance Tuning for Preview Environments

Preview databases don't need production-level performance, but they should be fast enough that developers aren't waiting on queries during testing.

INI
# docker/mysql/my.cnf — tuned for preview environments
[mysqld]
# Character set
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci

# Connection settings
max_connections = 50
wait_timeout = 300
interactive_timeout = 300

# InnoDB — tuned for 256Mi-512Mi container limit
innodb_buffer_pool_size = 64M
innodb_log_file_size = 16M
innodb_log_buffer_size = 8M
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
innodb_doublewrite = 0
innodb_file_per_table = 1

# Query cache was removed in MySQL 8.0 — nothing to configure
# Performance schema — disable to save memory
performance_schema = OFF

# Logging — more verbose for debugging
slow_query_log = 1
long_query_time = 0.5
log_queries_not_using_indexes = 1

# Temp tables
tmp_table_size = 16M
max_heap_table_size = 16M

[client]
default-character-set = utf8mb4

Key InnoDB settings for preview environments:

| Parameter | Value | Why |
| --- | --- | --- |
| innodb_buffer_pool_size | 64M | Default is 128M — too high for 256Mi containers, causes OOM |
| innodb_flush_log_at_trx_commit | 2 | Flush once per second, not per commit — 10x faster writes |
| innodb_doublewrite | 0 | Disable double-write buffer — faster writes, safe for ephemeral data |
| performance_schema | OFF | Saves ~100MB of RAM in small containers |
| innodb_log_file_size | 16M | Smaller redo logs — faster crash recovery, less disk usage |
| log_queries_not_using_indexes | 1 | Catch missing indexes during preview testing |
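The buffer pool rule of thumb from the table can be written down directly. A sketch under the stated assumption (roughly 25% of the container's memory limit, floored at a small practical minimum; the helper and its floor value are illustrative, not MySQL defaults):

```python
def buffer_pool_mb(container_limit_mb: int, fraction: float = 0.25,
                   minimum_mb: int = 16) -> int:
    """Pick an innodb_buffer_pool_size (in MB) for a small container."""
    return max(minimum_mb, int(container_limit_mb * fraction))
```

For a 256Mi container this gives 64, matching the 64M used throughout this guide; for a 512Mi container it gives 128.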

Do NOT use innodb_doublewrite=0 or innodb_flush_log_at_trx_commit=2 in production. These settings trade durability for speed. They're safe for preview environments because the data is ephemeral — if the container crashes, you just redeploy. Never apply these settings to production or staging databases.

Applying Custom Configuration

With the built-in component, create a custom my.cnf and build a custom image:

Dockerfile
# docker/mysql/Dockerfile
FROM mysql:8.0
COPY my.cnf /etc/mysql/conf.d/custom.cnf
COPY initdb.d/ /docker-entrypoint-initdb.d/

With the Helm chart, use primary.configuration:

YAML
primary:
  configuration: |
    [mysqld]
    innodb_buffer_pool_size = 64M
    innodb_flush_log_at_trx_commit = 2
    innodb_doublewrite = 0
    performance_schema = OFF
    max_connections = 50

Troubleshooting

| Issue | Solution |
| --- | --- |
| Connection refused on port 3306 | MySQL container not ready. Check that dependsOn ensures the app waits for db. MySQL takes 10-30 seconds to initialize on first start. Add retry logic in your app. |
| Access denied for user 'appuser' | MYSQL_USER and MYSQL_PASSWORD only work on first init. If you changed credentials, delete the volume and redeploy. |
| Unknown database 'appdb' | MYSQL_DATABASE only creates the database on first init. Delete the volume and redeploy, or create manually: CREATE DATABASE appdb. |
| Incorrect string value for column | Character set mismatch. Ensure utf8mb4 is set at server level, database level, and table level. Check the my.cnf configuration. |
| Collation mismatch in JOIN | Columns in a JOIN have different collations. Explicitly set COLLATE utf8mb4_unicode_ci on the JOIN condition, or ensure all tables use the same collation. |
| InnoDB: Cannot allocate memory | innodb_buffer_pool_size too large for the container memory limit. Set to 25% of available memory. Disable performance_schema. |
| Slow queries in preview | Enable slow_query_log and check /var/lib/mysql/slow.log. Apply the InnoDB tuning settings above. Check for missing indexes with log_queries_not_using_indexes. |
| Init scripts not running | Scripts in /docker-entrypoint-initdb.d/ only run when /var/lib/mysql is empty. Delete the volume to re-trigger. Check file permissions and ensure the .sql extension. |
| mysqldump restore errors | Version mismatch between dump source and target. Use matching MySQL versions. Check for --set-gtid-purged=OFF if restoring from a GTID-enabled source. |
| SSL connection required | Terraform-managed instances (RDS, Cloud SQL) may require SSL. Add ?ssl-mode=REQUIRED to the connection string and provide the CA certificate. |
| Too many connections | Default max_connections=151 allows too many idle connections. Set to 50 for preview environments and ensure your app uses connection pooling. |
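For the "connection refused" row, app-side retry logic can be a plain exponential backoff loop. A generic sketch where `probe` stands in for your driver's connect call; the sleep function is injectable so the loop is testable without a live server:

```python
import time

def wait_for_db(probe, attempts: int = 6, base_delay: float = 0.5,
                sleep=time.sleep):
    """Call `probe` until it succeeds, backing off exponentially."""
    for attempt in range(attempts):
        try:
            return probe()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries: surface the real error
            sleep(base_delay * (2 ** attempt))
```

With the defaults this retries for roughly 15 seconds total; raise `attempts` if your containers have slower cold starts.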

What's Next?

  • Add phpMyAdmin — Include a phpmyadmin/phpmyadmin Service component for visual database management in preview environments
  • Add ProxySQL — Use ProxySQL as a sidecar for connection pooling in apps with many microservices
  • Enable MySQL Shell — Use mysqlsh for JSON document store features and admin operations
  • Automate migrations — Add a Kubernetes init container that runs migrations before the app starts
  • Monitor with slow query log — Port-forward to the MySQL container and tail the slow query log during testing

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.