Preview Environments for Monorepos: Nx, Turborepo, and Multi-Service Deploys with Bunnyshell
Guide · March 20, 2026 · 14 min read

Why Preview Environments for Monorepos?

Monorepos concentrate all your services -- frontend, backend, workers, shared libraries -- in a single repository. This makes cross-service changes atomic: one PR can update the API contract, the shared types, and the frontend that consumes them. That's the whole point.

But monorepos make preview environments harder. A simple "deploy this branch" doesn't work when the repository contains five services, three shared packages, and a dozen Dockerfiles. Questions pile up: Which services changed? Do you rebuild everything or just what's affected? How do services discover each other's URLs in a preview environment?

Bunnyshell solves this. You define your monorepo's services as components in a bunnyshell.yaml, configure the build contexts and dependencies, and let Bunnyshell handle the rest. Every pull request gets a fully deployed copy of your entire stack -- or just the services that changed, if you prefer selective builds.

With Bunnyshell, monorepo preview environments give you:

  • Coordinated deployment -- All services from one PR deploy together in a single environment
  • Shared library handling -- Build contexts include shared packages, so type changes propagate correctly
  • Service discovery -- Each component gets a hostname; services find each other via {{ components.X.ingress.hosts[0] }}
  • Automatic lifecycle -- Environments spin up on PR open, update on push, and destroy on merge/close

Monorepo Structures Bunnyshell Supports

Bunnyshell doesn't care which monorepo tool you use. It works with the Git repository and Docker build contexts. Here are the common structures:

Nx Monorepo

Text
my-monorepo/
├── apps/
│   ├── frontend/          # React/Next.js app
│   │   ├── src/
│   │   └── Dockerfile
│   ├── api/               # Node.js/NestJS API
│   │   ├── src/
│   │   └── Dockerfile
│   └── worker/            # Background job processor
│       ├── src/
│       └── Dockerfile
├── packages/
│   ├── shared-types/      # TypeScript interfaces
│   ├── ui-components/     # Shared React components
│   └── utils/             # Common utilities
├── nx.json
├── package.json
└── docker-compose.yml

Turborepo

Text
my-monorepo/
├── apps/
│   ├── web/
│   │   └── Dockerfile
│   └── api/
│       └── Dockerfile
├── packages/
│   ├── config/
│   ├── tsconfig/
│   └── shared/
├── turbo.json
└── package.json

Lerna / Plain Workspaces

Text
my-monorepo/
├── services/
│   ├── gateway/
│   │   └── Dockerfile
│   ├── auth-service/
│   │   └── Dockerfile
│   └── payment-service/
│       └── Dockerfile
├── libs/
│   ├── common/
│   └── proto/
├── lerna.json          # or just "workspaces" in package.json
└── package.json

The key requirement is that each deployable service has its own Dockerfile. Bunnyshell builds Docker images from Dockerfiles -- it doesn't run nx build or turbo run build directly. Your Dockerfile is responsible for installing dependencies, building, and producing a runnable image.

Prerequisites: Monorepo with Docker

Before configuring Bunnyshell, ensure each service in your monorepo has a working Dockerfile. The critical detail for monorepos is the build context -- most services need access to shared packages during the build.

Dockerfile Pattern for Monorepo Services

Here's a typical pattern for a service that depends on shared packages:

Dockerfile
# apps/api/Dockerfile
# Build context must be the REPO ROOT (not apps/api/)
# so we can access packages/shared/

FROM node:20-alpine AS builder
WORKDIR /app

# Copy root package files for workspace resolution
COPY package.json package-lock.json ./
COPY packages/shared/package.json ./packages/shared/
COPY apps/api/package.json ./apps/api/

# Install all dependencies (workspace-aware)
RUN npm ci --workspace=apps/api --workspace=packages/shared

# Copy shared packages first (they're dependencies)
COPY packages/shared/ ./packages/shared/

# Copy the service source
COPY apps/api/ ./apps/api/

# Build shared packages, then the service
RUN npm run build --workspace=packages/shared
RUN npm run build --workspace=apps/api

# Production stage
FROM node:20-alpine AS production
WORKDIR /app

COPY --from=builder /app/apps/api/dist ./dist
# npm workspaces hoist dependencies to the ROOT node_modules and symlink
# workspace packages there, so copy both from the builder
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/packages ./packages
COPY --from=builder /app/apps/api/package.json ./

EXPOSE 4000
CMD ["node", "dist/main.js"]
And the frontend service, which also pulls in ui-components:

Dockerfile
# apps/frontend/Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app

COPY package.json package-lock.json ./
COPY packages/shared/package.json ./packages/shared/
COPY packages/ui-components/package.json ./packages/ui-components/
COPY apps/frontend/package.json ./apps/frontend/

RUN npm ci --workspace=apps/frontend \
           --workspace=packages/shared \
           --workspace=packages/ui-components

COPY packages/shared/ ./packages/shared/
COPY packages/ui-components/ ./packages/ui-components/
COPY apps/frontend/ ./apps/frontend/

RUN npm run build --workspace=packages/shared
RUN npm run build --workspace=packages/ui-components
RUN npm run build --workspace=apps/frontend

# Serve with nginx (nginx.conf should listen on port 3000)
FROM nginx:1.25-alpine
COPY --from=builder /app/apps/frontend/dist /usr/share/nginx/html
COPY apps/frontend/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 3000
CMD ["nginx", "-g", "daemon off;"]

The build context for monorepo Dockerfiles is almost always the repository root, not the service directory. This is because the Dockerfile needs COPY access to packages/shared/ and other workspace dependencies. Set build.context: . (the repo root) in your Bunnyshell config.
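The workspace-aware npm ci flags in these Dockerfiles also assume a root package.json that declares the workspaces. A minimal sketch (names are illustrative):

```
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["apps/*", "packages/*"]
}
```

Without this, npm can't resolve packages/shared as a local dependency and the --workspace flags fail.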

.dockerignore for Monorepos

A good .dockerignore at the repo root keeps build context small:

Text
# .dockerignore
**/node_modules
**/.next
**/dist
**/build
**/.turbo
**/.nx
.git
.github
*.md

Approach A: One bunnyshell.yaml per Service (Independent Deploys)

Best for: Large monorepos where services are loosely coupled and teams want independent deployment cadences.

In this approach, each service has its own bunnyshell.yaml configuration file. You create separate Bunnyshell projects (or environments) for each service. Each service gets its own preview environment lifecycle.

Configuration

Create a bunnyshell.yaml in each service directory:

YAML
# apps/api/bunnyshell.yaml
kind: Environment
name: api-service
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-db-password"]
  JWT_SECRET: SECRET["your-jwt-secret"]

components:
  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/my-monorepo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: apps/api/Dockerfile
      ports:
        - '4000:4000'
      environment:
        NODE_ENV: production
        DATABASE_URL: 'postgresql://appuser:{{ env.vars.DB_PASSWORD }}@db:5432/myapp'
        JWT_SECRET: '{{ env.vars.JWT_SECRET }}'
    dependsOn:
      - db
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 4000

  - kind: Database
    name: db
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: myapp
        POSTGRES_USER: appuser
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

volumes:
  - name: pg-data
    mount:
      component: db
      containerPath: /var/lib/postgresql/data
    size: 1Gi

Pros and Cons

Pros:
  • Each service deploys independently
  • Teams own their own environment config
  • Faster deploys (only one service rebuilds)
  • Simpler configuration per service

Cons:
  • Cross-service changes need multiple PRs
  • Frontend can't easily reference the API's preview URL
  • More Bunnyshell environments to manage
  • Shared database changes require coordination

This approach works well when services communicate via stable APIs and don't change contracts often. If your PR frequently touches both frontend and backend, Approach B is a better fit.

Approach B: Single bunnyshell.yaml for All Services (Coordinated Deploy)

Best for: Most monorepos. All services deploy together from a single PR, with full service discovery between components.

This is the recommended approach for monorepos where a single PR often touches multiple services. One bunnyshell.yaml at the repository root defines all components.

Configuration

YAML
# bunnyshell.yaml (repository root)
kind: Environment
name: monorepo-preview
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-db-password"]
  JWT_SECRET: SECRET["your-jwt-secret"]
  REDIS_URL: 'redis://redis:6379'

components:
  # ── Frontend (React/Next.js) ──
  - kind: Application
    name: frontend
    gitRepo: 'https://github.com/your-org/my-monorepo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: apps/frontend/Dockerfile
      ports:
        - '3000:3000'
      environment:
        NEXT_PUBLIC_API_URL: 'https://{{ components.api.ingress.hosts[0] }}'
        NODE_ENV: production
    dependsOn:
      - api
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 3000

  # ── API (NestJS/Express) ──
  - kind: Application
    name: api
    gitRepo: 'https://github.com/your-org/my-monorepo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: apps/api/Dockerfile
      ports:
        - '4000:4000'
      environment:
        NODE_ENV: production
        DATABASE_URL: 'postgresql://appuser:{{ env.vars.DB_PASSWORD }}@db:5432/myapp'
        REDIS_URL: '{{ env.vars.REDIS_URL }}'
        JWT_SECRET: '{{ env.vars.JWT_SECRET }}'
        CORS_ORIGIN: 'https://{{ components.frontend.ingress.hosts[0] }}'
    dependsOn:
      - db
      - redis
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 4000

  # ── Background Worker ──
  - kind: Service
    name: worker
    gitRepo: 'https://github.com/your-org/my-monorepo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: apps/worker/Dockerfile
      environment:
        NODE_ENV: production
        DATABASE_URL: 'postgresql://appuser:{{ env.vars.DB_PASSWORD }}@db:5432/myapp'
        REDIS_URL: '{{ env.vars.REDIS_URL }}'
    dependsOn:
      - db
      - redis

  # ── PostgreSQL ──
  - kind: Database
    name: db
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: myapp
        POSTGRES_USER: appuser
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

  # ── Redis ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      command: redis-server --maxmemory 128mb --maxmemory-policy allkeys-lru
      ports:
        - '6379:6379'

volumes:
  - name: pg-data
    mount:
      component: db
      containerPath: /var/lib/postgresql/data
    size: 1Gi

Key Configuration Details

Build context is the repo root. Notice that every Application component has gitApplicationPath: / and build.context: .. This gives each Dockerfile access to the entire monorepo, including shared packages:

YAML
gitApplicationPath: /
dockerCompose:
  build:
    context: .                           # repo root
    dockerfile: apps/api/Dockerfile      # path to this service's Dockerfile
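For reference, this maps onto the familiar local invocation -- run from the repo root, with -f selecting the service's Dockerfile and the trailing dot as the context (the image tag here is illustrative):

```
# Run from the repository root; "." is the build context
docker build -f apps/api/Dockerfile -t my-monorepo-api .
```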

Service discovery via interpolation. The frontend finds the API using Bunnyshell's interpolation:

YAML
NEXT_PUBLIC_API_URL: 'https://{{ components.api.ingress.hosts[0] }}'

And the API finds the frontend for CORS:

YAML
CORS_ORIGIN: 'https://{{ components.frontend.ingress.hosts[0] }}'
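On the API side, a minimal sketch of consuming that variable in a Node service (the helper name and the localhost fallback are illustrative; the typical wiring with the cors package is shown commented out):

```javascript
// Builds CORS options from the environment. CORS_ORIGIN is the value
// Bunnyshell interpolates from the frontend component's ingress host.
function buildCorsOptions(env) {
  const origin = env.CORS_ORIGIN || 'http://localhost:3000'; // local-dev fallback
  return { origin, credentials: true };
}

// Typical Express wiring (commented out so the sketch stays dependency-free):
// const cors = require('cors');
// app.use(cors(buildCorsOptions(process.env)));

module.exports = { buildCorsOptions };
```

Because the preview URL changes per environment, reading it from the environment at startup (rather than hardcoding it) is what makes the same image work in every preview.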

dependsOn controls build and deployment order. The frontend depends on the API (to resolve the interpolation), the API depends on the database and Redis:

YAML
# Frontend deploys after API (needs its URL)
dependsOn:
  - api

# API deploys after DB and Redis
dependsOn:
  - db
  - redis

The worker has no hosts: block. It doesn't need a public URL -- it processes background jobs from the Redis queue.

The gitBranch and gitRepository Pattern

All components point to the same gitRepo and gitBranch. When Bunnyshell creates an ephemeral environment for a PR, it automatically swaps gitBranch to the PR's branch for all components. This means a single PR can change frontend, API, and worker code, and all three services will be rebuilt from the PR branch:

YAML
# Primary environment
gitBranch: main

# Ephemeral environment (automatically set by Bunnyshell)
gitBranch: feature/add-payment-flow

You don't configure this manually -- Bunnyshell handles it based on the PR's source branch.

Approach C: Docker Compose Import for Monorepos

Best for: Monorepos that already have a docker-compose.yml for local development.

If your monorepo already has a Docker Compose file that orchestrates all services locally, you can import it into Bunnyshell directly. The import process is identical to Preview Environments with Docker Compose -- Bunnyshell reads the Compose file and generates a bunnyshell.yaml.

Example Monorepo docker-compose.yml

YAML
version: '3.8'

services:
  frontend:
    build:
      context: .
      dockerfile: apps/frontend/Dockerfile
    ports:
      - '3000:3000'
    environment:
      NEXT_PUBLIC_API_URL: 'http://localhost:4000'
    depends_on:
      - api

  api:
    build:
      context: .
      dockerfile: apps/api/Dockerfile
    ports:
      - '4000:4000'
    environment:
      NODE_ENV: development
      DATABASE_URL: 'postgresql://appuser:secret@db:5432/myapp'
      REDIS_URL: 'redis://redis:6379'
    depends_on:
      - db
      - redis

  worker:
    build:
      context: .
      dockerfile: apps/worker/Dockerfile
    environment:
      DATABASE_URL: 'postgresql://appuser:secret@db:5432/myapp'
      REDIS_URL: 'redis://redis:6379'
    depends_on:
      - db
      - redis

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: myapp
      POSTGRES_USER: appuser
      POSTGRES_PASSWORD: secret
    volumes:
      - pg-data:/var/lib/postgresql/data
    ports:
      - '5432:5432'

  redis:
    image: redis:7-alpine
    ports:
      - '6379:6379'

volumes:
  pg-data:

Import Steps

  1. Create a Project and Environment in Bunnyshell
  2. Click Define environment and select Import from Docker Compose
  3. Point to your repo, branch, and the path to docker-compose.yml
  4. Review the detected services and click Import
  5. Adjust the generated configuration:
    • Replace localhost URLs with {{ components.X.ingress.hosts[0] }}
    • Move secrets to SECRET["..."] syntax
    • Set NODE_ENV: production
    • Remove any development-only bind mounts

The result will be essentially the same as Approach B's configuration.

Handling Shared Libraries and Build Dependencies

The biggest challenge with monorepo Docker builds is shared packages. Here are the patterns that work:

Pattern 1: Copy Shared Packages in Dockerfile

The most straightforward approach -- your Dockerfile explicitly copies shared packages:

Dockerfile
# apps/api/Dockerfile (build context = repo root)
FROM node:20-alpine AS builder
WORKDIR /app

# Workspace-aware dependency installation
COPY package.json package-lock.json ./
COPY packages/shared/package.json ./packages/shared/
COPY apps/api/package.json ./apps/api/
RUN npm ci

# Copy shared packages
COPY packages/shared/ ./packages/shared/
RUN npm run build --workspace=packages/shared

# Copy and build the service
COPY apps/api/ ./apps/api/
RUN npm run build --workspace=apps/api

# Production image
FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/apps/api/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
# Workspace packages are symlinked from node_modules, so ship their built output too
COPY --from=builder /app/packages ./packages
COPY --from=builder /app/apps/api/package.json ./
EXPOSE 4000
CMD ["node", "dist/main.js"]

Pattern 2: Turborepo Prune for Smaller Contexts

Turborepo's prune command generates a minimal subset of the monorepo for a specific service:

Dockerfile
# apps/api/Dockerfile
FROM node:20-alpine AS pruner
WORKDIR /app
RUN npm install -g turbo
COPY . .
RUN turbo prune --scope=api --docker

FROM node:20-alpine AS builder
WORKDIR /app
COPY --from=pruner /app/out/json/ .
COPY --from=pruner /app/out/package-lock.json ./
RUN npm ci
COPY --from=pruner /app/out/full/ .
# turbo is a devDependency of the pruned workspace, so invoke it via npx
RUN npx turbo run build --filter=api

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/apps/api/dist ./dist
# Dependencies are hoisted to the pruned workspace's root node_modules
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/apps/api/package.json ./
EXPOSE 4000
CMD ["node", "dist/main.js"]

turbo prune --docker generates an out/ directory with two subdirectories: json/ (just package.json files for dependency installation) and full/ (actual source code). This enables optimal Docker layer caching -- the dependency layer only rebuilds when package.json files change.
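Concretely, the pruned output looks roughly like this (exact contents depend on your workspace graph):

```
out/
├── json/                  # package.json files only -- drives the npm ci layer
│   ├── package.json
│   ├── apps/api/package.json
│   └── packages/shared/package.json
├── full/                  # pruned source for the api workspace and its deps
└── package-lock.json      # lockfile pruned to the same subset
```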

Pattern 3: Nx Affected for Build Optimization

Nx can determine which projects are affected by changes. While you can't use this directly in Bunnyshell (which builds Docker images per component), you can use it in a custom build script:

Dockerfile
# apps/api/Dockerfile
FROM node:20-alpine AS builder
WORKDIR /app

RUN npm install -g nx

COPY package.json package-lock.json nx.json tsconfig.base.json ./
COPY packages/ ./packages/
COPY apps/api/ ./apps/api/

RUN npm ci
RUN nx build api --prod

FROM node:20-alpine
WORKDIR /app
# Assumes generatePackageJson: true in the build target, so the dist
# output includes a package.json listing only runtime dependencies
COPY --from=builder /app/dist/apps/api ./
RUN npm install --omit=dev
EXPOSE 4000
CMD ["node", "main.js"]

Cache Considerations

Docker layer caching is critical for monorepo builds. Without it, every build reinstalls all dependencies from scratch. Bunnyshell supports build caching -- enable it in the component's build settings.

For Turborepo and Nx, their remote caching features don't apply inside Docker builds by default (Docker doesn't have access to the remote cache). Options:

  1. Use multi-stage builds with explicit COPY ordering -- dependency layer caching handles most cases
  2. Mount a BuildKit cache -- for npm/yarn/pnpm caches:
Dockerfile
RUN --mount=type=cache,target=/root/.npm npm ci
  3. Pre-build shared packages as published npm packages -- If shared packages are stable, publish them to a private npm registry and install them as regular dependencies. This removes them from the Docker build context entirely.
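A fuller sketch of the BuildKit cache mount above (requires BuildKit; the syntax directive on the first line enables the --mount flag):

```
# syntax=docker/dockerfile:1
FROM node:20-alpine
WORKDIR /app
COPY package.json package-lock.json ./
# The npm cache lives in a BuildKit cache mount that persists across
# builds, so unchanged packages are restored without hitting the network
RUN --mount=type=cache,target=/root/.npm npm ci
```

Note the cache mount exists only during the RUN step; it never ends up in the final image.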

Selective Builds (Only Rebuild Changed Services)

By default, Bunnyshell rebuilds all components when a preview environment is created or updated. For large monorepos, this can be slow if only one service changed.

Component-Level Git Path Filtering

Bunnyshell can be configured to only rebuild a component when files in its gitApplicationPath change. However, for monorepo components where the build context is the repo root (/), this means any file change triggers a rebuild.

To work around this, you can structure your configuration to use more specific paths:

YAML
- kind: Application
  name: api
  gitRepo: 'https://github.com/your-org/my-monorepo.git'
  gitBranch: main
  gitApplicationPath: /apps/api
  dockerCompose:
    build:
      context: .
      dockerfile: apps/api/Dockerfile

Setting gitApplicationPath: /apps/api tells Bunnyshell to watch only the apps/api/ directory for changes. The build context (context: .) is still the repo root, so the Dockerfile can access shared packages.

If gitApplicationPath is set to /apps/api but the Dockerfile copies from packages/shared/, changes to packages/shared/ won't trigger a rebuild of the API component. You need to account for shared dependency paths. For most teams, rebuilding all services on every push is simpler and more reliable than managing path filters.
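If you do want shared-dependency-aware filtering, a small CI helper can make the decision from the PR's changed paths. A sketch, assuming the api component depends only on apps/api/ and packages/shared/ (the function name and path list are illustrative):

```shell
# Returns success (exit 0) if any changed path belongs to the api
# component or a shared package it depends on.
# $1: newline-separated changed paths, e.g. from
#     git diff --name-only origin/main...HEAD
needs_rebuild() {
  echo "$1" | grep -qE '^(apps/api/|packages/shared/)'
}

# Example wiring: redeploy the component only when its paths changed
# if needs_rebuild "$(git diff --name-only origin/main...HEAD)"; then
#   bns components deploy --id API_COMPONENT_ID
# fi
```

The path list must be kept in sync with the Dockerfile's COPY statements by hand -- which is exactly the maintenance cost that makes rebuild-everything the simpler default.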

Manual Selective Rebuild via CLI

If you want fine-grained control, use the Bunnyshell CLI to redeploy specific components:

Bash
# List components in an environment
bns components list --environment ENV_ID --output json | jq '._embedded.item[] | {id, name}'

# Redeploy only the API component
bns components deploy --id API_COMPONENT_ID

# Redeploy frontend and API (skip worker, db, redis)
bns components deploy --id FRONTEND_COMPONENT_ID
bns components deploy --id API_COMPONENT_ID

Enabling Preview Environments for Monorepos

The process is identical regardless of which approach you chose:

  1. Deploy your primary environment at least once (status must be Running or Stopped)
  2. Go to Settings in your environment
  3. Toggle "Create ephemeral environments on pull request" to ON
  4. Toggle "Destroy environment after merge or close pull request" to ON
  5. Select the Kubernetes cluster

When a developer opens a PR that touches any file in the monorepo:

  • Bunnyshell creates a full copy of the environment
  • All components switch to the PR's branch
  • All services are built and deployed
  • A comment with endpoint URLs appears on the PR
  • Merging or closing the PR destroys the environment

For Approach A (per-service configs), you need to enable ephemeral environments on each service's Bunnyshell environment separately. For Approaches B and C, a single toggle covers the entire monorepo stack.

Post-Deploy Scripts for Monorepos

After a preview environment deploys, you may need to run migrations or seed data:

Bash
# Run API migrations
bns exec API_COMPONENT_ID -- npx prisma migrate deploy

# Seed the database
bns exec API_COMPONENT_ID -- node scripts/seed.js

# Verify all services are healthy
bns components list --environment ENV_ID --output json | \
  jq '._embedded.item[] | {name, status: .operationStatus}'

For automated post-deploy scripts, add them to your Dockerfile's entrypoint:

Bash
#!/bin/sh
# apps/api/entrypoint.sh
set -e
echo "Running database migrations..."
npx prisma migrate deploy
echo "Starting server..."
exec node dist/main.js
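Then wire the script into the image's production stage, replacing the direct node CMD (paths assume the script lives at apps/api/entrypoint.sh and a repo-root build context):

```
# In the production stage of apps/api/Dockerfile
COPY apps/api/entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
CMD ["./entrypoint.sh"]
```

Using exec in the script keeps node as PID 1, so Kubernetes can deliver shutdown signals to the server rather than to the shell wrapper.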

Troubleshooting

  • Build fails: COPY packages/shared/ not found -- The build context is set to the service directory instead of the repo root. Set build.context: . and gitApplicationPath: / in the Bunnyshell config.
  • Shared package not built before service -- Your Dockerfile must build shared packages before the service. Use RUN npm run build --workspace=packages/shared before building the app.
  • All services rebuild on every push -- Set gitApplicationPath to the specific service directory (e.g., /apps/api). Note: shared package changes won't trigger rebuilds -- you may need to rebuild manually.
  • Frontend can't reach API (CORS error) -- Update the API's CORS configuration to allow the Bunnyshell-generated frontend URL: CORS_ORIGIN: 'https://{{ components.frontend.ingress.hosts[0] }}'
  • Environment variables not interpolated -- Ensure you're using {{ }} syntax (double curly braces) and that referenced components exist in the same environment.
  • Docker build context too large (slow uploads) -- Add a .dockerignore at the repo root. Exclude node_modules/, .next/, dist/, .turbo/, .nx/, and .git/.
  • Turborepo/Nx cache not working in Docker -- Remote caching requires network access and auth tokens inside Docker. Use --mount=type=secret to pass cache tokens, or rely on Docker layer caching instead.
  • Worker processes jobs meant for another environment -- Ensure each environment uses its own Redis instance (Bunnyshell isolates components per environment by default). Don't share external Redis clusters across preview environments.
  • Database migrations conflict between services -- If multiple services run migrations on the same database, use dependsOn to enforce ordering, or consolidate migrations in a single service's entrypoint.
  • Port conflicts in preview environments -- Each component runs in its own Kubernetes pod, so components can't conflict on ports. Only containers sharing a pod (e.g., sidecars) must use different ports.
  • Build succeeds locally but fails in Bunnyshell -- Local builds may rely on stale cached layers. Run docker build --no-cache locally to verify, and check that all required files are present in the context (not excluded by .dockerignore).

What's Next?

  • Add a database migration service -- Create a Kubernetes Job component that runs migrations before the API starts, using dependsOn for ordering
  • Set up remote development -- Use bns remote-development up --component API_COMPONENT_ID to sync local code changes to a running preview environment
  • Implement E2E tests -- Run Cypress or Playwright tests against preview environment URLs in your CI pipeline
  • Add monitoring -- Include a Grafana + Prometheus stack in your bunnyshell.yaml for observability in preview environments
  • Read about Docker Compose import -- If you have an existing docker-compose.yml, see Preview Environments with Docker Compose
  • Explore framework-specific guides -- Laravel, Django, NestJS, Rails

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.