Preview Environments for Spring Boot: Automated Per-PR Deployments with Bunnyshell
Guide · March 20, 2026 · 12 min read


Why Preview Environments for Spring Boot?

Every Spring Boot team has been here: a PR looks clean, unit and integration tests pass in CI, but when it touches staging — something breaks. Maybe a Flyway migration conflicts with another branch, or the new Redis-backed session logic behaves differently against the real cache than against the embedded test instance.

Preview environments solve this. Every pull request gets its own isolated deployment — Spring Boot application, PostgreSQL database, Redis, the works — running in Kubernetes with production-like configuration. Reviewers click a link and test the actual running service, not just the diff.

With Bunnyshell, you get:

  • Automatic deployment — A new environment spins up for every PR
  • Production parity — Same Docker images, same database engine, same infrastructure
  • Isolation — Each PR environment is fully independent, no shared staging conflicts
  • Automatic cleanup — Environments are destroyed when the PR is merged or closed

Choose Your Approach

Bunnyshell supports three ways to set up preview environments for Spring Boot. Pick the one that fits your workflow:

  • Approach A: Bunnyshell UI — Best for teams that want the fastest setup with zero pipeline maintenance. Complexity: easiest. CI/CD maintenance: none — Bunnyshell manages webhooks automatically
  • Approach B: Docker Compose Import — Best for teams already using docker-compose.yml for local development. Complexity: easy. CI/CD maintenance: none — the import converts to Bunnyshell config automatically
  • Approach C: Helm Charts — Best for teams with existing Helm infrastructure or complex K8s needs. Complexity: advanced. CI/CD maintenance: optional — can use the CLI or the Bunnyshell UI

All three approaches end the same way: a toggle in Bunnyshell Settings that enables automatic preview environments for every PR. No GitHub Actions, no GitLab CI pipelines to maintain — Bunnyshell adds webhooks to your Git provider and listens for PR events.

Prerequisites: Prepare Your Spring Boot App

Regardless of which approach you choose, your Spring Boot app needs two things: a multi-stage Dockerfile and the right application configuration.

1. Create a Production-Ready Multi-Stage Dockerfile

A multi-stage build keeps the final image lean — the Maven/Gradle build layer stays separate from the JRE runtime layer:

Dockerfile
# ── Stage 1: Build ──
FROM eclipse-temurin:21-jdk AS build

WORKDIR /workspace

# Cache dependencies before copying source
COPY mvnw pom.xml ./
COPY .mvn .mvn
RUN ./mvnw dependency:go-offline -B

# Copy source and build
COPY src ./src
RUN ./mvnw package -DskipTests -B

# ── Stage 2: Runtime ──
FROM eclipse-temurin:21-jre AS runtime

WORKDIR /app

# Create non-root user
RUN addgroup --system appgroup && adduser --system --ingroup appgroup appuser

# Copy the built JAR
COPY --from=build /workspace/target/*.jar app.jar

USER appuser

EXPOSE 8080

ENTRYPOINT ["java", "-jar", "app.jar"]

For Gradle projects, replace the Maven commands:

Dockerfile
# ── Stage 1: Build (Gradle) ──
FROM eclipse-temurin:21-jdk AS build

WORKDIR /workspace

COPY gradlew build.gradle settings.gradle ./
COPY gradle gradle
RUN ./gradlew dependencies --no-daemon

COPY src ./src
RUN ./gradlew bootJar -x test --no-daemon

Eclipse Temurin (the successor to AdoptOpenJDK) is a widely recommended OpenJDK distribution for containerized workloads. JDK 21 is a long-term support (LTS) release. If your project requires JDK 17, swap eclipse-temurin:21-jdk and eclipse-temurin:21-jre for the corresponding :17 tags.

2. Configure Spring Boot for Kubernetes

Spring Boot needs a few settings to work correctly behind the Kubernetes ingress (which terminates TLS and forwards headers):

Properties
# src/main/resources/application.properties

# ── Server ──
server.port=8080

# Trust X-Forwarded-Proto / X-Forwarded-Host from the ingress
# Required so Spring correctly reconstructs HTTPS redirect URLs
server.forward-headers-strategy=framework

# ── Database (overridden by environment variables at runtime) ──
spring.datasource.url=${SPRING_DATASOURCE_URL:jdbc:postgresql://localhost:5432/springdb}
spring.datasource.username=${SPRING_DATASOURCE_USERNAME:springuser}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD:springpassword}
spring.datasource.driver-class-name=org.postgresql.Driver

# ── JPA ──
spring.jpa.hibernate.ddl-auto=validate
spring.jpa.open-in-view=false

# ── Flyway migrations ──
spring.flyway.enabled=true
spring.flyway.locations=classpath:db/migration

# ── Redis ──
spring.data.redis.url=${SPRING_REDIS_URL:redis://localhost:6379}

# ── Actuator health check ──
management.endpoints.web.exposure.include=health,info
management.endpoint.health.show-details=when_authorized
For the production profile (activated via SPRING_PROFILES_ACTIVE=production), override in application-production.properties:

Properties
# src/main/resources/application-production.properties
spring.jpa.show-sql=false
spring.flyway.out-of-order=false
logging.level.root=WARN
logging.level.com.yourcompany=INFO

server.forward-headers-strategy=framework is the Spring Boot equivalent of Django's SECURE_PROXY_SSL_HEADER. Without it, Spring reconstructs redirect and self-referencing URLs with http:// even when the client connected over HTTPS, causing redirect loops and mixed-content errors.
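What the forwarded-headers strategy does can be sketched in a few lines. This is only an illustration of the rule (prefer X-Forwarded-* headers over the connection's own scheme/host), not Spring's actual filter, and the hostname in the example is made up:

```java
import java.util.Map;

// Sketch of why forward-headers handling matters behind a TLS-terminating
// ingress. Spring's framework strategy does this for real; this just shows
// the reconstruction rule.
public class ForwardedHeadersSketch {
    // Rebuild the external base URL for redirects from proxy headers,
    // falling back to what the app itself sees on the socket.
    public static String baseUrl(Map<String, String> headers,
                                 String connectionScheme, String connectionHost) {
        String scheme = headers.getOrDefault("X-Forwarded-Proto", connectionScheme);
        String host = headers.getOrDefault("X-Forwarded-Host", connectionHost);
        return scheme + "://" + host;
    }

    public static void main(String[] args) {
        // Behind the ingress: TLS terminated upstream, the app sees plain HTTP.
        System.out.println(baseUrl(
                Map.of("X-Forwarded-Proto", "https",
                       "X-Forwarded-Host", "app-example.preview.test"),  // hypothetical host
                "http", "10.0.3.7:8080"));
        // → https://app-example.preview.test
        // Without the headers (or with the strategy off), redirects go out as http://
        System.out.println(baseUrl(Map.of(), "http", "10.0.3.7:8080"));
        // → http://10.0.3.7:8080
    }
}
```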

Spring Boot Deployment Checklist

  • Multi-stage Dockerfile: Maven/Gradle build + JRE runtime
  • App listens on 0.0.0.0:8080 (Spring Boot default — all interfaces)
  • server.forward-headers-strategy=framework set for TLS behind ingress
  • SPRING_DATASOURCE_URL, SPRING_DATASOURCE_USERNAME, SPRING_DATASOURCE_PASSWORD read from env
  • SPRING_PROFILES_ACTIVE=production set in the container environment
  • Flyway migrations configured (spring.flyway.enabled=true)
  • Spring Actuator enabled for health checks (/actuator/health)

Approach A: Bunnyshell UI — Zero CI/CD Maintenance

This is the easiest approach. You connect your repo, paste a YAML config, deploy, and flip a toggle. No CI/CD pipelines to write or maintain — Bunnyshell automatically adds webhooks to your Git provider and creates/destroys preview environments when PRs are opened/closed.

Step 1: Create a Project and Environment

  1. Log into Bunnyshell
  2. Click Create project and name it (e.g., "Spring Boot App")
  3. Inside the project, click Create environment and name it (e.g., "springboot-main")

Step 2: Define the Environment Configuration

Click Configuration in your environment view and paste this bunnyshell.yaml:

YAML
kind: Environment
name: springboot-preview
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-db-password"]
  SPRING_SECRET: SECRET["your-app-secret"]

components:
  # ── Spring Boot Application ──
  - kind: Application
    name: springboot-app
    gitRepo: 'https://github.com/your-org/your-springboot-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        SPRING_PROFILES_ACTIVE: production
        SPRING_DATASOURCE_URL: 'jdbc:postgresql://postgres:5432/springdb'
        SPRING_DATASOURCE_USERNAME: springuser
        SPRING_DATASOURCE_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
        SPRING_REDIS_URL: 'redis://redis:6379'
        APP_SECRET: '{{ env.vars.SPRING_SECRET }}'
      ports:
        - '8080:8080'
    hosts:
      - hostname: 'app-{{ env.base_domain }}'
        path: /
        servicePort: 8080
    dependsOn:
      - postgres
      - redis

  # ── PostgreSQL Database ──
  - kind: Database
    name: postgres
    dockerCompose:
      image: 'postgres:16-alpine'
      environment:
        POSTGRES_DB: springdb
        POSTGRES_USER: springuser
        POSTGRES_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
      ports:
        - '5432:5432'

  # ── Redis Cache / Sessions ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'

volumes:
  - name: postgres-data
    mount:
      component: postgres
      containerPath: /var/lib/postgresql/data
    size: 1Gi

Replace your-org/your-springboot-repo with your actual repository. Save the configuration.

Step 3: Deploy

Click the Deploy button, select your Kubernetes cluster, and click Deploy Environment. Bunnyshell will:

  1. Build your Spring Boot Docker image using the multi-stage Dockerfile
  2. Pull PostgreSQL and Redis images
  3. Deploy everything into an isolated Kubernetes namespace
  4. Generate HTTPS URLs automatically with DNS

Monitor the deployment in the environment detail page. Spring Boot startup typically takes 20–60 seconds, depending on the size of the application context. When the status shows Running, click Endpoints to access your live API.

Flyway migrations run automatically on startup when spring.flyway.enabled=true. The Spring Boot app will apply any pending migrations in src/main/resources/db/migration/ before accepting traffic. This means the first deployment may take slightly longer — this is expected.
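One Flyway detail worth internalizing is that versioned migrations (V<version>__<description>.sql) are applied in numeric version order, not filename order, so V10 runs after V9, not after V1. A small illustrative sketch of that ordering rule (Flyway implements this itself; the class and helper names here are hypothetical):

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Sketch of how versioned Flyway migrations in db/migration are ordered.
// Underscores in the version part stand for dots (V1_2 == version 1.2).
public class MigrationOrderSketch {
    private static final Pattern NAME = Pattern.compile("V(\\d+(?:_\\d+)*)__.+\\.sql");

    static long[] version(String filename) {
        Matcher m = NAME.matcher(filename);
        if (!m.matches())
            throw new IllegalArgumentException("not a versioned migration: " + filename);
        String[] parts = m.group(1).split("_");
        long[] v = new long[parts.length];
        for (int i = 0; i < parts.length; i++) v[i] = Long.parseLong(parts[i]);
        return v;
    }

    // Numeric (not lexicographic) ordering of migration filenames.
    public static List<String> applyOrder(List<String> files) {
        Comparator<long[]> byVersion = (a, b) -> Arrays.compare(a, b);
        return files.stream()
                .sorted(Comparator.comparing(MigrationOrderSketch::version, byVersion))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(applyOrder(List.of(
                "V10__add_sessions.sql", "V2__add_users.sql", "V1__init.sql")));
        // → [V1__init.sql, V2__add_users.sql, V10__add_sessions.sql]
    }
}
```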

Step 4: Verify the Health Check

After deployment, confirm the application is healthy via the Actuator endpoint:

Bash
curl https://app-<your-env-domain>/actuator/health
# Expected response: {"status":"UP"}

Or via Bunnyshell CLI:

Bash
export BUNNYSHELL_TOKEN=your-api-token
bns components list --environment ENV_ID --output json | jq '._embedded.item[] | {id, name}'
bns exec COMPONENT_ID -- curl -s http://localhost:8080/actuator/health
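If you script this check, gate on the Actuator payload rather than just the HTTP status: Actuator reports "status":"UP" when healthy (and DOWN, OUT_OF_SERVICE, or UNKNOWN otherwise). A hedged sketch of such a gate; the JSON handling is deliberately naive (no parser library), and the class name is made up:

```java
// Minimal post-deploy gate: treat the environment as healthy only when
// /actuator/health reports "status":"UP". A real script would use a JSON
// parser; substring matching is good enough for a sketch.
public class HealthGateSketch {
    public static boolean isUp(String healthJson) {
        return healthJson != null
                && healthJson.replace(" ", "").contains("\"status\":\"UP\"");
    }

    public static void main(String[] args) {
        System.out.println(isUp("{\"status\":\"UP\"}"));    // true
        System.out.println(isUp("{\"status\":\"DOWN\"}"));  // false
    }
}
```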

Step 5: Enable Automatic Preview Environments

This is the magic step — no CI/CD configuration needed:

  1. In your environment, go to Settings
  2. Find the Ephemeral environments section
  3. Toggle "Create ephemeral environments on pull request" to ON
  4. Toggle "Destroy environment after merge or close pull request" to ON
  5. Select the Kubernetes cluster for ephemeral environments

That's it. Bunnyshell automatically adds a webhook to your Git provider (GitHub, GitLab, or Bitbucket). From now on:

  • Open a PR → Bunnyshell creates an ephemeral environment with the PR's branch
  • Push to PR → The environment redeploys with the latest changes
  • Bunnyshell posts a comment on the PR with a link to the live deployment
  • Merge or close the PR → The ephemeral environment is automatically destroyed

The primary environment must be in Running or Stopped status before ephemeral environments can be created from it.


Approach B: Docker Compose Import

Already have a docker-compose.yml for local development? Bunnyshell can import it directly and convert it to its environment format. No manual YAML writing required.

Step 1: Add a docker-compose.yml to Your Repo

If you don't already have one, create docker-compose.yml in your repo root:

YAML
version: '3.8'

services:
  springboot-app:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - '8080:8080'
    environment:
      SPRING_PROFILES_ACTIVE: development
      SPRING_DATASOURCE_URL: jdbc:postgresql://postgres:5432/springdb
      SPRING_DATASOURCE_USERNAME: springuser
      SPRING_DATASOURCE_PASSWORD: springpassword
      SPRING_REDIS_URL: redis://redis:6379
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:16-alpine
    environment:
      POSTGRES_DB: springdb
      POSTGRES_USER: springuser
      POSTGRES_PASSWORD: springpassword
    volumes:
      - postgres-data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine

volumes:
  postgres-data:

Step 2: Import into Bunnyshell

  1. Create a Project and Environment in Bunnyshell (same as Approach A, Step 1)
  2. Click Define environment
  3. Select your Git account and repository
  4. Set the branch (e.g., main) and the path to docker-compose.yml (use / if it's in the root)
  5. Click Continue — Bunnyshell parses and validates your Docker Compose file

Bunnyshell automatically detects:

  • All services (springboot-app, postgres, redis)
  • Exposed ports
  • Build configurations (Dockerfiles)
  • Volumes
  • Environment variables

It converts everything into a bunnyshell.yaml environment definition.

The docker-compose.yml is only read during the initial import. Subsequent changes to the file won't auto-propagate — edit the environment configuration in Bunnyshell instead.

Step 3: Adjust the Configuration

After import, go to Configuration in the environment view and update:

  • Replace hardcoded secrets with SECRET["..."] syntax
  • Switch SPRING_PROFILES_ACTIVE from development to production
  • Update SPRING_DATASOURCE_PASSWORD to use Bunnyshell interpolation:
YAML
SPRING_PROFILES_ACTIVE: production
SPRING_DATASOURCE_PASSWORD: '{{ env.vars.DB_PASSWORD }}'

Step 4: Deploy and Enable Preview Environments

Same as Approach A — click Deploy, then go to Settings and toggle on ephemeral environments.

Best Practices for Docker Compose with Bunnyshell

  • Use separate profiles — Keep application-development.properties for local dev and application-production.properties for Bunnyshell
  • Design for startup resilience — Kubernetes doesn't guarantee depends_on ordering. Spring Boot's connection pool (HikariCP) retries failed connection attempts at startup, but consider adding a wait loop in your entrypoint for slower clusters
  • Use Bunnyshell interpolation for dynamic values like the public URL:
YAML
# Local docker-compose.yml
FRONTEND_CALLBACK_URL: http://localhost:8080

# Bunnyshell environment config (after import)
FRONTEND_CALLBACK_URL: 'https://{{ components.springboot-app.ingress.hosts[0] }}'
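The startup-resilience advice above can be sketched as a pre-flight check that blocks until the database answers TCP on its service port. This is an illustrative entrypoint helper under assumed defaults (the class name, DB_HOST variable, and retry counts are all ours, not Bunnyshell's or Spring's):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

// Entrypoint-style wait loop: retry a TCP connect to the database service
// before starting the app. In Kubernetes, pass the service DNS name
// (e.g. "postgres") via an env var.
public class WaitForDbSketch {
    public static boolean waitFor(String host, int port, int attempts, long delayMillis) {
        for (int i = 0; i < attempts; i++) {
            try (Socket s = new Socket()) {
                s.connect(new InetSocketAddress(host, port), 2000);
                return true;                      // port is accepting connections
            } catch (IOException notYet) {
                try {
                    Thread.sleep(delayMillis);    // not up yet: back off and retry
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    return false;
                }
            }
        }
        return false;
    }

    public static void main(String[] args) {
        String host = System.getenv().getOrDefault("DB_HOST", "postgres");
        if (!waitFor(host, 5432, 30, 2000)) {
            System.err.println("database never became reachable; failing fast");
            System.exit(1);
        }
    }
}
```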

Approach C: Helm Charts

For teams with existing Helm infrastructure or complex Kubernetes requirements (custom ingress, service mesh, advanced scaling). Helm gives you full control over every Kubernetes resource.

Step 1: Create a Helm Chart

Structure your Spring Boot Helm chart in your repo:

Text
helm/springboot/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── deployment.yaml
    ├── service.yaml
    ├── ingress.yaml
    └── configmap.yaml

A minimal values.yaml:

YAML
replicaCount: 1
image:
  repository: ""
  tag: latest
service:
  port: 8080
ingress:
  enabled: true
  className: bns-nginx
  host: ""
env:
  SPRING_PROFILES_ACTIVE: production
  SPRING_DATASOURCE_URL: ""
  SPRING_DATASOURCE_USERNAME: ""
  SPRING_DATASOURCE_PASSWORD: ""
  SPRING_REDIS_URL: ""
livenessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /actuator/health
    port: 8080
  initialDelaySeconds: 45
  periodSeconds: 10

Set initialDelaySeconds generously for Spring Boot — the JVM and application context startup time means the container is not immediately ready to serve traffic. Kubernetes will restart the pod if the liveness probe fails too early.
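A quick way to sanity-check the probe numbers: with the Kubernetes default failureThreshold of 3, a liveness probe gives the JVM roughly initialDelaySeconds + failureThreshold × periodSeconds before the pod is restarted. A back-of-envelope sketch (class name is ours):

```java
// Worst-case time budget before the kubelet restarts a pod whose liveness
// probe never succeeds. Compare against your measured Spring Boot startup time.
public class ProbeBudgetSketch {
    public static int secondsBeforeRestart(int initialDelay, int period, int failureThreshold) {
        return initialDelay + failureThreshold * period;
    }

    public static void main(String[] args) {
        // Values from the values.yaml above: initialDelaySeconds=60, periodSeconds=15,
        // with the Kubernetes default failureThreshold=3.
        System.out.println(secondsBeforeRestart(60, 15, 3) + "s");  // → 105s
    }
}
```

If your app takes 90 seconds to start on a cold cluster, a 105-second budget is cutting it close; raise initialDelaySeconds or use a startup probe.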

Step 2: Define the Bunnyshell Configuration

Create a bunnyshell.yaml using Helm components:

YAML
kind: Environment
name: springboot-helm
type: primary

environmentVariables:
  DB_PASSWORD: SECRET["your-db-password"]
  SPRING_SECRET: SECRET["your-app-secret"]
  POSTGRES_DB: springdb
  POSTGRES_USER: springuser

components:
  # ── Docker Image Build ──
  - kind: DockerImage
    name: springboot-image
    context: /
    dockerfile: Dockerfile
    gitRepo: 'https://github.com/your-org/your-springboot-repo.git'
    gitBranch: main
    gitApplicationPath: /

  # ── PostgreSQL via Helm ──
  - kind: Helm
    name: postgres
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > pg_values.yaml
          global:
            storageClass: bns-network-sc
          auth:
            postgresPassword: {{ env.vars.DB_PASSWORD }}
            database: {{ env.vars.POSTGRES_DB }}
            username: {{ env.vars.POSTGRES_USER }}
        EOF
      - 'helm repo add bitnami https://charts.bitnami.com/bitnami'
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f pg_values.yaml postgres bitnami/postgresql --version 11.9.11'
      - |
        POSTGRES_HOST="postgres-postgresql.{{ env.k8s.namespace }}.svc.cluster.local"
    destroy:
      - 'helm uninstall postgres --namespace {{ env.k8s.namespace }}'
    start:
      - 'kubectl scale --replicas=1 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    stop:
      - 'kubectl scale --replicas=0 --namespace {{ env.k8s.namespace }}
        statefulset/postgres-postgresql'
    exportVariables:
      - POSTGRES_HOST

  # ── Spring Boot App via Helm ──
  - kind: Helm
    name: springboot-app
    runnerImage: 'dtzar/helm-kubectl:3.8.2'
    deploy:
      - |
        cat << EOF > springboot_values.yaml
          replicaCount: 1
          image:
            repository: {{ components.springboot-image.image }}
          service:
            port: 8080
          ingress:
            enabled: true
            className: bns-nginx
            host: app-{{ env.base_domain }}
          env:
            SPRING_PROFILES_ACTIVE: production
            SPRING_DATASOURCE_URL: 'jdbc:postgresql://{{ components.postgres.exported.POSTGRES_HOST }}/{{ env.vars.POSTGRES_DB }}'
            SPRING_DATASOURCE_USERNAME: '{{ env.vars.POSTGRES_USER }}'
            SPRING_DATASOURCE_PASSWORD: '{{ env.vars.DB_PASSWORD }}'
            SPRING_REDIS_URL: 'redis://redis:6379'
            APP_SECRET: '{{ env.vars.SPRING_SECRET }}'
        EOF
      - 'helm upgrade --install --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        -f springboot_values.yaml springboot-{{ env.unique }} ./helm/springboot'
    destroy:
      - 'helm uninstall springboot-{{ env.unique }} --namespace {{ env.k8s.namespace }}'
    start:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=1 springboot-{{ env.unique }} ./helm/springboot'
    stop:
      - 'helm upgrade --namespace {{ env.k8s.namespace }}
        --post-renderer /bns/helpers/helm/bns_post_renderer
        --reuse-values --set replicaCount=0 springboot-{{ env.unique }} ./helm/springboot'
    gitRepo: 'https://github.com/your-org/your-springboot-repo.git'
    gitBranch: main
    gitApplicationPath: /helm/springboot

  # ── Redis ──
  - kind: Service
    name: redis
    dockerCompose:
      image: 'redis:7-alpine'
      ports:
        - '6379:6379'

Always include --post-renderer /bns/helpers/helm/bns_post_renderer in your helm commands. This adds labels so Bunnyshell can track resources, show logs, and manage component lifecycle.

Step 3: Deploy and Enable Preview Environments

Same flow: paste the config in Configuration, hit Deploy, then enable ephemeral environments in Settings.


Enabling Preview Environments (All Approaches)

Regardless of which approach you used, enabling automatic preview environments is the same:

  1. Ensure your primary environment has been deployed at least once (Running or Stopped status)
  2. Go to Settings in your environment
  3. Toggle "Create ephemeral environments on pull request" → ON
  4. Toggle "Destroy environment after merge or close pull request" → ON
  5. Select the target Kubernetes cluster

What happens next:

  • Bunnyshell adds a webhook to your Git provider automatically
  • When a developer opens a PR, Bunnyshell creates an ephemeral environment cloned from the primary, using the PR's branch
  • Bunnyshell posts a comment on the PR with a direct link to the running deployment
  • When the PR is merged or closed, the ephemeral environment is automatically destroyed

No GitHub Actions. No GitLab CI pipelines. No maintenance. It just works.

Optional: CI/CD Integration via CLI

If you prefer to control preview environments from your CI/CD pipeline (e.g., for custom post-deploy steps), you can use the Bunnyshell CLI:

Bash
# Install
brew install bunnyshell/tap/bunnyshell-cli

# Authenticate
export BUNNYSHELL_TOKEN=your-api-token

# Create, deploy, and verify health in one flow
bns environments create --from-path bunnyshell.yaml --name "pr-123" --project PROJECT_ID --k8s CLUSTER_ID
bns environments deploy --id ENV_ID --wait
bns exec COMPONENT_ID -- curl -s http://localhost:8080/actuator/health

Remote Development and Debugging

Bunnyshell makes it easy to develop and debug directly against any environment — primary or ephemeral:

Port Forwarding

Connect your local tools to the remote database or Redis:

Bash
# Forward PostgreSQL to local port 15432
bns port-forward 15432:5432 --component POSTGRES_COMPONENT_ID

# Connect with psql or any DB tool
psql -h localhost -p 15432 -U springuser springdb

# Forward Redis to local port 16379
bns port-forward 16379:6379 --component REDIS_COMPONENT_ID

Execute Commands in the Container

Bash
# Check Actuator health endpoint
bns exec COMPONENT_ID -- curl -s http://localhost:8080/actuator/health

# Check Flyway migration status
# (requires adding flyway to management.endpoints.web.exposure.include)
bns exec COMPONENT_ID -- curl -s http://localhost:8080/actuator/flyway

# Inspect environment variables
bns exec COMPONENT_ID -- env | grep SPRING

# Check JVM memory usage
# (requires adding metrics to management.endpoints.web.exposure.include)
bns exec COMPONENT_ID -- curl -s http://localhost:8080/actuator/metrics/jvm.memory.used

Live Logs

Bash
# Stream logs in real time
bns logs --component COMPONENT_ID -f

# Last 200 lines
bns logs --component COMPONENT_ID --tail 200

# Logs from the last 5 minutes
bns logs --component COMPONENT_ID --since 5m

Live Code Sync

For active development, sync your local code changes to the remote container in real time (most useful with Spring Boot DevTools and hot-reload):

Bash
bns remote-development up --component COMPONENT_ID
# Edit files locally — changes sync automatically to the running container
# When done:
bns remote-development down

For Spring Boot DevTools hot-reload to work in remote development mode, add spring-boot-devtools to your pom.xml (scope runtime) and ensure spring.devtools.restart.enabled=true in your development profile. The JVM will restart when class files change.


Troubleshooting

  • 502 Bad Gateway — Spring Boot isn't serving on port 8080. Confirm server.port=8080 and that the container is healthy. Check logs with bns logs.
  • HTTPS URLs returned as HTTP (redirect loops) — Add server.forward-headers-strategy=framework to application.properties.
  • Flyway migration failed — Check that the SPRING_DATASOURCE_URL host is postgres (the component name), not localhost. Examine the Flyway error in the startup logs.
  • java.net.ConnectException to PostgreSQL — The DB container isn't ready yet. Spring Boot will retry — wait 30s and check the logs again. For persistent failures, verify SPRING_DATASOURCE_URL.
  • Connection refused to Redis — Verify SPRING_REDIS_URL uses redis as the hostname (the Bunnyshell component name).
  • Pod stuck in CrashLoopBackOff — Run bns logs --component ID --tail 100. Common causes: missing env var, OOM (increase the JVM heap), or a failed Flyway migration.
  • OutOfMemoryError: Java heap space — Add JVM flags to your entrypoint: java -Xms256m -Xmx512m -jar app.jar. Adjust based on your workload.
  • Build takes too long (Maven layer cache miss) — Structure your Dockerfile to copy pom.xml and run dependency:go-offline before copying src/. This caches the dependency layer.
  • Service startup order issues — Kubernetes doesn't guarantee depends_on ordering. Spring Boot retries DB connections, but add spring.datasource.hikari.connection-timeout=30000 and spring.datasource.hikari.initialization-fail-timeout=60000 for resilience.
  • 522 Connection timed out — The cluster may be behind a firewall. Verify that Cloudflare IPs are whitelisted on the ingress controller.

What's Next?

  • Add async workers — Deploy a separate consumer component using Spring's @Async or Spring Batch for background job processing
  • Add message queuing — Include a RabbitMQ or Kafka service component for event-driven workflows
  • Monitor with Actuator + Prometheus — Expose /actuator/prometheus and add a Prometheus/Grafana stack as additional components
  • Add distributed tracing — Pass OTEL_EXPORTER_OTLP_ENDPOINT as an environment variable and include spring-boot-starter-actuator with Micrometer Tracing
  • Tune JVM for containers — Add -XX:+UseContainerSupport and -XX:MaxRAMPercentage=75.0 to your ENTRYPOINT for correct memory limit detection in Kubernetes
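To verify that the JVM actually sees the container's limits rather than the node's, a tiny check like the following can be run inside the pod (the class name is ours; the expected heap figure depends entirely on your pod's memory limit and flags):

```java
// Sanity check for container-aware JVM sizing. Run inside the container and
// compare the output against the pod's cpu/memory limits.
public class ContainerLimitsSketch {
    public static long maxHeapMiB() {
        return Runtime.getRuntime().maxMemory() / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("processors: " + Runtime.getRuntime().availableProcessors());
        System.out.println("max heap:   " + maxHeapMiB() + " MiB");
        // With -XX:MaxRAMPercentage=75.0 and a 512Mi pod limit, expect roughly 384 MiB.
    }
}
```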

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.