Preview Environments for Serverless: Lambda & Functions with Bunnyshell
Guide · March 20, 2026 · 12 min read


Why Preview Environments for Serverless?

Serverless architectures promise simplicity: write a function, deploy it, never think about servers. But that simplicity collapses the moment two developers need to test different versions of the same Lambda function. AWS does not give you per-PR environments out of the box. You get dev, staging, prod -- and everyone shares them.

The result is familiar: your teammate's broken DynamoDB schema poisons the shared dev stage. Your API Gateway routes conflict with someone else's experimental endpoint. You cannot demo your feature because another deployment overwrote yours ten minutes ago.

Preview environments solve this. Every pull request gets its own isolated deployment -- your Lambda functions, your API Gateway routes, your DynamoDB tables, your S3 buckets -- running in containers with production-like configuration. Reviewers click a link and see the actual running API, not just the diff.

With Bunnyshell, you get:

  • Automatic deployment -- A new environment spins up for every PR
  • AWS service emulation -- LocalStack provides DynamoDB, S3, SQS, and more without AWS costs
  • Isolation -- Each PR environment is fully independent, no shared stage conflicts
  • Automatic cleanup -- Environments are destroyed when the PR is merged or closed

The Serverless Preview Challenge

Traditional serverless platforms were not designed for preview environments. Here is why:

| Challenge | Why it's hard |
| --- | --- |
| No native "environments" | Lambda uses aliases and versions, not isolated stacks. Creating a full copy of your serverless app per PR requires complex CloudFormation/SAM orchestration. |
| Shared AWS resources | DynamoDB tables, S3 buckets, SQS queues, and API Gateway stages are shared. Isolating them per PR means unique naming conventions and cleanup logic. |
| Cost at scale | Deploying real AWS resources for every PR adds up. 10 open PRs means 10 DynamoDB tables, 10 API Gateway deployments, 10 sets of IAM roles. |
| Slow deployments | CloudFormation stack creation takes minutes. Multiply that by every PR push. |
| Cleanup complexity | Forgetting to delete resources after a PR merge leads to orphaned infrastructure and unexpected bills. |

Bunnyshell's approach flips this: instead of deploying to AWS per PR, you containerize your serverless functions and run them in Kubernetes with LocalStack providing the AWS services. Fast, cheap, isolated, and automatic.

Bunnyshell's Approach: Containerized Serverless

The core idea is straightforward:

  1. Package your Lambda functions into Docker containers using the AWS Lambda Runtime Interface Emulator (RIE) or a simple Express adapter
  2. Use LocalStack to emulate DynamoDB, S3, SQS, and other AWS services
  3. Deploy everything in Bunnyshell as a standard environment with automatic PR-based lifecycle

This gives you sub-minute deployments, zero AWS costs for preview environments, and full isolation between PRs.

Prerequisites

Before setting up preview environments, ensure you have:

  • A Bunnyshell account with a connected Kubernetes cluster
  • A Git repository (GitHub, GitLab, or Bitbucket) connected to Bunnyshell
  • Your serverless functions written in Node.js (the examples use Node.js 20, but the pattern works for Python, Go, and Java)
  • Docker installed locally for testing container builds
  • Basic familiarity with Lambda function handlers

Approach A: Containerize Your Functions (Docker + Bunnyshell)

This is the recommended approach for most teams. You package each Lambda function (or group of functions) into a Docker container using the official AWS Lambda base image, which includes the Lambda Runtime Interface Emulator.

Step 1: Create a Dockerfile for Your Lambda Function

The AWS Lambda base images include the Runtime Interface Emulator (RIE), which lets your function run locally the same way it runs in AWS.

```dockerfile
# Use the official AWS Lambda Node.js 20 base image
FROM public.ecr.aws/lambda/nodejs:20

# Copy function code
COPY package*.json ${LAMBDA_TASK_ROOT}/
RUN npm ci --omit=dev

COPY src/ ${LAMBDA_TASK_ROOT}/src/

# Set the handler
CMD ["src/handlers/api.handler"]
```

Your Lambda handler stays exactly the same as in production:

```javascript
// src/handlers/api.js
import { listUsers, createUser } from '../lib/dynamodb.js';

export const handler = async (event, context) => {
  const { httpMethod, path, body, queryStringParameters } = event;

  // Route handling
  if (path === '/users' && httpMethod === 'GET') {
    const users = await listUsers();
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(users),
    };
  }

  if (path === '/users' && httpMethod === 'POST') {
    const userData = JSON.parse(body);
    const user = await createUser(userData);
    return {
      statusCode: 201,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(user),
    };
  }

  return {
    statusCode: 404,
    body: JSON.stringify({ message: 'Not Found' }),
  };
};
```

When you run the container, the Runtime Interface Emulator listens on port 8080 and accepts Lambda invocation payloads at the /2015-03-31/functions/function/invocations endpoint.
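If you keep the RIE-based image, you can exercise the function without any HTTP adapter by posting a raw Lambda event to that endpoint. A minimal Node sketch, assuming the container's port 8080 is published on localhost:9000 (a hypothetical mapping):

```javascript
// Build (but do not send) the request for invoking a Lambda RIE container.
// The localhost:9000 mapping is an assumption; adjust to your `docker run -p` flags.
function buildRieInvocation(event, host = 'localhost:9000') {
  return {
    url: `http://${host}/2015-03-31/functions/function/invocations`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(event),
    },
  };
}

// A payload shaped like the events the handler expects
const { url, options } = buildRieInvocation({
  httpMethod: 'GET',
  path: '/users',
  queryStringParameters: null,
  body: null,
});

console.log(url);
// Send it with: const res = await fetch(url, options);
```

With the container running, the same request can be issued from curl or any HTTP client; the response body is the handler's return value as JSON.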

Step 2: Add an API Gateway Adapter

The Lambda RIE expects raw Lambda event payloads, not HTTP requests. You need a lightweight adapter that converts HTTP to Lambda events. Create an Express-based adapter:

```javascript
// src/gateway/server.js
import express from 'express';
import { handler } from '../handlers/api.js';

const app = express();
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Convert HTTP requests to Lambda-style events
app.all('*', async (req, res) => {
  const event = {
    httpMethod: req.method,
    path: req.path,
    headers: req.headers,
    queryStringParameters: req.query,
    body: req.body ? JSON.stringify(req.body) : null,
    pathParameters: null,
    requestContext: {
      requestId: `local-${Date.now()}`,
      stage: 'preview',
    },
  };

  try {
    const result = await handler(event, {});
    const statusCode = result.statusCode || 200;
    const headers = result.headers || {};
    const body = result.body || '';

    Object.entries(headers).forEach(([key, value]) => {
      res.setHeader(key, value);
    });

    res.status(statusCode).send(body);
  } catch (error) {
    console.error('Handler error:', error);
    res.status(500).json({ message: 'Internal Server Error' });
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`API Gateway adapter running on port ${PORT}`);
});
```

Then update your Dockerfile to use the adapter instead of the Lambda RIE:

```dockerfile
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev

COPY src/ ./src/

EXPOSE 3000

CMD ["node", "src/gateway/server.js"]
```

You now have two Dockerfile options: the Lambda base image (closer to production behavior) or the Express adapter (simpler, faster startup, standard HTTP). For preview environments, the Express adapter is often more practical because reviewers can interact with it directly via a browser.

Step 3: Create the Bunnyshell Configuration

Create bunnyshell.yaml in your repository root:

```yaml
kind: Environment
name: serverless-preview
type: primary

environmentVariables:
  AWS_ACCESS_KEY_ID: test
  AWS_SECRET_ACCESS_KEY: test
  AWS_DEFAULT_REGION: us-east-1
  DYNAMODB_ENDPOINT: 'http://localstack:4566'
  S3_ENDPOINT: 'http://localstack:4566'

components:
  # -- API Function (Express adapter) --
  - kind: Application
    name: api-function
    gitRepo: 'https://github.com/your-org/your-serverless-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        PORT: '3000'
        NODE_ENV: production
        AWS_ACCESS_KEY_ID: '{{ env.vars.AWS_ACCESS_KEY_ID }}'
        AWS_SECRET_ACCESS_KEY: '{{ env.vars.AWS_SECRET_ACCESS_KEY }}'
        AWS_DEFAULT_REGION: '{{ env.vars.AWS_DEFAULT_REGION }}'
        DYNAMODB_ENDPOINT: '{{ env.vars.DYNAMODB_ENDPOINT }}'
        S3_ENDPOINT: '{{ env.vars.S3_ENDPOINT }}'
      ports:
        - '3000:3000'
    dependsOn:
      - localstack
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 3000

  # -- LocalStack (AWS Service Emulation) --
  - kind: Service
    name: localstack
    dockerCompose:
      image: 'localstack/localstack:3.0'
      environment:
        SERVICES: dynamodb,s3,sqs,sns
        DEFAULT_REGION: us-east-1
        DOCKER_HOST: 'unix:///var/run/docker.sock'
      ports:
        - '4566:4566'

  # -- LocalStack Init (create tables and buckets) --
  - kind: InitContainer
    name: localstack-init
    dockerCompose:
      image: 'amazon/aws-cli:2.15.0'
      environment:
        AWS_ACCESS_KEY_ID: test
        AWS_SECRET_ACCESS_KEY: test
        AWS_DEFAULT_REGION: us-east-1
      command: >
        sh -c "
          sleep 5 &&
          aws --endpoint-url=http://localstack:4566 dynamodb create-table
            --table-name Users
            --attribute-definitions AttributeName=id,AttributeType=S
            --key-schema AttributeName=id,KeyType=HASH
            --billing-mode PAY_PER_REQUEST &&
          aws --endpoint-url=http://localstack:4566 s3 mb s3://uploads &&
          echo 'LocalStack initialized successfully'
        "
    dependsOn:
      - localstack
```

Replace your-org/your-serverless-repo with your actual repository URL. The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set to test because LocalStack does not validate credentials -- this is expected and safe for preview environments.

Step 4: Wire Up DynamoDB in Your Function

Update your function code to point at LocalStack in preview environments:

```javascript
// src/lib/dynamodb.js
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, ScanCommand, PutCommand } from '@aws-sdk/lib-dynamodb';

const client = new DynamoDBClient({
  region: process.env.AWS_DEFAULT_REGION || 'us-east-1',
  ...(process.env.DYNAMODB_ENDPOINT && {
    endpoint: process.env.DYNAMODB_ENDPOINT,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID || 'test',
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY || 'test',
    },
  }),
});

const docClient = DynamoDBDocumentClient.from(client);
const TABLE_NAME = process.env.USERS_TABLE || 'Users';

export async function listUsers() {
  const result = await docClient.send(new ScanCommand({
    TableName: TABLE_NAME,
  }));
  return result.Items || [];
}

export async function createUser(userData) {
  const user = {
    id: crypto.randomUUID(),
    ...userData,
    createdAt: new Date().toISOString(),
  };

  await docClient.send(new PutCommand({
    TableName: TABLE_NAME,
    Item: user,
  }));

  return user;
}
```

This pattern works in both production and preview environments. In production, DYNAMODB_ENDPOINT is not set, so the SDK uses the default AWS endpoint. In preview environments, it points to LocalStack. No code branches, no feature flags -- just environment variables.
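If several SDK clients need the same switch, it can be factored into one helper. A sketch under the same env-var convention (the helper name is ours, not part of the SDK):

```javascript
// Build an AWS SDK client config: default endpoints in production,
// an explicit endpoint (e.g. LocalStack) when the given *_ENDPOINT var is set.
function awsClientConfig(endpointVar, env = process.env) {
  const config = { region: env.AWS_DEFAULT_REGION || 'us-east-1' };
  if (env[endpointVar]) {
    config.endpoint = env[endpointVar];
    config.credentials = {
      accessKeyId: env.AWS_ACCESS_KEY_ID || 'test',
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY || 'test',
    };
  }
  return config;
}

// Preview: endpoint var set, so the config targets LocalStack
console.log(awsClientConfig('DYNAMODB_ENDPOINT', { DYNAMODB_ENDPOINT: 'http://localstack:4566' }));
// Production: no endpoint var, so the SDK falls back to real AWS endpoints
console.log(awsClientConfig('DYNAMODB_ENDPOINT', {}));
```

You would then construct clients as new DynamoDBClient(awsClientConfig('DYNAMODB_ENDPOINT')) and the equivalent for S3 or SQS.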

Step 5: Deploy and Test

Click Deploy in Bunnyshell. Once the environment is running:

  1. Click Endpoints to get the API URL
  2. Test your function:
```bash
# List users (empty initially)
curl https://api-your-env.bunnyshell.dev/users

# Create a user
curl -X POST https://api-your-env.bunnyshell.dev/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Jane Doe", "email": "jane@example.com"}'

# List users again
curl https://api-your-env.bunnyshell.dev/users
```
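For repeatable checks, the same curl flow can be captured as a tiny validator on the parsed response body. A sketch whose field checks mirror the user objects the handler returns (the function name is ours):

```javascript
// Check that a parsed POST /users response has the shape the handler returns.
function isValidUser(obj) {
  return (
    obj !== null &&
    typeof obj === 'object' &&
    typeof obj.id === 'string' &&
    typeof obj.createdAt === 'string' &&
    !Number.isNaN(Date.parse(obj.createdAt))
  );
}

const sample = {
  id: 'abc-123',
  name: 'Jane Doe',
  email: 'jane@example.com',
  createdAt: new Date().toISOString(),
};

console.log(isValidUser(sample));        // a well-formed user passes
console.log(isValidUser({ name: 'x' })); // missing id/createdAt fails
```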

Approach B: Serverless Framework + Bunnyshell Terraform

This approach suits teams using the Serverless Framework who want real AWS resources per PR. It uses a Terraform component in Bunnyshell to provision actual Lambda functions, API Gateway stages, and DynamoDB tables in AWS.

This approach creates real AWS resources for each PR, which incurs AWS costs. Use Approach A (containerized) for cost-free preview environments, and reserve this approach for pre-production validation where you need real AWS behavior.

Step 1: Structure Your Serverless Project

```text
serverless-app/
├── serverless.yml
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── src/
│   └── handlers/
│       ├── api.js
│       └── worker.js
└── bunnyshell.yaml
```

Step 2: Parameterize Your Serverless Config

```yaml
# serverless.yml
service: my-api  # the stage (e.g. pr-123) is appended to the stack name by the Framework

custom:
  stage: ${opt:stage, 'dev'}

provider:
  name: aws
  runtime: nodejs20.x
  region: us-east-1
  environment:
    USERS_TABLE: Users-${self:custom.stage}
    UPLOADS_BUCKET: uploads-${self:custom.stage}

functions:
  api:
    handler: src/handlers/api.handler
    events:
      - http:
          path: /{proxy+}
          method: ANY
          cors: true

resources:
  Resources:
    UsersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: Users-${self:custom.stage}
        AttributeDefinitions:
          - AttributeName: id
            AttributeType: S
        KeySchema:
          - AttributeName: id
            KeyType: HASH
        BillingMode: PAY_PER_REQUEST
```

Step 3: Create the Terraform Component

```hcl
# terraform/main.tf
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

variable "stage" {
  type        = string
  description = "Environment stage name (e.g., pr-123)"
}

variable "aws_region" {
  type    = string
  default = "us-east-1"
}

provider "aws" {
  region = var.aws_region
}

# Deploy using Serverless Framework
resource "null_resource" "serverless_deploy" {
  triggers = {
    stage = var.stage
  }

  provisioner "local-exec" {
    command = "npx serverless deploy --stage ${var.stage} --region ${var.aws_region}"
  }

  provisioner "local-exec" {
    when    = destroy
    command = "npx serverless remove --stage ${self.triggers.stage} --region us-east-1"
  }
}

output "api_url" {
  value = "https://placeholder.execute-api.${var.aws_region}.amazonaws.com/${var.stage}"
}
```

Step 4: Bunnyshell Configuration with Terraform

```yaml
kind: Environment
name: serverless-terraform
type: primary

environmentVariables:
  AWS_ACCESS_KEY_ID: SECRET["your-aws-key"]
  AWS_SECRET_ACCESS_KEY: SECRET["your-aws-secret"]
  STAGE_NAME: 'pr-{{ env.unique }}'

components:
  - kind: Terraform
    name: serverless-stack
    gitRepo: 'https://github.com/your-org/your-serverless-repo.git'
    gitBranch: main
    gitApplicationPath: /terraform
    runnerImage: 'hashicorp/terraform:1.7'
    deploy:
      - 'cd /work && terraform init'
      - 'cd /work && terraform apply -auto-approve
          -var="stage={{ env.vars.STAGE_NAME }}"
          -var="aws_region=us-east-1"'
      - 'API_URL=$(cd /work && terraform output -raw api_url)'
    destroy:
      - 'cd /work && terraform destroy -auto-approve
          -var="stage={{ env.vars.STAGE_NAME }}"
          -var="aws_region=us-east-1"'
    exportVariables:
      - API_URL
```

Approach C: AWS SAM Local in Containers

For teams using AWS SAM who want to run sam local inside a container for preview environments. SAM Local provides a Lambda-like execution environment without deploying to AWS.

Step 1: Create a SAM Template

```yaml
# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Globals:
  Function:
    Runtime: nodejs20.x
    Timeout: 30
    Environment:
      Variables:
        USERS_TABLE: !Ref UsersTable

Resources:
  ApiFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: src/handlers/api.handler
      Events:
        ApiEvent:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: ANY

  UsersTable:
    Type: AWS::DynamoDB::Table
    Properties:
      AttributeDefinitions:
        - AttributeName: id
          AttributeType: S
      KeySchema:
        - AttributeName: id
          KeyType: HASH
      BillingMode: PAY_PER_REQUEST
```

Step 2: Dockerize SAM Local

```dockerfile
FROM public.ecr.aws/sam/build-nodejs20.x:latest

WORKDIR /app

# Install SAM CLI
RUN pip3 install aws-sam-cli

COPY . .
RUN npm ci --omit=dev

EXPOSE 3000

CMD ["sam", "local", "start-api", \
     "--host", "0.0.0.0", \
     "--port", "3000", \
     "--docker-network", "host", \
     "--warm-containers", "EAGER"]
```

SAM Local requires Docker-in-Docker to run Lambda functions in containers. This adds complexity and may not work in all Kubernetes clusters. For most teams, Approach A (Express adapter) is simpler and more reliable.


API Gateway Patterns in Preview Environments

When you containerize serverless functions, you lose the API Gateway routing layer. Here are three patterns to replace it:

Pattern 1: Single Express Router

Map each Lambda function to an Express route:

```javascript
// src/gateway/router.js
import express from 'express';
import { handler as usersHandler } from '../handlers/users.js';
import { handler as ordersHandler } from '../handlers/orders.js';
import { handler as webhookHandler } from '../handlers/webhook.js';

const app = express();
app.use(express.json());

// Map routes to handlers
const routes = [
  { path: '/users', methods: ['GET', 'POST'], handler: usersHandler },
  { path: '/users/:id', methods: ['GET', 'PUT', 'DELETE'], handler: usersHandler },
  { path: '/orders', methods: ['GET', 'POST'], handler: ordersHandler },
  { path: '/orders/:id', methods: ['GET'], handler: ordersHandler },
  { path: '/webhooks/:provider', methods: ['POST'], handler: webhookHandler },
];

function toLambdaEvent(req) {
  return {
    httpMethod: req.method,
    path: req.path,
    headers: req.headers,
    queryStringParameters: Object.keys(req.query).length ? req.query : null,
    pathParameters: Object.keys(req.params).length ? req.params : null,
    body: req.body ? JSON.stringify(req.body) : null,
    requestContext: {
      requestId: `preview-${Date.now()}`,
      stage: 'preview',
      identity: { sourceIp: req.ip },
    },
  };
}

routes.forEach(({ path, methods, handler }) => {
  methods.forEach((method) => {
    app[method.toLowerCase()](path, async (req, res) => {
      try {
        const event = toLambdaEvent(req);
        const result = await handler(event, {});
        Object.entries(result.headers || {}).forEach(([k, v]) => res.setHeader(k, v));
        res.status(result.statusCode || 200).send(result.body || '');
      } catch (err) {
        console.error(`Error in ${path}:`, err);
        res.status(500).json({ message: 'Internal Server Error' });
      }
    });
  });
});

app.listen(3000, () => console.log('API Gateway adapter on port 3000'));
```

Pattern 2: Multiple Containers

Run each Lambda function as a separate container and use an Nginx reverse proxy:

```nginx
# nginx/default.conf
upstream users-api {
    server users-function:3000;
}

upstream orders-api {
    server orders-function:3000;
}

server {
    listen 8080;

    location /users {
        proxy_pass http://users-api;
    }

    location /orders {
        proxy_pass http://orders-api;
    }
}
```

Pattern 3: AWS API Gateway V2 Emulator

Use a lightweight emulator that reads your SAM/CloudFormation template and routes accordingly. Libraries like serverless-offline can help, but they add complexity.

For preview environments, Pattern 1 (Express Router) gives you the best balance of simplicity and fidelity. You get standard HTTP routing, easy debugging, and fast startup -- and the Lambda handler code stays identical to production.


DynamoDB/S3 in Preview Environments (LocalStack or Terraform-managed)

Option 1: LocalStack

LocalStack emulates AWS services locally. Each preview environment gets its own LocalStack instance with its own data.

```yaml
# Add to bunnyshell.yaml components
  - kind: Service
    name: localstack
    dockerCompose:
      image: 'localstack/localstack:3.0'
      environment:
        SERVICES: dynamodb,s3,sqs,sns,secretsmanager
        DEFAULT_REGION: us-east-1
        PERSISTENCE: '1'
      ports:
        - '4566:4566'

volumes:
  - name: localstack-data
    mount:
      component: localstack
      containerPath: /var/lib/localstack
    size: 1Gi
```

Initialize tables and buckets with a script:

```bash
#!/bin/bash
# scripts/init-localstack.sh
ENDPOINT="http://localstack:4566"

echo "Waiting for LocalStack..."
until aws --endpoint-url=$ENDPOINT dynamodb list-tables 2>/dev/null; do
  sleep 2
done

echo "Creating DynamoDB tables..."
aws --endpoint-url=$ENDPOINT dynamodb create-table \
  --table-name Users \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

aws --endpoint-url=$ENDPOINT dynamodb create-table \
  --table-name Orders \
  --attribute-definitions \
    AttributeName=userId,AttributeType=S \
    AttributeName=orderId,AttributeType=S \
  --key-schema \
    AttributeName=userId,KeyType=HASH \
    AttributeName=orderId,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST

echo "Creating S3 buckets..."
aws --endpoint-url=$ENDPOINT s3 mb s3://uploads
aws --endpoint-url=$ENDPOINT s3 mb s3://assets

echo "Creating SQS queues..."
aws --endpoint-url=$ENDPOINT sqs create-queue --queue-name order-processing
aws --endpoint-url=$ENDPOINT sqs create-queue --queue-name notifications

echo "LocalStack initialization complete."
```

Option 2: Real AWS Resources via Terraform

For integration testing against real AWS:

```hcl
# terraform/preview-resources.tf
variable "env_id" {
  type = string
}

resource "aws_dynamodb_table" "users" {
  name         = "Users-${var.env_id}"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }

  tags = {
    Environment = "preview"
    ManagedBy   = "bunnyshell"
    EnvId       = var.env_id
  }
}

resource "aws_s3_bucket" "uploads" {
  bucket        = "uploads-${var.env_id}"
  force_destroy = true

  tags = {
    Environment = "preview"
    ManagedBy   = "bunnyshell"
  }
}
```

When using real AWS resources, always set force_destroy = true on S3 buckets and include the Terraform destroy step in your Bunnyshell configuration. Otherwise, resources will be orphaned when the preview environment is deleted.


Enabling Preview Environments

Whichever approach you chose, enabling automatic preview environments works the same way:

  1. Ensure your primary environment has been deployed at least once (Running or Stopped status)
  2. Go to Settings in your environment
  3. Toggle "Create ephemeral environments on pull request" to ON
  4. Toggle "Destroy environment after merge or close pull request" to ON
  5. Select the target Kubernetes cluster

What happens next:

  • Bunnyshell adds a webhook to your Git provider automatically
  • When a developer opens a PR, Bunnyshell creates an ephemeral environment cloned from the primary, using the PR's branch
  • Bunnyshell posts a comment on the PR with a direct link to the running API
  • When the PR is merged or closed, the ephemeral environment is automatically destroyed (including any Terraform-managed AWS resources)

No GitHub Actions. No GitLab CI pipelines. No maintenance. It just works.


Troubleshooting

| Issue | Solution |
| --- | --- |
| LocalStack not ready | The init script runs before LocalStack is fully up. Add a retry loop: until aws --endpoint-url=... dynamodb list-tables; do sleep 2; done |
| DynamoDB "table not found" | The init container may not have finished. Check init container logs. Ensure dependsOn includes the init container. |
| Lambda handler timeout | The Express adapter has no default timeout. Add req.setTimeout(30000) to match your function's 30-second timeout (API Gateway itself caps integrations at 29 seconds). |
| S3 pre-signed URLs not working | LocalStack pre-signed URLs reference localhost:4566. Override the S3 endpoint in your URL generation: use the Bunnyshell-provided hostname. |
| Cold start differences | Containerized functions don't have Lambda's cold start behavior. If you're testing cold start performance, use Approach B (real AWS). |
| IAM permissions errors | LocalStack does not enforce IAM by default. If your code checks permissions, set ENFORCE_IAM=1 in LocalStack environment variables. |
| API Gateway CORS errors | The Express adapter doesn't add CORS headers automatically. Add cors middleware: app.use(cors()). |
| Large payload rejected | Express has a default body limit of 100KB. Increase with app.use(express.json({ limit: '10mb' })) to match API Gateway's 10MB limit. |
| SQS messages not processing | You need a separate worker container polling SQS from LocalStack. Add a Service component with your worker code. |
| Environment variable conflicts | Ensure DYNAMODB_ENDPOINT and S3_ENDPOINT are only set in preview environments, not in production builds. Use conditional endpoint configuration. |

What's Next?

  • Add Step Functions emulation -- LocalStack supports Step Functions for orchestrating multi-Lambda workflows
  • Add Cognito emulation -- Test authentication flows with LocalStack's Cognito service
  • Performance testing -- Run load tests against preview environments before merging
  • Seed test data -- Create a seed script that populates DynamoDB tables with realistic test data
  • Multi-function architectures -- Run separate containers for each function group (users, orders, payments) behind a single Nginx proxy

Ship faster starting today.

14-day full-feature trial. No credit card required. Pay-as-you-go from $0.007/min per environment.