Preview Environments for Serverless: Lambda & Functions with Bunnyshell
Why Preview Environments for Serverless?
Serverless architectures promise simplicity: write a function, deploy it, never think about servers. But that simplicity collapses the moment two developers need to test different versions of the same Lambda function. AWS does not give you per-PR environments out of the box. You get dev, staging, prod -- and everyone shares them.
The result is familiar: your teammate's broken DynamoDB schema poisons the shared dev stage. Your API Gateway routes conflict with someone else's experimental endpoint. You cannot demo your feature because another deployment overwrote yours ten minutes ago.
Preview environments solve this. Every pull request gets its own isolated deployment -- your Lambda functions, your API Gateway routes, your DynamoDB tables, your S3 buckets -- running in containers with production-like configuration. Reviewers click a link and see the actual running API, not just the diff.
With Bunnyshell, you get:
- Automatic deployment -- A new environment spins up for every PR
- AWS service emulation -- LocalStack provides DynamoDB, S3, SQS, and more without AWS costs
- Isolation -- Each PR environment is fully independent, no shared stage conflicts
- Automatic cleanup -- Environments are destroyed when the PR is merged or closed
The Serverless Preview Challenge
Traditional serverless platforms were not designed for preview environments. Here is why:
| Challenge | Why it's hard |
|---|---|
| No native "environments" | Lambda uses aliases and versions, not isolated stacks. Creating a full copy of your serverless app per PR requires complex CloudFormation/SAM orchestration. |
| Shared AWS resources | DynamoDB tables, S3 buckets, SQS queues, and API Gateway stages are shared. Isolating them per PR means unique naming conventions and cleanup logic. |
| Cost at scale | Deploying real AWS resources for every PR adds up. 10 open PRs means 10 DynamoDB tables, 10 API Gateway deployments, 10 sets of IAM roles. |
| Slow deployments | CloudFormation stack creation takes minutes. Multiply that by every PR push. |
| Cleanup complexity | Forgetting to delete resources after PR merge leads to orphaned infrastructure and unexpected bills. |
Bunnyshell's approach flips this: instead of deploying to AWS per PR, you containerize your serverless functions and run them in Kubernetes with LocalStack providing the AWS services. Fast, cheap, isolated, and automatic.
Bunnyshell's Approach: Containerized Serverless
The core idea is straightforward:
- Package your Lambda functions into Docker containers using the AWS Lambda Runtime Interface Emulator (RIE) or a simple Express adapter
- Use LocalStack to emulate DynamoDB, S3, SQS, and other AWS services
- Deploy everything in Bunnyshell as a standard environment with automatic PR-based lifecycle
This gives you sub-minute deployments, zero AWS costs for preview environments, and full isolation between PRs.
Prerequisites
Before setting up preview environments, ensure you have:
- A Bunnyshell account with a connected Kubernetes cluster
- A Git repository (GitHub, GitLab, or Bitbucket) connected to Bunnyshell
- Your serverless functions written in Node.js (the examples use Node.js 20, but the pattern works for Python, Go, and Java)
- Docker installed locally for testing container builds
- Basic familiarity with Lambda function handlers
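If you are newer to Lambda, the whole contract the examples rely on is small: an async function that receives an event and a context object and returns a response object. A minimal, illustrative handler (not part of the project files that follow):

```javascript
// A minimal handler with the shape this guide's examples build on.
// Returns an API Gateway-style response object.
const handler = async (event, context) => ({
  statusCode: 200,
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ path: event.path || '/' }),
});

// In a real project this would be exported from a file under src/handlers/.
```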
Approach A: Containerize Your Functions (Docker + Bunnyshell)
This is the recommended approach for most teams. You package each Lambda function (or group of functions) into a Docker container using the official AWS Lambda base image, which includes the Lambda Runtime Interface Emulator.
Step 1: Create a Dockerfile for Your Lambda Function
The AWS Lambda base images include the Runtime Interface Emulator (RIE), which lets your function run locally the same way it runs in AWS.
```dockerfile
# Use the official AWS Lambda Node.js 20 base image
FROM public.ecr.aws/lambda/nodejs:20

# Copy function code
COPY package*.json ${LAMBDA_TASK_ROOT}/
RUN npm ci --omit=dev

COPY src/ ${LAMBDA_TASK_ROOT}/src/

# Set the handler
CMD ["src/handlers/api.handler"]
```

Your Lambda handler stays exactly the same as in production:
```javascript
// src/handlers/api.js
import { listUsers, createUser } from '../lib/dynamodb.js';

export const handler = async (event, context) => {
  const { httpMethod, path, body, queryStringParameters } = event;

  // Route handling
  if (path === '/users' && httpMethod === 'GET') {
    const users = await listUsers();
    return {
      statusCode: 200,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(users),
    };
  }

  if (path === '/users' && httpMethod === 'POST') {
    const userData = JSON.parse(body);
    const user = await createUser(userData);
    return {
      statusCode: 201,
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(user),
    };
  }

  return {
    statusCode: 404,
    body: JSON.stringify({ message: 'Not Found' }),
  };
};
```

When you run the container, the Runtime Interface Emulator listens on port 8080 and accepts Lambda invocation payloads at the /2015-03-31/functions/function/invocations endpoint.
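If you want to exercise the container before wiring it into Bunnyshell, you can post a Lambda-style event to the RIE endpoint directly. A minimal sketch in Node.js, assuming the image is running locally with the container's port 8080 published as 9000 (the port mapping and `makeEvent` helper are illustrative):

```javascript
// invoke-local.js -- send a hand-built API Gateway-style event to the
// Runtime Interface Emulator. Assumes: docker run -p 9000:8080 <your-image>
const RIE_URL =
  'http://localhost:9000/2015-03-31/functions/function/invocations';

// Build the minimal event shape the handler in Step 1 reads.
function makeEvent(httpMethod, path, body = null) {
  return {
    httpMethod,
    path,
    headers: { 'Content-Type': 'application/json' },
    queryStringParameters: null,
    body: body ? JSON.stringify(body) : null,
  };
}

// POST the event; the RIE responds with the handler's return value.
async function invoke(event) {
  const res = await fetch(RIE_URL, {
    method: 'POST',
    body: JSON.stringify(event),
  });
  return res.json();
}

// Example: invoke(makeEvent('GET', '/users')).then(console.log);
```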
Step 2: Add an API Gateway Adapter
The Lambda RIE expects raw Lambda event payloads, not HTTP requests. You need a lightweight adapter that converts HTTP to Lambda events. Create an Express-based adapter:
```javascript
// src/gateway/server.js
import express from 'express';
import { handler } from '../handlers/api.js';

const app = express();
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// Convert HTTP requests to Lambda-style events
app.all('*', async (req, res) => {
  const event = {
    httpMethod: req.method,
    path: req.path,
    headers: req.headers,
    queryStringParameters: req.query,
    body: req.body ? JSON.stringify(req.body) : null,
    pathParameters: null,
    requestContext: {
      requestId: `local-${Date.now()}`,
      stage: 'preview',
    },
  };

  try {
    const result = await handler(event, {});
    const statusCode = result.statusCode || 200;
    const headers = result.headers || {};
    const body = result.body || '';

    Object.entries(headers).forEach(([key, value]) => {
      res.setHeader(key, value);
    });

    res.status(statusCode).send(body);
  } catch (error) {
    console.error('Handler error:', error);
    res.status(500).json({ message: 'Internal Server Error' });
  }
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`API Gateway adapter running on port ${PORT}`);
});
```

Then update your Dockerfile to use the adapter instead of the Lambda RIE:
```dockerfile
FROM node:20-alpine

WORKDIR /app

COPY package*.json ./
RUN npm ci --omit=dev

COPY src/ ./src/

EXPOSE 3000

CMD ["node", "src/gateway/server.js"]
```

You now have two Dockerfile options: the Lambda base image (closer to production behavior) or the Express adapter (simpler, faster startup, standard HTTP). For preview environments, the Express adapter is often more practical because reviewers can interact with it directly via a browser.
Step 3: Create the Bunnyshell Configuration
Create bunnyshell.yaml in your repository root:
```yaml
kind: Environment
name: serverless-preview
type: primary

environmentVariables:
  AWS_ACCESS_KEY_ID: test
  AWS_SECRET_ACCESS_KEY: test
  AWS_DEFAULT_REGION: us-east-1
  DYNAMODB_ENDPOINT: 'http://localstack:4566'
  S3_ENDPOINT: 'http://localstack:4566'

components:
  # -- API Function (Express adapter) --
  - kind: Application
    name: api-function
    gitRepo: 'https://github.com/your-org/your-serverless-repo.git'
    gitBranch: main
    gitApplicationPath: /
    dockerCompose:
      build:
        context: .
        dockerfile: Dockerfile
      environment:
        PORT: '3000'
        NODE_ENV: production
        AWS_ACCESS_KEY_ID: '{{ env.vars.AWS_ACCESS_KEY_ID }}'
        AWS_SECRET_ACCESS_KEY: '{{ env.vars.AWS_SECRET_ACCESS_KEY }}'
        AWS_DEFAULT_REGION: '{{ env.vars.AWS_DEFAULT_REGION }}'
        DYNAMODB_ENDPOINT: '{{ env.vars.DYNAMODB_ENDPOINT }}'
        S3_ENDPOINT: '{{ env.vars.S3_ENDPOINT }}'
      ports:
        - '3000:3000'
      dependsOn:
        - localstack
    hosts:
      - hostname: 'api-{{ env.base_domain }}'
        path: /
        servicePort: 3000

  # -- LocalStack (AWS Service Emulation) --
  - kind: Service
    name: localstack
    dockerCompose:
      image: 'localstack/localstack:3.0'
      environment:
        SERVICES: dynamodb,s3,sqs,sns
        DEFAULT_REGION: us-east-1
        DOCKER_HOST: 'unix:///var/run/docker.sock'
      ports:
        - '4566:4566'

  # -- LocalStack Init (create tables and buckets) --
  - kind: InitContainer
    name: localstack-init
    dockerCompose:
      image: 'amazon/aws-cli:2.15.0'
      environment:
        AWS_ACCESS_KEY_ID: test
        AWS_SECRET_ACCESS_KEY: test
        AWS_DEFAULT_REGION: us-east-1
      command: >
        sh -c "
        sleep 5 &&
        aws --endpoint-url=http://localstack:4566 dynamodb create-table
        --table-name Users
        --attribute-definitions AttributeName=id,AttributeType=S
        --key-schema AttributeName=id,KeyType=HASH
        --billing-mode PAY_PER_REQUEST &&
        aws --endpoint-url=http://localstack:4566 s3 mb s3://uploads &&
        echo 'LocalStack initialized successfully'
        "
      dependsOn:
        - localstack
```

Replace your-org/your-serverless-repo with your actual repository URL. The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set to test because LocalStack does not validate credentials -- this is expected and safe for preview environments.
Step 4: Wire Up DynamoDB in Your Function
Update your function code to point at LocalStack in preview environments:
```javascript
// src/lib/dynamodb.js
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, ScanCommand, PutCommand } from '@aws-sdk/lib-dynamodb';

const client = new DynamoDBClient({
  region: process.env.AWS_DEFAULT_REGION || 'us-east-1',
  ...(process.env.DYNAMODB_ENDPOINT && {
    endpoint: process.env.DYNAMODB_ENDPOINT,
    credentials: {
      accessKeyId: process.env.AWS_ACCESS_KEY_ID || 'test',
      secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY || 'test',
    },
  }),
});

const docClient = DynamoDBDocumentClient.from(client);
const TABLE_NAME = process.env.USERS_TABLE || 'Users';

export async function listUsers() {
  const result = await docClient.send(new ScanCommand({
    TableName: TABLE_NAME,
  }));
  return result.Items || [];
}

export async function createUser(userData) {
  const user = {
    id: crypto.randomUUID(),
    ...userData,
    createdAt: new Date().toISOString(),
  };

  await docClient.send(new PutCommand({
    TableName: TABLE_NAME,
    Item: user,
  }));

  return user;
}
```

This pattern works in both production and preview environments. In production, DYNAMODB_ENDPOINT is not set, so the SDK uses the default AWS endpoint. In preview environments, it points to LocalStack. No code branches, no feature flags -- just environment variables.
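The same switch can be factored into a small helper and reused for every AWS SDK client (S3, SQS, and so on). A sketch -- the clientConfig helper and its shape are illustrative, not part of the codebase above:

```javascript
// aws-config.js -- build SDK client options from environment variables.
// With no *_ENDPOINT variable set (production), the SDK resolves real AWS
// endpoints; with one set (preview), it targets LocalStack with dummy creds.
function clientConfig(endpointVar, env = process.env) {
  const base = { region: env.AWS_DEFAULT_REGION || 'us-east-1' };
  const endpoint = env[endpointVar];
  if (!endpoint) return base; // production: default AWS endpoint
  return {
    ...base,
    endpoint, // preview: LocalStack
    credentials: {
      accessKeyId: env.AWS_ACCESS_KEY_ID || 'test',
      secretAccessKey: env.AWS_SECRET_ACCESS_KEY || 'test',
    },
  };
}

// Usage: new DynamoDBClient(clientConfig('DYNAMODB_ENDPOINT'))
//        new S3Client({ ...clientConfig('S3_ENDPOINT'), forcePathStyle: true })
```

The forcePathStyle flag matters for S3 specifically: LocalStack serves buckets at path-style URLs rather than virtual-hosted ones.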
Step 5: Deploy and Test
Click Deploy in Bunnyshell. Once the environment is running:
- Click Endpoints to get the API URL
- Test your function:
```bash
# List users (empty initially)
curl https://api-your-env.bunnyshell.dev/users

# Create a user
curl -X POST https://api-your-env.bunnyshell.dev/users \
  -H "Content-Type: application/json" \
  -d '{"name": "Jane Doe", "email": "jane@example.com"}'

# List users again
curl https://api-your-env.bunnyshell.dev/users
```

Approach B: Serverless Framework + Bunnyshell Terraform
This approach suits teams using the Serverless Framework who want to deploy real AWS resources per PR. It uses a Terraform component in Bunnyshell to provision actual Lambda functions, API Gateway routes, and DynamoDB tables in AWS.
This approach creates real AWS resources for each PR, which incurs AWS costs. Use Approach A (containerized) for cost-free preview environments, and reserve this approach for pre-production validation where you need real AWS behavior.
Step 1: Structure Your Serverless Project
```
serverless-app/
├── serverless.yml
├── terraform/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── src/
│   └── handlers/
```