Any Stack. All Environments.
Automated.

No More Bottlenecks in Your Software Development Life Cycle

Bunnyshell automatically creates and manages production-replica environments for development, QA, and staging. Stay lean, release faster, and grow rapidly.

How does Bunnyshell work?

Connect

Connect your cloud providers or Kubernetes clusters to our platform in just a few simple steps.

Deploy

Easily deploy production, staging, and development environments on-demand or automatically.

Develop

Accelerate your release cycle with automatically generated ephemeral environments.

Optimize

Maximize efficiency, reduce costs and maintain the security you need to drive innovation.

What happens under the hood

To keep things organized and dev-friendly, we created a simple environment definition, env.yaml, along with the concepts of tasks & workflows, which give you full control over the steps that need to run for an environment to be deployed or updated.
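
As a rough illustration, here is a minimal env.yaml sketch; the field names are simplified for readability and are not the authoritative Bunnyshell schema:

```yaml
# Hypothetical, simplified env.yaml; field names are illustrative,
# not the authoritative Bunnyshell schema.
name: my-feature-env
components:
  - name: api
    gitRepo: https://github.com/acme/api.git
    gitBranch: main
  - name: frontend
    gitRepo: https://github.com/acme/frontend.git
    gitBranch: main
workflows:
  deploy:
    - task: build-images
    - task: run-migrations
    - task: deploy-components
```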

Managing configurations

You can either use K8s manifests directly, typically through Helm, or generate K8s manifests starting from a docker-compose file.

Plain Kubernetes manifests are not an option when generating dynamic environments, as they would need to be parameterized. Handling on-demand environments with EaaS is very different from dealing with a fixed number of environments.

If using Helm, please check out the dedicated Helm section. 🙂

If you need to start from a docker-compose file, you would probably use http://kompose.io, which does a fantastic job of generating K8s manifests from a docker-compose.yaml file. But we can tell you it's not a silver bullet by any means, because a compose file leaves several needs unexpressed (as the sketch after this list shows):

- you still need to keep an association with the repository holding the application code, so the environment can be updated when new commits are pushed;
- you may need services to run in the same pod as another service, or to perform tasks on the same filesystem, without the added complexity of creating and attaching volumes;
- you will probably need to initialize a database with a given seed;
- you will need to run migrations on the database;
- you will need to run cron jobs.
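
For example, a compose file like the hypothetical one below converts cleanly with kompose, yet says nothing about seeding, migrations, or cron jobs, so nothing can generate them for you:

```yaml
# Hypothetical two-service docker-compose.yaml. `kompose convert` turns
# each service into a Deployment + Service, but knows nothing about
# seeding, migrations, or cron jobs.
version: "3.8"
services:
  api:
    image: acme/api:latest
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://postgres:secret@db:5432/app
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: secret
```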

To make Terraform truly easy for developers to use, and to eliminate the need for them to have any Terraform knowledge, we created an abstraction layer by defining Terraform modules. These introduce governance: the person who creates a module locks certain input values, while making others configurable (within limits) by the users who attach the module to their environments.
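
Conceptually, the split between locked and configurable inputs looks something like this (a hypothetical sketch, not Bunnyshell's actual module schema):

```yaml
# Hypothetical module-governance sketch; not the actual Bunnyshell schema.
module: s3-bucket
source: git::https://github.com/acme/terraform-modules//s3-bucket
inputs:
  locked:                    # fixed by the module author; users cannot override
    versioning: true
    encryption: aws:kms
  configurable:              # exposed to environment owners, within limits
    bucket_prefix:
      type: string
    retention_days:
      type: number
      min: 7
      max: 90
```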

We also adhere to GitOps practices by updating the module within Bunnyshell whenever Git changes occur. Environments can auto-update Terraform resources (by choice), and are protected against changes in Git that are incompatible with the configuration you defined when you attached the module. This prevents environments from becoming unusable through updates, even if auto-update is turned on.

Storing dynamic values sounds straightforward, but you still need to store all of them encrypted, and to keep some obfuscated from the eyes of UI and API users. This involves creating and maintaining a highly available infrastructure for encryption/decryption.

Managing deployments

You may already have a pipeline building your images, but an image build system that is designed for a large volume of environments requires different considerations.

Handling updates for a fixed (and small) number of images is quite different from creating, updating, and deleting a variable (and considerable) number of environments.

Keeping track of each environment's state (its components, plus additional resources created through Terraform), as well as parallelizing create/update operations, can quickly become complex, as many situations arise from combining the various scenarios.

Generating, storing, and injecting secrets into environments is also a must in order to achieve full automation when creating them.
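
At the Kubernetes level, injected secrets typically take a shape like the hypothetical sketch below; note that Secrets are only base64-encoded, so encryption at rest and access control have to be layered on top:

```yaml
# Hypothetical sketch: a generated Secret injected into a component's pod.
# Kubernetes Secrets are only base64-encoded; encryption at rest and
# access control must be handled separately.
apiVersion: v1
kind: Secret
metadata:
  name: api-secrets
type: Opaque
stringData:
  DATABASE_PASSWORD: s3cr3t          # would be generated, never hard-coded
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  selector:
    matchLabels: { app: api }
  template:
    metadata:
      labels: { app: api }
    spec:
      containers:
        - name: api
          image: acme/api:latest
          envFrom:
            - secretRef:
                name: api-secrets
```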

Applications are usually linked by setting environment variables which point to the various domains used by the environment’s services.
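
For instance (names and hostnames hypothetical), a frontend container can be pointed at the API's per-environment domain through a single variable:

```yaml
# Hypothetical container spec fragment: the frontend discovers the API
# via an env var carrying the environment-specific domain.
containers:
  - name: frontend
    image: acme/frontend:latest
    env:
      - name: API_BASE_URL
        value: https://api-my-feature-env.example.com
```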

Creating a build for every service, on every deployment, is time- and resource-consuming.

To reduce deployment time, you need to create some form of cache system, so you can skip builds for which you already have images and build only the images for components that have actually changed.
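
One common approach (a sketch under assumed names, not Bunnyshell's implementation) is content-addressed image tags: tag each image with a hash of its build inputs, and a registry lookup tells you whether the build can be skipped:

```yaml
# Hypothetical CI step: the image tag is the Git tree hash of the
# component's directory, so an existing tag means "inputs unchanged".
steps:
  - name: build-api-if-missing
    run: |
      TAG=$(git rev-parse HEAD:services/api)    # tree hash of build inputs
      if docker manifest inspect "registry.example.com/api:$TAG" >/dev/null 2>&1; then
        echo "Image exists for $TAG, skipping build"
      else
        docker build -t "registry.example.com/api:$TAG" services/api
        docker push "registry.example.com/api:$TAG"
      fi
```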

Running builds in parallel will greatly decrease deployment time, and is a must as well.

Once your environments are deployed, you will need to create DNS records for them and configure Ingress routing rules so their exposed services can be accessed. Furthermore, you will need to handle SSL certificates, by either allowing wildcard certificates or generating individual ones.
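
A typical shape for this is the Ingress below (hostnames hypothetical; cert-manager is shown as one possible way to issue per-environment certificates):

```yaml
# Hypothetical per-environment Ingress. The cert-manager annotation
# requests an individual certificate; a wildcard certificate would be
# referenced via tls.secretName instead.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-feature-env
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  rules:
    - host: api-my-feature-env.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
  tls:
    - hosts:
        - api-my-feature-env.example.com
      secretName: my-feature-env-tls
```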

Renewing certificates might also be a challenge, depending on the solution you choose to implement.

With sustained activity, you will probably need to scale, and to be able to run multiple Terraform processes for different environments in parallel.

To reduce the time needed to run Terraform, you would need to implement a distributed caching system for modules. You would also need to build a fully asynchronous process for both apply and destroy. We handle this by creating pods in a K8s cluster and having them notify our platform of the end result.
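
In Kubernetes terms, that pattern looks roughly like the Job below (the runner image, cache volume, and callback URL are all assumptions for illustration):

```yaml
# Hypothetical async Terraform runner: a Job applies the configuration,
# then notifies the platform of the result instead of being polled.
apiVersion: batch/v1
kind: Job
metadata:
  name: tf-apply-my-feature-env
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: terraform
          image: example.com/terraform-runner:latest   # assumed: terraform + curl
          command: ["/bin/sh", "-c"]
          args:
            - |
              export TF_PLUGIN_CACHE_DIR=/cache/plugins   # shared provider cache
              terraform init && terraform apply -auto-approve
              curl -X POST "https://platform.example.com/callbacks/terraform" \
                   -d "{\"env\": \"my-feature-env\", \"exit_code\": $?}"
          volumeMounts:
            - name: tf-cache
              mountPath: /cache
      volumes:
        - name: tf-cache
          persistentVolumeClaim:
            claimName: tf-cache
```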

Versioning the state will also become mandatory, so you can trace and, if needed, revert unintended changes.

Maybe you already have some of these pieces built into your pipelines, and you clone pipelines to create new environments. But at scale this becomes messy to update and maintain very quickly, not to mention propagating changes to already-existing environments.

To ensure that users operate securely with our platform and API, we enforce HTTPS for all public-facing network communication. Every time you communicate with the Bunnyshell platform or APIs, you will be redirected through a secure connection using HTTPS.
Authentication is done using JWT and OAuth 2.0.
All communication with internal services is done through a secured VPN connection.

“We've made scaling our business's whole infrastructure more efficient and cost effective, decreased the hosting costs by over 80%, and allowed us to quickly scale.”

Alex Circei
CEO, Waydev

“Since we started working with Bunnyshell we can focus more on developing our product, knowing that scaling our infrastructure will hardly be an issue.”

Aurelian Motica
CTO, Gomag

Frequently Asked Questions

Haven't found what you're looking for?
Explore the Bunnyshell Help center or Contact us.

EaaS is a service where the application and its environment run together under version control, using automation to perform server configuration for specific applications.
Ephemeral environments are usually environments that live for the life of a pull request, or are created manually to preview changes, showcase demos, or test new configurations.
Using a fast and capable EaaS can improve development speed along at least two dimensions: removing rework and decreasing bottlenecks.
We are in the process of obtaining SOC 2 certification, and we should finalize this process by March 2023.
Bunnyshell includes an extensive REST API for your existing CI/CD and DevOps tools, enabling you to easily deploy environments directly from your own release pipeline.
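
As an illustration, triggering an environment from a release pipeline can be a single authenticated HTTP call; the endpoint, payload, and variables below are hypothetical, not the documented Bunnyshell API:

```yaml
# Hypothetical CI job (GitLab-style) deploying a preview environment
# through an EaaS REST API. Endpoint and payload are illustrative only.
deploy-preview:
  script:
    - |
      curl -X POST "https://api.environments.example.com/v1/environments" \
           -H "Authorization: Bearer $EAAS_API_TOKEN" \
           -H "Content-Type: application/json" \
           -d "{\"name\": \"pr-$CI_MERGE_REQUEST_IID\", \"branch\": \"$CI_COMMIT_REF_NAME\"}"
```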