How to Build a Kubernetes Environment & Scale Production

The best way to explain a Kubernetes environment is to use the following analogy:

Picture this: you've built a house from the ground up. Your house has different possible uses - you can live in it, turn it into a business or a medical centre, or continue to build alongside it. Whatever you decide to do with your house, your decision requires planning and preparation so your house can be resilient. However, the process of building the house remains the same.

Similarly, a Kubernetes environment requires planning and preparation for its configuration to be resilient, especially if it's going to run critical workloads. Although a production Kubernetes cluster might have different requirements than other types of environments (personal learning, development, or test), the steps to building it remain the same. What changes is how you want to use each environment.

In this article, you’ll learn how to build a Kubernetes environment and what you have to pay attention to, how to scale Kubernetes in production, and how Bunnyshell’s Environment as a Service solution can make your job easier.   

How to Build a Kubernetes Environment

From our Benefits of Containerization article, you know that containers are an excellent way to bundle and run your applications. But if a container goes down, another must be ready to start in its place. That's where Kubernetes comes in. It's a system for automating the deployment and operation of your containerized apps, taking care of things like scaling and failover, storage orchestration, automated rollouts and rollbacks, bin packing, and more.

Furthermore, Kubernetes fulfils the need for orchestration. It comprises a set of control processes (both independent and composable) that continuously drive the current state towards the desired state. Therefore, as with the house analogy, it doesn't matter exactly how you get from point A to point B. What matters is that there is a solid foundation in place. Kubernetes offers the building blocks for developer platforms that are easier to use, more resilient, robust, and extensible. This means you have choices and flexibility where they matter most.

However, the process is complex and time-consuming, and you have many decisions to make. You may choose to have your cluster live on-premises or in the cloud. When you install Kubernetes, select an installation type based on the available resources, ease of maintenance, expertise, and the security and control required to manage and operate a cluster. You may work with a solution like Bunnyshell if you need help with Kubernetes environments within your cluster, but more on that in a bit.

Manage resources

You'll want to use requests and limits to manage CPU and memory. Every node in a cluster has an allocated amount of compute power (CPU) and memory (RAM) that it uses to run containers, and you can specify the minimum resources a container needs (requests) and the maximum it may consume (limits). This is especially useful because:

  • You can have a Pod (a grouping of one or more containers) that you manage and deploy on top of a node. When you specify requests, Kubernetes rolls all container requests into a total Pod request and schedules the Pod onto a node with enough resources, making better use of cluster capacity. When you specify a limit on a container, Kubernetes tells the container runtime service (like Docker) to enforce it.
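
As a minimal sketch (the Pod name, image, and values below are illustrative, not from the article), a Pod spec with requests and limits might look like this:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app          # hypothetical name, for illustration only
spec:
  containers:
    - name: web
      image: nginx:1.25   # example image
      resources:
        requests:         # minimum guaranteed to the container; used for scheduling
          cpu: "250m"     # 0.25 of a CPU core
          memory: "256Mi"
        limits:           # maximum the runtime will enforce
          cpu: "500m"
          memory: "512Mi"
```

Here, the scheduler uses the requests to pick a node with at least 0.25 CPU and 256Mi of memory free, while the limits cap what the container may actually consume.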

Pro tip: If multiple teams or several devs are working within the same Kubernetes cluster, a best practice is to set common resource requirements so resources don't go to waste and you don't run out. With Kubernetes, you can define different namespaces for teams and use Resource Quotas to enforce limits per namespace, as sketched below. More on namespaces in a bit.
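
For example - the team name and quota values here are assumptions for illustration - you could give a team its own namespace and cap its total resource usage with a ResourceQuota:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-alpha            # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-alpha-quota
  namespace: team-alpha
spec:
  hard:
    requests.cpu: "4"         # total CPU all Pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"           # total CPU limit across the namespace
    limits.memory: 16Gi
    pods: "20"                # cap on the number of Pods
```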

Additionally, error and health monitoring becomes vital, especially if you have large clusters with multiple services running within Kubernetes. Understanding how Kubernetes handles CPU and memory usage and enabling configuration to manage these resources are two vital steps to ensuring your clusters have enough capacity at all times.

Use probes 

In a cluster, separate components work independently. This means each part keeps running even if other components fail, which makes the system's health harder to assess. Apps may crash at some point or stop being able to receive and process requests. Essentially, the only way to assess a system's health is to ensure all of its components are working. Thankfully, with probes, you can determine whether a container is "dead or alive."

Probes tell Kubernetes whether a container is healthy; they don't report to you directly. Kubernetes determines overall Pod health by verifying the health of each individual container. As containers go through the different stages of their life cycle (as they're created, started, run, and terminated), Kubernetes uses different kinds of probes to determine a container's health:

  • Liveness probes allow Kubernetes to check if an app is alive, restarting the container if it’s no longer serving requests.
  • Readiness probes allow Kubernetes to know when a container is ready to start accepting traffic. A Pod is ready only when all of its individual containers are; until then, Kubernetes removes the Pod from Service load balancing and routes requests to other Pods. These probes run throughout the entire life cycle of the container.
  • Startup probes determine when a container app has initialized successfully; the container is restarted if its startup probe keeps failing. A startup probe delays liveness and readiness checks until it succeeds, so slow-starting apps aren't killed or restarted before they finish initializing.

These probes help Kubernetes decide when a Pod needs to be removed from service or a container needs to be restarted, keeping your system reliable and available. To create health check probes, you issue requests against a container in one of three ways (all three appear in the sketch after this list):

1. Send an HTTP request
2. Run a command
3. Open a TCP socket.
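
Here's a sketch combining all three probe types, each using a different request method; the image, paths, ports, and timings are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probed-app                 # hypothetical name
spec:
  containers:
    - name: api
      image: my-api:1.0            # placeholder image
      ports:
        - containerPort: 8080
      startupProbe:                # runs first; liveness/readiness wait until it succeeds
        exec:
          command: ["cat", "/tmp/initialized"]   # 2. run a command
        failureThreshold: 30
        periodSeconds: 5
      livenessProbe:               # restart the container if this fails
        httpGet:                   # 1. send an HTTP request
          path: /healthz
          port: 8080
        periodSeconds: 10
      readinessProbe:              # stop routing traffic if this fails
        tcpSocket:                 # 3. open a TCP socket
          port: 8080
        periodSeconds: 5
```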

Create Kubernetes Secrets 

Containerized apps running in Kubernetes require access to external resources like databases and services, which in turn require passwords, tokens, or keys to gain access. By using Kubernetes Secrets, you can manage this sensitive app information across your cluster. You can create Kubernetes Secrets in the following three ways (the first is sketched after this list):

  • With a configuration file - works great when you want to include multiple Secrets in a Pod at the same time.
  • Via the command line - most useful when you're adding one or two Secrets to a Pod (like a username and password).
  • With a generator (Kustomize) - best when you have one or more Secrets or configuration files that you want to deploy to multiple Pods.
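
A minimal sketch of the configuration-file approach (the Secret name and values are placeholders; stringData lets you write values in plain text, which Kubernetes stores base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical Secret name
type: Opaque
stringData:                   # plain-text values; Kubernetes encodes them for you
  username: app-user          # placeholder value
  password: change-me         # placeholder; never commit real credentials
```

You'd apply this with kubectl apply -f, and a Pod could then consume the values as environment variables or as files mounted into a container.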

However, there are also downsides to Secrets management. For one, Secrets are namespaced like Pods, so any Pod in the same namespace as a Secret can potentially read it. Additionally, Secrets aren't rotated automatically, so you need to rotate them manually. Thankfully, you can use an alternative configuration to address these issues:

  • Integrate a cloud-vendor Secrets management tool
  • Integrate a cloud vendor Identity and Access Management (IAM) tool 
  • Run a third-party Secrets manager.  

Organize clusters 

A cluster can start out small, and as a team progressively expands its installation, the cluster can grow to dozens of Pods and hundreds of containers, sometimes more. If you don't organize these clusters, things can easily get out of control, leading to issues with security, performance, and more. The following three tools can help you keep your clusters organized and manageable:

  1. Namespaces - think of namespaces as virtual clusters backed by the same physical cluster. Kubernetes supports multiple namespaces, so your teams aren't limited to a single one, which improves security, manageability, and performance (a similar idea to feature branches).
  2. Labels - a growing number of objects in your cluster becomes challenging to manage and organize, and increased project complexity means components can disrupt cluster structures. To combat this, labels let you attach relevant and meaningful metadata to objects so you can find, organize, and work on them collectively.
  3. Annotations - similar to labels, annotations let you attach metadata to identify, group, and arrange objects, but they're not intended for use in searches or selectors (see the sketch after this list).
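
A sketch of labels and annotations on a Deployment - all names and values here are hypothetical:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout                # hypothetical workload
  labels:                       # identifying metadata, usable in selectors and searches
    app: checkout
    team: team-alpha
    env: staging
  annotations:                  # non-identifying metadata, not used for selection
    example.com/owner: "payments squad"
    example.com/runbook: "https://example.com/runbooks/checkout"
spec:
  replicas: 2
  selector:
    matchLabels:
      app: checkout             # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: web
          image: checkout:1.0   # placeholder image
```

The labels can drive queries like kubectl get deployments -l team=team-alpha, while the annotations simply carry extra context for humans and tooling.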

Use Environment as a Service 

Kubernetes Environment as a Service (EaaS) allows you to create automatic or on-demand environments for staging, development, and production on Kubernetes clusters. This is especially useful if you don't want to manage environments within your Kubernetes cluster on your own. You get faster development, less rework, fewer bottlenecks, and cloud costs reduced by up to 60%. Pay only for the resources you need and integrate them easily when you do need them.

How to Scale Kubernetes in Production

So you want to adapt your infrastructure to new load conditions while staying efficient. In other words, you want to scale. Cloud computing has made scaling as easy as the click of a button. There are two types of scaling:

  • Vertical scaling - when you increase the allocated resources (CPU, memory)
  • Horizontal scaling - when you add more instances to the environment. 

Application scaling 

Kubernetes can scale up or down and track the resources any app might need to ensure none go to waste. To scale your app, you must define its CPU, memory, and network bandwidth needs. To figure them out, profile the app in production, then express those needs in the Pod's configuration.

Pro tip: you may want to run multiple Pods for high availability to keep uptime close to 99.9999%. A Horizontal Pod Autoscaler monitors CPU, memory, and other metrics, and adds or removes Pods as needed, as sketched below.
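
A sketch of what that looks like, assuming a hypothetical Deployment named web and illustrative replica counts and thresholds:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                    # hypothetical name
spec:
  scaleTargetRef:                  # the workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: web                      # hypothetical Deployment
  minReplicas: 3                   # keep several Pods for high availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```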

Cluster scaling 

Although Kubernetes can scale workloads up or down, it doesn't have the resources to scale itself. You can use a middleware tool that monitors utilization and is connected to a service that can provide virtual machines - a public or private cloud, or a virtual machine farm. Amazon, Microsoft, DigitalOcean, and Google all provide their Kubernetes users with auto-scaling technology in their clouds, all of which are available with EaaS.

Pro tip: if you have multiple clusters, EaaS may be a great solution for making your infrastructure more robust.

Kubernetes FAQs

How to set an environment variable in Kubernetes

You must first have an existing Kubernetes cluster and a basic understanding of Pods, Secrets, and ConfigMaps. There are four ways to set an environment variable (all four appear in the sketch after this list):

  1. Using string literals
  2. From Pod configuration
  3. From Secrets
  4. From ConfigMaps.
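
A sketch showing all four in one Pod spec; the Secret, ConfigMap, and variable names are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-demo                     # hypothetical name
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "env && sleep 3600"]   # print the environment, then idle
      env:
        - name: GREETING             # 1. string literal
          value: "hello"
        - name: POD_NAME             # 2. from Pod configuration (downward API)
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: DB_PASSWORD          # 3. from a Secret
          valueFrom:
            secretKeyRef:
              name: db-credentials   # hypothetical Secret
              key: password
        - name: LOG_LEVEL            # 4. from a ConfigMap
          valueFrom:
            configMapKeyRef:
              name: app-config       # hypothetical ConfigMap
              key: log-level
```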

Are Kubernetes and Docker the same?

While they're both leaders in open-source software, they are different technologies that solve different problems, so comparing them directly isn't meaningful. Kubernetes is for container orchestration, while Docker is a tool for creating and managing containers.

Are Kubernetes labels case sensitive?

Yes, Kubernetes labels are case-sensitive, and you can find some examples here.  

Can Kubernetes run without Docker?

Yes, you can run Kubernetes without Docker. Docker is standalone software that enables you to create, run, and manage containers on a single operating system, and it can be installed on any computer to run containerized apps - but Kubernetes doesn't depend on it.

Starting with v1.20, Kubernetes deprecated Docker as its container runtime (the component responsible for pulling and running your container images inside the cluster) in favor of runtimes compatible with the Container Runtime Interface (CRI), which Kubernetes designed. This offers more flexibility in interchanging container runtimes and choosing the one that best fits your needs. Docker was a popular choice, but it wasn't designed to be embedded in Kubernetes.

However, Docker runtime support was scheduled for removal in v1.22 in late 2021 (it was ultimately removed in v1.24), so Kubernetes no longer supports it. You'll need to switch to a CRI-compliant container runtime (like containerd or CRI-O) and ensure that whichever one you choose supports the configuration you rely on, such as logging.

Can Kubernetes run on Windows?

Yes, as long as your Kubernetes server is at v1.17 or later. You can run a mixture of Linux and Windows worker nodes, mixing Pods that run on Linux with Pods that run on Windows (the control plane itself runs on Linux). Find out how to add a node to your cluster here.

Can Kubernetes replace Docker?

Because they're different platforms that do two different things, neither can replace the other. Docker builds images and creates and runs containers from them, while Kubernetes runs "on top," orchestrating containers so they work together and adding features such as scaling, networking capabilities, and others.

In other words, you can use them independently or together. However, as stated earlier, Kubernetes no longer supports Docker directly as its container runtime; you can use Kubernetes with another CRI-compatible runtime instead.

Kubernetes - where to start?

You must first understand what Kubernetes can do for you. To recap, Kubernetes is an extensible, portable, open-source platform used for container orchestration. It facilitates configuration and automation. It also helps ensure that your containerized apps run where and how you want by helping them find the resources and tools they need to work. You can learn about some basic Kubernetes modules here.  

Who uses Kubernetes in production?

Maybe a better question here is who doesn’t use Kubernetes in production? From companies like Pinterest to Slack, here’s a detailed list of those who do. 

Optimize Your K8S Development the Bunnyshell Way

You already know about the many benefits of using an EaaS solution: lower costs, less rework, better collaboration, and the list goes on. With so much planning and preparation already going into configuring a resilient Kubernetes cluster, why take on the hassle of planning anything else?

Bunnyshell offers an excellent Environment as a Service solution, and now Kubernetes Environments are coming soon! 

Enable High Velocity Development

Break away from the inability to quickly deploy isolated environments of any specification.