
Deployment method: Immutable

The Immutable deployment strategy is used either to reduce the risk of system failure when a server's configuration changes, or as a frequent-deployment method that uses images to build each server without spending time on configuration.

Automated configuration tools (such as CFEngine, Puppet, or Chef) allow you to specify how servers should be configured, and bring new and existing machines into compliance. This helps avoid the problem of fragile SnowflakeServers. Such tools can create PhoenixServers that can be torn down and rebuilt at will. An Immutable Server is the logical conclusion of this approach, a server that, once deployed, is never modified, merely replaced with a new updated instance. - Kief Morris

To review some load balancer concepts, please visit this page.

How it works:

Build an image for the new server. Once that is done, you should not run any configuration management tools on the servers, as this would increase the risk of introducing untested changes.
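
Below is a minimal sketch of "baking" such an image, assuming an AWS environment and the boto3 SDK (the article does not prescribe a specific cloud, so the instance id and image name here are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an AMI from an instance that already has the new software installed.
# The instance id and image name are hypothetical.
response = ec2.create_image(
    InstanceId="i-0123456789abcdef0",
    Name="app-server-release-42",
    Description="Immutable image for the new release",
)
print("Built image:", response["ImageId"])
```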

With the new image, create a staging environment to test the server configuration (functional tests, unit tests, etc.).
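
As an illustration, a smoke test against the staging environment could be as simple as a health-check request; the endpoint URL below is an assumption:

```python
import urllib.request

STAGING_URL = "http://staging.example.com/health"  # hypothetical endpoint

def check_health(url: str) -> bool:
    """Return True if the staging server answers the health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except OSError:
        # Connection errors and non-2xx responses count as an unhealthy server.
        return False

if __name__ == "__main__":
    assert check_health(STAGING_URL), "staging health check failed"
    print("Staging environment looks healthy")
```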

Step 1 is to create new instances, the same size as the ones currently available in the load balancer, with the new software/configuration installed (from the new image).
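
A sketch of this step, under the same AWS/boto3 assumption (image id, instance type, and count are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")

# Launch new instances from the freshly built image, matching the size
# and number of the servers currently behind the load balancer.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.medium",
    MinCount=3,
    MaxCount=3,
)
new_instance_ids = [i["InstanceId"] for i in response["Instances"]]

# Wait until the new instances are running before attaching them to the load balancer.
ec2.get_waiter("instance_running").wait(InstanceIds=new_instance_ids)
```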


Step 2 is to make the new servers available in the load balancer along with the old servers. They will handle the traffic at the same time during connection draining of the old servers.
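
A sketch of this step, assuming an AWS Application Load Balancer managed through boto3 (the target group ARN and instance ids are placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/app/0123456789abcdef"
new_instance_ids = ["i-0aaa1111aaaa1111a", "i-0bbb2222bbbb2222b", "i-0ccc3333cccc3333c"]

# Register the new instances so they receive traffic alongside the old servers.
elbv2.register_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": instance_id} for instance_id in new_instance_ids],
)

# Wait until the new targets pass the load balancer health checks.
elbv2.get_waiter("target_in_service").wait(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": instance_id} for instance_id in new_instance_ids],
)
```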


Step 3 is to make the old servers unavailable; after this step, all incoming traffic is handled by the new servers.
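
Under the same assumptions, taking the old servers out of the load balancer might look like this; deregistration starts connection draining, so in-flight requests can finish before the targets are removed:

```python
import boto3

elbv2 = boto3.client("elbv2")

TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/app/0123456789abcdef"
old_instance_ids = ["i-0ddd4444dddd4444d", "i-0eee5555eeee5555e"]  # hypothetical old servers

elbv2.deregister_targets(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": instance_id} for instance_id in old_instance_ids],
)

# Block until draining has finished and the old targets no longer receive traffic.
elbv2.get_waiter("target_deregistered").wait(
    TargetGroupArn=TARGET_GROUP_ARN,
    Targets=[{"Id": instance_id} for instance_id in old_instance_ids],
)
```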


Step 4 is to destroy old servers.
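
Finally, still assuming AWS/boto3, the old instances can be terminated once they no longer receive traffic (and are no longer needed for a quick rollback):

```python
import boto3

ec2 = boto3.client("ec2")
old_instance_ids = ["i-0ddd4444dddd4444d", "i-0eee5555eeee5555e"]  # hypothetical old servers

ec2.terminate_instances(InstanceIds=old_instance_ids)
ec2.get_waiter("instance_terminated").wait(InstanceIds=old_instance_ids)
```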


When should you use this method: 

Let’s assume we have to make server configuration changes. The changes are so big that the old instances can no longer be used, so new servers need to be created.

To reduce downtime, the old servers will handle all incoming traffic until the new servers are ready. Then we redirect all traffic to the new servers and destroy the old ones.

This method is also great if you need to create an easy-to-scale infrastructure, with servers containing identical software.

Cons:

  • Any configuration management performed directly on a server, or any patch/update applied to it in place, introduces the risk of making the system inconsistent, because the other servers still contain an exact replica of the built image
  • All servers should pull configuration values from a central repository (or by some other mechanism) at runtime (see the sketch after this list), since they all contain exactly the same software built from the same image
  • Per-server configuration is kept to a minimum, so individual servers cannot easily be customized
  • Data cannot be kept on any server, since servers are destroyed after each deployment
  • Since servers are immutable, a quick deployment for disaster recovery depends on being able to rebuild them very fast from images
  • Even small fixes require a deployment across the whole infrastructure
  • Depending on how often the deployment process is triggered and how many new servers you create each time, your costs can be impacted
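
One possible way to pull configuration at runtime instead of baking it into the image is a central parameter store; the sketch below assumes AWS SSM Parameter Store via boto3, with a hypothetical parameter name:

```python
import boto3

ssm = boto3.client("ssm")

def get_config(name: str) -> str:
    """Fetch a configuration value from the central store at runtime."""
    response = ssm.get_parameter(Name=name, WithDecryption=True)
    return response["Parameter"]["Value"]

# Hypothetical configuration key; every server built from the same image
# reads the same value at startup instead of carrying it inside the image.
database_url = get_config("/app/production/database_url")
```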

Pros:

  • One image is built for all servers, which ensures that every server built from it is consistent and contains exactly the same software
  • Automatic one-time provisioning
  • Images can be easily versioned
  • Fixed configuration: if nothing is changed, nothing will break
  • You can scale easily, since new servers are simply built from the existing image
  • You have granular control over all servers through automated provisioning
  • Prevents snowflake servers

Observations

  • When all servers (new and old) handle traffic at the same time, there may be situations in which a shared resource cannot work with both versions of the servers; in that case the system fails, in the best-case scenario only on the new or only on the old servers. To avoid this, ensure that any shared resource is backward compatible.
  • A common issue is handling cache keys that hold information written by either the new or the old servers. If the same key can hold differently structured values and is read and written by both types of instances, the system will fail, because each instance knows how to process only one structure. A solution is to version the cache key, so that the new servers do not conflict with the old ones (see the sketch after this list).
  • It might be a good idea to keep the old servers for a while, even after they are removed from the load balancer, to see how the new software behaves. If the system crashes for some reason, the rollback procedure is simply to swap the new server pool back with the old one. Keep in mind that this kind of rollback assumes the old servers can still work with any shared resources that the new servers may have modified.
  • For data persistence you can use a shared file system, mutable storage devices, or external services
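
A minimal, library-agnostic sketch of the cache-key versioning mentioned above; the version string and cache structure are illustrative assumptions:

```python
APP_VERSION = "v2"  # baked into the image at build time

def versioned_key(key: str) -> str:
    """Prefix a logical cache key with the release version of this server."""
    return f"{APP_VERSION}:{key}"

# Old servers (v1) read and write "v1:user:42:profile", new servers (v2)
# read and write "v2:user:42:profile", so neither fleet ever has to
# deserialize a structure it does not understand.
cache = {}
cache[versioned_key("user:42:profile")] = {"name": "Ada", "plan": "pro"}
print(cache)
```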

Next >> “Blue / Green” Deployment strategy

Credits to: 
Maria Chiris for art design and image graphics
Designed by Freepik
