Companies looking to deploy, manage, and scale applications cost effectively have to look towards the cloud. One strategy to keep costs down and performance high is to make use of a Kubernetes cluster.
This piece covers what Kubernetes architecture is and how companies can leverage the cloud to make the best use of this powerful virtualisation tool.
What is Kubernetes?
Kubernetes is a container orchestration system. It allows a company to manage all of their containerised applications. Containerisation in this case means operating system level virtualisation, where applications run in protected user spaces called containers. Imagine zipping up a piece of software and all of its dependencies so that it can run reliably on any infrastructure. That is the ultimate goal of containerised software.
Kubernetes manages these containers efficiently: a control plane (historically called the master node) schedules and supervises many worker nodes. Each node is typically one virtual machine in the cloud, running one or more containers. Allocating resources, monitoring, starting, stopping, and updating each node and container is Kubernetes’ main task.
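As a concrete illustration, a Deployment manifest tells Kubernetes how many copies of a containerised app to keep running; the names and image below are hypothetical placeholders:

```yaml
# A minimal Deployment: Kubernetes keeps three replicas of the
# container running, rescheduling them if a node fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app                        # hypothetical application name
spec:
  replicas: 3                          # desired number of identical pods
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: example.com/web-app:1.0 # hypothetical container image
        ports:
        - containerPort: 8080
```

Applying this with `kubectl apply -f deployment.yaml` hands the desired state to the control plane, which then performs the monitoring, starting, stopping, and updating described above.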
Why Use the Kubernetes Architecture?
To save DevOps time, simplify automation, and cut costs, many companies are migrating their clouds into Kubernetes clusters. It takes a trusted cloud provider to properly advise a company on how its Kubernetes architecture should be structured. Less scrupulous providers can easily overprovision, or push shared services on clients who aren’t fully aware of it. For that reason, deploying Kubernetes on AWS is a popular choice: it’s a known quantity with well-documented support schemes.
Kubernetes ‘expands’ into any space that it’s given. Once allocated a certain amount of drive space, RAM, and CPU time, the cluster will schedule workloads to consume those resources fully rather than leave them idle, so it should not be expected to share them with other systems. Storage is handled more flexibly: persistent volumes allow the administrator to mount the storage system of their choice, whether a local drive array, a public cloud provider such as AWS, or a network storage system.
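For example, a PersistentVolumeClaim lets a workload request storage without knowing which backend provides it. This sketch assumes an EKS-style cluster where a `gp2` storage class maps to AWS EBS volumes; the claim name and size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data           # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce        # mounted read-write by a single node
  storageClassName: gp2    # assumes an AWS EBS-backed storage class
  resources:
    requests:
      storage: 20Gi        # provider provisions a 20 GiB volume
```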
It is therefore suggested that most companies start in the middle of the range their provider recommends for control plane and worker nodes, and adjust up or down from there. There’s no great way to start ‘small’ and slowly work upwards with Kubernetes architecture: tiny adjustments to the resource pool are next to meaningless when spread across so many systems and applications.
Pool Management for Kubernetes on AWS
By grouping multiple nodes into a ‘node pool’, Kubernetes allows businesses to make full use of each VM. Performance testing and monitoring tools can measure each containerised app at its typical and peak CPU, memory, and storage usage, allowing intelligent assignment of applications to the pools that most efficiently ‘fill up’ the VM.
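Measured usage feeds directly into the pod spec: resource requests tell the scheduler how much of a node each container occupies, so pools ‘fill up’ predictably. The app name, image, and figures below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: reporting-job                  # hypothetical app
spec:
  containers:
  - name: reporting-job
    image: example.com/reporting:2.1   # hypothetical image
    resources:
      requests:          # typical usage, taken from monitoring data
        cpu: "500m"      # half a CPU core
        memory: "512Mi"
      limits:            # ceiling based on measured peak usage
        cpu: "1"
        memory: "1Gi"
```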
The best cost strategies involve ‘break point’ VMs: machines that offer the most resources per dollar spent, even if they are somewhat larger than the norm. Large node pools are easily managed by the Kubernetes control plane, so running a dozen containerised apps on one huge VM is often far more cost effective than spreading them across multiple smaller VMs.
Non-critical or highly periodic systems can be run on preemptible VMs to save even more money. Kubernetes clusters are self-healing: they restart nodes and workloads automatically. So long as those containerised apps don’t need true 24/7 uptime, preemptible machines can save hundreds or thousands of dollars per year, per node.
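One common pattern, sketched here with illustrative label and taint names, is to taint the preemptible nodes so that only interruption-tolerant workloads schedule onto them:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: batch-worker             # hypothetical non-critical workload
spec:
  nodeSelector:
    lifecycle: spot              # assumed label applied to spot nodes
  tolerations:
  - key: "spot"                  # assumed taint on preemptible nodes
    operator: "Exists"
    effect: "NoSchedule"         # pods without this toleration stay off
  containers:
  - name: batch-worker
    image: example.com/batch:1.4 # hypothetical image
```

Critical services simply omit the toleration, so the scheduler never places them on capacity that might be reclaimed.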
This is another reason to run Kubernetes on AWS. Amazon’s EC2 Spot Instances can cost up to 90% less than On-Demand pricing, for example. The Spot Instance Advisor shows the historic interruption rate of any given ‘spare’ capacity pool, and the Spot pricing history lets the DevOps admin know whether they’re getting the best deal for their needs, or should shop around for a better price.
These are the main ways to cut cluster costs using containerisation. Over time, performance tweaks might allow a company to build more efficient node pools, but capturing the low-hanging fruit first is far more important. Making full use of the AWS Pricing Calculator ahead of time is highly suggested.
Kubernetes Network Management
More complex applications, like enterprise web hosting with an international audience, might require advanced network management in order to account for multiple mirrors or sets of load balanced services.
Not to worry. Kubernetes gives each pod its own IP address while assigning a single DNS name to the entire set of pods through a Service, which load-balances traffic across them. If there are dependencies, such as front end services relying on the existence of one or more backend services, Kubernetes can be configured to keep certain backend resources running even when they aren’t being immediately utilised.
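A Service manifest is what provides that single DNS name; any pod matching the selector receives a share of the traffic. The names and ports below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend   # becomes the DNS name web-frontend.<namespace>.svc
spec:
  selector:
    app: web-app       # hypothetical label on the backing pods
  ports:
  - port: 80           # port clients connect to
    targetPort: 8080   # container port on each pod
  type: ClusterIP      # internal load-balanced virtual IP
```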
Support for more complex architectures, simultaneous multi-port services, mixtures of internal and external IP setups due to the use of hybrid clouds, and a number of different DNS setups can be found in the Kubernetes Service Documentation.
Developers and DevOps teams will be happy to know that there’s a large Kubernetes community that is quite knowledgeable and supportive. Links to their forums and Stack Overflow issue resolutions can be found on the Community page.
As an open source architecture, any code improvements can be shared with the entire software maintenance and development team. New releases with major and minor bug fixes are frequent.
The Kubernetes architecture is a flexible, cost effective container management system that can be efficiently utilised by medium to enterprise level businesses. The bigger the scale of operations, the more likely the company is to realise significant cost savings.
The flexibility of Kubernetes on AWS cannot be overstated. Self-healing combined with preemptible capacity such as EC2 Spot Instances is a potent combination, which should be utilised for any service that is not majorly impacted by minor delays or brief downtime.
Over 80% of enterprises in the cloud are using containers in production, and 78% of those firms use Kubernetes as their production container management system. These numbers are hugely impressive for a piece of software that marked only its seventh anniversary in June 2021.