An explainer for newcomers to the most powerful force in digital transformation
There’s a Dilbert cartoon in which his pointy-haired manager is blurting out buzzwords. “You can’t solve a problem just by saying techy things,” moans Dilbert. The boss responds: “Kubernetes”.
Kubernetes. It’s a word that gets used a lot in digital transformation. VMware estimates that 65% of large companies now use Kubernetes in production, up from 59% in 2020. It’s everywhere. But what is it?
Like blockchain, Kubernetes is notoriously hard to understand for the layperson. It’s abstract. Technical. Practitioners quickly wander into esoteric territory when explaining the technology, as the humble listener glazes over.
In fact, the principles of Kubernetes are straightforward, while its role in the digital world is profound. It’s worth getting to grips with the concept.
First, the word. It’s pronounced koo-ber-NET-eez. It’s often abbreviated to K8s, with the “8” standing for the eight letters between the “K” and the “s”. The software system was created by Google and is now maintained by the Cloud Native Computing Foundation.
Kubernetes literally means “helmsman” in Greek, which is apt. In a nutshell, Kubernetes manages small units of software called “containers”. It automates container management, summoning new containers when needed and killing redundant ones. A common analogy is that Kubernetes is like the conductor in an orchestra, controlling the containers.
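That “summoning” works declaratively: you tell Kubernetes the state you want, and it keeps reality matched to it. A minimal sketch of such a declaration (the names `web` and `nginx:1.25` are illustrative):

```yaml
# A minimal Kubernetes Deployment: it declares that three copies
# ("replicas") of a container should always be running. Kubernetes
# continuously compares this desired state with reality, starting
# new containers or killing redundant ones as needed.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # keep three instances alive at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25   # the container image to run
```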
What, then, is a container? This is the key to understanding Kubernetes, as the two work together. A container is a subdivision of an application. Rather than being a monolithic entity, the application comprises a plethora of containers which run independently.
Simon Bennett, chief technical officer of Rackspace Technology in Europe, does his best to explain containers to the newcomer.
“Containers are the units of software that form part of an application and enable it to run. These applications are usually made of multiple containers, all performing a specific function provided as a service to each other. For example, at an ATM, there would be one container used for consumers wanting to check their account balance and another for those wanting to withdraw money.”
Importantly, containers are lightweight. They share an operating system kernel rather than requiring their own, so they are phenomenally quick to spin up. Because they contain only the software and application logic, they can be destroyed and recreated at any time, Bennett explains, without affecting the application’s availability or risking the loss of application data.
Together, Kubernetes and containers offer “self-healing software”. If a container malfunctions, Kubernetes will notice, kill the unit, and spin up another instance. Human intervention isn’t required.
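How does Kubernetes “notice” a malfunction? Typically through health checks called probes. A sketch of one such check, as it might appear inside a container’s configuration (the endpoint path and port are illustrative):

```yaml
# A liveness probe: Kubernetes calls this HTTP endpoint on a schedule.
# If the container stops responding, Kubernetes kills it and spins up
# a replacement automatically; no human intervention is required.
livenessProbe:
  httpGet:
    path: /healthz           # hypothetical health-check endpoint
    port: 8080
  initialDelaySeconds: 5     # give the app time to start before probing
  periodSeconds: 10          # then check every ten seconds
```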
Crucially, containers run equally well on any hardware. This matters because projects are often ported from one environment to another: software that works only on one specific set-up creates a nightmare for managers.
Containers were developed as part of the quest to “abstract” software from hardware. “It all started with virtual machines (VMs),” says Dr Anjali Subburaj, digital commerce chief architect at Mars. A VM is an abstraction of an entire computer, from the operating system all the way down to the memory and storage.
“However, VM technology lacked portability and continued to suffer from the ‘but it works on my machine’ problem of traditional methods,” she explains. “Code developed in a specific computing environment, when transferred to a new location, often results in bugs and errors. For example, when a developer transfers code from a desktop computer to a VM or from a Linux to a Windows operating system.”
Containerisation eliminates this problem by bundling the application code together with the configuration files, libraries and dependencies it needs to run, Subburaj says.
“This single unit of software or ‘container’ is abstracted away from the host operating system,” she notes, meaning “it stands alone and becomes portable, able to run across any platform or cloud, free of issues. Put simply, containerisation allows applications to be ‘written once and run anywhere’.”
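That bundling is typically described in a short build file. A hypothetical sketch for a Python application (the base image, file names and start command are all illustrative):

```dockerfile
# Bundle the application code together with its runtime, libraries
# and dependencies into a single portable container image,
# built once and able to run anywhere.
FROM python:3.12-slim                 # base layer: the language runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # the application's libraries
COPY . .                              # the application code itself
CMD ["python", "app.py"]              # how to start the container
```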
And the payoff? “Containers and Kubernetes are great for short-running applications, especially those where the load usage can vary,” says Matt Saunders, head of DevOps at Adaptavist. Saunders runs the London DevOps meetup group, which has over 7,000 members.
“It’s great for being able to scale up and down according to demand,” he adds, pointing to Black Friday, when internet traffic soars. A company may normally run on a few servers, then suddenly need a few hundred.
Kubernetes spins up containers to handle the workload, and then kills them when demand subsides. In a cloud environment, where you only pay for what you use, this is tremendously cost-effective.
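That elasticity can itself be automated. A sketch of a Kubernetes autoscaler (the thresholds are illustrative, and it assumes a Deployment named `web` already exists):

```yaml
# A HorizontalPodAutoscaler: Kubernetes adds containers as CPU load
# rises, up to a Black Friday-sized ceiling, and removes them as
# demand subsides, so you only pay for capacity you actually use.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical existing Deployment
  minReplicas: 3           # quiet-day baseline
  maxReplicas: 200         # peak-demand ceiling
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add capacity when CPU passes 70%
```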
Containers are commonly used with microservices. Whereas applications were once designed as a monolithic whole, it helps to break them down into autonomous chunks. Each chunk, or microservice, can sit on the cloud independently, and together the chunks communicate with one another to mimic the monolith.
There are lots of advantages. Teams can work on smaller chunks more easily, making changes at their own pace. This suits the DevOps approach to software development, where changes are made frequently, as often as dozens of times a day.
Overall, Kubernetes and containers are cheap and easy to scale, expanding and contracting as needed. Applications can run on a variety of hardware, while errors are healed automatically.
The default choice?
So should all applications be rebuilt with Kubernetes and containers? Opinions vary, but the emerging consensus is that Kubernetes is the default choice, and that re-engineering on cloud-native principles yields the biggest benefit.
“As Kubernetes and containers are abstraction layers made to facilitate the deployment of applications, all applications are fit to run using this technology,” says Emmanuelle Demompion, Kubernetes product manager at Scaleway, an infrastructure-as-a-service provider. “When dealing with an application with only one big component (aka a ‘monolith’ in tech talk), the risk and investment entailed in moving towards a containerised architecture can be very high. Changing architecture such as critical legacy software can even go hand in hand with service availability issues and the exposure of bugs.”
Nevertheless, she says this transition can be managed by moving one chunk at a time.
On the human side, learning about Kubernetes is always worthwhile, she adds. “Kubernetes is only on the rise,” Demompion says. “So training developers, DevOps, architects and all your tech team on containerisation and Kubernetes will never be a waste of time, as they will very likely work on those technologies in the near future.”
Bluntly, it’s a technology you’ve got to know about.