
The use case for Kubernetes

Paavo Pokkinen · 1 year ago

Quite a few software engineering teams and organizations I’ve seen raise the question of whether they should use Kubernetes to run their workloads. What problems could it solve for them, and how much extra complexity does it add to the stack?

Let’s start by bringing up two crucial points about what Kubernetes is:

  1. It is the de facto container orchestration system

  2. It is a declaratively managed control system

For the first point, the answer is relatively straightforward. If the team needs to manage many Docker container-based workloads, then Kubernetes is the likely choice. On the flip side, it does add quite a bit of complexity to the stack, but this can be alleviated by using managed solutions such as Google Kubernetes Engine (GKE), or even its fully managed offering, GKE Autopilot. There’s a classic build-vs-buy tradeoff here.

If you only have to manage one or a few trivial container-based applications, you are better off using a simple tool like Cloud Run, which abstracts away a lot of that underlying complexity. Or if you are working on a non-container workload, there are other options like Cloud Functions or even traditional VMs.


For the second point, it gets a bit more interesting. We realize we can use Kubernetes to manage almost anything in our infrastructure and application stack, even other Kubernetes clusters and resources on the cloud provider side. Kubernetes can be used as the basis for an internal developer platform that controls all our software provisioning and lifecycle management, end to end. Viktor Farcic has made an excellent video describing the concept of Kubernetes-based Internal Developer Platforms (IDPs) in more detail.

Kubernetes is founded on the concept of control loops and controllers that execute those loops to eventually reach the desired state described by the human operator. Its API can be extended using custom resource definitions, with which we can declare desired state in terms meaningful to us; it is then the job of Kubernetes and its controllers, built-in or custom, to reach that state.
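The essence of that pattern can be sketched in a few lines of Go. This is a deliberately minimal illustration, not real Kubernetes controller code: the `State` type and the replica-count example are invented for clarity. A real controller observes the cluster via the API server and acts on it; here we only compute the converging action.

```go
package main

import "fmt"

// State is a toy stand-in for a resource's state: here, just a
// replica count. The name and shape are illustrative, not a real
// Kubernetes API type.
type State struct {
	Replicas int
}

// reconcile is one iteration of a control loop: compare the observed
// state to the desired state and decide what action would move the
// system toward the desired state.
func reconcile(desired, observed State) string {
	switch {
	case observed.Replicas < desired.Replicas:
		return fmt.Sprintf("scale up by %d", desired.Replicas-observed.Replicas)
	case observed.Replicas > desired.Replicas:
		return fmt.Sprintf("scale down by %d", observed.Replicas-desired.Replicas)
	default:
		return "no action"
	}
}

func main() {
	desired := State{Replicas: 3}
	observed := State{Replicas: 1}
	fmt.Println(reconcile(desired, observed)) // prints "scale up by 2"
}
```

A real controller runs this comparison continuously, so the system self-heals: if something drifts from the declared state, the next loop iteration corrects it.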

Let’s look at one example: imagine a company that creates and maintains WordPress-based websites for many different clients. Its business requirements might include:

  • We must be able to quickly and repeatedly provision and configure WordPress instances, including DNS and Cloudflare records, databases, caches, and so on.

  • We must be able to fully decommission WordPress instances, such that all of their dependencies (e.g. databases, DNS records, TLS certificates) are cleaned up. This requirement is often overlooked by more conventional approaches, and records are left behind in various systems.

  • We must at all times know how many instances are running, what their current versions are and which teams and clients they belong to, and who should be invoiced for them.

  • We must do this at a high enough level that it is understandable to a developer working on WordPress sites. They must be able to work with concepts native to WordPress, not Docker containers, IP addresses, load balancers, persistent volumes, etc. This means there must be some entity that describes what a “WordPress site” is.
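The requirements above could be captured in a custom resource. The manifest below is a hypothetical sketch: the `WordPressSite` kind, the `platform.example.com` API group, and every field name are invented for illustration, not an existing API.

```yaml
apiVersion: platform.example.com/v1
kind: WordPressSite
metadata:
  name: acme-shop
  labels:
    team: web-agency
    client: acme
spec:
  wordpressVersion: "6.4"
  domain: shop.acme.example.com
  plugins:
    - woocommerce
  billingAccount: acme-monthly
```

Note that everything here is expressed in terms a WordPress developer understands: version, domain, plugins, billing. The databases, DNS records, certificates, and load balancers behind it would be created and garbage-collected by a controller, and questions like “how many instances does client X have?” become simple API queries over labels.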

All of this can be built on Kubernetes in a way that models both business and technical entities and processes and ties the logic together, without the need to build custom scripts and portals or to store state about our WordPress instances outside the system. All our “master data” lives in Kubernetes custom resources, business logic is executed by our own (or third-party) controllers, and everything is accessible via one universal API.

One mistake many Platform and Ops teams make is exposing native Kubernetes entities such as Pods, Volumes, Services, and Ingresses directly to developers. This is never going to work: developers do not have the time or capacity to start digging into the Kubernetes world. That is why Ops people and Platform Engineers are there: to build abstractions that hide away some of that complexity.

Stay tuned for the second part of this article, or reach out to me to learn more about how I can help you with this topic.