Kubernetes — the 2020 Buzz

Rutujakonde
7 min read · Dec 25, 2020


Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tools are widely available.

Going back in time — History

Let’s take a look at why Kubernetes is so useful by going back in time.

Traditional deployment era: Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.

Virtualized deployment era: As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server’s CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.

Virtualization allows better utilization of resources in a physical server and allows better scalability, because an application can be added or updated easily; it also reduces hardware costs, and much more. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

Container deployment era: Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they provide extra benefits, such as:

  • Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
  • Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
  • Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
  • Observability: not only surfaces OS-level information and metrics, but also application health and other signals.
  • Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
  • Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
  • Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
  • Loosely coupled, distributed, elastic, liberated micro-services: applications are broken into smaller, independent pieces and can be deployed and managed dynamically — not a monolithic stack running on one big single-purpose machine.
  • Resource isolation: predictable application performance.
  • Resource utilization: high efficiency and density.

Why do we need Kubernetes?

Containers are a good way to bundle and run your applications. In a production environment, you need to manage the containers that run the applications and ensure that there is no downtime. For example, if a container goes down, another container needs to start. Wouldn’t it be easier if this behavior was handled by a system?

That’s where Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed systems resiliently. It takes care of scaling and failover for your application, provides deployment patterns, and more. For example, Kubernetes can easily manage a canary deployment for your system.
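To make that concrete, here is a minimal sketch using the official Kubernetes Python client (the kubernetes package on PyPI); the Deployment name, labels, and nginx image are placeholders chosen purely for illustration, not anything from the original article. Declaring replicas=3 asks Kubernetes to keep three copies of the container running and to start a replacement whenever one goes down.

```python
# Minimal sketch, assuming a reachable cluster and the official client
# library ("pip install kubernetes"). All names and the image are
# illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # read credentials from your local kubeconfig

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="hello-web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # keep 3 pods running; replace any that die
        selector=client.V1LabelSelector(match_labels={"app": "hello-web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "hello-web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web",
                        image="nginx:1.19",
                        ports=[client.V1ContainerPort(container_port=80)],
                    )
                ]
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

If a node fails or a container crashes, the Deployment controller notices that fewer than three pods are running and schedules replacements, which is exactly the “another container needs to start” behavior described above.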

OK, but why all the buzz? Why is Kubernetes so popular?

As more and more organizations move to microservice and cloud native architectures that make use of containers, they’re looking for strong, proven platforms. Practitioners are moving to Kubernetes for four main reasons:

1. Kubernetes helps you move faster. Kubernetes allows you to deliver a self-service Platform-as-a-Service (PaaS) that creates a hardware-layer abstraction for development teams. Your development teams can quickly and efficiently request the resources they need (a sketch of what such a resource request looks like follows this list). If they need more resources to handle additional load, they can get those just as quickly, since resources all come from an infrastructure shared across all your teams.

2. Kubernetes is cost efficient. Kubernetes and containers allow for much better resource utilization than hypervisors and VMs do; because containers are so lightweight, they require fewer CPU and memory resources to run.

3. Kubernetes is cloud agnostic. Kubernetes runs on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP), and you can also run it on-premises. You can move workloads without having to redesign your applications or completely rethink your infrastructure, which lets you standardize on a platform and avoid vendor lock-in.

4. Cloud providers will manage Kubernetes for you. Kubernetes is currently the clear standard among container orchestration tools, so it should come as no surprise that major cloud providers offer plenty of Kubernetes-as-a-Service offerings. Amazon EKS, Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), Red Hat OpenShift, and IBM Cloud Kubernetes Service all provide fully managed Kubernetes platforms, so you can focus on what matters most to you: shipping applications that delight your users.
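As a small aside to point 1 above, the resources a team “requests” are usually expressed as CPU and memory requests and limits on each container. The sketch below uses the same official Python client; the container name, image, and numbers are invented for illustration only.

```python
# Illustrative only: a container spec with resource requests and limits,
# built with the official Kubernetes Python client. Names and numbers
# are made-up examples, not recommendations.
from kubernetes import client

container = client.V1Container(
    name="api",               # hypothetical container name
    image="example/api:1.0",  # hypothetical image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "128Mi"},  # what the scheduler reserves
        limits={"cpu": "500m", "memory": "256Mi"},    # hard cap enforced at runtime
    ),
)
print(container.resources.requests)  # {'cpu': '250m', 'memory': '128Mi'}
```

Because the scheduler packs containers onto nodes based on these requests, teams get the self-service feel of a PaaS while the cluster keeps overall utilization high, which is also where the cost efficiency in point 2 comes from.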

Kubernetes Leads the Pack

The graph below shows the skyrocketing interest in the project.

So, how does Kubernetes work?

The central component of Kubernetes is the cluster. A cluster is made up of many virtual or physical machines that each serve a specialized function, either as a master or as a node. Each node hosts groups of one or more containers (which contain your applications), and the master communicates with the nodes about when to create or destroy containers. At the same time, it tells the nodes how to re-route traffic based on new container placements.
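As a small illustration of that structure, the sketch below (again with the official Python client, assuming you have kubeconfig access to some cluster) lists the nodes and shows which node each pod, i.e. each group of containers, was scheduled onto.

```python
# Sketch only: inspect cluster structure with the official Python client,
# assuming kubeconfig access to an existing cluster.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# The machines that make up the cluster.
for node in core.list_node().items:
    print("node:", node.metadata.name)

# Each pod (a group of one or more containers) and the node it runs on.
for pod in core.list_pod_for_all_namespaces().items:
    print(f"pod {pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```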

The following diagram depicts a general outline of a Kubernetes cluster:

Who uses Kubernetes?

COMPANIES

2,253 companies reportedly use Kubernetes in their tech stacks, including Google, Shopify, and Slack.

DEVELOPERS

24,652 developers on StackShare have stated that they use Kubernetes.

Kubernetes Integrations

Docker, Microsoft Azure, Ansible, Vagrant, and Google Compute Engine are some of the popular tools that integrate with Kubernetes. Here’s a list of all 226 tools that integrate with Kubernetes.

CASE STUDY: Adform

Improving Performance and Morale with Cloud Native

Challenge

Adform’s mission is to provide a secure and transparent full stack of advertising technology to enable digital ads across devices. The company has a large infrastructure: OpenStack-based private clouds running on 1,100 physical servers in 7 data centers around the world, 3 of which were opened in the past year. With the company’s growth, the infrastructure team felt that “our private cloud was not really flexible enough,” says IT System Engineer Edgaras Apšega. “The biggest pain point is that our developers need to maintain their virtual machines, so rolling out technology and new software takes time. We were really struggling with our releases, and we didn’t have self-healing infrastructure.”

Solution

The team, which had already been using Prometheus for monitoring, embraced Kubernetes and cloud native practices in 2017. “To start our Kubernetes journey, we had to adapt all our software, so we had to choose newer frameworks,” says Apšega. “We also adopted the microservices way, so observability is much better because you can inspect the bug or the services separately.”

Impact

“Kubernetes helps our business a lot because our features are coming to market faster,” says Apšega. The release process went from several hours to several minutes. Autoscaling has been at least 6 times faster than the semi-manual VM bootstrapping and application deployment required before. The team estimates that the company has seen cost savings of 4–5x due to less hardware and fewer person-hours needed to set up the hardware and virtual machines, metrics, and logging. Hardware usage has dropped as well, with containers achieving 2–3 times better efficiency than virtual machines. “The deployments are very easy because developers just push the code and it automatically appears on Kubernetes,” says Apšega. Prometheus has also had a positive impact: “It provides high availability for metrics and alerting. We monitor everything starting from hardware to applications. Having all the metrics in Grafana dashboards provides great insight on your systems.”

“Kubernetes enabled the self-healing and immutable infrastructure. We can do faster releases, so our developers are really happy. They can ship our features faster than before, and that makes our clients happier.”

CONCLUSION

Kubernetes is a great tool for orchestrating containerized applications. It automates the very complex task of dynamically scaling an application in real time. The catch is that Kubernetes is itself a complex system, which can make troubleshooting difficult when things are not working as expected.
