Industry uses of Kubernetes and the use cases it solves.

Rishabh Arya
9 min read · Aug 3, 2023

Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. It is an orchestration tool for containers.

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
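To make "declarative configuration" concrete, here is a minimal sketch using the official Kubernetes Python client (the `kubernetes` package): you declare the desired state, a Deployment with three replicas of a container, and the cluster's controllers continuously work to reach and hold that state. The deployment name, image, and namespace below are illustrative assumptions, not anything prescribed by this article.

```python
# Minimal sketch: declaring a desired state (3 replicas of an nginx container)
# with the official Kubernetes Python client. All names are illustrative.
from kubernetes import client, config

def create_web_deployment():
    config.load_kube_config()          # use local ~/.kube/config credentials
    apps = client.AppsV1Api()

    container = client.V1Container(
        name="web",
        image="nginx:1.25",
        ports=[client.V1ContainerPort(container_port=80)],
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,                # the declared desired state
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "web"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )
    # Kubernetes reconciles the cluster toward this declared state.
    apps.create_namespaced_deployment(namespace="default", body=deployment)

if __name__ == "__main__":
    create_web_deployment()
```

If one of these Pods is deleted or its node fails, the Deployment controller simply creates a replacement, which is the automation the definition above refers to.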

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

Traditional deployment era: Early on, organizations ran applications on physical servers. There was no way to define resource boundaries for applications in a physical server, and this caused resource allocation issues. For example, if multiple applications run on a physical server, there can be instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution for this would be to run each application on a different physical server. But this did not scale as resources were underutilized, and it was expensive for organizations to maintain many physical servers.

Virtualized deployment era: As a solution, virtualization was introduced. It allows you to run multiple Virtual Machines (VMs) on a single physical server’s CPU. Virtualization allows applications to be isolated between VMs and provides a level of security as the information of one application cannot be freely accessed by another application.

Virtualization allows better utilization of resources in a physical server and better scalability, because an application can be added or updated easily; it also reduces hardware costs, among other benefits. With virtualization you can present a set of physical resources as a cluster of disposable virtual machines.

Each VM is a full machine running all the components, including its own operating system, on top of the virtualized hardware.

Container deployment era: Containers are similar to VMs, but they have relaxed isolation properties to share the Operating System (OS) among the applications. Therefore, containers are considered lightweight. Similar to a VM, a container has its own filesystem, share of CPU, memory, process space, and more. As they are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.

Containers have become popular because they provide extra benefits, such as:

  • Agile application creation and deployment: increased ease and efficiency of container image creation compared to VM image use.
  • Continuous development, integration, and deployment: provides for reliable and frequent container image build and deployment with quick and easy rollbacks (due to image immutability).
  • Dev and Ops separation of concerns: create application container images at build/release time rather than deployment time, thereby decoupling applications from infrastructure.
  • Observability: not only surfaces OS-level information and metrics, but also application health and other signals.
  • Environmental consistency across development, testing, and production: Runs the same on a laptop as it does in the cloud.
  • Cloud and OS distribution portability: Runs on Ubuntu, RHEL, CoreOS, on-premises, on major public clouds, and anywhere else.
  • Application-centric management: Raises the level of abstraction from running an OS on virtual hardware to running an application on an OS using logical resources.
  • Loosely coupled, distributed, elastic, liberated micro-services: applications are broken into smaller, independent pieces and can be deployed and managed dynamically — not a monolithic stack running on one big single-purpose machine.
  • Resource isolation: predictable application performance (see the sketch after this list).
  • Resource utilization: high efficiency and density.
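As referenced in the resource isolation bullet above, here is a minimal sketch, assuming the official Kubernetes Python client, of how that isolation is expressed per container: requests reserve a share of CPU and memory, and limits cap it. The names and numbers are illustrative assumptions.

```python
# Sketch of per-container resource isolation with the Kubernetes Python client.
# Requests are the scheduler's guaranteed share; limits are a hard ceiling.
from kubernetes import client

api_container = client.V1Container(
    name="api",                                       # illustrative name
    image="example/api:1.0",                          # illustrative image
    resources=client.V1ResourceRequirements(
        requests={"cpu": "250m", "memory": "256Mi"},  # reserved for this container
        limits={"cpu": "500m", "memory": "512Mi"},    # never allowed to exceed
    ),
)
```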

Kubernetes use cases by different companies

IBM

✔ Challenges Faced:

IBM Cloud offers public, private, and hybrid cloud functionality across a diverse set of runtimes, from its OpenWhisk-based function-as-a-service (FaaS) offering and managed Kubernetes and containers to its Cloud Foundry platform as a service (PaaS). These runtimes are combined with the power of the company's enterprise technologies, such as MQ and DB2, its artificial intelligence (AI) platform Watson, and its data analytics services. Users of IBM Cloud can draw on more than 170 different cloud native services in its catalog, including capabilities such as IBM's Weather Company API and data services.

In the latter part of 2017, the IBM Cloud Container Registry team wanted to build out an image trust service.

✔ Solution they came up with:

The work on this new service culminated with its public availability in the IBM Cloud in February 2018. The image trust service, called Portieris, is fully based on the Cloud Native Computing Foundation (CNCF) open source project Notary, according to Michael Hough, a software developer with the IBM Cloud Container Registry team.

“Portieris is a Kubernetes admission controller for enforcing content trust. Users can create image security policies for each Kubernetes namespace, or at the cluster level, and enforce different levels of trust for different images. Portieris is a key part of IBM’s trust story, since it makes it possible for users to consume the company’s Notary offering from within their IKS clusters.”

In this offering, the Notary server runs in IBM's cloud, while Portieris runs inside the IKS cluster. This lets an IKS cluster verify that the images it loads containers from contain exactly what users expect, with Portieris applying that verification.
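Portieris itself is an open source admission controller, and the sketch below is not its implementation. It only illustrates, with Flask and an assumed trusted-registry prefix, the general mechanism a Kubernetes validating admission webhook uses: the API server sends an AdmissionReview for each Pod, and the webhook answers allowed or denied based on the images the Pod references. Portieris performs the same kind of interception but verifies Notary signatures against the configured image security policies.

```python
# Illustrative sketch of a Kubernetes validating admission webhook (not Portieris).
# It denies Pods whose container images are not from an assumed trusted registry.
from flask import Flask, request, jsonify

app = Flask(__name__)
TRUSTED_PREFIX = "icr.io/"   # assumed trusted registry prefix, for illustration only

@app.route("/validate", methods=["POST"])
def validate():
    review = request.get_json()
    uid = review["request"]["uid"]
    pod_spec = review["request"]["object"]["spec"]
    images = [c["image"] for c in pod_spec.get("containers", [])]
    untrusted = [img for img in images if not img.startswith(TRUSTED_PREFIX)]

    response = {"uid": uid, "allowed": not untrusted}
    if untrusted:
        response["status"] = {"message": f"untrusted images: {untrusted}"}

    return jsonify({
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    })

if __name__ == "__main__":
    # In a real cluster this runs behind a TLS-terminated Service referenced by
    # a ValidatingWebhookConfiguration, which tells the API server to call it.
    app.run(host="0.0.0.0", port=8443)
```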

✔ Impact on the industry:

IBM’s intention in offering a managed Kubernetes container service and image registry is to provide a fully secure end-to-end platform for its enterprise customers.

“Image signing is one key part of that offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem,” Hough says.

The company had not been offering image signing before, and Notary is the tool it used to implement that capability.

“We had a multi-tenant Docker Registry with private image hosting,” Hough says.

“The Docker Registry uses hashes to ensure that image content is correct, and data is encrypted both in flight and at rest. But it does not provide any guarantees of who pushed an image. We used Notary to enable users to sign images in their private registry namespaces if they so choose.”

✔ The key reason for selecting Notary

The key reason for selecting Notary was that it was already compatible with the existing authentication stack IBM's container registry was using. Another was the design of TUF, which means the registry team does not have to enter the business of key management. Both were "attractive design decisions that confirmed our choice of Notary," Michael Hough says.

“Image signing is one key part of our Kubernetes container service offering, and our container registry team saw Notary as the de facto way to implement that capability in the current Docker and container ecosystem” — MICHAEL HOUGH, A SOFTWARE DEVELOPER WITH THE IBM CLOUD CONTAINER REGISTRY TEAM

Now that the Notary-implemented service is generally available in IBM’s public cloud as a component of its existing IBM Cloud Container Registry, it is deployed as a highly available service across five IBM Cloud regions. This high-availability deployment has three instances across two zones in each of the five regions, load balanced with failover support. “We have also deployed it with end-to-end TLS support through to our back-end IBM Cloudant persistence storage service,” Hough says.

✔ Growth of IBM due to Kubernetes

“After contribution to CNCF of both TUF and Notary, we perceived that it was becoming the de facto standard for image signing in the container ecosystem,” says Michael Hough, a software developer with the IBM Cloud Container Registry team.

The IBM team has created and open sourced a Kubernetes admission controller called Portieris, which uses Notary signing information combined with customer-defined security policies to control image deployment into their cluster. “We are hoping to drive adoption of Portieris through its use of our Notary offering,” Hough says.

IBM has been a key player in the creation and support of open source foundations, including CNCF. Todd Moore, IBM’s vice president of Open Technology, is the current CNCF governing board chair and a number of IBMers are active across many of the CNCF member projects.

“With our IBM Cloud Kubernetes as-a-service offering and the admission controller we have made available, it allows both IBM services as well as customers of the IBM public cloud to use security policies to control service deployment.” — MICHAEL HOUGH, A SOFTWARE DEVELOPER WITH THE IBM CLOUD CONTAINER REGISTRY TEAM

“Given that, we see CNCF as a safe haven for cloud native open source, providing stability, longevity, and expected maintenance for member projects — no matter the originating vendor or project,” Hough says. Because the entire cloud native world is a fast-moving area with many competing vendors and solutions, “we see the CNCF model as an arbiter of openness and fair play across the ecosystem,” he says.

— — — — — — — — — — — — — — — — — — — — — —

Spotify

Spotify is a digital music, podcast, and video streaming service that gives you access to millions of songs and other content from artists all over the world.

1. Challenge Faced by Spotify:

Launched in 2008, the audio-streaming platform has grown to over 200 million monthly active users across the world. “Our goal is to empower creators and enable a really immersive listening experience for all of the consumers that we have today — and hopefully the consumers we’ll have in the future,” says Jai Chakrabarti, Director of Engineering, Infrastructure and Operations. An early adopter of microservices and Docker, Spotify had containerized microservices running across its fleet of VMs with a homegrown container orchestration system called Helios. By late 2017, it became clear that “having a small team working on the features was just not as efficient as adopting something that was supported by a much bigger community,” he says.

2. How Kubernetes helped Spotify solve its challenges:

“We saw the amazing community that had grown up around Kubernetes, and we wanted to be part of that,” says Chakrabarti. Kubernetes was more feature-rich than Helios. Plus, “we wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.” At the same time, the team wanted to contribute its expertise and influence in the flourishing Kubernetes community. The migration, which would happen in parallel with Helios running, could go smoothly because “Kubernetes fit very nicely as a complement and now as a replacement to Helios,” says Chakrabarti.

3. Impact after using Kubernetes:

The team spent much of 2018 addressing the core technology issues required for a migration, which started late that year and is a big focus for 2019. “A small percentage of our fleet has been migrated to Kubernetes, and some of the things that we’ve heard from our internal teams are that they have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify,” says Chakrabarti. The biggest service currently running on Kubernetes takes about 10 million requests per second as an aggregate service and benefits greatly from autoscaling, says Site Reliability Engineer James Wen. Plus, he adds, “Before, teams would have to wait for an hour to create a new service and get an operational host to run it in production, but with Kubernetes, they can do that on the order of seconds and minutes.” In addition, with Kubernetes’s bin-packing and multi-tenancy capabilities, CPU utilization has improved on average two- to threefold.
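For readers unfamiliar with the autoscaling James Wen credits here, a minimal sketch follows, assuming the official Kubernetes Python client: a HorizontalPodAutoscaler that grows and shrinks a Deployment based on average CPU utilization. The target name and thresholds are illustrative assumptions, not Spotify's configuration.

```python
# Sketch of CPU-based autoscaling with a HorizontalPodAutoscaler (autoscaling/v1).
# All names and thresholds are illustrative.
from kubernetes import client, config

config.load_kube_config()
autoscaling = client.AutoscalingV1Api()

hpa = client.V1HorizontalPodAutoscaler(
    api_version="autoscaling/v1",
    kind="HorizontalPodAutoscaler",
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web",
        ),
        min_replicas=3,
        max_replicas=50,
        target_cpu_utilization_percentage=70,  # add Pods when average CPU passes 70%
    ),
)
autoscaling.create_namespaced_horizontal_pod_autoscaler(namespace="default", body=hpa)
```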

“We saw the amazing community that’s grown up around Kubernetes, and we wanted to be part of that. We wanted to benefit from added velocity and reduced cost, and also align with the rest of the industry on best practices and tools.”

— JAI CHAKRABARTI, DIRECTOR OF ENGINEERING, INFRASTRUCTURE AND OPERATIONS, SPOTIFY

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

Thanks for reading.


Rishabh Arya

I am an active learner who likes to challenge every problem with a can-do mindset in order to make any idea a reality.