Priyanka Somrah

Mar 5, 2018

A Comparison of Red Hat's OpenShift, CoreOS' Tectonic, and Vanilla Kubernetes

Red Hat's $250M acquisition of CoreOS this January is set to be a game changer for the enterprise technology ecosystem. It is imperative to understand how this acquisition could disrupt current trends in the Kubernetes community. Having hit the mainstream market, cloud is now extending its impact beyond Silicon Valley, and with a Red Hat-CoreOS duo, we can expect Red Hat to reign over the upstream open source communities.

CoreOS is a San Francisco-based Series B startup that has been hailed as a product innovator in the Kubernetes community and the creator of Tectonic, which is offered as a Container-as-a-Service (CaaS). From 2014 to 2016, CoreOS raised a total of $48M in funding led by GV, Kleiner Perkins and other investors. Big deal!

Meanwhile, Red Hat, having successfully predicted the shift towards containers, has rearchitected its product, OpenShift, a Platform-as-a-Service (PaaS), to integrate a Kubernetes core in the existing platform.

It isn’t hard to imagine why Red Hat’s acquisition of CoreOS is a big step. Not only does the acquisition highlight the significance of containers in the enterprise infrastructure ecosystem, it also signals that the deployment of containers in a Kubernetes cluster is on its way to becoming the very foundation of contemporary applications.

Following the Red Hat-CoreOS acquisition, I expect to see CoreOS’ Tectonic developed as a public collaboration and made freely available. Given that Tectonic is, at its core, a commercial Kubernetes platform backing its enterprise customers with commercial services, OpenShift will stand as an enterprise-grade container platform and a direct competitor to Ubuntu for Linux distribution in the public cloud. But the most interesting outcome of this acquisition will be the way customers build containerized applications and deploy them in just about any open-source environment. I look forward to further development of the existing hybrid cloud platform in a way that will hopefully accommodate modern apps. I bet we’re in for a real treat!

Going forward, I will analyze our two main options for Kubernetes implementation, OpenShift and Tectonic, and see how they compare against the basic model of Vanilla Kubernetes. To be clear, a Kubernetes cluster can be implemented in multiple ways. Our implementation options fall broadly into three categories: Vanilla Kubernetes, Vendor Linux Distributions and Hosted Container Platforms.

The folks at The New Stack mapped out a stellar table to make sense of the various Kubernetes implementations.

The New Stack’s “Community-Supported” distributions are what I previously referred to as Vanilla Kubernetes. Interestingly, the table separates vendor distributions based on the value they add. The fourth and final column reflects the companies with PaaS offerings. All in all, the table aptly maps out the Kubernetes distributions across the platforms on offer. But if you look closely, you will see that it is missing the category of Hosted Container Platforms, which includes cloud-native giants like Amazon’s EC2 Container Service (ECS) and Microsoft’s Azure Container Service. Hosted Container Platforms might be food for a different post, but for now let’s jump to some high-value analysis.

(Disclaimer: I am not a software engineer. I am just someone with a background in economics who is deeply inquisitive about everything containers and K8s!)

OpenShift Container Platform

Platform Overview

Red Hat, Inc. (NYSE: RHT) is the architect behind Red Hat OpenShift, a private PaaS. Red Hat was quick to develop a taste for containers. It took a giant leap when it re-engineered its core operating system, Red Hat Enterprise Linux (RHEL), to integrate container management. What we have now is a new OpenShift with a Kubernetes orchestration core.

Customer perspective

With a new version of OpenShift that has both an Integrated Development Environment (IDE) and a Kubernetes orchestration platform, customers can now easily deploy OpenShift on-prem (short for on-premises) or through cloud-based solutions. Given that Docker-based containers are fully viable on the new IDE platform, developers can now create, test and deploy Docker-formatted applications to the cloud infrastructure.

OpenShift is portable. Red Hat’s DeltaCloud API allows its customers to seamlessly migrate software deployments from one cloud infrastructure to another.

**DeltaCloud entered the Apache Incubator in May 2010 and has since graduated to a top-level project. Check out the list of software currently in Apache incubation, as well as the projects that have already graduated!

OpenShift is a stable platform. Red Hat Enterprise Linux, a container orchestration layer, a Docker runtime and a set of thoroughly validated containers stack up to form OpenShift’s core architecture. Given this assembly of components, the infrastructure software is interoperable, which means we have a stable containerized platform.

(In tech jargon, the interoperability of a system is the characteristic that facilitates the use and exchange of data between systems and software.)

OpenShift supports independent software vendors. For the most part, these vendors hail from Fortune 100 businesses. They bring in third-party solutions as they build on Red Hat’s OpenShift, further securing and stabilizing the platform.

OpenShift has solid storage capacity. Kubernetes has rendered storage very simple with a feature called Storage Classes. With Storage Classes, storage is easily managed through a set of parameters that allow cluster administrators to profile the different “classes” of storage on offer.

Storage on the platform is fulfilled when containerized applications running on OpenShift request a volume through a given Storage Class.

OpenShift doesn’t, however, rely solely on Kubernetes Storage Classes. It uses Container-Native Storage (CNS) for persistent storage capabilities. CNS provides OpenShift with storage capacity that is scalable, secure and portable.
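As a concrete illustration, here is a minimal sketch of how a cluster administrator might profile a storage “class”; the class name, endpoint and parameters below are illustrative assumptions, not taken from a real deployment (CNS uses GlusterFS underneath, which is the provisioner shown here).

```shell
# Define a StorageClass named "fast" backed by the GlusterFS provisioner
# (the technology underneath Container-Native Storage).
# The name and the Heketi REST endpoint are hypothetical.
kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.example.com:8080"
EOF

# List the classes the cluster now offers
kubectl get storageclass
```

Applications never touch these parameters directly; they only name the class they want, which is what keeps storage management simple for administrators.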

OpenShift can be installed in many ways. An Ansible-based installation (root or non-root) is used to gather the data required for the OpenShift components to run on the targeted hosts, and the same installation process finally deploys those components onto the hosts.
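To make the Ansible-based flow concrete, here is a hedged sketch of an inventory and the playbook invocation for an OpenShift 3.x advanced install; the hostnames are placeholders and the playbook path varies between openshift-ansible releases.

```shell
# Minimal inventory describing the target hosts (hostnames are hypothetical)
cat > /etc/ansible/hosts <<'EOF'
[OSEv3:children]
masters
nodes

[OSEv3:vars]
ansible_ssh_user=root
openshift_deployment_type=openshift-enterprise

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com
node2.example.com
EOF

# Gather facts about the hosts, then deploy the OpenShift components onto them
ansible-playbook ~/openshift-ansible/playbooks/deploy_cluster.yml
```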

OpenShift sustains both automated and manual upgrades. You can perform non-disruptive manual upgrades, but you must upgrade each of the components in a given cluster. Automated upgrades can likewise be performed seamlessly. Depending on whether you installed OpenShift using the advanced installation or the quick installation, you perform upgrades using the “upgrade playbook” or the “installer,” respectively.
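For the automated path, the upgrade is itself an Ansible playbook run. A sketch, with the caveat that the version directory (v3_9 here) is an example and the exact path depends on the release you are moving to:

```shell
# Run the version-specific upgrade playbook against the same inventory
# used for installation; the path differs between openshift-ansible releases.
ansible-playbook \
  ~/openshift-ansible/playbooks/byo/openshift-cluster/upgrade/v3_9/upgrade.yml
```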

CoreOS Tectonic

Platform Overview

Tectonic is CoreOS’ own enterprise-ready Kubernetes platform, secure and automated. When CoreOS identified a security problem plaguing the way data is stored on the Internet, it created a novel brand of computing known as Distributed Trusted Computing. With that, I can now introduce Tectonic as the most trusted and secure Kubernetes platform for deploying, managing and securing containers. When everything is cryptographically verifiable and auditable, attacks against the deployment pipeline and other security breaches are detected and effectively thwarted. This is the Tectonic revolution.

Tectonic is a Container-as-a-Service. In other words, the primary role of Tectonic is to provide a framework for managing containers and deploying applications. Building a whole new category of infrastructure with an eye set on commercial distribution, Tectonic packages application containers and automates operational tasks.

CoreOS pioneered etcd. The distributed key-value store, which is primarily responsible for storing and replicating all cluster state, has been configured in a way that makes both the automation mechanism and data storage reliable.
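etcd’s role is easy to see from the command line: every write is replicated across the cluster and readable from any member, which is exactly what Kubernetes relies on for its cluster state. A small sketch using the v3 `etcdctl` client (the key and value are arbitrary examples, and a running etcd cluster is assumed):

```shell
# Write a key-value pair into the store; it is replicated to all etcd members
ETCDCTL_API=3 etcdctl put /registry/demo "hello"

# Read it back; any member of the cluster returns the same answer
ETCDCTL_API=3 etcdctl get /registry/demo

# Watch the key for changes, which is how Kubernetes components
# react to updates in cluster state
ETCDCTL_API=3 etcdctl watch /registry/demo
```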

Tectonic delivers GIFEE. GIFEE, or “Google Infrastructure for Everyone Else,” was conceived with the idea of making software that used to be available strictly to hyperscale companies available to the whole community. With GIFEE, infrastructures are robust, secure and scalable. As CoreOS’ CEO Alex Polvi puts it, “GIFEE is a style of managing infrastructure where you can pull the plug on any server at any time, and the apps keep running.” Way to go, Polvi!

Tectonic sustains portability. As the cloud computing landscape becomes overwhelmingly restrictive, CoreOS identified that restrictiveness as a definite problem and figured the only way forward was to build a cloud infrastructure deployed with freedom through an open source system. For this purpose, CoreOS now has a Kubernetes-as-a-Service offering that gives customers the freedom to draw on both open source software and cloud providers.

With greater freedom comes portability. In tackling this issue, CoreOS is addressing the existing “vendor lock-in” problem, which is directly linked to portability. Think of the process that goes into moving data from one cloud environment to another. Oftentimes, the data must first be downloaded to the customer’s site before the whole cluster of data is transferred to the new provider’s cloud infrastructure. The complexity of transferring a product or service “over the clouds” results in what is coined the vendor lock-in situation. Hence, by aggressively (and successfully) tackling this vendor lock-in problem, CoreOS is consciously building a platform that sustains portability across hybrid cloud solutions.

** I use the term “hybrid” to describe a cloud computing environment that mixes on-prem, private and third-party cloud services orchestrated across infrastructures.

Tectonic has strong storage capacity. When I first tackled the case of OpenShift’s storage mechanism, I left out the role that “Persistent Volumes” play in Kubernetes Storage Classes, so I will briefly walk through it below:

Persistent Volumes, or PVs, make up the storage resources in a Kubernetes cluster. Since pods run containers, pods also use PVs. However, PVs have a lifecycle that is independent of each of the individual pods that use them. This is an important feature that provides for persistent storage resources.

Persistent Volumes allow a given Storage Class to be requested by name and allow for permanent storage of data. I am not delving into the depths of the mechanism, but the overarching point I want to highlight is that Kubernetes Storage Classes simplify storage management.
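Tying the two ideas together, a pod asks for persistent storage through a PersistentVolumeClaim that names a Storage Class. A sketch, with an assumed class name “fast” and an arbitrary size request:

```shell
# Request 5Gi of storage from the "fast" class; both names are illustrative
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: fast
EOF

# The claim binds to a PersistentVolume whose lifecycle outlives any
# single pod that mounts it
kubectl get pvc demo-claim
```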

Tectonic has a couple of installation options. The choice is yours! You can use the Tectonic Installer, create a CoreOS account and install a scalable cluster, or you can simply deploy Kubernetes onto any type of physical infrastructure using Tectonic’s PXE-based tools.

Tectonic sustains automated upgrades. As mentioned above, CoreOS has a deep commitment to commercial distribution, which is now furthered by the launch of Stackanetes. Put simply, Stackanetes runs OpenStack as a set of containerized applications on Kubernetes and the CoreOS stack, and, interestingly, this technology was engineered to ease the operation of application clusters on the containerized system. As a result, Stackanetes also provides a painless process for upgrading the system. It must be highlighted that Tectonic users should NOT conduct manual upgrades.

Vanilla Kubernetes

Platform Overview

Building your very own cloud with Vanilla Kubernetes is hard. But all is not lost, for we have a thriving Kubernetes community, so it’s not too hard to get hold of useful instructions. Should you be really curious about how to build your Kubernetes cloud, here’s what I found on GitHub.

Hence, if you opt out of using OpenShift or Tectonic, feel free to call on some in-house expertise to navigate the technical complexities of building plain vanilla Kubernetes.

Storage: Vanilla Kubernetes uses its very own Storage Classes, similar to OpenShift and Tectonic.

Vanilla Kubernetes eases the installation process. If you scroll back up to The New Stack’s table, you will see kubeadm and kops listed under the first header. Kubeadm is just one of the many available automated toolkit options and can be run on any type of machine. However, by design, kubeadm does not install any networking solution; you must add one yourself. Given that it integrates easily with Ansible and Terraform, kubeadm makes a great toolkit for new users who are just getting started.
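A hedged sketch of the kubeadm bootstrap flow: since kubeadm deliberately leaves networking to you, a pod network add-on (Flannel here, as one example) is applied separately. The IP, token and hash in the join command are placeholders for the values that `kubeadm init` prints.

```shell
# On the master: initialize the control plane
# (the pod CIDR shown matches Flannel's default)
kubeadm init --pod-network-cidr=10.244.0.0/16

# Install a networking solution; kubeadm intentionally does not do this for you
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

# On each worker: join the cluster using the token printed by `kubeadm init`
kubeadm join 10.0.0.1:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```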

Vanilla Kubernetes is easily upgradable. If you are looking for curated updates or migrations, Vanilla Kubernetes is NOT what you are looking for. In fact, with Vanilla Kubernetes there are no real guarantees for production. Yet it receives the latest upgrades faster than Tectonic or OpenShift. The reasoning is pretty straightforward: unlike the two aforementioned cloud-native platforms, Vanilla Kubernetes has a simple infrastructure that does not require additional time to validate and certify upgrades. Upgrades are run simply by using the command: kubeadm upgrade apply [version].
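In practice the upgrade is a short command sequence on a running cluster; the version string below is an example, and the node name is a placeholder.

```shell
# Check which versions are available and whether the cluster can move to them
kubeadm upgrade plan

# Apply the upgrade to the control plane (version string is illustrative)
kubeadm upgrade apply v1.10.0

# Each node's kubelet is then upgraded and restarted separately, e.g.:
kubectl drain <node-name> --ignore-daemonsets
systemctl restart kubelet
kubectl uncordon <node-name>
```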

To wrap it all up, I’m including some super cool links for you to play around with. So go forth, and containerize!

OpenShift:

Tectonic:

Vanilla K8s:
