This month we are going to deal with a very controversial topic within the AWS world.

In the past few months, I have come across quite a few people who tend to hate Kubernetes and try to avoid it at all costs. These are not people wedded to legacy models; quite the opposite: they have extensive backgrounds in application modernization and in AWS.

This is interesting because Kubernetes adoption keeps growing every day, and it has many defenders.

Kubernetes is a great product. I think no one questions this, and it has a lot of valid use cases.

There are also many valid alternatives to Kubernetes that may be more suitable depending on the use case.

But what problems do I see with Kubernetes, and why do I prefer other alternatives? I invite you to follow me through this beautiful story of hatred for a technology stack.

What Is Kubernetes?

“Kubernetes is a container orchestrator”. I have had a lot of discussions about this sentence, which is still on the official Kubernetes page. It has even generated quite a few jokes (someone has a very nice kimono with that phrase screen-printed on the back). But in short (and agreeing with the owner of the kimono), Kubernetes is not a container orchestrator; it is much more than that, and this is where perhaps the first problem arises: Kubernetes is not easy; it never was, and it never will be.

To explain this, we must go back to the origin of Kubernetes. In 1979… Okay, it is not necessary to go that far back, but the ancestors of application containerization date back to then. ;) In 2006, several Google engineers began to develop something curious within the Linux kernel called “process containers”, although they later renamed it “control groups” or, as it is better known, “cgroups”. This feature is what made possible the birth of containers as we know them today.

Google began to use this functionality internally for its applications and, as it needed to manage them, it created Borg (the ancestor of Kubernetes) to manage its own containers.
Years later, Docker was released (previous container implementations already existed, but none as good and simple).

The first problem arose here: Containers were an incredible idea, but there was no way to manage and govern them (it is true that there were certain solutions, but they were not very complete).

Everything changed at the end of 2014. Google rewrote Borg with all the knowledge it had gained and released it as Kubernetes. Kubernetes was here to fix all these problems and quickly became the preferred technology for managing and governing containers. This is a very short summary in broad strokes; we could write line after line about the history of containers since 1979. Back to 2014: it was a prolific year. At the end of that year, two container-based AWS services were launched that we will talk about later: Lambda and ECS.

The Problems Begin

Kubernetes was the most complete and mature solution. Google had been using and maintaining Borg for almost 10 years when it launched Kubernetes.

But it is a solution that is designed to manage thousands and thousands of containers distributed among hundreds of physical clusters, and this is quite noticeable when using it.
It is not intended for small workloads; rather, it is intended for very large workloads.

Kubernetes solved many things, but it also brought new problems, such as managing Kubernetes networking, workload isolation, scaling, security hardening, encryption, the deployment model, the operating model, patching, etc.

A clear example of these problems is scaling Kubernetes. Scaling a pod (which is a group of one or more containers) up and down is easy, as the sketch below shows. But scaling the infrastructure the pods run on (the nodes) up and down is very complicated: you have to create the infrastructure, the pods do not have homogeneous sizes, you have to spread the replicas of a pod across different servers, repack pods onto the same server to empty others so they can be turned off, and so on.
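To illustrate the easy half of that claim, here is a minimal sketch using the official Kubernetes Python client; the deployment and namespace names are hypothetical, and the hard part (scaling the nodes underneath) is exactly what this snippet does not touch.

```python
# Minimal sketch (assumed names): scaling a Deployment's pods with the
# official Kubernetes Python client. Scaling pods is one API call; scaling
# the nodes underneath them is the part this snippet cannot solve for you.
from kubernetes import client, config

config.load_kube_config()            # reads your local kubeconfig
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="my-app",                   # hypothetical Deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},  # jump to 5 replicas
)
```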

Although there are many solutions that can help (such as Karpenter), the effort and cost of managing and maintaining a Kubernetes cluster are very high.

Kubernetes requires a lot of expertise in the technology and a team to maintain the technology stack.

In 2014, cloud adoption was still limited; OpenStack, for example, was at its peak. At that time, the IT world was much more complex and far more dependent on pure infrastructure.

Managing layers and layers of complexity was our daily bread. But we are not in 2014 anymore and the cloud has made life easier for us and shifted the paradigm.

We now live in a time when we try to simplify these tasks as much as possible and developers are empowered to speed up deployments. Adding layers of infrastructure complexity, then, goes against the grain.

This is where a series of cloud services come into play that are simpler than a pure Kubernetes cluster but can still run containerized workloads: Lambda, Fargate, ECS (Elastic Container Service) and EKS (Elastic Kubernetes Service).

I'm going to be honest with you: none of them, not even EKS, which is a managed Kubernetes service, has the power of pure Kubernetes. But we don't need that power.

Is It Necessary to Deploy Kubernetes in All Use Cases?

The answer is no. Many use cases do not require something as complex as Kubernetes.

Lambda is a serverless service that allows code to be run directly, without the need to provision infrastructure.

You don't manage the container itself; Lambda generates a pre-built container or execution context that allows us to run code directly.
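As a minimal sketch of that experience (the function and payload fields below are hypothetical), this is all the code you deploy for a simple Python Lambda; everything underneath is provisioned by AWS:

```python
# Minimal sketch of a Python Lambda handler: you ship only this function,
# and AWS provisions and runs the execution environment for you.
import json

def handler(event, context):
    # 'event' carries the trigger payload (API Gateway request, S3 event, etc.)
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```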
Fargate (for ECS or EKS) is a service that lets you run containers in serverless mode, without having to worry about the cluster running the workload.

Both Lambda and Fargate remove you from all that complexity; they take care of managing it. You simply deploy your code or your image and that’s it. Something very simple but very powerful at the same time.

Both make use of a very interesting open source technology developed by Amazon called Firecracker.

ECS is an AWS-managed container orchestrator. It’s much simpler than Kubernetes. It is just a container orchestrator that delegates the rest of the tasks to other AWS services.
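As a hedged sketch of how thin that layer feels in practice (the cluster, task definition and subnet names below are hypothetical placeholders), launching a container on ECS with the Fargate launch type is a single API call via boto3:

```python
# Minimal sketch: running one container on ECS with the Fargate launch type.
# Assumes a cluster and a registered task definition already exist; all names
# are hypothetical placeholders.
import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="demo-cluster",                # hypothetical cluster name
    taskDefinition="my-app:1",             # hypothetical task definition
    launchType="FARGATE",                  # no EC2 instances to manage
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # hypothetical subnet
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])     # ARN of the task that was started
```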

EKS is a managed Kubernetes service that abstracts us from infrastructure deployment. EKS takes charge of managing part of our Kubernetes clusters (the Kubernetes masters). It removes part of the complexity of Kubernetes while still affording us some flexibility.

All these services make it easier to deploy applications based on a container model by abstracting away the complexity of Kubernetes.
They are more limited services, but they are designed to cover a wide variety of use cases.

Why deploy something as complex as Kubernetes if we can use simpler tools?

Well, it is quite complicated to explain, but there are usually reasons, which we are going to see and discuss here.

Kubernetes Does Not Have Lock-In

It is quite common to think that Kubernetes does not have Lock-In, and it is one of the most common justifications for choosing Kubernetes over other alternatives.

But, unfortunately, Kubernetes has Vendor Lock-In.

First, an application built for Kubernetes has to run on Kubernetes; you will not be able to run it on another container platform, let alone outside the container world.

And that is a Lock-In. It is not very big because the container model is quite flexible and allows us to move in a “simple” way. But let’s be honest: no one deploys vanilla Kubernetes.

Vanilla Kubernetes has little Lock-In (though it still has some), but it is difficult to deploy and also requires additional software to deal with all the complexity associated with Kubernetes.

This is where the Vendors come in, proposing different Stacks that add tools to solve or ease many of the problems we have mentioned.

The problem here is that each Vendor adds its own features to add value to its Stack, thereby causing a Lock-In between different Stacks. It is ironic to talk about avoiding Lock-Ins with Stacks that use their own tools and even modify the Kubernetes model.

Many people think that migrating from one flavor of Kubernetes to another is a transparent process, while moving to a cloud service is going to be very expensive.

Once we are working with containers, the effort is going to be very similar.
There is a recent study comparing the migration of a standard project to different flavors of managed Kubernetes and also to ECS. Interestingly, the migration time and the migration effort are exactly the same.

We have talked about Lock-In other times on this blog.
And it’s a necessary evil, so we must manage it as such.

There is a great tendency to avoid it, partly due to the abuse of Lock-In by certain Vendors and to it not having been managed properly in the past.

We need to assess whether a Lock-In like the one Lambda, Fargate, ECS or EKS may impose suits us and makes our lives easier, and also how much it would cost us to move to another technology.

The important thing is not avoiding Lock-In (which is basically impossible to do) but managing it correctly.

Kubernetes is Multi-Cloud

This is the biggest lie ever told in Cloud computing, and the answer is no: Kubernetes is not Multi-Cloud. You can run Kubernetes on multiple Clouds, but that does not mean it works the same in each one.

An example I like to give is Terraform. Terraform allows infrastructure to be deployed in all Clouds, but a Terraform code that you have created for AWS will only work on AWS; it will not work on any other Cloud.

What Terraform gives us is the chance to use the same structure and language, but not the same content. The same goes for Kubernetes (although, in reality, that power comes from containers themselves, not from Kubernetes).

A Kubernetes Cluster on AWS will not work the same as one on Azure or Google Cloud, because although the different Clouds are similar, their implementations are totally different. Just looking at the networking model and the IAM (Identity and Access Management) model is enough to see how far apart they are.

For a while now, and thanks to the great Corey Quinn, I have been recommending the same thing whenever Multi-Cloud comes up: first of all, try to build it Multi-Region with the same Cloud Provider.

Managing something as simple as persistence becomes very complicated the moment we move from a single region to multiple regions.

And each layer that we add gets more and more complicated—and we are talking about the same Cloud where the model is the same and the APIs are compatible. If we move to another Cloud, the problem multiplies exponentially.

Kubernetes == Cloud

There is a pretty big assumption that if we are using Kubernetes, we are using the cloud.

Although all Cloud Providers have managed Kubernetes services, Kubernetes as a technology was not born in the cloud; it evolved in parallel to it.

It is true that Kubernetes best practices align closely with good practices for both the cloud and application modernization.

The use of containers makes sense in microservices architectures.

Containers are not a new thing. The use of containers, or, rather, the ancestors of containers, comes from way back, and many Unix system administrators have used those ancestors, so evolving to Kubernetes was not something difficult and can even be seen as something natural.

This in itself is not a problem. Having an On-Prem container strategy is not bad per se.

The problem is that Kubernetes is sometimes used as part of a non-existent cloud evolution; that is, a cloud strategy that consists of using Kubernetes in the cloud as if it were On-Prem infrastructure.

This is a very bad idea, because we are actually using the cloud as just another data center, and the cloud does not work like a data center.

In Kubernetes Not Everything Fits

In the end, Kubernetes has been given so much flexibility that almost any workload can run on it.

This seems good at first, but the fact that something can run there does not mean it is the best solution, and even less so if we want to evolve.

A clear example would be databases in Kubernetes.

It is possible to run a database on Kubernetes but it just doesn’t make sense. In the end, you are not containerizing a microservice but containerizing an entire database server.

What good is a pod if it requires an entire server? (You might be surprised at what you see out there.)

Another horrible example is the famous “Lift and Shift to Kubernetes.” What is the point of moving from a virtualized server to a pod in Kubernetes?

It’s possible to do it, but we’re just asking for trouble and using container technology for something that’s not its intended purpose.

The problem is not whether Kubernetes can run these workloads. The problem is that it is a bad use case, and that this misuse is becoming widespread.

With great power comes great responsibility, and in the case of Kubernetes, that power is being used to containerize workloads that should not be running on Kubernetes.

Summary

I'm not going to lie to you: Kubernetes is not a bad solution. There are use cases where it is the best solution.

Here at Paradigma there are colleagues who are working on Kubernetes projects where there is no other option than using Kubernetes, and they are doing a great job.
I have seen quite a few Kubernetes Clusters that are very well set up, very well operated and necessary.

I really don’t hate Kubernetes; I hate bad Kubernetes implementations, which unfortunately are the most common of late.

A good technology that should be used for one type of use case is being used for the wrong use cases.

This is a problem because a lot of times we are creating unnecessary complexity. These bad implementations are doomed to failure.

It is very common to start by setting up a Kubernetes cluster in which to run our future workloads, without taking into account the workloads themselves.

First we assemble the cluster and then we define the workloads. There is also the tendency to develop directly for Kubernetes without considering whether it is going to be the best solution.

We are in 2023. The division between infrastructure and development is a thing of the past. We must think about the workload we are going to create and choose the best place to run it.

My recommendation is to go from less to more complexity, workload by workload, and to evaluate each jump.

The order that I propose would be the following:

  1. Lambda
  2. Fargate
  3. ECS
  4. EKS
  5. Kubernetes on EC2
  6. Kubernetes On-Prem

It is important to evaluate the “why” in each jump.

If I can't use Lambda, I must ask myself why, and whether the reason is really justified. In many cases, Lambda is ruled out because the container always has to be running. But is this requirement really justified, or is it just that a development where the service does not depend on events and is always running is more comfortable or familiar to me? The same goes for Fargate, which is often discarded for not allowing persistent disks to be mounted.

Although ECS, EKS and Kubernetes allow persistent disks to be mounted on containers, it is not recommended; in fact, it should be avoided as much as possible.

We must do this exercise with all the workloads and at every step. Kubernetes is often abused because it lets us carry bad habits over from the past. But this is not an advantage; it is a problem.

It is also important to analyze each workload on its own, without getting hung up on the global picture.
If, for example, 80% of our workloads can run on Fargate and the rest require EKS, that is perfectly fine: we set up a small cluster for that remaining 20% and run the other 80% on Fargate.

Last but not least, we must not forget about EC2. There are workloads that it simply does not make sense to containerize right now. A containerized monolith is still a monolith. In these cases, staying on EC2 and evolving our application to other models in the future is not a bad idea.

This is all for this discussion of my hatred for Kubernetes or, rather, its misuse.
P.S.: The post includes several links to very interesting technologies such as Karpenter and Firecracker. I recommend that you take a look at them.
