Managed Kubernetes vs. Self-Hosted Kubernetes: Which Should You Choose?
Jul 27, 2021
Kubernetes helps with scaling, deploying, and managing containerized workloads, enabling a faster deployment cycle and easier configuration management, all while providing improved access control. Kubernetes, the founding CNCF project, sits at the base of cloud-native and can be deployed easily through any major cloud provider.
This blog will compare on-premises, or self-hosted, Kubernetes clusters to managed ones. It will also outline your options for Kubernetes in the cloud. To do this, we’ll look at ease of use and set-up, custom node support, cost, release cycles, version support, and more.
Building and maintaining infrastructure requires both experienced engineers and domain experts, but not every organization can assemble such a dream team. Domain experts are rare, and most of them already work for large software companies and tech giants.
So, when choosing between managed and self-hosted Kubernetes, here are the main points you’ll need to take into consideration.
Self-managed Kubernetes means you’re running the Kubernetes installation yourself, either in your data center or on virtual machines in the cloud. This entails a separate cost for the machines that run your control plane, and it means you’ll have to plan for high availability and disaster recovery on your own. You’ll also have to set up automation to scale the nodes along with their dependencies and provision your network for increased load.
As noted above, self-managed Kubernetes requires a big team that understands the deployment and management of the different Kubernetes components. Your team will need to be able to handle etcd, the control plane, nodes, the Container Network Interface (CNI), a service mesh, and smaller components like RBAC.
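To give a sense of the toil involved, even a routine task like backing up etcd falls on your team. A snapshot job along these lines is a common starting point (a minimal sketch; the endpoint and certificate paths assume a kubeadm-style install and may differ in your environment):

```sh
# Take an etcd snapshot; paths assume kubeadm's default certificate layout.
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# Verify the snapshot before shipping it off-host.
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-$(date +%F).db
```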
With managed Kubernetes, you don’t have to manage etcd or the control plane, and many managed offerings handle the CNI and service mesh for you. This makes it a lot easier for a small team of three to five people to operate a Kubernetes cluster than if you went with self-managed clusters.
Upgrading clusters is a big undertaking and can take a lot of time if you’re handling it yourself. Tasks include researching the changes in the next Kubernetes version and identifying components or APIs that have been deprecated.
Managed Kubernetes, on the other hand, can be easily upgraded in one or two steps. You don’t need to take care of an etcd backup or make sure you individually upgrade control plane nodes to maintain high availability. All of this is taken care of for you by your cloud provider.
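For illustration, here is roughly what those one-or-two-step upgrades look like on the major providers (the cluster names, resource groups, regions, and target versions below are placeholders):

```sh
# GKE: upgrade the control plane to a new minor version
gcloud container clusters upgrade my-cluster --master --cluster-version 1.28

# AKS: upgrade the control plane and nodes together
az aks upgrade --resource-group my-rg --name my-cluster --kubernetes-version 1.28.0

# EKS (via eksctl): move the control plane up one minor version
eksctl upgrade cluster --name my-cluster --version 1.28 --approve
```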
That said, cloud providers tend to lag behind vanilla Kubernetes, for the reasons mentioned earlier. At the time of writing, Kubernetes 1.29 has just been released (December 2023), while the major providers of managed Kubernetes are lagging behind, offering versions based on Kubernetes 1.25 and even 1.24. The upside of this is the stability of the versions on offer. However, if you’re waiting for a specific feature or bug fix, you might be waiting for up to a year.
Moving between cloud providers costs more than money. You might also incur performance and reliability costs. It’s important to consider which provider will best meet your needs so you can avoid these peripheral costs. If you’re managing your infrastructure as code, there will be changes required for this migration as well.
If you’re thinking about going with cloud-agnostic Kubernetes clusters, you may consider managing them yourself, as this gives you the flexibility to move clusters across clouds: you aren’t depending on any cloud resources except the underlying machines. You have to choose your CNI very carefully, though, as not all CNIs work directly on all clouds.
Managing your own Kubernetes will require in-house Kubernetes experts who can dig deep into issues and find a solution.
Given the above considerations, a managed Kubernetes cluster is generally preferable to a self-managed one.
Managed Kubernetes clusters are easier to upgrade, highly available, and, most importantly, backed by support from your cloud provider. There are many managed Kubernetes options to choose from; we’ll take a look at the main ones below.
There is also a middle road: firms that provide white-glove managed Kubernetes and, in effect, become your outsourced SRE team. They give you the benefits of in-house vanilla Kubernetes while taking away much of the toil.
Below are the key factors you should consider when looking into managed Kubernetes clusters.
In most cases, you’ll be using kubectl from the command line instead of the GUI. As far as UIs go, though, Amazon EKS is the most intuitive, giving you only the options you need. AKS and GKE, on the other hand, expose a lot of options that you may never use. OpenShift and Tanzu come with custom UIs for a better developer experience.
Generally speaking, deploying from the UI is discouraged. You typically wrap your deployments with pipelines that are supposed to deploy on Kubernetes, so having a UI doesn’t add a lot of value.
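In practice, the deployment step usually lives in the pipeline itself. As a sketch, assuming GitHub Actions (the secret name and manifest path below are hypothetical):

```yaml
# Hypothetical pipeline job that applies manifests to a cluster.
deploy:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - name: Configure cluster access
      run: echo "${{ secrets.KUBECONFIG_B64 }}" | base64 -d > kubeconfig
    - name: Deploy manifests
      run: kubectl --kubeconfig=kubeconfig apply -f k8s/manifests/
```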
GKE is easier for managing Day-2 operations. It offers the unique option of Config Sync, which lets you sync your Kubernetes cluster’s state from a Git repository; a minimal example is shown below.
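A minimal Config Sync setup is a RootSync object pointing at your repository (the repo URL, branch, and directory below are placeholders):

```yaml
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: unstructured
  git:
    repo: https://github.com/example-org/cluster-config  # placeholder repo
    branch: main
    dir: clusters/prod
    auth: none  # assumes a public repo; real setups use a token or SSH key
```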
All of the providers noted above are easy to set up.
Generating a kubeconfig file in GKE and getting started is especially easy. With AKS and EKS, you first have to make changes in IAM and then add the user to the cluster. Finally, OpenShift comes packed with its own components and uses Istio as its service mesh. VMware Tanzu also ships its own CNI and service mesh.
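For comparison, here are the typical credential-fetching commands (names, regions, and resource groups are placeholders; the EKS and AKS commands assume your IAM or Azure RBAC permissions are already in place):

```sh
# GKE: one command writes cluster credentials into your kubeconfig
gcloud container clusters get-credentials my-cluster --region us-central1

# EKS: assumes your IAM identity is already mapped into the cluster
aws eks update-kubeconfig --name my-cluster --region us-east-1

# AKS: assumes your Azure RBAC permissions are already granted
az aks get-credentials --resource-group my-rg --name my-cluster
```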
Each of the providers mentioned integrates well with its other cloud offerings, like managed databases (e.g., RDS) or storage, through an internal network of resources. AWS, for example, has a reputation for reliability, while Azure offers a lot of configuration options.
It’s also good to note that EKS uses layer 2 networking, while AKS and GKE use layer 3 networking. So, if you’re planning to use a CNI that relies on BGP for routing, such as Calico in BGP mode, you may face issues on GKE and AKS.
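For context, “a CNI that relies on BGP” typically means a Calico configuration along these lines; whether the underlying provider network actually forwards that BGP traffic is what differs between clouds (the AS number here is illustrative):

```yaml
# Calico BGPConfiguration enabling the node-to-node BGP mesh.
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: true
  asNumber: 64512  # illustrative private AS number
```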
Determining the absolute “most expensive” and “cheapest” managed Kubernetes offering is complex, as pricing varies greatly with factors such as control plane fees, node instance types and sizes, region, data transfer, and any add-on services you enable.
AKS does not support custom worker nodes and has no plans to add them. GKE and OpenShift don’t offer them either; in fact, such nodes are restricted in GKE, meaning you cannot build a node image with your own tooling or software (an antivirus agent, for example) and then use it as a Kubernetes node. EKS, on the other hand, has very good support for custom worker nodes, and VMware Tanzu supports them as well.
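As an illustration of what EKS’s custom-node support looks like, an eksctl config can point a self-managed node group at your own AMI (a sketch; the AMI ID and cluster details below are placeholders):

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster        # placeholder
  region: us-east-1       # placeholder
nodeGroups:
  - name: custom-nodes
    instanceType: m5.large
    desiredCapacity: 3
    ami: ami-0123456789abcdef0   # placeholder: your hardened custom AMI
    overrideBootstrapCommand: |
      #!/bin/bash
      /etc/eks/bootstrap.sh my-cluster
```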
Node upgrades are automated in GKE and AKS, while in EKS you have to update the worker nodes manually. AKS also provides a manual option for node replacement. When you upgrade on GKE, you don’t have control over how, or in what order, nodes are upgraded, so your workload may be affected.
If the workload is stateful, this can cause issues. Manual node upgrades give you control over node replacement, so you can move your workload before taking the VM down for an upgrade.
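A manual node replacement typically follows a cordon-and-drain pattern like this (the node name is a placeholder):

```sh
# Stop new pods from being scheduled on the old node
kubectl cordon ip-10-0-1-23.ec2.internal

# Evict existing pods, respecting PodDisruptionBudgets
kubectl drain ip-10-0-1-23.ec2.internal --ignore-daemonsets --delete-emptydir-data

# Once the replacement node has joined and workloads are healthy, remove the old one
kubectl delete node ip-10-0-1-23.ec2.internal
```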
The table below provides a convenient reference for many aspects of the primary managed Kubernetes solutions.
Feature | EKS | AKS | GKE | OpenShift | VMware Tanzu |
---|---|---|---|---|---|
Upgrade Cycle | Rolling upgrades, manual or automated | Rolling upgrades, automated | Rolling upgrades, automatic | Automatic updates or manual rolling upgrades | Rolling upgrades, manual or automated |
Cost | Pay-as-you-go for all resources, preemptible instances available | Free tier, pay-as-you-go for additional resources, spot VMs available | Per-minute billing for all resources | Subscription-based pricing, includes infrastructure and services | Subscription-based pricing, includes management tools and platform features |
Nodes | EC2 instance types, bare metal support | VM instances, Azure Arc for on-premises | VM instances, preemptible VMs available | Dedicated VMs, bare metal support | VMs, container pods on VMs |
CI/CD Integration | AWS CodeBuild, AWS CodeDeploy, integrations with various CI/CD tools | Azure DevOps, integrations with various CI/CD tools | Cloud Build, Cloud Deploy, integrations with various CI/CD tools | Built-in CI/CD tools with pipelines and webhooks | External CI/CD tools supported, Tanzu Application Catalog for deployment automation |
CNI and Service Mesh | CNI plugins like Calico, Cilium, Flannel | Azure CNI, CNI plugins like Calico, Cilium, Istio | Istio service mesh, CNI plugins like Calico, Flannel | OpenShift Service Mesh (Istio-based), CNI plugins like Calico | Tanzu Service Mesh, CNI plugins like Calico |
Log and Metric Collection | CloudWatch Logs, Prometheus integration | Azure Monitor, OpenTelemetry support | Cloud Monitoring, Stackdriver Logging | OpenShift Logging, Prometheus integration | Tanzu Observability for cloud-native monitoring |
Max Nodes per Cluster | 2000 | 2000 | 5000 | 2000 | 1000 |
Underlying Networking | VPC networking | Azure CNI, Azure Virtual Network | Google Cloud VPC | VXLAN with Open vSwitch | Kubernetes NetworkPolicy, VXLAN-based CNI |
Kubernetes clusters take time and staff to set up and maintain. That might mean a few minutes when using a managed cloud service or hours in a self-hosted version of Kubernetes. Ultimately, your organization’s ability to keep up with the open-source projects you selected for your stack is the biggest factor to consider when choosing managed over DIY Kubernetes. If you choose to go it alone, you’ll need experts on your team who can handle such a large undertaking. Separately managing etcd, upgrades, high availability, and reliability requires far more expertise than running a managed Kubernetes cluster.
Handling self-managed Kubernetes is a huge task. When you combine it with demands like security and compliance, it grows exponentially harder to manage. Cloud providers augment their offerings with pre-defined constructs like security groups, firewalls, and subnet and VNet segregation. For security and auditing, you can deploy tools like Gatekeeper, Falco, and Wazuh.
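To make the policy-enforcement piece concrete, here is a minimal Gatekeeper constraint; it assumes the stock K8sRequiredLabels ConstraintTemplate from the Gatekeeper library is already installed, and the “team” label is illustrative:

```yaml
# Require every namespace to carry a "team" label.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: ns-must-have-team
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "team"
```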
You can also check out ARMO Platform, which provides more visibility and control while also taking care of security and compliance for you. Try ARMO today.