Feb 2, 2023
If you’re looking to run Kubernetes in production, there is a fair chance you are considering a managed service provider (MSP), as it offers simpler deployment, all-around support and a fully managed control plane for reduced operational overhead. AWS, Google Cloud and Microsoft Azure each offer their own packaged Kubernetes distribution that takes care of the heavy lifting required to run and manage a cluster: EKS, GKE and AKS, respectively. This can be a big benefit for businesses, as it frees up resources that can be better used elsewhere.
Alongside these benefits, however, a managed Kubernetes service comes with challenges, and administering security properly is one of them. This is often due to the inherent complexity of Kubernetes and an unclear scope for applying security controls under a shared responsibility model.
In this article, we discuss how a shared responsibility model affects the end-user administration of security in managed Kubernetes environments. We also delve into common challenges and best practices of securing managed Kubernetes environments.
Managed Kubernetes services operate under a shared responsibility model. The cloud provider is responsible for securing the infrastructure that hosts the Kubernetes cluster, while the application owner is responsible for securing the applications, configurations and data running on the cluster. This segregation of responsibilities helps ensure that each party focuses on its own area of expertise, and it also helps avoid potential conflicts of interest.
However, the shared responsibility principle is often misunderstood. Organizations are lulled into a false sense of security, resulting in insecure workloads vulnerable to exploits. When securing managed Kubernetes clusters, enterprises should consider the potential risks of the shared responsibility model and proactively ensure a strong security posture.
By default, a Kubernetes cluster operates with wide-open network access and weak authentication controls. A managed Kubernetes service offers a variety of configuration options, many of which are commonly left unchanged. This can lead to inter-policy conflicts and potential vulnerabilities within the cluster. One study found that 47.9% of security violations stemmed from default Kubernetes settings, including insecure secrets administration, over-permissive security profiles and insecure namespaces.
Typical insecure default configurations of managed Kubernetes clusters include wide-open network access between pods, weak authentication controls, insecure secrets handling, over-permissive security profiles and poorly isolated namespaces.
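As a minimal illustration of tightening one of these defaults, the sketch below applies a namespace-wide default-deny NetworkPolicy so that traffic must be explicitly allowed by narrower policies. It assumes the cluster's CNI plugin enforces NetworkPolicy (not every managed default does), and the namespace name is hypothetical.

```yaml
# Minimal sketch: deny all ingress and egress for pods in the "payments"
# namespace (illustrative name); narrower policies then allow what is needed.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: payments
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress
```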
Kubernetes is a complex system, and understanding all the moving parts can be difficult. This is made even harder when you can’t set up comprehensive observability for a running cluster. While MSPs do offer some level of visibility into cluster resources, it is often incomplete or lacks the detail needed to assess the cluster’s state.
In contrast to self-managed Kubernetes, a sizable chunk of resource management and monitoring is retained by the MSP, and the teams that manage clusters can only instrument their own workloads for telemetry data. As a result, correlating issues in containerized workloads with the performance of the underlying Kubernetes resources is often difficult. In addition, without complete visibility into the resources used by each cluster, it is difficult to ensure that every cross-functional team follows best practices and keeps its data secure. This can lead to problems down the line if one team’s security measures are not up to par with organizational security policies.
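One common way for application teams to instrument their own workloads within this split is to expose metrics and have a Prometheus deployment scrape them. The sketch below assumes the Prometheus Operator is installed in the cluster and that the workload already exposes a named `metrics` port; all names are illustrative.

```yaml
# Minimal sketch: scrape metrics from pods behind a Service labeled app=web.
# Assumes the Prometheus Operator CRDs are installed in the cluster.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: web-metrics
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: web               # hypothetical label on the target Service
  namespaceSelector:
    matchNames:
      - production           # hypothetical workload namespace
  endpoints:
    - port: metrics          # named Service port exposing /metrics
      interval: 30s
```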
It is often unclear where the security perimeter lies when using managed Kubernetes services. Is it the individual nodes? The control plane? Or the entire cluster? And what role do the various managers and controllers play in securing the system? Managed Kubernetes services can help to offset some of these concerns by handling many of the routine maintenance and management tasks, but they also introduce complexities.
Unlike self-managed Kubernetes clusters, where the scope is well defined, managed Kubernetes instances obscure the boundaries of security measures. It is important to understand how your service provider defines the security boundaries: what steps it has taken to secure the system and what has been left up to you.
Data inconsistency is one of the most common problems that arise when multiple versions of the same data set exist as it is transferred between systems. With a hyperscaler-managed service, data may be spread across multiple availability zones and regions to ensure high availability.
In Kubernetes, data must be shared between clusters and nodes, so there is an increased risk of inconsistency as it traverses multiple availability zones and systems. The recommended solution is to formulate robust data governance processes that ensure data is regularly backed up and monitored for inconsistencies; however, doing so is often challenging and cost-intensive due to the distributed nature of the ecosystem.
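As one hedged example of automating regular backups, the sketch below uses a Velero Schedule resource. It assumes Velero is already installed in the cluster with a configured object-storage backend, and the namespace name and schedule are illustrative.

```yaml
# Minimal sketch: back up the "production" namespace every day at 02:00
# and keep each backup for 30 days. Assumes Velero is installed and has
# a backup storage location configured.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-production-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"        # cron expression: daily at 02:00
  template:
    includedNamespaces:
      - production             # hypothetical workload namespace
    ttl: 720h0m0s              # retain backups for 30 days
```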
Protecting data while simultaneously making it available to authorized users and services is a delicate balancing act between security and portability. Because Kubernetes can run on any infrastructure, it is important to consider how data will move between different environments.
For example, if an organization wants to move data from on-premises storage to cloud-based object storage, it needs to ensure the data is appropriately encrypted and transferred securely. It is also important to consider how data will be accessed in different environments; for example, an application that needs to access data stored in a private cloud environment must have the appropriate authentication credentials.
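As a small, hedged illustration of the credentials half of that problem, the sketch below injects object-storage credentials into a workload from a Kubernetes Secret rather than baking them into the image. The secret name, keys, image and namespace are hypothetical, and in practice a cloud IAM integration (such as workload identity) is usually preferable to long-lived keys.

```yaml
# Minimal sketch: store object-storage credentials in a Secret and expose
# them to a pod as environment variables. Names are illustrative.
apiVersion: v1
kind: Secret
metadata:
  name: object-store-credentials
  namespace: production
type: Opaque
stringData:
  ACCESS_KEY_ID: "REPLACE_ME"
  SECRET_ACCESS_KEY: "REPLACE_ME"
---
apiVersion: v1
kind: Pod
metadata:
  name: data-mover
  namespace: production
spec:
  containers:
    - name: mover
      image: registry.example.com/data-mover:1.0   # hypothetical image
      envFrom:
        - secretRef:
            name: object-store-credentials          # credentials injected at runtime
```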
Compliance frameworks such as Service Organization Control 2 (SOC 2) define how enterprises can collect, store and use customers’ personal data. These frameworks issue guidance on the processing of data, the rights of data owners, and how to perform both internal and external data transfers.
With an MSP, the scope of sharing compliance and regulatory responsibilities is often unclear. The customer may argue that they should not have to worry about patching and updating, since they are paying the hyperscaler for a managed service. The hyperscaler, on the other hand, may argue that they are not responsible for ensuring compliance, since the customer is ultimately in control of the environment.
Ambiguity around the geographical location where data resides introduces further challenges in determining which rules apply. In managed environments, it is also harder to coordinate effectively and perform breach response, leaving sensitive data at greater risk of compromise.
While the shared responsibility model delegates as much of the security implementation as possible to the managed service provider, the security of workloads running within the clusters remains the enterprise user’s responsibility. Some best practices for securing managed Kubernetes instances include:
Most MSPs expose metadata APIs locally to the VMs used as Kubernetes worker nodes. Any pod running on these instances can access the metadata API service, which may contain sensitive data such as kubelet credentials or the node’s cloud credentials. Malicious access to these credentials can lead to privilege escalation attacks and deeper exploitation of cluster services.
When using a managed service, it is recommended to use network policies to restrict which workloads can reach the node’s metadata endpoint and, by extension, its cloud credentials. This is especially important for managed Kubernetes services, which have many users and user groups with different levels of privilege. By creating a network policy that blocks pod egress to the metadata service, or only permits it from specific, trusted workloads, you ensure that only authorized workloads can access the metadata API.
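A minimal sketch of such a policy is shown below. It assumes the CNI plugin enforces NetworkPolicy and that the provider’s metadata service sits at the conventional link-local address 169.254.169.254, as it does on EKS, GKE and AKS node VMs; the namespace is illustrative.

```yaml
# Minimal sketch: allow general egress from pods in the "default" namespace
# but block access to the cloud instance metadata endpoint.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-metadata-access
  namespace: default
spec:
  podSelector: {}                      # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32     # cloud instance metadata service
```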
Running an outdated version of Kubernetes can leave you vulnerable to security breaches and data loss. By synchronizing your security controls with the MSP’s platform upgrades, you can ensure that your clusters remain secure and compliant. MSPs upgrade the control plane as new versions of Kubernetes are rolled out, while users remain responsible for keeping worker nodes, add-ons and workloads on the latest stable version.
Updating the cluster allows deployment teams to leverage the latest features, security enhancements and vulnerability fixes to remediate emerging threats. While it is important to stay up-to-date with the latest developments, administrators should also restrict access to alpha and beta features as these may contain security gaps that expose the cluster to an attack.
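Node-level upgrades in managed services are usually rolling: nodes are cordoned, drained and replaced. One hedged, supporting control is a PodDisruptionBudget, which keeps a minimum number of replicas available while nodes are drained during an upgrade; the labels, namespace and replica count below are illustrative.

```yaml
# Minimal sketch: keep at least two replicas of a "web" workload available
# while nodes are drained during a cluster or node-pool upgrade.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
  namespace: production
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web                 # hypothetical label on the workload's pods
```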
Isolating worker nodes from public networks by deploying them into secure private subnets (virtual networks) helps eliminate direct access to the data plane from the public internet, reducing the chances of compromise by an external threat actor. It is also recommended to use firewall rules or ingress controllers that only permit access through the ports specified in the subnet’s Access Control List (ACL).
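At the Kubernetes layer, a hedged complement to subnet-level controls is a Service that exposes a workload through the provider’s load balancer while restricting which source CIDRs may reach it. The names and address range below are examples, and support for `loadBalancerSourceRanges` depends on the cloud provider’s load-balancer integration.

```yaml
# Minimal sketch: expose a workload through a cloud load balancer but only
# accept traffic from a trusted CIDR. Names and CIDR are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: production
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 443
      targetPort: 8443
  loadBalancerSourceRanges:
    - 203.0.113.0/24             # trusted network allowed to reach the service
```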
CIS Kubernetes Benchmarks are consensus-based security configuration guidelines that help you build a robust security posture for Kubernetes clusters. Organizations can use the benchmark to assess their own deployments or third-party offerings. The benchmark is also useful for organizations that want to verify whether their providers are adhering to best practices.
Organizations of all sizes can benefit from using the CIS benchmark, but it is especially important for hyperscalers, who handle large amounts of sensitive data and are frequent targets for attacks. The benchmark provides detailed guidance on hardening cluster components, from API server, etcd and kubelet configuration to RBAC, network policies and secrets management. By following the recommendations in the benchmark and by using leading CIS benchmark scanning tools, organizations can harden their deployments and reduce the risk of attack.
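One way to check a cluster against the benchmark is to run an open-source scanner such as kube-bench as a one-off Job on a worker node. The sketch below is a simplified version of such a Job; the upstream project ships a fuller manifest with additional host mounts, and the image tag and paths assume the defaults.

```yaml
# Simplified sketch: run kube-bench once on a worker node to evaluate it
# against the CIS Kubernetes Benchmark.
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    spec:
      hostPID: true                      # needed to inspect node processes
      restartPolicy: Never
      containers:
        - name: kube-bench
          image: docker.io/aquasec/kube-bench:latest
          command: ["kube-bench"]
          volumeMounts:
            - name: var-lib-kubelet
              mountPath: /var/lib/kubelet
              readOnly: true
            - name: etc-kubernetes
              mountPath: /etc/kubernetes
              readOnly: true
      volumes:
        - name: var-lib-kubelet
          hostPath:
            path: /var/lib/kubelet
        - name: etc-kubernetes
          hostPath:
            path: /etc/kubernetes
```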
Managed Kubernetes services have become increasingly popular in recent years, as they offer a number of benefits over traditional, self-managed deployments. A recent IBM study noted that enterprises commonly leverage partnerships with hyperscalers to choose the best fit for their workload specifications and to benefit from the best practices providers can offer for addressing internal and external governance, risk and compliance requirements.
In particular, MSPs take away the operational burden of running a Kubernetes cluster, leaving users free to focus on their applications. However, they are also known to restrict the degree of control that enterprise users typically prefer. While you look to benefit from the economies of scale offered by hyperscale providers, it is important to understand the challenges and the scope of administering security for Kubernetes workloads.