How to Deploy Pods in Kubernetes?

Jul 12, 2022

Ben Hirschberg
CTO & Co-founder

Kubernetes leverages various deployment objects to simplify the provisioning of resources and configuration of workloads running in containers. These objects include ReplicaSets, DaemonSets, StatefulSets, and Deployments. A pod is the smallest deployment unit in Kubernetes and usually represents one instance of the containerized application.

Considered the fundamental building block of a Kubernetes ecosystem, a pod’s template is used by almost all Kubernetes deployment objects to define configuration specs for the workloads they manage.

In this article, we learn how pods enable resource sharing and the different approaches to deploying a pod.

What Are Pods in Kubernetes?

A pod is the most basic execution unit you can create and manage in Kubernetes. It represents a single instance of the workload/application and encapsulates one or more containers. A pod packages containers along with container resources, such as networking (each pod has a unique IP), storage, and container execution information. For multi-container pods, all containers are managed as a single logical entity with shared resources.

Working with Pods

Being the smallest deployment unit of a Kubernetes cluster, a pod can be used as a template when configuring other deployment objects such as ReplicaSets, Deployments, and StatefulSets.

Pods are created in two ways:

  1. Automatically by a controller when creating ReplicaSets, Deployments, and StatefulSets
  2. Manually using a pod manifest file

The Pod Manifest File

When creating pods manually, cluster administrators specify the configuration in a YAML/JSON manifest file. The file is divided into four main sections, namely:

  • apiVersion: Version of the Kubernetes API used to create the object; for pods, this value is v1
  • kind: The object type being created (Pod in this case)
  • metadata: Information that helps you uniquely identify the object, including the pod name (compulsory), UID, and namespace
  • spec: A nested field specifying the desired state for the pod, including its containers, volumes, and scheduling constraints

Specifications within a manifest are included in a .yaml/.json file and used to manage various stages of a pod lifecycle, including deployment, configuration, and termination.
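
Putting these four sections together, a minimal pod manifest might look like the following sketch (the pod and image names are illustrative):

```yaml
apiVersion: v1        # API version; v1 for pods
kind: Pod             # the object type being created
metadata:
  name: example-pod   # compulsory pod name
  namespace: default
spec:                 # desired state of the pod
  containers:
  - name: app
    image: nginx
```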

Creating Pod Requests

Once a pod is created, it requests compute resources (CPU and memory) that guide the Kubernetes scheduler in selecting a node for deployment. A request represents the minimum amount of resources a node must have available to host the pod. Requests are specified per container; the pod's effective request is the sum of the requests of its individual containers.

Container requests can be specified in the manifest as follows:

spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
spec.containers[].resources.requests.hugepages-<size>
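
For example, a container requesting a quarter of a CPU core and 64 MiB of memory could be declared as follows (the container name, image, and values are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"     # 0.25 of a CPU core
        memory: "64Mi"  # 64 mebibytes of memory
```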

Setting Pod Limits

A pod limit is a specification in the manifest that defines the maximum amount of memory or CPU that Kubernetes assigns to a pod. Limits ensure that a container does not consume resources above the specified value; they also prevent workloads from causing resource contention and system instability by starving the node’s OS of resources.

In the manifest files, limits can be specified as follows:

spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.limits.hugepages-<size>
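
Requests and limits are usually set together. A sketch combining the two (all values are illustrative):

```yaml
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        cpu: "250m"
        memory: "64Mi"
      limits:
        cpu: "500m"      # container is throttled above half a core
        memory: "128Mi"  # container is terminated if it exceeds this value
```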

Scheduling Pods on Nodes

Kubernetes runs pods on the cluster’s node pool by default. Since Kubernetes only schedules a pod on a node that satisfies its resource requests, explicitly specifying requests helps direct the pod to an appropriate node. Administrators can further constrain a pod to nodes carrying specific labels by setting the nodeSelector field in the pod manifest.

The sample snippet below outlines the specification for pods to be deployed on nodes with SSD disks only:

spec:
  containers:
  - name: darwin
    image: darwin 
  nodeSelector:
    disktype: ssd

A more expressive and flexible way of constraining pods to specific nodes is using node affinity and anti-affinity rules. Node affinity types in Kubernetes include:

  • requiredDuringSchedulingIgnoredDuringExecution: Kubernetes can only schedule the pod once the rule is met.
  • preferredDuringSchedulingIgnoredDuringExecution: Kubernetes tries to find an appropriate node but schedules the pod even when a matching node does not exist.
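
As a sketch, the SSD constraint from the nodeSelector example above could be expressed as a hard node-affinity rule:

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype    # node label key
            operator: In
            values:
            - ssd
  containers:
  - name: darwin
    image: darwin
```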

Storage in Pods

As pods are ephemeral, when they terminate, the data they process and store is also lost. Kubernetes allows various volume abstractions through PersistentVolume, PersistentVolumeClaim, and StorageClass to persist data for workloads in pods.

  • A volume is a data directory that is connected to containers in a pod. Volumes are ephemeral and are deleted as soon as pods connected to them terminate.
  • A PersistentVolume (PV) is a data directory connected to the pod whose lifecycle is independent of the pods.
  • A PersistentVolumeClaim (PVC) is a request for PV storage by an application/process.
  • A StorageClass object enables dynamic provisioning of cluster resources by allowing storage administrators to abstract the classes of storage within a cluster.
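
For illustration, a PVC requesting 1 GiB of storage from a hypothetical StorageClass named standard might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: darwin-claim          # referenced later from the pod spec
spec:
  accessModes:
    - ReadWriteOnce           # mountable read-write by a single node
  storageClassName: standard  # assumed StorageClass name
  resources:
    requests:
      storage: 1Gi
```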

Once the PV and PVC are configured, you can call the PVC within the pod as shown:

spec:
  volumes:
    - name: darwin-storage
      persistentVolumeClaim:
        claimName: darwin-claim
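
Note that declaring the volume alone is not enough: each container that uses it must also mount it via volumeMounts. A sketch (the mount path is illustrative):

```yaml
spec:
  containers:
  - name: darwin
    image: darwin
    volumeMounts:
    - mountPath: /data        # path inside the container
      name: darwin-storage    # must match the volume name below
  volumes:
  - name: darwin-storage
    persistentVolumeClaim:
      claimName: darwin-claim
```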

Pod Networking

Since a pod is a unified logical host, containers running within the same pod are allowed to share network resources and can communicate with each other via localhost or other interprocess communication systems. Kubernetes uses various services to enable networking functions for pods. The service object abstracts a logical set of pods and gives pods their own IP addresses, allowing for service discovery and load balancing.

Services in Kubernetes can be primarily categorized as:

  • ClusterIP: An internal IP reachable from within the cluster
  • NodePort: Routes traffic from an open port on the node to cluster services
  • LoadBalancer: Exposes the cluster services to the internet
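
A minimal ClusterIP Service selecting pods labeled app: darwin could be sketched as follows (the service name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: darwin-service
spec:
  type: ClusterIP      # default; internal-only IP
  selector:
    app: darwin        # matches pods carrying this label
  ports:
  - port: 80           # port exposed by the service
    targetPort: 80     # port the container listens on
```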

How to Deploy a Pod in Kubernetes

In this section, we go through the steps to create and manage a pod in an existing Kubernetes cluster. While this demo uses a Minikube cluster, the steps are essentially the same for any production-grade Kubernetes cluster configured to use the kubectl CLI tool.

Option 1: Using the kubectl run Command

Kubernetes allows starting containers directly from the CLI with custom arguments. The syntax for running a pod is:

$ kubectl run pod-name --image=image-name

In our case, to deploy a pod named darwin running an nginx image:

$ kubectl run darwin --image=nginx

On successful execution of the command above, the following response is returned:

pod/darwin created

Now, verify the creation using the kubectl get command:

$ kubectl get pods

This returns the details of the various pods that are already provisioned within the cluster:

NAME     READY   STATUS    RESTARTS   AGE
darwin   1/1     Running   0          3m41s

Option 2: Using a Pod Configuration

Apart from using the CLI tool, you can also use a configuration manifest for deploying a pod in a working cluster. This is the preferred way of managing the entire lifecycle of a pod, including deployment, configuration updates, and termination.

Step 1: Creating the config file (simple Nginx pod)

Start with creating a directory to store the pod manifest file:

$ mkdir pod-example

To create the file, type the following specification text into the text editor and name it darwin.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: darwin
spec:
  containers:
  - name: nginx
    image: nginx:1.21.4
    ports:
    - containerPort: 80

Step 2: Applying the configuration

Once the specification file is saved, run the following command to deploy the pod:

$ kubectl apply -f darwin.yaml

This returns a response of the form:

pod/darwin created

Step 3: Viewing the pod

To verify pod creation, run the kubectl get command, as shown:

$ kubectl get pods -w

The above command watches for pods and returns the following list of pods deployed:

NAME     READY   STATUS    RESTARTS   AGE
darwin   1/1     Running   0          24s

You can also verify that the pod is running the specified image by using the command:

$ kubectl describe pods

This provides details of the pod specification, including the image name.

Step 4: Updating the pod

Once a pod is deployed, you can use the config file to update its specification. For instance, to change the Nginx image version to 1.19.5, edit the spec section of the manifest file:

spec:
  containers:
  - name: nginx
    image: nginx:1.19.5
    ports:
    - containerPort: 80

To apply these changes:

$ kubectl apply -f darwin.yaml

This returns the prompt:

pod/darwin configured

Verify the changes using the describe command, and check for the image version:

$ kubectl describe pods

The above returns the details of the pod specification, showing the updated image version.

Step 5: Terminating/deleting the pod (optional)

As an optional step, to clean up the cluster, you can terminate the pod using the kubectl delete command:

$ kubectl delete pod darwin

You should then see the response:

pod "darwin" deleted

Summary

Instead of running containers directly, Kubernetes encapsulates containers within pods to enable efficient, intelligent resource sharing. Pods also bundle workloads with the environmental dependencies their containers need to run.

Even though most deployments use controllers for automated deployment, it is important to understand how pods work since they form the basis for all other deployment objects of a Kubernetes cluster. 

