Introduction
Today represents a significant milestone as we dive into the intricate world of Deployments within Kubernetes. Our discussion will uncover the pivotal role played by Deployments in orchestrating applications, explore the features that facilitate seamless management, and provide a hands-on tutorial to solidify your comprehension.
Exploring Deployments in Kubernetes
At its core, a Deployment in Kubernetes serves as a strategic conduit for overseeing the deployment, scalability, and updates of application instances. It functions as a blueprint for maintaining the desired state of Pods and ReplicaSets that house your application containers. This desired state encompasses scaling operations, update management, and ensuring unwavering high availability.
By eliminating the need for manual management of individual Pods or ReplicaSets, a Deployment introduces a layer of abstraction. This empowers you to focus your efforts on refining application logic and defining the desired operational state. The linchpin of this process is the Deployment Controller, a foundational element of Kubernetes. This controller ensures alignment between the current operational state of your application and the desired state outlined in the Deployment manifest.
Key characteristics of Deployments include:
Declarative Evolution: Deployments enable you to articulate the preferred operational state of your application. The system then orchestrates the transition from the existing state to the specified state, ensuring predictability and consistency during updates.
Rolling Iterations: With Deployments, transitioning from one version of your application to another follows a gradual, rolling methodology. This minimizes downtime and mitigates the risks associated with abrupt changes (see the example commands after this list).
Scalability On-Demand: Adjusting the count of replicas (instances) for your application is a seamless endeavor. Modifying the Deployment manifest allows for the smooth handling of fluctuating traffic volumes and workloads.
Automated Restoration: In the event of a Pod failure, the Deployment Controller springs into action, promptly replacing the faulty Pod with a fresh counterpart. This maintains the desired replica count, ensuring operational robustness.
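As a quick illustration of how these ideas translate into practice, here is a minimal sketch of a rolling update and rollback driven with kubectl. The Deployment name my-app and the image tag are placeholders, not something defined in this post:
kubectl set image deployment/my-app my-app=myrepo/my-app:v2 # Start a rolling update to a new image
kubectl rollout status deployment/my-app # Watch the rollout progress
kubectl rollout undo deployment/my-app # Roll back to the previous revision if needed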
Task of the Day - Creating a Deployment for a Sample todo-app with "Auto-healing" and "Auto-Scaling"
Today's task involves creating a Kubernetes Deployment for a todo-app while harnessing the auto-healing and auto-scaling features. We will follow these steps:
Prerequisites:
A Docker Hub account
A Docker image of the todo-app that has been built and tagged
Step 1: Push the Docker Image to Docker Hub
Before deploying the Kubernetes Deployment, make sure you have built the Docker image of the todo-app and pushed it to Docker Hub. This step ensures that the Kubernetes cluster can access the image when creating the pods.
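If the image has not been built yet, a typical build command (assuming a Dockerfile in the todo-app directory and the same image name used later in the manifest) would look like this:
docker build -t nikhilaut/node-app-test-new . # Build and tag the image
Then log in and push the image: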
docker login # Login to your Docker Hub account
docker push nikhilaut/node-app-test-new
Step 2: Create a Deployment YAML File
Create a new YAML file named deployment.yml
and add the following content to it. This YAML file defines the Deployment with auto-healing and auto-scaling features.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: todo-app-deployment
spec:
  replicas: 3 # Initial number of replicas
  selector:
    matchLabels:
      app: todo-app
  template:
    metadata:
      labels:
        app: todo-app
    spec:
      containers:
        - name: todo-app
          image: nikhilaut/node-app-test-new
          ports:
            - containerPort: 8000
Explanation of deployment.yml
apiVersion: Specifies the Kubernetes API version to use.
kind: Defines the resource type as a Deployment.
metadata: Contains metadata about the Deployment, including its name.
spec.replicas: Specifies the desired number of replicas (pods) to maintain. This is the value that scaling adjusts.
spec.selector: Defines how the Deployment selects which pods to manage based on labels.
spec.template: Specifies the pod template to be used for creating new pods.
spec.template.metadata.labels: Labels to be applied to the pods.
spec.template.spec.containers: Defines the containers within the pod, including the image and ports.
This deployment file will create a Deployment with 3 replicas of a container named todo-app. The container is based on the image nikhilaut/node-app-test-new, which should be replaced with the actual image name of your application. The container listens on port 8000. The Deployment is associated with the label app: todo-app, which is used in the selector to identify the pods that belong to the Deployment.
Step 3: Apply the Deployment to Your k8s (minikube) Cluster
kubectl apply -f deployment.yml
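Optionally, wait for the rollout to complete (the Deployment name matches the metadata.name in the manifest):
kubectl rollout status deployment/todo-app-deployment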
To list the Deployments:
kubectl get deployments
To get the Pods:
kubectl get pods
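Since the Deployment manages its pods through the app: todo-app label, you can also list just those pods with a label selector:
kubectl get pods -l app=todo-app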
Now let's delete a pod and see whether the Deployment creates a new one:
kubectl delete pod <pod_name>
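To watch the replacement pod come up in real time, you can run kubectl get pods with the watch flag in another terminal:
kubectl get pods -w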
As you can see, a new pod is created automatically to replace the deleted one. This is the concept called auto-healing.
Auto-scaling is a feature that allows Kubernetes to automatically scale the number of pods in a deployment based on the resource usage of the existing pods.
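Note that the Deployment manifest above only sets a fixed replica count. To actually scale automatically, you attach a HorizontalPodAutoscaler to the Deployment. Here is a minimal sketch, assuming the metrics-server addon is enabled in minikube and that CPU resource requests have been added to the container spec (neither is configured in the manifest above):
minikube addons enable metrics-server # HPA relies on resource metrics
kubectl autoscale deployment todo-app-deployment --cpu-percent=50 --min=3 --max=10 # Scale between 3 and 10 replicas at 50% average CPU
kubectl get hpa # Check the autoscaler status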
Thank you for reading this blog! Hope you have gained some value.
Please follow me on Linkedin- www.linkedin.com/in/nikhil-raut-965133b4
To know more about me: https://linktr.ee/nikhil_raut