Stateless application deployment in a Kubernetes cluster

A stateless application is one that does not depend on persistent storage. The only thing your cluster is responsible for is the code (and other static content) hosted on it. That's it: no databases to mutate, no writes, and no leftover files when the Pod is deleted.

Pod
A Pod is the smallest deployable unit in Kubernetes, and it can have one or more containers running within it. These containers share resources (storage volumes and the network namespace) and can communicate with each other over localhost.
Create a simple Pod using the YAML file below.
apiVersion: v1
kind: Pod
metadata:
  name: kin-stateless-1
spec:
  containers:
  - name: nginx
    image: nginx
As part of the Pod spec, we convey our intent to run nginx in Kubernetes and use spec.containers[].image to point to its container image on Docker Hub.
# kubectl apply -f pod.yml
pod/kin-stateless-1 created
# kubectl get pods
NAME              READY   STATUS    RESTARTS   AGE
kin-stateless-1   1/1     Running   0          14s
Now, let's delete the Pod:
# kubectl delete pod kin-stateless-1
pod "kin-stateless-1" deleted

For serious applications, we have to take care of the following aspects:
*High availability and resiliency: ideally, your application should be robust enough to self-heal and remain available in the face of failures, e.g. Pod deletion due to node failure.
*Scalability: what if a single instance of our app Pod does not suffice? Wouldn't you want to run multiple replicas/instances?

Pod Controllers
Although it is possible to create Pods directly, it makes sense to use the higher-level components that Kubernetes provides on top of Pods in order to solve the above-mentioned problems. In simple words, these components, also called Controllers, can create and manage a group of Pods.
The following controllers work in the context of Pods and stateless apps:
ReplicaSet
Deployment
ReplicationController

ReplicaSet
A ReplicaSet can be used to ensure that a fixed number of replicas/instances of your application Pod are always available. It identifies the group of Pods that it needs to manage with the help of a user-defined selector and orchestrates them, creating or deleting Pods, to maintain the desired instance count.
# cat pod1.yml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: kin-stateless-rs
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kin-stateless-rs
  template:
    metadata:
      labels:
        app: kin-stateless-rs
    spec:
      containers:
      - name: nginx
        image: nginx
Let's create the ReplicaSet:
# kubectl apply -f pod1.yml
replicaset.apps/kin-stateless-rs created

# kubectl get replicasets
NAME                                DESIRED   CURRENT   READY   AGE
kin-stateless-rs                    2         2         2       34s
nfs-client-provisioner-7cdc9bf95d   1         1         1       4h47m
test-8577b85d5d                     1         1         1       3h27m

# kubectl get pods --selector=app=kin-stateless-rs
NAME                     READY   STATUS    RESTARTS   AGE
kin-stateless-rs-bm2wg   1/1     Running   0          102s
kin-stateless-rs-x6srs   1/1     Running   0          102s
Our ReplicaSet object named kin-stateless-rs was created along with two Pods. Notice that the names of the Pods contain a random alphanumeric suffix, e.g. bm2wg.

We used --selector in the kubectl get command to filter the Pods based on their labels, which in this case was app=kin-stateless-rs.
Try deleting one of the Pods (note that the Pod name will be different in your case):
# kubectl delete pod kin-stateless-rs-x6srs
pod "kin-stateless-rs-x6srs" deleted

# kubectl get pods --selector=app=kin-stateless-rs
NAME                     READY   STATUS    RESTARTS   AGE
kin-stateless-rs-bm2wg   1/1     Running   0          6m49s
kin-stateless-rs-qgfwb   1/1     Running   0          18s
We still have two Pods. This is because a new Pod (kin-stateless-rs-qgfwb) was created to satisfy the replica count (two) of the ReplicaSet.

To scale your application horizontally, all we need to do is update the spec.replicas field in the manifest file and submit it again.
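For example, assuming we want three replicas, we could either change replicas: 2 to replicas: 3 in pod1.yml and re-apply it, or scale imperatively with kubectl scale (the count of 3 here is just an illustration):
# kubectl scale replicaset kin-stateless-rs --replicas=3
# kubectl get pods --selector=app=kin-stateless-rs
Note that if you later re-apply the original manifest, the replica count will revert to whatever is declared in the file.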
Deployment
A Deployment is a higher-level abstraction that manages ReplicaSets (themselves the successor of the ReplicationController) and provides declarative updates to your Pods. When you update the Pod template, a Deployment rolls out a new ReplicaSet, and it can also roll back to a previous revision.
# cat deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kin-stateless-depl
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kin-stateless-depl
  template:
    metadata:
      labels:
        app: kin-stateless-depl
    spec:
      containers:
      - name: nginx
        image: nginx
Let's create the Deployment:
# kubectl apply -f deploy.yml
deployment.apps/kin-stateless-depl created

# kubectl get deployment kin-stateless-depl
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
kin-stateless-depl   2/2     2            2           4m13s

# kubectl get replicasets
NAME                            DESIRED   CURRENT   READY   AGE
kin-stateless-depl-7cbf999d95   2         2         2       5m2s

# kubectl get pods -l=app=kin-stateless-depl
NAME                                  READY   STATUS    RESTARTS   AGE
kin-stateless-depl-7cbf999d95-5w49f   1/1     Running   0          6m5s
kin-stateless-depl-7cbf999d95-jcbq4   1/1     Running   0          6m5s
The Deployment kin-stateless-depl was created along with its ReplicaSet and two Pods, as specified in the spec.replicas field.
To check which nginx version we're running:
# kubectl exec kin-stateless-depl-7cbf999d95-5w49f -- nginx -v
nginx version: nginx/1.19.10

Rollback
If things don't go as expected with the current Deployment, we can revert to a previous version. This is possible because Kubernetes stores the rollout history of a Deployment in the form of revisions.

To check the rollout history of the Deployment:
# kubectl rollout history deployment/kin-stateless-depl
deployment.apps/kin-stateless-depl
REVISION CHANGE-CAUSE
1
We can roll back to the previous revision using kubectl rollout undo:
# kubectl rollout undo deployment/kin-stateless-depl
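At this point there is only a single revision, so there is nothing to undo yet. As a hypothetical example (the nginx:1.20 tag is an assumption, not something used earlier in this article), updating the container image would trigger a rolling update and record a new revision, which you could then inspect and roll back if needed:
# kubectl set image deployment/kin-stateless-depl nginx=nginx:1.20
# kubectl rollout status deployment/kin-stateless-depl
# kubectl rollout history deployment/kin-stateless-depl
# kubectl rollout undo deployment/kin-stateless-depl --to-revision=1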



