A pod can contain one or more containers: the main application container, optional init containers, which run to completion before the application container starts, and optional sidecar containers, which run alongside the primary application container. A container or pod does not always exit because of an application failure; in scenarios like this, you will need to restart your Kubernetes pod explicitly. In this guide, you will explore several ways to force the pods in a deployment to restart.

Prerequisites

To restart pods using kubectl, make sure you have installed the kubectl tool and have a running cluster, such as a minikube cluster. Otherwise, you will not be able to follow along with this article.

Creating pods using kubectl

Before restarting pods with kubectl, first make sure your minikube cluster is running (for example, with minikube start). You can then create a pod directly like this:

kubectl run nginx --image nginx --port=80
# You can also generate the YAML manifest instead; the command below prints it
kubectl run nginx --image nginx --port=80 --dry-run=client -o yaml
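For reference, the client-side dry run for the plain pod prints a manifest along these lines (the exact output may vary slightly between kubectl versions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    ports:
    - containerPort: 80
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
```

You can redirect this output to a file and apply it later with kubectl apply -f.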

You can also use a Deployment to deploy pods, like this:

kubectl create deployment myweb --image=nginx --replicas=1 --port=80
# You can also deploy from a YAML file; its contents are shown below:
[root@ecs-82f5]~# kubectl create deployment myweb --image=nginx --replicas=1 --port=80 --dry-run=client -o yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: myweb
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: myweb
    spec:
      containers:
      - image: nginx
        name: nginx
        ports:
        - containerPort: 80
        resources: {}
status: {}

Method 1: Rolling restart

A rolling restart restarts each pod in the deployment in turn. This is the recommended strategy because it does not cause a service interruption. Run the following command in the terminal:

kubectl rollout restart deployment <deployment name>

[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myweb-55565b5c87-pcb5c   1/1     Running   0          26s
[usera@ecs-82f5 ~]$ kubectl rollout restart deployment myweb
deployment.apps/myweb restarted
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
myweb-55565b5c87-pcb5c   1/1     Running             0          3m5s
myweb-c849b688b-grvbn    0/1     ContainerCreating   0          2s
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myweb-c849b688b-grvbn   1/1     Running   0          8s

The command above restarts the deployment's pods one by one. Your app remains accessible during the rollout because most of the containers keep running. Note that the pod name changes after the restart.
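To confirm the rolling restart completed, you can watch the rollout and inspect the deployment's revision history (a short sketch using the myweb deployment created earlier):

```shell
# Block until the new ReplicaSet is fully rolled out
kubectl rollout status deployment myweb
# Show the revision history; each restart creates a new revision
kubectl rollout history deployment myweb
```

rollout status exits non-zero if the rollout fails or times out, which makes it useful in CI/CD scripts.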

Method 2: Updating an environment variable

The second method forces the pods to restart and pick up your modifications by setting or changing an environment variable on the deployment:

kubectl set env deployment <deployment name> DEPLOY_DATE="$(date)"

[usera@ecs-82f5 ~]$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
myweb-c849b688b-grvbn   1/1     Running   0          3h10m
[usera@ecs-82f5 ~]$ kubectl set env deployment myweb DEPLOY_DATE="$(date)"
deployment.apps/myweb env updated
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
myweb-67cd6f8c99-jrrq9   0/1     ContainerCreating   0          3s
myweb-c849b688b-grvbn    1/1     Running             0          3h10m
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myweb-67cd6f8c99-jrrq9   1/1     Running   0          6s
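Because any change to the pod template triggers a rollout, injecting a fresh timestamp works as a restart trigger. You can verify the variable was applied with the --list flag (a sketch using the myweb deployment from above):

```shell
# List the environment variables set on the deployment;
# DEPLOY_DATE should appear with the timestamp you injected
kubectl set env deployment myweb --list
```

One side effect to be aware of: this permanently adds DEPLOY_DATE to the pod spec, which Method 1's rollout restart avoids.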

Method 3: Scaling replicas down and up

Reducing the deployment's replica count to zero and then scaling back up to the desired state is another method for restarting pods. This forces all current pods to stop and terminate, followed by the scheduling of fresh pods in their place. Setting the replica count to 0 will cause an outage, so a rolling restart is generally preferred. Use the following commands to set a deployment's replicas to 0 and then back to 1:

kubectl scale deployment <deployment name> --replicas=0
kubectl scale deployment <deployment name> --replicas=1

The scale command specifies how many replicas of the pod should be active. Setting it to zero effectively shuts the pod down. To start the pod again, set the replica count back to a value greater than 0.

[usera@ecs-82f5 ~]$ kubectl scale deployment myweb --replicas=0
deployment.apps/myweb scaled
[usera@ecs-82f5 ~]$ kubectl get pods
No resources found in default namespace.
[usera@ecs-82f5 ~]$ kubectl scale deployment myweb --replicas=1
deployment.apps/myweb scaled
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
myweb-67cd6f8c99-zbczp   0/1     ContainerCreating   0          2s
[usera@ecs-82f5 ~]$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
myweb-67cd6f8c99-zbczp   1/1     Running   0          4s
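When scripting this method, you may want to wait until the old pods are fully gone before scaling back up. A minimal sketch, assuming the pods carry the app=myweb label as in the Deployment manifest shown earlier:

```shell
kubectl scale deployment myweb --replicas=0
# Block until every pod with the app=myweb label has been deleted
kubectl wait --for=delete pod -l app=myweb --timeout=60s
kubectl scale deployment myweb --replicas=1
```

Without the wait, a quick scale-down/scale-up can overlap with pods still terminating, making the transcript harder to interpret.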

Conclusion

Kubernetes is an effective container orchestration platform. However, as with any system, difficulties do arise. Bear in mind that restarting a pod does not resolve the underlying issue that caused it to fail in the first place, so be sure to identify and fix the root cause.