Advanced Pod operations
Adding resource requests and limits
Check the list of namespaces:
kubectl get ns
Create a new benchmark namespace for the Pod instance:
kubectl create ns benchmark
Create a Pod template manifest with a stress tool included:
kubectl run -n benchmark stress --image=vish/stress --dry-run=client -o yaml \
-- -cpus 1 -mem-total 350Mi -mem-alloc-size 100Mi -mem-alloc-sleep 5s > pod-stress.yaml
Edit the pod-stress.yaml manifest to get:
apiVersion: v1
kind: Pod
metadata:
  name: stress
  namespace: benchmark
spec:
  containers:
  - name: stress
    args:
    - -cpus
    - "1"
    - -mem-total
    - "350Mi"
    - -mem-alloc-size
    - "100Mi"
    - -mem-alloc-sleep
    - "5s"
    image: vish/stress
    resources:
      requests:
        cpu: "500m"
        memory: "100Mi"
      limits:
        cpu: "1"
        memory: "300Mi"
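The numbers above are chosen to force an out-of-memory condition: the stress tool allocates memory in 100Mi chunks up to a 350Mi total, while the container limit is only 300Mi. A quick sanity check of the arithmetic, in plain shell with the values copied from the manifest:

```shell
mem_total=350   # -mem-total, in Mi
chunk=100       # -mem-alloc-size, in Mi
limit=300       # container memory limit, in Mi

# Number of allocations needed to reach the total (ceiling division)
echo $(( (mem_total + chunk - 1) / chunk ))   # 4

# Does the total exceed the limit? (1 = yes)
echo $(( mem_total > limit ))                 # 1
```

So around the fourth allocation the container crosses its memory limit, which is exactly the behavior the next steps observe.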
Apply the manifest to create the Pod:
kubectl apply -f pod-stress.yaml
View detailed information about the Pod:
kubectl get pod -n benchmark stress -o wide
Check resource usage after a few seconds (this requires the metrics-server add-on):
kubectl top pods -n benchmark
Check the Pod status:
kubectl get pods -n benchmark
Output:
NAME READY STATUS RESTARTS AGE
stress 1/1 Running 1 (25s ago) 95s
Now, check the Pod details and try to find out why it was restarted:
kubectl describe pod -n benchmark stress
Output:
Name: stress
Namespace: benchmark
...
State: Running
Started: Wed, 28 Dec 2022 12:12:52 +0000
Last State: Terminated
Reason: OOMKilled
Exit Code: 1
Started: Wed, 28 Dec 2022 12:11:42 +0000
Finished: Wed, 28 Dec 2022 12:12:50 +0000
...
The container was OOM-killed: it tries to allocate 350Mi in total, which exceeds its 300Mi memory limit, so the kubelet terminates and restarts it.
Clean up the environment:
kubectl delete ns benchmark
Add a named port to a Pod
Add a named port for the container in the Pod manifest:
Create the myapp namespace:
kubectl create ns myapp
Create the myapp Pod manifest as pod-myapp.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: myapp
spec:
  containers:
  - name: myapp
    image: ghcr.io/go4clouds/myapp:v1.0
    resources:
      limits:
        cpu: "500m"
        memory: "200Mi"
      requests:
        cpu: "150m"
        memory: "100Mi"
    ports:
    - name: http
      containerPort: 8081
      protocol: TCP
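Naming the port pays off later: other parts of the manifest can reference the port by name instead of by number, so the number only has to be maintained in one place. For example, a probe can target the named port (a sketch of the syntax; the liveness-probe section later in this document uses exactly this):

```yaml
# Fragment under spec.containers[0]; "http" resolves to containerPort 8081
livenessProbe:
  httpGet:
    path: /
    port: http
```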
Apply the myapp Pod manifest to the cluster:
kubectl apply -f pod-myapp.yaml
Check the Pod status and wait until the status changes to Running:
kubectl get pod -n myapp myapp -o wide
Check the logs of the myapp Pod container:
kubectl logs -n myapp myapp
Open a tunnel connection to the myapp Pod container:
kubectl port-forward -n myapp pod/myapp 8081:8081
Test the connection in another terminal:
curl http://127.0.0.1:8081
Add liveness and readiness probes
We would like to extend the previous example and add liveness and readiness probes to it.
Update the myapp Pod manifest with a liveness probe:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: myapp
spec:
  containers:
  - name: myapp
    image: ghcr.io/go4clouds/myapp:v1.0
    resources:
      limits:
        cpu: "500m"
        memory: "200Mi"
      requests:
        cpu: "150m"
        memory: "100Mi"
    ports:
    - name: http
      containerPort: 8081
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /
        port: 8080
      failureThreshold: 1
      initialDelaySeconds: 3
      periodSeconds: 5
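With these settings the kubelet waits initialDelaySeconds before the first probe and restarts the container after failureThreshold consecutive failures, so a broken probe triggers a restart quickly. A rough worst-case estimate in plain shell, with the values copied from the probe above:

```shell
initial_delay=3   # initialDelaySeconds
period=5          # periodSeconds
failures=1        # failureThreshold

# Approximate worst-case seconds from container start to the first
# liveness-triggered restart: the initial delay plus one probe period
# per allowed failure.
echo $(( initial_delay + failures * period ))   # 8
```

This matches the event timestamps in the describe output below, where the container is killed within seconds of starting.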
Save the changes and delete the currently running Pod instance:
kubectl delete -f pod-myapp.yaml
Apply the changes and start a new Pod:
kubectl apply -f pod-myapp.yaml
Wait a few seconds and check the Pod status:
kubectl describe pod -n myapp myapp
Output:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 52s default-scheduler Successfully assigned myapp/myapp to worker1
Normal Pulled 17s (x2 over 51s) kubelet Container image "ghcr.io/go4clouds/myapp:v1.0" already present on machine
Normal Created 17s (x2 over 51s) kubelet Created container myapp
Normal Started 17s (x2 over 51s) kubelet Started container myapp
Warning Unhealthy 12s (x2 over 47s) kubelet Liveness probe failed: Get "http://192.168.235.155:8080/": dial tcp 192.168.235.155:8080: connect: connection refused
Normal Killing 12s (x2 over 47s) kubelet Container myapp failed liveness probe, will be restarted
The Pod was restarted because the liveness probe targets port 8080 while the application listens on port 8081, so the check can never succeed.
Let's fix the configuration:
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  namespace: myapp
spec:
  containers:
  - name: myapp
    image: ghcr.io/go4clouds/myapp:v1.0
    resources:
      limits:
        cpu: "500m"
        memory: "200Mi"
      requests:
        cpu: "150m"
        memory: "100Mi"
    ports:
    - name: http
      containerPort: 8081
      protocol: TCP
    livenessProbe:
      httpGet:
        path: /
        port: http
      failureThreshold: 1
      initialDelaySeconds: 3
      periodSeconds: 5
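The manifest above only fixes the liveness probe; a readiness probe, also promised by the section title, can be added in the same style. A sketch, assuming the application reports readiness on the same / endpoint (adjust the path if the app exposes a dedicated readiness endpoint):

```yaml
# Fragment under spec.containers[0], alongside livenessProbe
readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 3
  periodSeconds: 5
```

Unlike a failed liveness probe, a failed readiness probe does not restart the container; it only marks the Pod as not ready, removing it from Service endpoints until the probe succeeds again.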
Save the changes and delete the currently running Pod instance:
kubectl delete -f pod-myapp.yaml
Apply the changes and start a new Pod:
kubectl apply -f pod-myapp.yaml
Now check the Pod status:
kubectl get pods -n myapp
kubectl describe pod -n myapp myapp
Clean up the environment by deleting the myapp namespace:
kubectl delete ns myapp