Checking Kubernetes Resource Usage

Maciej
4 min read · Feb 16, 2022

Introduction

Every pod you run in a Kubernetes cluster consumes resources, so even when the footprint of an individual pod or container is small, it is still worth monitoring. In real operations this is usually done with Prometheus or a similar monitoring tool, but here I will summarize how to check resource usage with nothing more than the kubectl command.

The kubectl top command

You can check the CPU and memory usage, and the utilization percentage, of each node in the cluster with the following command. Note that kubectl top relies on the Metrics Server; the k3s distribution used here deploys it by default.

root@vagrant:/home/vagrant# kubectl top node
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
vagrant   99m          9%     1052Mi          26%
root@vagrant:/home/vagrant#
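If your cluster has more than one node, it can be handy to sort the output by consumption. Recent kubectl versions support a --sort-by flag on kubectl top (accepting cpu or memory); a quick sketch:

root@vagrant:/home/vagrant# kubectl top node --sort-by=cpu
root@vagrant:/home/vagrant# kubectl top node --sort-by=memory

On this single-node cluster both print the same one-line table as above.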

You can check the CPU and memory usage of each pod with the following command. If you do not specify a namespace with -n, only pods in the default namespace are displayed.

root@vagrant:/home/vagrant# kubectl get ns
NAME              STATUS   AGE
default           Active   17m
kube-system       Active   17m
kube-public       Active   17m
kube-node-lease   Active   17m
root@vagrant:/home/vagrant# kubectl -n kube-system top pod
NAME                                      CPU(cores)   MEMORY(bytes)
coredns-96cc4f57d-649pm                   2m           10Mi
local-path-provisioner-84bb864455-85n5x   1m           6Mi
metrics-server-ff9dbcb6c-nksvs            10m          14Mi
svclb-traefik-q48pc                       0m           1Mi
traefik-55fdc6d984-g8n8r                  2m           17Mi
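To list pods from every namespace at once instead of passing -n repeatedly, you can add the --all-namespaces flag; an extra NAMESPACE column appears on the left:

root@vagrant:/home/vagrant# kubectl top pod --all-namespaces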

A pod may run multiple containers. In that case, you can check the usage of each individual container by adding the --containers option.

root@vagrant:/home/vagrant# kubectl -n kube-system top pod --containers
POD                                       NAME                     CPU(cores)   MEMORY(bytes)
coredns-96cc4f57d-649pm                   coredns                  2m           10Mi
local-path-provisioner-84bb864455-85n5x   local-path-provisioner   1m           6Mi
metrics-server-ff9dbcb6c-nksvs            metrics-server           7m           14Mi
svclb-traefik-q48pc                       lb-port-443              0m           0Mi
svclb-traefik-q48pc                       lb-port-80               0m           0Mi
traefik-55fdc6d984-g8n8r                  traefik                  1m           17Mi
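Keep in mind that kubectl top prints a one-off snapshot. For a rough live view you can wrap it in the standard watch utility (assuming it is installed on the host); a minimal sketch:

root@vagrant:/home/vagrant# watch -n 5 kubectl -n kube-system top pod --containers

This re-runs the command every five seconds, although the numbers only change as often as metrics-server scrapes new samples, so they lag slightly behind real time.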

The kubectl describe command

kubectl describe displays the details of the specified resource, including the resources it requests and consumes. If you specify a node, you can see which pods are deployed on it and how much of the node's capacity they claim.

root@vagrant:/home/vagrant# kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
vagrant   Ready    control-plane,master   19m   v1.22.6+k3s1
root@vagrant:/home/vagrant# kubectl describe node vagrant
Name:               vagrant
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=vagrant
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/control-plane=true
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"96:72:f0:5c:2d:a2"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 10.0.2.15
                    k3s.io/hostname: vagrant
                    k3s.io/internal-ip: 10.0.2.15
                    k3s.io/node-args: ["server"]
                    k3s.io/node-config-hash: AEGR3FHSXOSNKE6577IWP4BLTQAFK2FNMKUPOQFR5RBES5VH2QWA====
                    k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/8307e9b398a0ee686ec38e18339d1464f75158a8b948b059b564246f4af3a0a6"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 06 Feb 2022 17:28:30 +0000
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  vagrant
  AcquireTime:     <unset>
  RenewTime:       Sun, 06 Feb 2022 17:48:08 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 06 Feb 2022 17:45:35 +0000   Sun, 06 Feb 2022 17:28:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 06 Feb 2022 17:45:35 +0000   Sun, 06 Feb 2022 17:28:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 06 Feb 2022 17:45:35 +0000   Sun, 06 Feb 2022 17:28:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 06 Feb 2022 17:45:35 +0000   Sun, 06 Feb 2022 17:28:40 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.0.2.15
  Hostname:    vagrant
Capacity:
  cpu:                1
  ephemeral-storage:  80536048Ki
  hugepages-2Mi:      0
  memory:             4030880Ki
  pods:               110
Allocatable:
  cpu:                1
  ephemeral-storage:  78345467433
  hugepages-2Mi:      0
  memory:             4030880Ki
  pods:               110
System Info:
  Machine ID:                 11f0d789b54247c384a47ed77ab8d622
  System UUID:                7265acc1-e11b-9e47-9152-06aeb404575a
  Boot ID:                    31c40a2f-8465-4799-8655-5a2a27971ab7
  Kernel Version:             5.4.0-42-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.5.9-k3s1
  Kubelet Version:            v1.22.6+k3s1
  Kube-Proxy Version:         v1.22.6+k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
ProviderID:                   k3s://vagrant
Non-terminated Pods:          (6 in total)
  Namespace    Name                                      CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------    ----                                      ------------  ----------  ---------------  -------------  ---
  kube-system  local-path-provisioner-84bb864455-85n5x   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
  kube-system  coredns-96cc4f57d-649pm                   100m (10%)    0 (0%)      70Mi (1%)        170Mi (4%)     19m
  kube-system  metrics-server-ff9dbcb6c-nksvs            100m (10%)    0 (0%)      70Mi (1%)        0 (0%)         19m
  kube-system  svclb-traefik-q48pc                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
  kube-system  traefik-55fdc6d984-g8n8r                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
  default      sample-pod                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m43s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests    Limits
  --------           --------    ------
  cpu                200m (20%)  0 (0%)
  memory             140Mi (3%)  170Mi (4%)
  ephemeral-storage  0 (0%)      0 (0%)
  hugepages-2Mi      0 (0%)      0 (0%)
Events:
  Type     Reason                   Age                From        Message
  ----     ------                   ----               ----        -------
  Normal   Starting                 19m                kube-proxy
  Normal   Starting                 19m                kubelet     Starting kubelet.
  Warning  InvalidDiskCapacity      19m                kubelet     invalid capacity 0 on image filesystem
  Normal   NodeHasSufficientMemory  19m (x2 over 19m)  kubelet     Node vagrant status is now: NodeHasSufficientMemory
  Normal   NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet     Node vagrant status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     19m (x2 over 19m)  kubelet     Node vagrant status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  19m                kubelet     Updated Node Allocatable limit across pods
  Normal   NodeReady                19m                kubelet     Node vagrant status is now: NodeReady
root@vagrant:/home/vagrant#
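Note that the CPU Requests and Memory Requests columns come from each pod's spec, not from live usage: sample-pod shows 0 (0%) everywhere simply because it declares no requests or limits. As a hypothetical illustration (the pod name, image, and values below are mine, not taken from the cluster above), requests and limits are declared like this:

root@vagrant:/home/vagrant# cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod-with-limits   # hypothetical name, distinct from sample-pod above
spec:
  containers:
  - name: app
    image: nginx                 # any image will do; nginx is just an example
    resources:
      requests:                  # what the scheduler reserves on the node
        cpu: 100m
        memory: 64Mi
      limits:                    # hard ceiling enforced at runtime
        cpu: 200m
        memory: 128Mi
EOF

Once such a pod is scheduled, kubectl describe node would list it with 100m (10%) CPU requests on this one-CPU node, and the Allocated resources totals would grow accordingly.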
