Kubernetes Resource Requests and Resource Limits

Maciej
7 min read · Jan 11, 2021

What are Resource Requests?

Resource requests are a mechanism that lets you specify the resources (CPU/memory) a pod needs when it is deployed. However, a pod may actually use more than its requested amount; to cap usage, we need resource limits, which are covered later.

The requested amount is not reserved for the pod; it is only used to check whether enough resources are free when the pod is scheduled. The key point is that the scheduler does not look at the node's actual resource usage, only at the sum of the resource requests of the pods already placed on it.

So even if a node's actual CPU or memory usage is at 100%, a new pod will still be scheduled there as long as there is unallocated request capacity. Conversely, a pod will not be scheduled onto a node whose request capacity is fully allocated, no matter how idle it actually is.

Quick case:

Suppose a node has 1 GB of memory and two pods with resource requests of 400 MB each are already deployed. A third such pod cannot be deployed; if we try, its STATUS will stay Pending.
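
For illustration, one of those 400 MB pods could be declared like this (a minimal sketch; the pod name, image, and command are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: memory-request-pod   # hypothetical name for this example
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        memory: 400Mi   # the scheduler deducts this from the node's schedulable capacity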

root@vagrant:/home/vagrant# cat test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: requests-pod
spec:
  containers:
  - image: busybox
    command: ["dd", "if=/dev/zero", "of=/dev/null"]
    name: main
    resources:
      requests:
        cpu: 200m
        memory: 10Mi

Explanation:

  • cpu: 200m is an abbreviation for 200 millicores (1 CPU = 1000 millicores), so the container requests one fifth of a CPU core.
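
For reference, CPU quantities can be written either in millicores or as decimal cores; the two spellings below request the same amount (a notational sketch, not part of the manifest above):

# These two spellings request the same amount of CPU:
cpu: 200m   # 200 millicores
cpu: 0.2    # the same amount, written as a fraction of a core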

Let’s try specifying Resource Requests

root@vagrant:/home/vagrant# kubectl apply -f test.yaml
pod/requests-pod created
root@vagrant:/home/vagrant# kubectl get pods
NAME READY STATUS RESTARTS AGE
requests-pod 1/1 Running 0 2m41s
root@vagrant:/home/vagrant# kubectl describe nodes
Name: vagrant
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=k3s
beta.kubernetes.io/os=linux
k3s.io/hostname=vagrant
k3s.io/internal-ip=10.0.2.15
kubernetes.io/arch=amd64
kubernetes.io/hostname=vagrant
kubernetes.io/os=linux
node-role.kubernetes.io/master=true
node.kubernetes.io/instance-type=k3s
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"be:9d:d9:21:40:d7"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 10.0.2.15
k3s.io/node-args: ["server"]
k3s.io/node-config-hash: YQUWTDZQW5X5N3TCQFORF2PABT2DSBR7PKNKWRBMPKHS26HXPUCQ====
k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/b9574be94e4edbdbb93a39a2cb1f4e4df3ba699171a8b86863d1e8c421c91f63"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 03 Jan 2021 17:53:54 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: vagrant
AcquireTime: <unset>
RenewTime: Sun, 03 Jan 2021 18:16:35 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sun, 03 Jan 2021 17:54:06 +0000 Sun, 03 Jan 2021 17:54:06 +0000 FlannelIsUp Flannel is running on this node
MemoryPressure False Sun, 03 Jan 2021 18:11:55 +0000 Sun, 03 Jan 2021 17:53:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 03 Jan 2021 18:11:55 +0000 Sun, 03 Jan 2021 17:53:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 03 Jan 2021 18:11:55 +0000 Sun, 03 Jan 2021 17:53:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 03 Jan 2021 18:11:55 +0000 Sun, 03 Jan 2021 17:54:04 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.0.2.15
Hostname: vagrant
Capacity:
cpu: 1
ephemeral-storage: 81052112Ki
hugepages-2Mi: 0
memory: 4039424Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 78847494492
hugepages-2Mi: 0
memory: 4039424Ki
pods: 110
System Info:
Machine ID: a87eceb6fc564af5a2cadee62e5cf6ff
System UUID: FB8CA06F-A295-4738-8250-C8112EF1879A
Boot ID: 6530cfe8-b50f-4232-948c-18c9cd8806c3
Kernel Version: 4.15.0-76-generic
OS Image: Ubuntu 18.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.3-k3s1
Kubelet Version: v1.19.5+k3s2
Kube-Proxy Version: v1.19.5+k3s2
PodCIDR: 10.42.0.0/24
PodCIDRs: 10.42.0.0/24
ProviderID: k3s://vagrant
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system local-path-provisioner-7ff9579c6-m97g9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m
kube-system metrics-server-7b4f8b595-xwx5h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m
kube-system coredns-66c464876b-nqmmg 100m (10%) 0 (0%) 70Mi (1%) 170Mi (4%) 22m
kube-system svclb-traefik-lwnsd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m
kube-system traefik-5dd496474-2z8bp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 22m
default requests-pod 200m (20%) 0 (0%) 10Mi (0%) 0 (0%) 31s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 300m (30%) 0 (0%)
memory 80Mi (2%) 170Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 22m kubelet Starting kubelet.
Warning InvalidDiskCapacity 22m kubelet invalid capacity 0 on image filesystem
Normal NodeAllocatableEnforced 22m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 22m (x2 over 22m) kubelet Node vagrant status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 22m (x2 over 22m) kubelet Node vagrant status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 22m (x2 over 22m) kubelet Node vagrant status is now: NodeHasSufficientPID
Normal Starting 22m kube-proxy Starting kube-proxy.
Normal NodeReady 22m kubelet Node vagrant status is now: NodeReady
root@vagrant:/home/vagrant#
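
The full describe output is long; to see just the request/limit summary, a grep along these lines works (the number of context lines after -A is an assumption that may need adjusting):

root@vagrant:/home/vagrant# kubectl describe node vagrant | grep -A 8 "Allocated resources"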

Let’s try specifying a Resource Request beyond the node’s capacity

root@vagrant:/home/vagrant# kubectl run test-pod-1 --image=busybox --restart Never --requests='cpu=2000m,memory=2000Mi' -- sleep 100000
pod/test-pod-1 created
root@vagrant:/home/vagrant# kubectl get pods
NAME READY STATUS RESTARTS AGE
requests-pod 1/1 Running 0 4m11s
test-pod-1 0/1 Pending 0 35s
root@vagrant:/home/vagrant# kubectl describe pod test-pod-1
Name: test-pod-1
Namespace: default
Priority: 0
Node: <none>
Labels: run=test-pod-1
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Containers:
test-pod-1:
Image: busybox
Port: <none>
Host Port: <none>
Args:
sleep
100000
Requests:
cpu: 2
memory: 2000Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-64xm2 (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
default-token-64xm2:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-64xm2
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 66s default-scheduler 0/1 nodes are available: 1 Insufficient cpu.
Warning FailedScheduling 66s default-scheduler 0/1 nodes are available: 1 Insufficient cpu.

As we can see, the STATUS is Pending: the pod requests 2 CPUs and 2000Mi of memory, but the node only has 1 allocatable CPU, so the scheduler reports Insufficient cpu and there is no free space in the request capacity.
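
To get the pod scheduled, we have to either free up request capacity or shrink the request. For example, re-creating it with a request that fits the node's single CPU should work (a sketch reusing the same, since-deprecated --requests flag; the values are illustrative):

root@vagrant:/home/vagrant# kubectl delete pod test-pod-1
root@vagrant:/home/vagrant# kubectl run test-pod-1 --image=busybox --restart Never --requests='cpu=500m,memory=200Mi' -- sleep 100000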

What are Resource Limits?

Resource limits let you cap the CPU/memory resources that a pod can actually use.

  • Even if the node has free resources, a container cannot use more than its specified resource limits.
  • If you do not specify resource limits, the container may use an unlimited amount of the node's resources.
  • If resource requests are not specified but limits are, the requests default to the same values as the limits (we will verify this in the output below).
  • Unlike requests, resource limits may exceed the node's available resources; the sum of all limits on a node can be over 100% (overcommitment). A combined example follows this list.
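
Putting requests and limits together, a typical container spec sets requests at or below limits (a minimal sketch; the pod name and values are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: requests-and-limits   # hypothetical name for this example
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:        # what the scheduler accounts for
        cpu: 100m
        memory: 50Mi
      limits:          # the hard cap enforced at runtime
        cpu: 500m
        memory: 100Mi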

Case:

If a container process actually tries to use more CPU than its Resource Limit allows, it is simply throttled: it runs slower, but nothing else happens. Memory is different: if a container attempts to use more memory than specified in its Resource Limit, the container process is killed (OOMKilled) and restarted according to the pod's restart policy.
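
To see the memory behavior in action, we can deliberately allocate more than the limit. The sketch below assumes the publicly available polinux/stress image (used for the same purpose in the official Kubernetes docs); the container tries to allocate about 250 MB against a 100Mi limit, so it should end up OOMKilled:

apiVersion: v1
kind: Pod
metadata:
  name: memory-hog   # hypothetical name for this example
spec:
  containers:
  - name: main
    image: polinux/stress   # image assumed available
    command: ["stress"]
    args: ["--vm", "1", "--vm-bytes", "250M", "--vm-hang", "1"]   # allocate ~250 MB and hold it
    resources:
      limits:
        memory: 100Mi   # the allocation above exceeds this cap

After a short while, kubectl get pods should show the container as OOMKilled (and eventually CrashLoopBackOff as it keeps restarting).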

Let’s try specifying Resource Limits

root@vagrant:/home/vagrant# cat test2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-limits
spec:
  containers:
  - image: busybox
    command: ["dd", "if=/dev/zero", "of=/dev/null"]
    name: main
    resources:
      limits:
        cpu: 0.5
        memory: 20Mi
root@vagrant:/home/vagrant# kubectl apply -f test2.yaml
pod/pod-with-limits created
root@vagrant:/home/vagrant# kubectl get pods
NAME READY STATUS RESTARTS AGE
pod-with-limits 1/1 Running 0 4s
root@vagrant:/home/vagrant# kubectl describe nodes
Name: vagrant
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/instance-type=k3s
beta.kubernetes.io/os=linux
k3s.io/hostname=vagrant
k3s.io/internal-ip=10.0.2.15
kubernetes.io/arch=amd64
kubernetes.io/hostname=vagrant
kubernetes.io/os=linux
node-role.kubernetes.io/master=true
node.kubernetes.io/instance-type=k3s
Annotations: flannel.alpha.coreos.com/backend-data: {"VtepMAC":"be:9d:d9:21:40:d7"}
flannel.alpha.coreos.com/backend-type: vxlan
flannel.alpha.coreos.com/kube-subnet-manager: true
flannel.alpha.coreos.com/public-ip: 10.0.2.15
k3s.io/node-args: ["server"]
k3s.io/node-config-hash: YQUWTDZQW5X5N3TCQFORF2PABT2DSBR7PKNKWRBMPKHS26HXPUCQ====
k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/b9574be94e4edbdbb93a39a2cb1f4e4df3ba699171a8b86863d1e8c421c91f63"}
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 03 Jan 2021 17:53:54 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: vagrant
AcquireTime: <unset>
RenewTime: Sun, 03 Jan 2021 18:36:45 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
NetworkUnavailable False Sun, 03 Jan 2021 17:54:06 +0000 Sun, 03 Jan 2021 17:54:06 +0000 FlannelIsUp Flannel is running on this node
MemoryPressure False Sun, 03 Jan 2021 18:31:55 +0000 Sun, 03 Jan 2021 17:53:54 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 03 Jan 2021 18:31:55 +0000 Sun, 03 Jan 2021 17:53:54 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 03 Jan 2021 18:31:55 +0000 Sun, 03 Jan 2021 17:53:54 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sun, 03 Jan 2021 18:31:55 +0000 Sun, 03 Jan 2021 17:54:04 +0000 KubeletReady kubelet is posting ready status. AppArmor enabled
Addresses:
InternalIP: 10.0.2.15
Hostname: vagrant
Capacity:
cpu: 1
ephemeral-storage: 81052112Ki
hugepages-2Mi: 0
memory: 4039424Ki
pods: 110
Allocatable:
cpu: 1
ephemeral-storage: 78847494492
hugepages-2Mi: 0
memory: 4039424Ki
pods: 110
System Info:
Machine ID: a87eceb6fc564af5a2cadee62e5cf6ff
System UUID: FB8CA06F-A295-4738-8250-C8112EF1879A
Boot ID: 6530cfe8-b50f-4232-948c-18c9cd8806c3
Kernel Version: 4.15.0-76-generic
OS Image: Ubuntu 18.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.4.3-k3s1
Kubelet Version: v1.19.5+k3s2
Kube-Proxy Version: v1.19.5+k3s2
PodCIDR: 10.42.0.0/24
PodCIDRs: 10.42.0.0/24
ProviderID: k3s://vagrant
Non-terminated Pods: (6 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system local-path-provisioner-7ff9579c6-m97g9 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42m
kube-system metrics-server-7b4f8b595-xwx5h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42m
kube-system coredns-66c464876b-nqmmg 100m (10%) 0 (0%) 70Mi (1%) 170Mi (4%) 42m
kube-system svclb-traefik-lwnsd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42m
kube-system traefik-5dd496474-2z8bp 0 (0%) 0 (0%) 0 (0%) 0 (0%) 42m
default pod-with-limits 500m (50%) 500m (50%) 20Mi (0%) 20Mi (0%) 102s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 600m (60%) 500m (50%)
memory 90Mi (2%) 190Mi (4%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 42m kubelet Starting kubelet.
Warning InvalidDiskCapacity 42m kubelet invalid capacity 0 on image filesystem
Normal NodeAllocatableEnforced 42m kubelet Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 42m (x2 over 42m) kubelet Node vagrant status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 42m (x2 over 42m) kubelet Node vagrant status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 42m (x2 over 42m) kubelet Node vagrant status is now: NodeHasSufficientPID
Normal Starting 42m kube-proxy Starting kube-proxy.
Normal NodeReady 42m kubelet Node vagrant status is now: NodeReady
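
Notice the pod-with-limits line in the Non-terminated Pods table above: although test2.yaml specified only limits, the pod's CPU and Memory Requests show 500m and 20Mi, i.e. the requests defaulted to the limit values, exactly as described earlier. This can also be checked directly on the pod spec:

root@vagrant:/home/vagrant# kubectl get pod pod-with-limits -o jsonpath='{.spec.containers[0].resources}'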

Maciej

DevOps Consultant. I’m strongly focused on automation, security, and reliability.