Why is there insufficient memory on a Kubernetes node?
Problem
I have an Elasticsearch cluster in a Kubernetes cluster. The data pods go to memory-optimized nodes, which are tainted so that only the Elasticsearch data pods get scheduled to them. Right now I have 3 memory-optimized EC2 instances for these data pods. They are r5.2xlarge instances, which have 64G of memory. Here is the `kubectl describe` output for one of these r5 nodes (they all look the same):
```
Capacity:
attachable-volumes-aws-ebs: 25
cpu: 8
ephemeral-storage: 32461564Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 65049812Ki
pods: 110
Allocatable:
attachable-volumes-aws-ebs: 25
cpu: 8
ephemeral-storage: 29916577333
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 64947412Ki
pods: 110
System Info:
Machine ID: ec223b5ea23ea6bd5b06e8ed0a733d2d
System UUID: ec223b5e-a23e-a6bd-5b06-e8ed0a733d2d
Boot ID: 798aca5f-d9e1-4c9f-b75d-e16f7ba2d514
Kernel Version: 5.4.0-1024-aws
OS Image: Ubuntu 20.04.1 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.11
Kubelet Version: v1.18.10
Kube-Proxy Version: v1.18.10
Non-terminated Pods: (5 in total)
Namespace          Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
---------          ----                                        ------------  ----------  ---------------  -------------  ---
amazon-cloudwatch  fluentd-cloudwatch-tzsv4                    100m (1%)     0 (0%)      200Mi (0%)       400Mi (0%)     21d
default            prometheus-prometheus-node-exporter-tvmd4
```
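For reference, the taint/toleration setup on these nodes looks roughly like this (a minimal sketch; the taint key `dedicated=es-data` and the node label are illustrative assumptions, not the actual values from my cluster):

```
# Taint the memory-optimized nodes (assumed key/value, illustration only):
#   kubectl taint nodes <node-name> dedicated=es-data:NoSchedule
# Matching fragment of the ES data pod spec:
tolerations:
  - key: dedicated
    operator: Equal
    value: es-data
    effect: NoSchedule
nodeSelector:
  node-role: es-data   # assumed label; the taint keeps other pods off, the selector pins the data pods here
```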
Solution
```
3 Insufficient memory, 3 node(s) didn't match pod affinity/anti-affinity, 3 node(s) didn't satisfy existing pods anti-affinity rules
```
This means the scheduler is trying to place each ES pod on a different node, but because your node count is not enough to run one pod per node, the remaining pods stay in the Pending state.
For more information, see the Kubernetes documentation on pod affinity and anti-affinity.
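Elasticsearch data pods are commonly deployed with a hard pod anti-affinity rule so that two data pods never share a node. A minimal sketch of such a rule (the `app: elasticsearch-data` label is an assumed example, not taken from your manifests):

```
affinity:
  podAntiAffinity:
    # "required..." is a hard rule: at most one pod matching the selector
    # per topology domain (here: per node), so 3 nodes fit at most 3 data pods.
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: elasticsearch-data   # assumed pod label
        topologyKey: kubernetes.io/hostname
```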
So from here, you have two choices, Neo ))
- Add more nodes until every pod can be scheduled on a different node
- Reduce your ES StatefulSet pod count to 3 (see the sketch below)
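For the second option, scaling the data StatefulSet down could look like this (a sketch; the StatefulSet name `es-data` is an assumed example, use the actual name from `kubectl get sts`):

```
kubectl scale statefulset es-data --replicas=3
```

If the cluster is managed by an operator such as ECK, change the node count in the Elasticsearch resource instead of scaling the StatefulSet directly, otherwise the operator will scale it back.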
Context
StackExchange DevOps Q#13488, answer score: 2