
Technology: kubernetes

CRE-2025-0071: CoreDNS unavailable
Severity: High | Impact: 9/10 | Mitigation: 8/10
Category: Kubernetes Problems | Technology: kubernetes | Tags: Kubernetes, Networking, DNS, High Availability
CoreDNS deployment is unavailable or has no ready endpoints, indicating an imminent cluster-wide DNS outage.

CRE-2025-0106: Ambient CNI Sandbox Creation Failure
Severity: High | Impact: 9/10 | Mitigation: 7/10
Category: Istio Ambient Troubleshooting | Technology: kubernetes | Tags: Istio, CNI, Ambient
Detects when the Istio CNI plugin fails to set up a pod's network sandbox in Ambient mode. Two common root causes are: 1. **No ztunnel connection** (CNI cannot contact the node-level ztunnel agent).

CRE-2025-0108: Ambient mode readiness probe failures
Severity: High | Impact: 9/10 | Mitigation: 6/10
Category: Istio Ambient Troubleshooting | Technology: kubernetes | Tags: Istio, Ambient, CNI
In Ambient mode, Istio applies a SNAT rule so that kubelet probe traffic appears to come from 169.254.7.127 and is bypassed by the data plane. If **Readiness probe failed** events begin only after enabling Ambient, it almost always means the SNAT/bypass isn't working in your CNI or networking environment.

CRE-2025-0119: Kubernetes Pod Disruption Budget (PDB) Violation During Rolling Updates
Severity: High | Impact: 8/10 | Mitigation: 7/10
Category: Kubernetes Problems | Technology: kubernetes | Tags: K8s, Known Problem, Misconfiguration, Operational error, High Availability
During rolling updates, a conflict between a Deployment's maxUnavailable setting and a Pod Disruption Budget's minAvailable requirement can cause service outages: too many pods are terminated simultaneously, violating the availability guarantees. The same conflict can surface during node drains, cluster autoscaling, or maintenance operations.

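The arithmetic behind this conflict can be sketched with hypothetical numbers (a 3-replica Deployment with a PDB of minAvailable: 3 and a rollout strategy of maxUnavailable: 1; none of these values come from the entry above):

```shell
# Hypothetical example: a PDB's allowed voluntary disruptions is
# replicas minus minAvailable; a rollout's maxUnavailable must not
# exceed that number, or drains and updates stall or violate the PDB.
replicas=3
minAvailable=3        # from the (hypothetical) PodDisruptionBudget
maxUnavailable=1      # from the (hypothetical) Deployment strategy

allowed=$((replicas - minAvailable))
echo "PDB allows $allowed voluntary disruption(s); rollout wants up to $maxUnavailable"
if [ "$maxUnavailable" -gt "$allowed" ]; then
  echo "conflict: rollouts, drains, and autoscaling cannot proceed safely"
fi
```

With these numbers the PDB permits zero voluntary disruptions, so any rolling update that takes even one pod down is already in conflict.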
CRE-2025-0125: Kubelet EventedPLEG Panic Causes Node Failure
Severity: High | Impact: 9/10 | Mitigation: 6/10
Category: Kubernetes Problems | Technology: kubernetes | Tags: Kubernetes, Kubelet, Panic
Detects a critical kubelet panic in the EventedPLEG subsystem under rapid pod launch pressure. When triggered, the node's kubelet crashes, the node becomes NotReady, and all resident pods are evicted, resulting in a full node-level outage until manual intervention.

CRE-2025-0127: Container exited 127 due to command not found (bad entrypoint/command)
Severity: Medium | Impact: 3/10 | Mitigation: 3/10
Category: Configuration Problem | Technology: kubernetes | Tags: K8s, Exit Code, Command, Entrypoint, Startup Failure
Exit code 127 indicates the configured command/entrypoint was not found in the image or PATH. New or misconfigured deployments commonly hit this and immediately crash.

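The 127 convention can be reproduced locally with any POSIX shell; container runtimes surface the same code when an entrypoint cannot be resolved (the command name below is deliberately fake):

```shell
# A shell exits with 127 when the command to run cannot be found;
# a bad entrypoint or command in a pod spec produces the same code.
sh -c 'this-command-does-not-exist' 2>/dev/null
echo $?   # prints: 127
```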
CRE-2025-0134: Container exited 134 due to SIGABRT / assertion failure
Severity: Medium | Impact: 6/10 | Mitigation: 2/10
Category: Runtime | Technology: kubernetes | Tags: K8s, Exit Code, SIGABRT, Assertion, Native
Exit code 134 indicates the process aborted via SIGABRT, commonly due to failed assertions, allocator checks (e.g., glibc detecting heap corruption), or explicit abort() calls.

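Exit codes above 128 encode the fatal signal as 128 + signal number; SIGABRT is signal 6, hence 134. A minimal local reproduction with a POSIX shell:

```shell
# 128 + 6 (SIGABRT) = 134, the same encoding Kubernetes reports
# when a containerized process aborts.
sh -c 'kill -ABRT $$' 2>/dev/null
echo $?   # prints: 134
```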
CRE-2025-0137: Pod terminated with Exit Code 137 due to OOMKilled (memory limit exceeded)
Severity: High | Impact: 6/10 | Mitigation: 2/10
Category: Memory Problems | Technology: kubernetes | Tags: K8s, Exit Code, Out of Memory, Memory, Crash Loop, Reliability
The container exceeded its memory limit and was killed by the kernel OOM killer. Kubernetes reports a terminated state with Reason=OOMKilled and exitCode=137. This often manifests as CrashLoopBackOff under sustained memory pressure.

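The OOM killer terminates the process with SIGKILL (signal 9), and 128 + 9 = 137. The exit-code encoding can be demonstrated locally by sending SIGKILL directly (this only reproduces the code, not actual memory pressure):

```shell
# The kernel OOM killer delivers SIGKILL; 128 + 9 = 137 is what
# Kubernetes records as the container's exit code.
sh -c 'kill -KILL $$' 2>/dev/null
echo $?   # prints: 137
```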
CRE-2025-0139: Container exited 139 due to segmentation fault (SIGSEGV)
Severity: Medium | Impact: 7/10 | Mitigation: 2/10
Category: Runtime | Technology: kubernetes | Tags: K8s, Exit Code, Segfault, Native, Reliability
Exit code 139 indicates SIGSEGV (invalid memory access) in native/runtime code. Frequently caused by unsafe pointer operations, ABI/library mismatches, or native extensions.
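SIGSEGV is signal 11, so the encoding yields 128 + 11 = 139. Sending the signal by hand reproduces the exit code without performing an actual invalid memory access:

```shell
# Simulates the exit status of a segfault: 128 + 11 (SIGSEGV) = 139.
# A real crash is raised by the kernel on an invalid memory access,
# but the reported code is identical.
sh -c 'kill -SEGV $$' 2>/dev/null
echo $?   # prints: 139
```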