
Tag: Networking

Problems within networking components, such as interface misconfigurations or routing errors

ID · Title · Description · Category · Technology · Tags
CRE-2025-0027
Low
Impact: 7/10
Mitigation: 2/10
Neutron Open Virtual Network (OVN) allows port binding to dead agents, causing Virtual Interface (VIF) plug timeouts
In OpenStack deployments using Neutron with the OVN ML2 driver, ports could be bound to agents that were not alive. This behavior led to virtual machines experiencing network interface plug timeouts during provisioning, as the port binding would not complete successfully.
Category: Networking Problems · Technology: neutron · Tags: Neutron, Ovn, Timeout, Networking, Openstack, Known Issue, Public
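A quick way to confirm the condition described above is to compare a stuck port's binding host against agent liveness. A minimal sketch with the OpenStack CLI, assuming the client is configured for the affected cloud; the port ID is a placeholder:

```bash
# List Neutron agents with their liveness; a port bound to a host whose
# agent is not alive matches this failure mode.
openstack network agent list

# Inspect the affected port's binding (placeholder port ID).
openstack port show <port-id> -c status -c binding_host_id -c binding_vif_type
```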
CRE-2025-0054
Low
Impact: 7/10
Mitigation: 5/10
NGINX upstream connection timeout
NGINX reports an upstream timeout error when it cannot establish or maintain a connection to backend services within the configured timeout threshold. This occurs when backend services are unresponsive, overloaded, or when the timeout values are set too low for normal operating conditions. The error indicates that NGINX attempted to proxy a request to an upstream server, but the connection or read operation timed out before completion.
Category: Proxy Timeout Problems · Technology: nginx · Tags: Nginx, Timeout, Proxy, Backend Issue, Networking
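For reference, the timeouts involved are the proxy_* directives in the location that forwards to the upstream. A minimal sketch with a placeholder upstream name and address; the values are illustrative, not recommendations:

```nginx
upstream backend_app {
    server 10.0.0.10:8080;   # placeholder backend
}

server {
    listen 80;

    location / {
        proxy_pass http://backend_app;
        # Timeouts set lower than the backend's normal latency surface as
        # "upstream timed out" errors in the NGINX error log.
        proxy_connect_timeout 10s;
        proxy_send_timeout    60s;
        proxy_read_timeout    60s;
    }
}
```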
CRE-2025-0071
High
Impact: 9/10
Mitigation: 8/10
CoreDNS unavailable
CoreDNS deployment is unavailable or has no ready endpoints, indicating an imminent cluster-wide DNS outage.
Category: Kubernetes Problems · Technology: kubernetes · Tags: Kubernetes, Networking, DNS, High Availability
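A first check for this condition is whether the CoreDNS Deployment has ready replicas and whether its Service still has endpoints. A minimal sketch assuming a standard kube-system installation (deployment coredns, service kube-dns; names can differ by distribution):

```bash
# The Deployment should report the expected number of ready replicas.
kubectl -n kube-system get deployment coredns

# The kube-dns Service must have at least one ready endpoint;
# otherwise in-cluster DNS resolution fails cluster-wide.
kubectl -n kube-system get endpoints kube-dns
```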
CRE-2025-0112
Critical
Impact: 10/10
Mitigation: 4/10
AWS VPC CNI Node IP Pool Depletion Crisis
Critical AWS VPC CNI node IP pool depletion detected, causing cascading pod scheduling failures. This pattern indicates severe subnet IP address exhaustion combined with ENI allocation failures, leading to complete cluster networking breakdown. The failure sequence shows ipamd errors, kubelet scheduling failures, and controller-level pod creation blocks that render clusters unable to deploy new workloads, scale existing services, or recover from node failures. This represents one of the most severe Kubernetes infrastructure failures, often requiring immediate manual intervention including subnet expansion, secondary CIDR provisioning, or emergency workload termination to restore cluster functionality.
Category: VPC CNI Problems · Technology: aws-vpc-cni · Tags: AWS, EKS, Kubernetes, Networking, VPC CNI, AWS CNI, IP Exhaustion, ENI Allocation, Subnet Exhaustion, Pod Scheduling Failure, Cluster Paralysis, AWS API Limits, Known Problem, Critical Infrastructure, Service Outage, Cascading Failure, Capacity Exceeded, Scalability Issue, Revenue Impact, Compliance Violation, Threshold Exceeded, Infrastructure, Public
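Triage for this pattern usually starts with the ipamd output in the aws-node pods and, where appropriate, adjusting the CNI warm pool. A minimal sketch; the label selector assumes the standard aws-node DaemonSet and the target values are illustrative only:

```bash
# Look for IP/ENI allocation errors reported by ipamd on the aws-node pods.
kubectl -n kube-system logs -l k8s-app=aws-node -c aws-node --tail=200

# Optionally keep more addresses pre-allocated per node. Larger warm pools
# speed up pod startup but draw down the subnet's free addresses faster.
kubectl -n kube-system set env daemonset/aws-node \
  WARM_IP_TARGET=5 MINIMUM_IP_TARGET=10
```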
CRE-2025-0122
Critical
Impact: 10/10
Mitigation: 6/10
AWS VPC CNI IP Address Exhaustion Crisis
Critical AWS VPC CNI IP address exhaustion detected. This pattern indicates cascading failures where subnet IP exhaustion leads to ENI allocation failures, pod scheduling failures, and complete service unavailability. The failure sequence shows IP allocation errors, ENI attachment failures, and resulting pod startup failures that affect cluster scalability and workload deployment.
Category: Networking Problems · Technology: aws-vpc-cni · Tags: AWS, VPC CNI, Kubernetes, Networking, IP Exhaustion, ENI Allocation, Pod Scheduling, Cluster Scaling, High Availability, Service Unavailability
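To distinguish subnet-level exhaustion from per-node pool depletion, checking the subnet's remaining addresses is a reasonable first step; on supported (Nitro) instance types, prefix delegation can stretch capacity. A minimal sketch with a placeholder subnet ID, assuming a VPC CNI version that supports ENABLE_PREFIX_DELEGATION:

```bash
# Check how many free addresses remain in the pod subnet (placeholder ID).
aws ec2 describe-subnets --subnet-ids subnet-0123456789abcdef0 \
  --query 'Subnets[].AvailableIpAddressCount'

# Prefix delegation assigns /28 prefixes per ENI slot instead of single IPs,
# increasing the pod IP capacity available from the same subnet.
kubectl -n kube-system set env daemonset/aws-node ENABLE_PREFIX_DELEGATION=true
```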