Tag: AWS
Amazon Web Services
ID | Title | Description | Category | Technology | Tags |
---|---|---|---|---|---|
CRE-2025-0026 (Low; Impact 6/10, Mitigation 1/10) | AWS EBS CSI Driver fails to detach volume when VolumeAttachment has empty nodeName | In clusters using the AWS EBS CSI driver, the controller may fail to detach a volume if the associated VolumeAttachment resource has an empty `spec.nodeName`. This results in a log error and skipped detachment, which may block PVC reuse or node cleanup. See the VolumeAttachment sketch below the table. | Storage | eks-nodeagent | ebscsi, AWS, Storage, Public |
CRE-2025-0029 (Low; Impact 6/10, Mitigation 5/10) | Loki fails to retrieve AWS credentials when specifying S3 endpoint with IRSA | When deploying Grafana Loki with AWS S3 as the storage backend and specifying a custom S3 endpoint (e.g., for FIPS compliance or GovCloud regions), Loki may fail to retrieve AWS credentials via IAM Roles for Service Accounts (IRSA). This results in errors during startup or when attempting to upload index tables, preventing Loki from functioning correctly. See the IRSA preflight sketch below the table. | Storage | loki | Loki, S3, AWS, IRSA, Storage, Authentication, Helm, Public |
CRE-2025-0057 (Low; Impact 3/10, Mitigation 1/10) | Verbose Logging in AWS Network Policy Agent During Policy Verdicts | When using the AWS Network Policy Agent with VPC CNI addon v1.17.1, the log message `failed to get caller` may appear frequently. This behavior correlates with policy verdicts being evaluated, and the volume increases in environments with higher traffic or more active policies. The issue does not indicate functional failure, but it increases log volume and may obscure real issues. | Logging Problems | eks-nodeagent | AWS, VPC CNI, Log Noise |
CRE-2025-0061 (Medium; Impact 7/10, Mitigation 4/10) | Karpenter Stability Issues on EKS During Leader Election | EKS may be able to handle steady, predictable scale, but struggles during large-scale autoscaling events when many workloads and nodes are spinning up or down simultaneously. This instability affects components that implement leader election using the Kubernetes API, such as aws-load-balancer-controller, karpenter, keda-operator, ebs-csi-controller, and efs-csi-controller. See the lease-staleness sketch below the table. | Stability Problems | karpenter | Karpenter, KEDA, AWS, EKS |
CRE-2025-0112 (Critical; Impact 10/10, Mitigation 4/10) | AWS VPC CNI Node IP Pool Depletion Crisis | Critical AWS VPC CNI node IP pool depletion detected causing cascading pod scheduling failures. This pattern indicates severe subnet IP address exhaustion combined with ENI allocation failures, leading to complete cluster networking breakdown. The failure sequence shows ipamd errors, kubelet scheduling failures, and controller-level pod creation blocks that render clusters unable to deploy new workloads, scale existing services, or recover from node failures. This represents one of the most severe Kubernetes infrastructure failures, often requiring immediate manual intervention including subnet expansion, secondary CIDR provisioning, or emergency workload termination to restore cluster functionality. A subnet-headroom sketch covering this entry and CRE-2025-0122 follows the table. | VPC CNI Problems | aws-vpc-cni | AWS, EKS, Kubernetes, Networking, VPC CNI, AWS CNI, IP Exhaustion, ENI Allocation, Subnet Exhaustion, Pod Scheduling Failure, Cluster Paralysis, AWS API Limits, Known Problem, Critical Infrastructure, Service Outage, Cascading Failure, Capacity Exceeded, Scalability Issue, Revenue Impact, Compliance Violation, Threshold Exceeded, Infrastructure, Public |
CRE-2025-0122 (Critical; Impact 10/10, Mitigation 6/10) | AWS VPC CNI IP Address Exhaustion Crisis | Critical AWS VPC CNI IP address exhaustion detected. This pattern indicates cascading failures where subnet IP exhaustion leads to ENI allocation failures, pod scheduling failures, and complete service unavailability. The failure sequence shows IP allocation errors, ENI attachment failures, and resulting pod startup failures that affect cluster scalability and workload deployment. | Networking Problems | aws-vpc-cni | AWS, VPC CNI, Kubernetes, Networking, IP Exhaustion, ENI Allocation, Pod Scheduling, Cluster Scaling, High Availability, Service Unavailability |
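
For CRE-2025-0026, a quick way to confirm the condition is to look for VolumeAttachment objects whose `spec.nodeName` is empty. A minimal sketch, assuming the official `kubernetes` Python client and kubeconfig access; the function name is illustrative, not part of the CRE:

```python
# Sketch: list VolumeAttachments with an empty spec.nodeName, the
# condition under which the EBS CSI controller skips detachment.
from kubernetes import client, config

def find_orphaned_volume_attachments():
    config.load_kube_config()  # or load_incluster_config() inside a pod
    storage = client.StorageV1Api()
    orphaned = []
    for va in storage.list_volume_attachment().items:
        # An empty/missing nodeName means the controller cannot resolve
        # which node to detach from, so the detach call is skipped.
        if not va.spec.node_name:
            orphaned.append((va.metadata.name,
                             va.spec.source.persistent_volume_name))
    return orphaned

if __name__ == "__main__":
    for name, pv in find_orphaned_volume_attachments():
        print(f"VolumeAttachment {name} (PV {pv}) has empty spec.nodeName")
```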
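For CRE-2025-0029, the failure typically surfaces as the SDK credential chain not resolving IRSA once a custom endpoint is in play. A minimal preflight sketch, assuming `boto3` runs under an IRSA-annotated service account; the bucket name and FIPS endpoint below are placeholders:

```python
# Sketch: verify web-identity credentials resolve before pointing a
# workload such as Loki at a custom S3 endpoint.
import os
import boto3

def check_irsa(bucket="loki-chunks",
               endpoint="https://s3-fips.us-gov-west-1.amazonaws.com"):
    # IRSA injects these two variables into the pod; if either is
    # missing, the SDK falls back to other credential providers.
    for var in ("AWS_ROLE_ARN", "AWS_WEB_IDENTITY_TOKEN_FILE"):
        if not os.environ.get(var):
            raise RuntimeError(f"IRSA env var {var} is not set")

    print("Assumed identity:",
          boto3.client("sts").get_caller_identity()["Arn"])

    # A custom endpoint_url must still carry the credential chain;
    # head_bucket fails fast if credentials or the endpoint are wrong.
    s3 = boto3.client("s3", endpoint_url=endpoint)
    s3.head_bucket(Bucket=bucket)
    print(f"Credentials resolved and {bucket} is reachable via {endpoint}")

if __name__ == "__main__":
    check_irsa()
```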
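For CRE-2025-0061, leader-election churn shows up as `coordination.k8s.io` Leases that stop being renewed. A minimal sketch, assuming the `kubernetes` Python client; the namespace and staleness threshold are assumptions to tune per component:

```python
# Sketch: flag Leases whose renewTime is stale, a symptom of
# leader-election instability during large scaling events.
from datetime import datetime, timezone
from kubernetes import client, config

STALE_AFTER_SECONDS = 30  # assumed; tune to the component's leaseDuration

def stale_leases(namespace="kube-system"):
    config.load_kube_config()
    coord = client.CoordinationV1Api()
    now = datetime.now(timezone.utc)
    stale = []
    for lease in coord.list_namespaced_lease(namespace).items:
        renew = lease.spec.renew_time
        if renew and (now - renew).total_seconds() > STALE_AFTER_SECONDS:
            stale.append((lease.metadata.name, lease.spec.holder_identity))
    return stale

if __name__ == "__main__":
    for name, holder in stale_leases():
        print(f"Lease {name} held by {holder} has not been renewed recently")
```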
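For CRE-2025-0112 and CRE-2025-0122, the leading indicator is shrinking subnet headroom. A minimal sketch, assuming `boto3` and that cluster subnets carry a `kubernetes.io/cluster/<name>` tag; the tag key and warning threshold are placeholders:

```python
# Sketch: report AvailableIpAddressCount for a cluster's subnets so IP
# exhaustion is caught before ENI allocation and pod scheduling fail.
import boto3

def subnet_headroom(cluster_tag_key="kubernetes.io/cluster/my-cluster",
                    warn_below=50):
    ec2 = boto3.client("ec2")
    resp = ec2.describe_subnets(
        Filters=[{"Name": "tag-key", "Values": [cluster_tag_key]}]
    )
    for subnet in resp["Subnets"]:
        free = subnet["AvailableIpAddressCount"]
        flag = "  <-- LOW, pods may fail to schedule" if free < warn_below else ""
        print(f"{subnet['SubnetId']} {subnet['CidrBlock']}: {free} free IPs{flag}")

if __name__ == "__main__":
    subnet_headroom()
```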