CRE-2024-0007
The underlying Erlang process, Mnesia, is overloaded (`** WARNING ** Mnesia is overloaded`).
A RabbitMQ node has entered the “memory alarm” state because the total memory used by the Erlang VM (plus allocated binaries, ETS tables, and other runtime data) has crossed the configured high watermark (`vm_memory_high_watermark`).
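As an illustrative check (not part of the original entry), the alarm state can be read from the RabbitMQ management HTTP API. The host, credentials, and the default management port 15672 below are assumptions about a typical deployment.

```python
# Sketch: poll the RabbitMQ management API and report each node's memory use
# relative to the configured high watermark, plus whether the memory alarm is
# currently raised. Host and credentials are illustrative placeholders.
import requests

def check_memory_alarms(host="localhost", user="guest", password="guest"):
    resp = requests.get(f"http://{host}:15672/api/nodes",
                        auth=(user, password), timeout=5)
    resp.raise_for_status()
    for node in resp.json():
        used = node.get("mem_used", 0)
        limit = node.get("mem_limit", 1)   # watermark expressed in bytes
        alarmed = node.get("mem_alarm", False)
        print(f"{node['name']}: {used / limit:.0%} of watermark, alarm={alarmed}")

if __name__ == "__main__":
    check_memory_alarms()
```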
The Erlang VM has reported a **`busy_dist_port`** condition, meaning the send buffer of a distribution port (used for inter-node traffic inside a cluster) is full, so processes that send on that port are suspended until the buffer drains.
The Google Kubernetes Engine metrics agent is failing to export metrics.
OVN daemons (e.g., ovn-controller) are stuck in a tight poll loop, driving CPU to 100%. Logs show “Dropped … due to excessive rate” or “Unreasonably long … poll interval” warnings.
KEDA allows for fine-grained autoscaling (including to/from zero) for event-driven Kubernetes workloads. KEDA serves as a Kubernetes Metrics Server and allows users to define autoscaling rules using a dedicated Kubernetes custom resource definition.
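Purely as a sketch of what such a rule looks like (not from the original entry), the snippet below creates a KEDA `ScaledObject` with the Python Kubernetes client. The deployment name, namespace, and trigger metadata are made-up placeholders, and a real trigger would also need connection details for the event source.

```python
# Sketch: define a ScaledObject (API group keda.sh/v1alpha1) that scales a
# Deployment between 0 and 10 replicas based on an event-source trigger.
# All names and trigger metadata are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

scaled_object = {
    "apiVersion": "keda.sh/v1alpha1",
    "kind": "ScaledObject",
    "metadata": {"name": "worker-scaler", "namespace": "default"},
    "spec": {
        "scaleTargetRef": {"name": "worker"},  # Deployment to scale
        "minReplicaCount": 0,                  # allow scale-to-zero
        "maxReplicaCount": 10,
        "triggers": [{
            "type": "rabbitmq",
            "metadata": {"queueName": "tasks", "value": "20"},  # placeholder trigger
        }],
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="keda.sh", version="v1alpha1", namespace="default",
    plural="scaledobjects", body=scaled_object,
)
```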
When an NGINX upstream becomes unreachable or its DNS entry disappears, NGINX requests begin to fail.
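As a rough diagnostic sketch (the hostname and port are placeholders, not from the original), one can confirm whether the upstream's name still resolves and whether the resolved addresses accept TCP connections:

```python
# Sketch: verify that an upstream's DNS name still resolves and that the
# resolved addresses accept TCP connections. Hostname and port are placeholders.
import socket

def probe_upstream(host="backend.internal", port=8080, timeout=3.0):
    try:
        addrs = {info[4][0] for info in
                 socket.getaddrinfo(host, port, proto=socket.IPPROTO_TCP)}
    except socket.gaierror as err:
        print(f"DNS lookup failed for {host}: {err}")
        return
    for addr in addrs:
        try:
            socket.create_connection((addr, port), timeout=timeout).close()
            print(f"{addr}:{port} reachable")
        except OSError as err:
            print(f"{addr}:{port} unreachable: {err}")

if __name__ == "__main__":
    probe_upstream()
```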
When the configured replication factor for a Kafka topic is greater than the actual number of brokers in the cluster, Kafka repeatedly fails to assign partitions and logs replication-related errors. This results in persistent warnings or an `InvalidReplicationFactorException` when the broker tries to create internal or user-defined topics.
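As a hedged illustration (broker address and topic name are placeholders), the same rejection can be reproduced from a client with confluent-kafka by requesting a replication factor larger than the broker count:

```python
# Sketch: try to create a topic whose replication factor exceeds the number of
# brokers; on a single-broker cluster the request is rejected with an
# INVALID_REPLICATION_FACTOR error. Broker address and topic name are placeholders.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "localhost:9092"})
futures = admin.create_topics(
    [NewTopic("demo-topic", num_partitions=3, replication_factor=3)]
)

for topic, future in futures.items():
    try:
        future.result()  # raises if the broker rejected the request
        print(f"created {topic}")
    except Exception as err:
        print(f"failed to create {topic}: {err}")
```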
Critical depletion of the AWS VPC CNI's node IP address pool has been detected, causing cascading pod scheduling failures.
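As an illustrative diagnostic only (the subnet IDs and warning threshold are assumptions), remaining capacity in the subnets backing the VPC CNI can be checked with boto3:

```python
# Sketch: report how many free IP addresses remain in the subnets used by the
# VPC CNI. Subnet IDs and the warning threshold are illustrative assumptions.
import boto3

def report_subnet_capacity(subnet_ids, warn_below=50):
    ec2 = boto3.client("ec2")
    resp = ec2.describe_subnets(SubnetIds=subnet_ids)
    for subnet in resp["Subnets"]:
        free = subnet["AvailableIpAddressCount"]
        flag = "LOW" if free < warn_below else "ok"
        print(f"{subnet['SubnetId']} ({subnet['CidrBlock']}): {free} free IPs [{flag}]")

if __name__ == "__main__":
    report_subnet_capacity(["subnet-0123456789abcdef0"])
```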
During rolling updates, when a deployment's maxUnavailable setting conflicts with a PodDisruptionBudget that leaves no room for disruptions, pod evictions are blocked and the rollout can stall or proceed with fewer ready replicas than intended.
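A small arithmetic sketch (numbers are illustrative, not from the original) of why the rollout can stall: when the PodDisruptionBudget leaves zero allowed disruptions, no pod can be taken down voluntarily during the update.

```python
# Sketch: compare how many pods a rolling update wants to take down at once
# with how many voluntary disruptions the PodDisruptionBudget still allows.
# All numbers below are illustrative.
def allowed_disruptions(ready_pods: int, min_available: int) -> int:
    # PDB semantics: disruptions are allowed only while ready pods stay >= minAvailable.
    return max(ready_pods - min_available, 0)

replicas = 3
max_unavailable = 1      # Deployment strategy: rollingUpdate.maxUnavailable
min_available = 3        # PDB: minAvailable equal to the replica count

budget = allowed_disruptions(ready_pods=replicas, min_available=min_available)
if budget < max_unavailable:
    print(f"Rollout can stall: wants to disrupt {max_unavailable} pod(s), "
          f"but the PDB allows only {budget}.")
```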
There is a known issue in the Strimzi Kafka Topic Operator where the operator thread can become blocked, causing it to stop processing events and build up a backlog. The operator then becomes unresponsive, leading to liveness probe failures and restarts of the Strimzi Kafka Topic Operator.
Telepresence 2.5.x versions suffer from a critical TLS handshake error between the mutating webhook and the agent injector.
The ingress-nginx controller's worker processes are crashing because too many workers are spawned for the resource limits specified on this deployment.
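As an illustrative check (assuming cgroup v2 paths, and assuming the worker count follows the detected CPU count), the number of CPUs the controller sees can be compared with the container's actual CPU quota:

```python
# Sketch: compare the host CPU count (what a "worker_processes auto" style
# setting typically detects) with the container's cgroup v2 CPU quota.
# The cgroup path and the auto-detection assumption are illustrative.
import os

def cgroup_cpu_limit(path="/sys/fs/cgroup/cpu.max"):
    try:
        quota, period = open(path).read().split()
        return None if quota == "max" else float(quota) / float(period)
    except FileNotFoundError:
        return None  # not running under cgroup v2

host_cpus = os.cpu_count()
limit = cgroup_cpu_limit()
print(f"detected CPUs: {host_cpus}, cgroup CPU limit: {limit}")
if limit is not None and host_cpus > limit:
    print("worker processes sized from host CPUs may exceed the container's limits")
```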