
Tag: Data Loss

Problems where data is lost or dropped due to system failures or processing errors

CRE-2025-0033
Severity: Low
Impact: 7/10
Mitigation: 4/10
Title: OpenTelemetry Collector refuses to scrape due to memory pressure
Description: The OpenTelemetry Collector may refuse to ingest metrics during a Prometheus scrape if it exceeds its configured memory limits. When the `memory_limiter` processor is enabled, the Collector actively drops data to prevent out-of-memory errors, resulting in log messages indicating that data was refused due to high memory usage.
Category: Observability Problems
Technology: opentelemetry-collector
Tags: Otel Collector, Prometheus, Memory, Metrics, Backpressure, Data Loss, Known Issue, Public
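For context, the refusal behavior above is governed by a small block of Collector configuration. A minimal sketch, with illustrative limits and pipeline component names (the `prometheus`/`batch`/`otlphttp` entries are assumptions for this example, not part of the CRE):

```yaml
processors:
  memory_limiter:
    check_interval: 1s    # how often the processor samples memory usage
    limit_mib: 4000       # hard limit: above this, a garbage collection is forced
    spike_limit_mib: 800  # soft limit = limit_mib - spike_limit_mib; above it, data is refused

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [memory_limiter, batch]  # memory_limiter is conventionally placed first
      exporters: [otlphttp]
```

Because refusals exert backpressure on the receiver, and a Prometheus scrape sample cannot be re-requested, data refused at the soft limit is typically dropped outright, which is the data-loss mode this entry describes.
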
CRE-2025-0070
Severity: Critical
Impact: 10/10
Mitigation: 6/10
Title: Kafka Under-Replicated Partitions Crisis
Description: Critical Kafka cluster degradation: multiple partitions have lost replicas due to a broker failure and are now under-replicated. This pattern indicates a broker has become unavailable, causing partition leadership changes and In-Sync Replica (ISR) shrinkage across multiple topics.
Category: Message Queue Problems
Technology: kafka
Tags: Kafka, Replication, Data Loss, High Availability, Broker Failure, Cluster Degradation
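One way to confirm the condition from Kafka's own tooling is the `--under-replicated-partitions` filter on `kafka-topics.sh`; the bootstrap address below is illustrative:

```shell
# List every partition whose in-sync replica set is smaller than its full replica set
kafka-topics.sh --bootstrap-server localhost:9092 \
  --describe --under-replicated-partitions
```

In a healthy cluster this prints nothing; any output identifies the topics and partitions affected by the broker failure. The same signal is exposed as the broker metric `kafka.server:type=ReplicaManager,name=UnderReplicatedPartitions`, which should sit at zero.
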
CRE-2025-0073
Severity: High
Impact: 9/10
Mitigation: 6/10
Title: Redis Rejects Writes Due to Reaching 'maxmemory' Limit
Description: The Redis instance has reached its configured 'maxmemory' limit. Because the active eviction policy does not permit removing existing keys to free space (as with 'noeviction', which is often the default), Redis rejects new write commands with an "OOM command not allowed" error to the client.
Category: Database Problems
Technology: redis-cli
Tags: Redis, Redis CLI, Memory Pressure, Memory, Data Loss, Public
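A minimal `redis-cli` session for confirming the condition; the final command shows one possible mitigation (switching to an LRU eviction policy), which is only appropriate when evicting keys is acceptable for the workload:

```shell
# Inspect the limit, the active policy, and current usage
redis-cli CONFIG GET maxmemory
redis-cli CONFIG GET maxmemory-policy    # 'noeviction' means writes fail at the limit
redis-cli INFO memory | grep -E 'used_memory_human|maxmemory_human'

# Possible mitigation: evict least-recently-used keys instead of rejecting writes
redis-cli CONFIG SET maxmemory-policy allkeys-lru
```

Raising 'maxmemory' is the other common remedy, but it only postpones the error unless the underlying memory growth is addressed.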