
Technology: opentelemetry-collector

CRE-2025-0033
Severity: Low
Impact: 7/10
Mitigation: 4/10
Title: OpenTelemetry Collector refuses to scrape due to memory pressure
Description: The OpenTelemetry Collector may refuse to ingest metrics during a Prometheus scrape if it exceeds its configured memory limits. When the `memory_limiter` processor is enabled, the Collector actively drops data to prevent out-of-memory errors, and logs messages indicating that data was refused due to high memory usage.
Category: Observability Problems
Technology: opentelemetry-collector
Tags: Otel Collector, Prometheus, Memory, Metrics, Backpressure, Data Loss, Known Issue, Public
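A minimal `memory_limiter` sketch, assuming a collector with roughly 4 GiB of memory available; the limits, receiver, and exporter names below are illustrative placeholders, not recommendations:

```yaml
processors:
  memory_limiter:
    check_interval: 1s      # how often memory usage is sampled
    limit_mib: 4000         # hard limit; new data is refused above this
    spike_limit_mib: 800    # headroom reserved for short-lived spikes

service:
  pipelines:
    metrics:
      receivers: [prometheus]               # placeholder receiver
      processors: [memory_limiter, batch]   # memory_limiter runs first
      exporters: [otlp]                     # placeholder exporter
```

Placing `memory_limiter` first in the processor chain lets it push back on the receiver before later processors allocate more memory.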
CRE-2025-0036
Severity: Low
Impact: 6/10
Mitigation: 3/10
Title: OpenTelemetry Collector drops data due to 413 Payload Too Large from exporter target
Description: The OpenTelemetry Collector may drop telemetry data when an exporter backend responds with a 413 Payload Too Large error. This typically happens when large batches of metrics, logs, or traces exceed the maximum payload size accepted by the backend. By default, the collector drops these payloads unless retry behavior is explicitly enabled.
Category: Observability Problems
Technology: opentelemetry-collector
Tags: Otel Collector, Exporter, Payload, Batch, Drop, Observability, Telemetry, Known Issue, Public
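One mitigation sketch, assuming the backend's payload limit can be respected by shrinking batches: cap batch size with the `batch` processor and enable exporter retries. The sizes and endpoint are placeholders:

```yaml
processors:
  batch:
    send_batch_size: 2048      # target number of items per batch
    send_batch_max_size: 2048  # hard upper bound on a single batch

exporters:
  otlphttp:
    endpoint: https://backend.example.com   # placeholder endpoint
    retry_on_failure:
      enabled: true            # retries transient errors only
```

A 413 is generally treated as a permanent, non-retryable error, so lowering `send_batch_max_size` is the more reliable fix; retries mainly help with transient failures.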
CRE-2025-0037
Severity: Low
Impact: 8/10
Mitigation: 4/10
Title: OpenTelemetry Collector panics on nil attribute value in Prometheus Remote Write translator
Description: The OpenTelemetry Collector can panic due to a nil pointer dereference in the Prometheus Remote Write exporter. The issue occurs when attribute values are assumed to be strings but the internal representation is nil or of an incompatible type, leading to a runtime `SIGSEGV` (segmentation fault) that crashes the collector.
Category: Observability Problems
Technology: opentelemetry-collector
Tags: Crash, Prometheus, Otel Collector, Exporter, Panic, Translation, Attribute, Nil Pointer, Known Issue, Public
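Until a fixed collector release is deployed, one defensive sketch is to coerce suspect attributes to strings before they reach the translator, assuming the `attributes` processor's `convert` action; the attribute key here is hypothetical:

```yaml
processors:
  attributes/sanitize:
    actions:
      - key: example.attribute   # hypothetical key seen with a non-string value
        action: convert
        converted_type: string   # force a string representation

service:
  pipelines:
    metrics:
      receivers: [prometheus]              # placeholder receiver
      processors: [attributes/sanitize]
      exporters: [prometheusremotewrite]
```

Upgrading to a collector version that fixes the translator is the real remedy; this sketch only guards a known-bad attribute.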
CRE-2025-0039
Severity: Medium
Impact: 5/10
Mitigation: 3/10
Title: OpenTelemetry Collector exporter experiences retryable errors due to backend unavailability
Description: The OpenTelemetry Collector may intermittently fail to export telemetry data when the backend API is unavailable or overloaded. These failures surface as timeouts (`context deadline exceeded`) or transient HTTP 502 responses. Although retry logic is typically enabled, repeated failures can delay delivery and build backpressure in the pipeline.
Category: Observability Problems
Technology: opentelemetry-collector
Tags: Otel Collector, Exporter, Timeout, Retry, Network, Telemetry, Known Issue, Public
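A hedged sketch of the exporter-side knobs involved, using the standard `retry_on_failure` and `sending_queue` settings; the endpoint and values are placeholders:

```yaml
exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder endpoint
    timeout: 10s             # per-attempt bound; exceeding it yields `context deadline exceeded`
    retry_on_failure:
      enabled: true
      initial_interval: 5s   # first backoff delay
      max_interval: 30s      # cap on backoff growth
      max_elapsed_time: 300s # total retry budget before the data is dropped
    sending_queue:
      enabled: true
      queue_size: 5000       # batches buffered while the backend is unavailable
```

A larger queue absorbs longer outages at the cost of memory, while `max_elapsed_time` bounds how long retries can hold data before it is dropped.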