Tag: Otel Collector
Failures in OpenTelemetry Collector pipelines or exporters.
ID | Severity | Impact | Mitigation | Title | Description | Category | Technology | Tags |
---|---|---|---|---|---|---|---|---|
CRE-2025-0033 | Low | 7/10 | 4/10 | OpenTelemetry Collector refuses to scrape due to memory pressure | The OpenTelemetry Collector may refuse to ingest metrics during a Prometheus scrape if it exceeds its configured memory limits. When the `memory_limiter` processor is enabled, the Collector actively drops data to prevent out-of-memory errors, logging that data was refused due to high memory usage (see the `memory_limiter` sketch after the table). | Observability Problems | opentelemetry-collector | Otel Collector, Prometheus, Memory, Metrics, Backpressure, Data Loss, Known Issue, Public |
CRE-2025-0036 | Low | 6/10 | 3/10 | OpenTelemetry Collector drops data due to 413 Payload Too Large from exporter target | The OpenTelemetry Collector may drop telemetry data when an exporter backend responds with a 413 Payload Too Large error. This typically happens when large batches of metrics, logs, or traces exceed the maximum payload size the backend accepts. By default, the Collector drops these payloads unless retry behavior is explicitly enabled (see the batching and retry sketch after the table). | Observability Problems | opentelemetry-collector | Otel Collector, Exporter, Payload, Batch, Drop, Observability, Telemetry, Known Issue, Public |
CRE-2025-0037 | Low | 8/10 | 4/10 | OpenTelemetry Collector panics on nil attribute value in Prometheus Remote Write translator | The OpenTelemetry Collector can panic with a nil pointer dereference in the Prometheus Remote Write exporter. The issue occurs when the translator assumes attribute values are strings but the internal representation is nil or of an incompatible type, triggering a `SIGSEGV` segmentation fault that crashes the Collector. | Observability Problems | opentelemetry-collector | Crash, Prometheus, Otel Collector, Exporter, Panic, Translation, Attribute, Nil Pointer, Known Issue, Public |
CRE-2025-0039 | Medium | 5/10 | 3/10 | OpenTelemetry Collector exporter experiences retryable errors due to backend unavailability | The OpenTelemetry Collector may intermittently fail to export telemetry data when the backend API is unavailable or overloaded. These failures manifest as timeouts (`context deadline exceeded`) or transient HTTP 502 responses. Retry logic is typically enabled, but repeated failures can delay delivery and create backpressure in the pipeline (see the timeout and queue sketch after the table). | Observability Problems | opentelemetry-collector | Otel Collector, Exporter, Timeout, Retry, Network, Telemetry, Known Issue, Public |
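
For CRE-2025-0033, refusal under memory pressure comes from the `memory_limiter` processor. Below is a minimal pipeline sketch with the processor enabled; the scrape target, exporter endpoint, and limit values are illustrative assumptions and should be sized to the Collector's real memory budget.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: example              # placeholder scrape job
          static_configs:
            - targets: ["localhost:9090"]

processors:
  # memory_limiter periodically checks process memory and refuses
  # incoming data once limits are crossed, instead of OOM-crashing.
  memory_limiter:
    check_interval: 1s     # how often memory usage is sampled
    limit_mib: 1500        # hard limit: data is refused above this
    spike_limit_mib: 300   # headroom subtracted to derive the soft limit

exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder backend

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [memory_limiter]
      exporters: [otlp]
```

The upstream documentation recommends putting `memory_limiter` first in the processor chain, so data is refused before any downstream work is done and receivers that support it see the refusal as backpressure rather than silent loss.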
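
For CRE-2025-0036, the usual mitigation is twofold: cap batch sizes so serialized payloads stay under the backend's limit, and enable retries for the transient cases. A sketch, assuming an OTLP/HTTP backend; the endpoint and every size and interval below are placeholder assumptions that depend on what the backend actually accepts.

```yaml
processors:
  batch:
    send_batch_size: 512        # target number of items per batch
    send_batch_max_size: 1024   # hard cap, keeps payloads bounded

exporters:
  otlphttp:
    endpoint: https://backend.example.com  # placeholder backend
    retry_on_failure:
      enabled: true
      initial_interval: 5s      # first backoff delay
      max_interval: 30s         # backoff ceiling
      max_elapsed_time: 300s    # give up (and drop the batch) after this
```

Retries only help when the failure is transient: a batch that exceeds the backend's payload limit will draw a 413 on every attempt, so lowering `send_batch_max_size` is the durable fix.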
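
For CRE-2025-0039, the exporter helper's timeout, retry, and queue settings determine how the Collector rides out backend unavailability. A minimal sketch, assuming an OTLP/gRPC backend; the endpoint and all durations and sizes are illustrative assumptions.

```yaml
exporters:
  otlp:
    endpoint: backend.example.com:4317   # placeholder backend
    timeout: 30s             # per-request deadline; exceeding it surfaces
                             # as "context deadline exceeded"
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s # batches are dropped once this is exhausted
    sending_queue:
      enabled: true
      num_consumers: 10      # parallel senders draining the queue
      queue_size: 5000       # buffered batches; absorbs short outages
```

A larger `sending_queue` rides out longer outages at the cost of memory, which in turn interacts with the `memory_limiter` limits sketched above; once `max_elapsed_time` is exhausted, the batch is dropped.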