CRE-2024-0008
RabbitMQ memory alarm
Severity: High
Impact: 9/10
Mitigation: 6/10
Description
A RabbitMQ node has entered the “memory alarm” state because the total memory used by the Erlang VM (plus allocated binaries, ETS tables, and processes) has exceeded the configured `vm_memory_high_watermark`. While the alarm is active the broker applies flow-control, blocking publishers and pausing most ingress activity to protect itself from running out of RAM.
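The trigger condition above can be sketched in a few lines. This is a minimal illustration of the watermark arithmetic, not RabbitMQ's actual implementation; the function name and the 16 GiB host are hypothetical, but the default relative watermark of 0.4 matches RabbitMQ's documented default.

```python
# Minimal sketch (not RabbitMQ source): when does the memory alarm fire?
# vm_memory_high_watermark is a fraction of total system RAM (default 0.4).

def alarm_active(total_ram_bytes: int, used_bytes: int,
                 watermark: float = 0.4) -> bool:
    """Return True when the node's memory use exceeds the configured watermark."""
    threshold = total_ram_bytes * watermark
    return used_bytes > threshold

GIB = 1024 ** 3
# On a 16 GiB host with the default watermark, the threshold is ~6.4 GiB,
# so a node using 7 GiB is in the alarm state and blocks publishers.
print(alarm_active(16 * GIB, 7 * GIB))   # → True
print(alarm_active(16 * GIB, 5 * GIB))   # → False
```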
Mitigation
- Inspect memory usage to identify queues or processes consuming RAM: `rabbitmq-diagnostics memory_breakdown -n <node>`
- Purge or truncate unused / dead-letter queues holding many messages.
- Scale out or speed up consumers so messages are acked and cleared from memory faster.
- Enable "lazy queues" or set TTLs / max-length limits for queues that can tolerate disk-based storage.
- Increase the node's memory allocation or raise `vm_memory_high_watermark` (e.g., to 0.8) **only** after confirming the host has sufficient physical RAM.
- In Kubernetes, also raise the Pod memory limit so the broker can use the additional headroom without being OOM-killed.
- Long-term: review message sizes, batch patterns, and queue topology to prevent unbounded growth.
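The first few steps above might look like this in practice. These commands assume a live broker; the node name, the queue name `orders`, the policy name `cap-memory`, and the vhost `/` are illustrative placeholders, and the limits in the policy are examples to adapt, not recommendations.

```shell
# Break down where the node's memory is going (queue processes, binaries, ETS, ...)
rabbitmq-diagnostics memory_breakdown -n rabbit@<node>

# Purge a queue that is safe to empty (hypothetical queue name "orders")
rabbitmqctl purge_queue orders -p /

# Cap queue growth: lazy mode, a max-length limit, and a 1-hour message TTL
# (policy name "cap-memory" and the ".*" pattern are illustrative)
rabbitmqctl set_policy cap-memory ".*" \
  '{"queue-mode":"lazy","max-length":100000,"message-ttl":3600000}' \
  --apply-to queues -p /
```

Applying limits via a policy (rather than queue-declare arguments) lets you adjust them later without redeclaring queues.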
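For the last-resort steps of raising the watermark together with the Pod limit, the configuration fragments below are a sketch; the 0.8 value and the 8Gi limit are illustrative and should only be used after confirming the host or node actually has that much physical RAM to spare.

```ini
# rabbitmq.conf — watermark as a fraction of total available RAM
vm_memory_high_watermark.relative = 0.8
```

```yaml
# Kubernetes container spec fragment — give the broker headroom
# above the watermark so the kernel does not OOM-kill it first
resources:
  limits:
    memory: 8Gi
```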