
CRE-2024-0008

RabbitMQ memory alarm

Severity: High
Impact: 9/10
Mitigation: 6/10


Description

A RabbitMQ node has entered the “memory alarm” state because the total memory used by the Erlang VM (plus allocated binaries, ETS tables, and processes) has exceeded the configured `vm_memory_high_watermark`. While the alarm is active the broker applies flow control, blocking publishers and pausing most ingress activity to protect itself from running out of RAM.
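The watermark that triggers this alarm is set in `rabbitmq.conf`. A minimal fragment showing the relevant keys (0.4, i.e. 40% of detected RAM, is the default):

```
# rabbitmq.conf
# Trigger the memory alarm when the node uses 40% of detected RAM (the default).
vm_memory_high_watermark.relative = 0.4

# Alternatively, set an absolute limit instead of a fraction:
# vm_memory_high_watermark.absolute = 2GB
```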

Mitigation

- Inspect memory usage to identify queues or processes consuming RAM:
  `rabbitmq-diagnostics memory_breakdown -n <node>`
- Purge or truncate unused or dead-letter queues holding many messages.
- Scale out or speed up consumers so messages are acked and cleared from memory faster.
- Enable lazy queues, or set TTLs / max-length limits for queues that can tolerate disk-based storage.
- Increase the node’s memory allocation or raise `vm_memory_high_watermark` (e.g., to 0.8) **only** after confirming the host has sufficient physical RAM.
- In Kubernetes, also raise the Pod memory limit so the broker can use the additional headroom without being OOM-killed.
- Long term: review message sizes, batch patterns, and queue topology to prevent unbounded growth.
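The first few steps above can be sketched as shell commands. This is a dry-run sketch: `NODE` and `QUEUE` are placeholders for your node and queue names, and the `run` helper only prints the commands by default, since they require a live broker to execute.

```shell
#!/bin/sh
# Placeholders -- replace with your actual node and queue names.
NODE="${NODE:-rabbit@localhost}"
QUEUE="${QUEUE:-my-dead-letter-queue}"

# Dry-run helper: prints each command instead of executing it unless DRY_RUN=0.
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "+ $*"; else "$@"; fi; }

# 1. Identify which queues/processes are consuming RAM on the node.
run rabbitmq-diagnostics memory_breakdown -n "$NODE"

# 2. Purge a backlog queue whose messages can tolerate being dropped.
run rabbitmqctl purge_queue "$QUEUE" -n "$NODE"

# 3. Cap future growth: apply a max-length + message-TTL policy to matching queues.
run rabbitmqctl set_policy caps "^$QUEUE\$" \
  '{"max-length":100000,"message-ttl":3600000}' --apply-to queues -n "$NODE"
```

Set `DRY_RUN=0` to actually execute the commands against a running node.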
