CRE-2025-0162
Stable Diffusion WebUI CUDA Out of Memory Crash
Severity: High | Impact: 8/10 | Mitigation: 7/10
Description
Detects critical CUDA out of memory errors in Stable Diffusion WebUI that cause image generation failures and application crashes. This occurs when GPU VRAM is exhausted during model loading or image generation, resulting in complete task failure and potential WebUI instability.
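The failure typically surfaces as a torch.cuda.OutOfMemoryError raised by PyTorch inside the WebUI. A minimal recovery sketch, assuming PyTorch >= 1.13 and a CUDA device; the tensor shapes are placeholders, not the WebUI's real code path:

```python
import gc
import torch

def generate(latent_shape=(1, 4, 128, 128)):
    # Stand-in for a denoising/VAE-decode step that allocates large GPU buffers.
    return torch.randn(*latent_shape, device="cuda")

try:
    image = generate()
except torch.cuda.OutOfMemoryError:
    # Drop Python references, return cached blocks to the driver, and retry
    # at a smaller size instead of letting the whole generation task crash.
    gc.collect()
    torch.cuda.empty_cache()
    image = generate(latent_shape=(1, 4, 64, 64))
```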
Cause
- Insufficient GPU VRAM for requested image resolution or batch size
- Memory fragmentation preventing large contiguous allocations (see the diagnostic sketch after this list)
- Model loading exceeding available VRAM capacity
- Concurrent GPU processes consuming memory
- High-resolution image generation without memory optimization flags
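A rough way to tell fragmentation apart from competing GPU processes, using only standard PyTorch memory counters; this is a diagnostic sketch run inside the Python process, and the 1 GiB gap thresholds are arbitrary:

```python
import torch

free, total = torch.cuda.mem_get_info()     # driver-level view, in bytes
allocated = torch.cuda.memory_allocated()   # live tensors owned by this process
reserved = torch.cuda.memory_reserved()     # cached blocks held by the allocator

gib = 2**30
print(f"total {total/gib:.1f} GiB, free {free/gib:.1f} GiB")
print(f"allocated {allocated/gib:.1f} GiB, reserved {reserved/gib:.1f} GiB")

# A large gap between reserved and allocated suggests a fragmented allocator
# cache; a large gap between (total - free) and reserved suggests other
# processes on the GPU are consuming the VRAM.
if reserved - allocated > 1 * gib:
    print("hint: fragmented allocator cache; try torch.cuda.empty_cache()")
if (total - free) - reserved > 1 * gib:
    print("hint: other GPU processes hold significant memory (check nvidia-smi)")
```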
Mitigation
IMMEDIATE ACTIONS:
- Restart Stable Diffusion WebUI
- Clear GPU memory with nvidia-smi --gpu-reset (requires root and an otherwise idle GPU; see the sketch after this list for checking what is still resident)
- Add memory optimization flags: --medvram or --lowvram
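Before attempting a reset, confirm which processes are still holding VRAM. The query flags below are standard nvidia-smi options; the wrapper script itself is only an illustrative sketch:

```python
import subprocess

def gpu_processes():
    """Return (pid, name, used_MiB) for each compute process on the GPU."""
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-compute-apps=pid,process_name,used_gpu_memory",
        "--format=csv,noheader,nounits",
    ], text=True)
    rows = []
    for line in out.splitlines():
        pid, name, used = (field.strip() for field in line.split(",")[:3])
        rows.append((int(pid), name, int(used)))
    return rows

for pid, name, used in gpu_processes():
    print(f"pid {pid} ({name}) holds {used} MiB")
```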
CONFIGURATION FIXES:
- For 4-6 GB VRAM: add --medvram to COMMANDLINE_ARGS in webui-user.bat (or webui-user.sh on Linux)
- For 2-4 GB VRAM: add --lowvram instead
- Enable the xformers attention optimization with --xformers to reduce memory use per step
- Add --always-batch-cond-uncond for batch processing
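As a sketch of how the flags above could be chosen automatically per GPU, equivalent to editing COMMANDLINE_ARGS by hand; the 4 GiB / 6 GiB thresholds mirror the guidance in this list, and launch.py as the entry point assumes a standard AUTOMATIC1111 checkout:

```python
import subprocess
import torch

# Pick memory flags based on total VRAM of the default GPU.
_, total_bytes = torch.cuda.mem_get_info()
total_gib = total_bytes / 2**30

flags = ["--xformers"]
if total_gib <= 4:
    flags.append("--lowvram")
elif total_gib <= 6:
    flags.append("--medvram")

# Equivalent to setting COMMANDLINE_ARGS in webui-user.bat / webui-user.sh.
subprocess.run(["python", "launch.py", *flags], check=True)
```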
RUNTIME ADJUSTMENTS:
- Reduce image resolution (512x512 instead of 1024x1024)
- Decrease batch size to 1
- Lower batch count for multiple generations
- Set PYTORCH_CUDA_ALLOC_CONF=garbage_collection_threshold:0.9,max_split_size_mb:512
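The allocator variable above must be in the environment before CUDA is initialized. A minimal sketch that sets it at the top of the entry script rather than after torch has already configured its allocator (setting it in webui-user.bat/.sh works equally well):

```python
import os

# Must be set before the CUDA caching allocator is initialized.
os.environ.setdefault(
    "PYTORCH_CUDA_ALLOC_CONF",
    "garbage_collection_threshold:0.9,max_split_size_mb:512",
)

import torch  # the allocator reads the variable on first CUDA use

print(torch.cuda.is_available())
```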
PREVENTION:
- Monitor GPU memory usage with nvidia-smi (a polling sketch follows this list)
- Scale resolution up gradually instead of jumping straight to the maximum size (e.g., verify 512x512 works before attempting 1024x1024)
- Use cloud services for high-resolution generation
- Upgrade to GPU with minimum 8GB VRAM
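A lightweight polling sketch around nvidia-smi for the monitoring item above; the 90% threshold and 10-second interval are arbitrary choices, not part of the rule, and a single-GPU host is assumed:

```python
import subprocess
import time

THRESHOLD = 0.90   # fraction of VRAM treated as "close to OOM"

def vram_usage_mib(gpu_index=0):
    out = subprocess.check_output([
        "nvidia-smi",
        "--query-gpu=memory.used,memory.total",
        "--format=csv,noheader,nounits",
    ], text=True)
    used, total = (int(v) for v in out.splitlines()[gpu_index].split(","))
    return used, total

while True:
    used, total = vram_usage_mib()
    if used / total >= THRESHOLD:
        print(f"warning: {used}/{total} MiB VRAM in use; reduce resolution or batch size")
    time.sleep(10)
```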