Can I run Qwen 2.5 72B (Q4_K_M, GGUF 4-bit) on an NVIDIA RTX 3090?

Result: Fail (out of memory). This GPU does not have enough VRAM.

GPU VRAM: 24.0GB
Required: 36.0GB
Headroom: -12.0GB

VRAM Usage: 100% of 24.0GB used

Technical Analysis

The NVIDIA RTX 3090 cannot run Qwen 2.5 72B in this configuration. The model requires 36.0GB of VRAM but only 24.0GB is available, leaving you 12.0GB short.
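The 36.0GB figure follows from a flat 4 bits per weight. Here is a minimal sketch of that arithmetic, assuming weights-only memory; real Q4_K_M files average closer to ~4.8 bits/weight, and the KV cache and runtime buffers add more on top, so treat this as a lower bound:

```python
def required_vram_gb(params_billion: float, bits_per_weight: float) -> float:
    """Weights-only VRAM estimate in decimal gigabytes (10^9 bytes)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

required = required_vram_gb(72, 4.0)  # 72e9 params * 4 bits / 8 bits-per-byte
available = 24.0                      # RTX 3090 VRAM
print(f"required: {required:.1f}GB, headroom: {available - required:+.1f}GB")
# required: 36.0GB, headroom: -12.0GB
```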

Recommendation

Consider a more aggressive quantization (e.g. Q3_K_M or Q2_K, since this configuration is already Q4_K_M) to reduce VRAM requirements, or upgrade to a GPU with more VRAM. Cloud GPU services such as RunPod or Vast.ai offer affordable options.
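To see how far each quantization level gets you, here is a rough comparison. The bits-per-weight figures are ballpark assumptions for llama.cpp K-quants, not exact values, and actual GGUF file sizes vary by model; with realistic figures, even Q2_K is borderline on a 24.0GB card once runtime overhead is counted, which is why a larger GPU or a cloud instance is the safer path.

```python
# Approximate average bits/weight for llama.cpp K-quants (assumed, not exact).
APPROX_BPW = {"Q4_K_M": 4.85, "Q3_K_M": 3.91, "Q2_K": 2.63}

PARAMS_B = 72       # Qwen 2.5 72B
VRAM_GB = 24.0      # RTX 3090
OVERHEAD_GB = 2.0   # assumed allowance for KV cache and runtime buffers

for quant, bpw in APPROX_BPW.items():
    weights_gb = PARAMS_B * 1e9 * bpw / 8 / 1e9
    verdict = "fits" if weights_gb + OVERHEAD_GB <= VRAM_GB else "does not fit"
    print(f"{quant}: ~{weights_gb:.1f}GB weights -> {verdict}")
# All three exceed or nearly fill 24.0GB once overhead is included.
```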

Recommended Settings

Batch size: N/A (model does not fit on this GPU)
Context length: N/A (model does not fit on this GPU)
Inference framework: llama.cpp or vLLM
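Context length matters because the KV cache consumes VRAM on top of the weights. Below is a quick sizing sketch; the architecture numbers (80 layers, 8 KV heads via grouped-query attention, head dimension 128) are assumptions for Qwen 2.5 72B, so verify them against the model's config.json before relying on them.

```python
LAYERS, KV_HEADS, HEAD_DIM = 80, 8, 128  # assumed Qwen 2.5 72B geometry
BYTES_PER_ELEM = 2                       # fp16 keys/values

def kv_cache_gb(context_len: int) -> float:
    # 2x for keys and values, per layer, per KV head, per head dimension
    return 2 * LAYERS * KV_HEADS * HEAD_DIM * BYTES_PER_ELEM * context_len / 1e9

for ctx in (4096, 16384, 32768):
    print(f"context {ctx}: ~{kv_cache_gb(ctx):.1f}GB of KV cache")
# context 4096: ~1.3GB, 16384: ~5.4GB, 32768: ~10.7GB
```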

Frequently Asked Questions

Can I run Qwen 2.5 72B on an NVIDIA RTX 3090?
The NVIDIA RTX 3090 (24.0GB VRAM) cannot run Qwen 2.5 72B at Q4_K_M, which requires 36.0GB; you are 12.0GB short. Consider a more aggressive quantization (such as Q3_K_M or Q2_K) or upgrading to a GPU with more VRAM.
How much VRAM does Qwen 2.5 72B need?
At Q4_K_M, Qwen 2.5 72B requires approximately 36.0GB of VRAM.
What performance can I expect?
No tokens-per-second estimate is available, since the model does not fit on this GPU.