Qwen / Large Language Models

Qwen 2.5 32B (32.00B)

Parameters: 32.00B
VRAM (FP16): 64.0GB
VRAM (INT4): 16.0GB
Context: 131,072 tokens

Quantization Options

Quantization            VRAM Required   Min GPU
FP16 (Half Precision)   64.0GB          A100 / H100
INT8 (8-bit Integer)    32.0GB          A6000 / 2x RTX 4090
Q4_K_M (GGUF 4-bit)     16.0GB          RTX 4080
Q3_K_M (GGUF 3-bit)     12.8GB          RTX 4080
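
These figures follow directly from the parameter count multiplied by the bytes stored per weight. The sketch below reproduces that arithmetic; the per-weight byte counts for the GGUF quants are approximations, and real deployments need extra headroom for the KV cache and activations on top of the weights.

```python
# Rough VRAM estimate for holding the model weights at a given quantization.
# Per-weight byte counts for the GGUF quants are approximate averages.

BYTES_PER_PARAM = {
    "FP16": 2.0,     # 16-bit floats
    "INT8": 1.0,     # 8-bit integers
    "Q4_K_M": 0.5,   # ~4 bits per weight (GGUF)
    "Q3_K_M": 0.4,   # ~3.2 bits per weight (GGUF)
}

def estimate_vram_gb(params_billion: float, quant: str) -> float:
    """Estimate GB needed for the weights alone (no KV cache or activations)."""
    return params_billion * 1e9 * BYTES_PER_PARAM[quant] / 1e9

if __name__ == "__main__":
    for quant in BYTES_PER_PARAM:
        print(f"{quant:8s} ~{estimate_vram_gb(32.0, quant):.1f} GB")
```

For the 32.00B parameter count this reproduces the table: 64.0GB at FP16, 32.0GB at INT8, 16.0GB at Q4_K_M, and 12.8GB at Q3_K_M.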

Model Details

Family: Qwen
Category: Large Language Models
Parameters: 32.00B
Context Length: 131,072 tokens
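
As a practical companion to these specs, here is a minimal sketch of loading the model in 4-bit with Hugging Face transformers and bitsandbytes, which lands roughly in the INT4 / Q4_K_M VRAM range above. The model ID Qwen/Qwen2.5-32B-Instruct is an assumption for illustration; substitute the base model or another variant as needed, and note that actual memory use grows with context length and batch size.

```python
# Sketch: load Qwen 2.5 32B with 4-bit quantization via transformers + bitsandbytes.
# The model ID below is an assumption; adjust to the variant you actually use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Qwen/Qwen2.5-32B-Instruct"

# NF4 4-bit weights with bfloat16 compute: comparable to the INT4 / Q4 rows above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available GPUs / CPU
)

prompt = "Give a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```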