Can I run SDXL 1.0 on NVIDIA RTX 3080 10GB?

Verdict: Good. Yes, you can run this model.
GPU VRAM: 10.0GB
Required: 8.0GB
Headroom: +2.0GB

VRAM Usage

8.0GB of 10.0GB used (80%)
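If you want to confirm the free headroom on your own card before loading the model, a minimal PyTorch sketch (assuming only that PyTorch with CUDA support is installed) looks like this:

```python
import torch

# Query free and total VRAM (in bytes) on the first CUDA device.
free_bytes, total_bytes = torch.cuda.mem_get_info(0)

free_gb = free_bytes / 1024**3
total_gb = total_bytes / 1024**3
print(f"Free VRAM: {free_gb:.1f}GB of {total_gb:.1f}GB")

# SDXL 1.0 needs roughly 8GB according to the estimate above.
if free_gb < 8.0:
    print("Warning: less free VRAM than the ~8GB estimated for SDXL 1.0.")
```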

Performance Estimate

Tokens/sec: ~63.0
Batch size: 2

Technical Analysis

The NVIDIA RTX 3080 10GB is well suited to running SDXL 1.0: its 10.0GB of VRAM provides 2.0GB of headroom beyond the 8.0GB required for standard inference workloads.
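As an illustration of how to stay within that budget, here is a minimal sketch of loading the SDXL 1.0 base model in half precision with Hugging Face diffusers (it assumes the diffusers, transformers, and accelerate packages are installed; the attention-slicing call is optional and trades a little speed for lower peak VRAM):

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the SDXL 1.0 base pipeline in fp16 to keep VRAM use near the ~8GB estimate.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")

# Optional: lower peak VRAM if you run close to the 10GB limit.
pipe.enable_attention_slicing()

# The prompt here is purely illustrative.
image = pipe("a photo of an astronaut riding a horse", num_inference_steps=30).images[0]
image.save("astronaut.png")
```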

Recommendation

This is a solid configuration for SDXL 1.0. Use standard settings and you should experience good performance for most use cases.

Recommended Settings

Batch size: 2
Prompt length: 77 tokens (the CLIP text encoder limit)
Inference framework: Hugging Face diffusers, ComfyUI, or AUTOMATIC1111 Stable Diffusion WebUI (see the example after this list)
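As a sketch of how these settings map onto a generation call (assuming the pipe object from the loading example above; the prompt and filenames are illustrative):

```python
prompt = "a watercolor painting of a lighthouse at dusk"

# Batch size 2: generate two images per prompt in a single call.
# The CLIP text encoders truncate prompts longer than 77 tokens.
result = pipe(
    prompt,
    num_images_per_prompt=2,
    num_inference_steps=30,
)

for i, image in enumerate(result.images):
    image.save(f"sdxl_output_{i}.png")
```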

Frequently Asked Questions

Can I run SDXL 1.0 on NVIDIA RTX 3080 10GB?
Yes. The NVIDIA RTX 3080's 10.0GB of VRAM covers the 8.0GB that SDXL 1.0 (6.60B) requires, leaving 2.0GB of headroom, which is comfortable room for inference at standard settings.
How much VRAM does SDXL 1.0 need?
SDXL 1.0 requires approximately 8.0GB of VRAM.
What performance can I expect?
Estimated 63 tokens per second.
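The figure above is a generic estimate; to see what your own card actually delivers, you can time a generation with the pipeline from the earlier sketch (pipe and prompt are assumed from the examples above):

```python
import time

# Warm-up run so model weights and CUDA kernels are ready before timing.
pipe(prompt, num_inference_steps=30)

steps = 30
start = time.perf_counter()
pipe(prompt, num_inference_steps=steps)
elapsed = time.perf_counter() - start

print(f"{elapsed:.1f}s per image ({steps / elapsed:.2f} denoising steps/sec)")
```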