Can I run Phi-3 Medium 14B (INT8, 8-bit integer) on an NVIDIA A100 80GB?

Perfect
Yes, you can run this model!
GPU VRAM: 80.0GB
Required: 14.0GB
Headroom: +66.0GB

VRAM Usage: 14.0GB of 80.0GB (18% used)

Performance Estimate

Tokens/sec: ~78.0
Batch size: 23
Context: 128K tokens

Technical Analysis

The NVIDIA A100 80GB is exceptionally well-suited for running the Phi-3 Medium 14B model, especially when employing INT8 quantization. Phi-3 Medium 14B, in its full FP16 precision, requires approximately 28GB of VRAM. However, with INT8 quantization, this requirement is reduced to a mere 14GB. The A100's substantial 80GB of HBM2e memory provides a significant VRAM headroom of 66GB, ensuring ample space for the model, its working memory, and potentially multiple model instances or larger batch sizes. The A100's impressive 2.0 TB/s memory bandwidth further contributes to efficient data transfer, minimizing bottlenecks during inference.
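As a quick sanity check, the 28GB and 14GB figures follow directly from parameter count times bytes per parameter. The sketch below is a rough approximation that counts weights only and ignores KV cache, activations, and framework overhead, which is why real headroom shrinks as batch size and context length grow.

```python
# Back-of-the-envelope weight-memory estimate (weights only; excludes
# KV cache, activations, and framework overhead).
def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    # 1 billion parameters at 1 byte each is roughly 1 GB
    return params_billion * bytes_per_param

PHI3_MEDIUM_PARAMS_B = 14.0
A100_VRAM_GB = 80.0

fp16 = weight_vram_gb(PHI3_MEDIUM_PARAMS_B, 2.0)   # ~28 GB
int8 = weight_vram_gb(PHI3_MEDIUM_PARAMS_B, 1.0)   # ~14 GB

print(f"FP16 weights: ~{fp16:.0f} GB")
print(f"INT8 weights: ~{int8:.0f} GB")
print(f"Headroom on A100 80GB with INT8: ~{A100_VRAM_GB - int8:.0f} GB")
```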

Furthermore, the A100's architecture, based on NVIDIA Ampere, boasts 6912 CUDA cores and 432 Tensor Cores, specifically designed to accelerate deep learning workloads. These Tensor Cores are particularly effective at accelerating matrix multiplications, a core operation in transformer-based models like Phi-3 Medium. This hardware acceleration, combined with the high memory bandwidth and ample VRAM, translates into excellent performance for inference tasks. Expect high throughput and low latency, making the A100 an ideal choice for deploying Phi-3 Medium in production environments.

Recommendation

Given the substantial VRAM headroom, consider experimenting with larger batch sizes to maximize throughput. Start with the estimated batch size of 23 and increase it incrementally until you see diminishing returns or hit memory limits. Explore the full 128K-token context length to take advantage of the model's long-sequence capabilities, and use an optimized inference framework such as vLLM or NVIDIA TensorRT-LLM to further improve performance. If you are currently running FP16, switching to INT8 quantization delivers a significant memory and throughput benefit with minimal impact on accuracy.
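A minimal vLLM sketch along those lines is shown below. The checkpoint name, sampling values, and memory settings are illustrative assumptions; how INT8 weights are obtained (a pre-quantized checkpoint versus a quantization option supported by your vLLM version) should be checked against your actual deployment.

```python
# Minimal vLLM sketch (illustrative, not a verified deployment recipe).
# Assumes the Hugging Face checkpoint "microsoft/Phi-3-medium-128k-instruct";
# the INT8 path depends on your checkpoint and vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(
    model="microsoft/Phi-3-medium-128k-instruct",
    max_model_len=128_000,          # use the full 128K context window
    gpu_memory_utilization=0.90,    # leave some of the 80GB free for spikes
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain INT8 quantization in two sentences."], sampling)
print(outputs[0].outputs[0].text)
```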

If you encounter performance bottlenecks, profile your code to identify the specific operations causing delays. Consider using techniques like tensor parallelism or pipeline parallelism to distribute the workload across multiple GPUs if scaling beyond a single A100 is necessary. Also, ensure your data loading and preprocessing pipelines are optimized to prevent them from becoming bottlenecks.
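If you do move to multiple GPUs, a sketch of sharding the model with vLLM's tensor parallelism might look like the following; the checkpoint name is again an assumption, and a single A100 80GB does not need this for the INT8 model.

```python
# Hypothetical multi-GPU variant: shard the model across two GPUs with
# tensor parallelism. Not required on a single A100 80GB for INT8.
from vllm import LLM

llm = LLM(
    model="microsoft/Phi-3-medium-128k-instruct",  # assumed checkpoint name
    tensor_parallel_size=2,   # split each layer's weights across 2 GPUs
)
```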

Recommended Settings

Batch size: 23
Context length: 128,000 tokens
Other settings: Enable CUDA graphs; Use PagedAttention; Optimize data loading pipeline
Inference framework: vLLM
Suggested quantization: INT8

Frequently Asked Questions

Is Phi-3 Medium 14B (14.00B) compatible with NVIDIA A100 80GB?
Yes, Phi-3 Medium 14B is perfectly compatible with the NVIDIA A100 80GB, offering substantial VRAM headroom even with FP16 precision and excellent performance, especially with INT8 quantization.
What VRAM is needed for Phi-3 Medium 14B (14.00B)?
Phi-3 Medium 14B requires approximately 28GB of VRAM in FP16 precision and only 14GB of VRAM when quantized to INT8.
How fast will Phi-3 Medium 14B (14.00B) run on NVIDIA A100 80GB?
You can expect approximately 78 tokens per second with the suggested configuration. Actual performance will vary based on batch size, context length, and the specific inference framework used.