Can I run Phi-3 Medium 14B (INT8, 8-bit integer) on an NVIDIA H100 PCIe?

Perfect
Yes, you can run this model!
GPU VRAM: 80.0 GB
Required: 14.0 GB
Headroom: +66.0 GB

VRAM Usage

14.0 GB of 80.0 GB used (~18%)

Performance Estimate

Tokens/sec: ~78.0
Batch size: 23
Context: 128K tokens (128,000)

Technical Analysis

The NVIDIA H100 PCIe, with 80GB of HBM2e memory and roughly 2.0 TB/s of memory bandwidth, offers ample resources for running Phi-3 Medium 14B, especially when quantized to INT8. At INT8 precision the model requires approximately 14GB of VRAM, leaving a substantial 66GB of headroom on the H100. That margin means the model should stay comfortably within the GPU's memory capacity even at larger context lengths or batch sizes. The H100's Hopper architecture, featuring 14,592 CUDA cores and 456 Tensor Cores, is well suited to the computational demands of large language models. Its high memory bandwidth is crucial for streaming model weights and intermediate activations, which directly impacts inference speed.
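
As a quick sanity check on the 14GB figure, INT8 stores one byte per parameter, so the weights alone come to roughly 14B × 1 byte ≈ 14GB. The sketch below is a back-of-envelope estimate only; the overhead factor is an assumption, not a measurement.

```python
# Back-of-envelope VRAM estimate for a 14B-parameter model in INT8.
# The overhead factor is an assumption covering activations, buffers,
# and a modest KV cache; real usage depends on framework and settings.
params_billion = 14.0      # parameter count, in billions
bytes_per_param = 1        # INT8 = 1 byte per weight
overhead_factor = 1.15     # assumed ~15% runtime overhead
gpu_vram_gb = 80.0         # H100 PCIe

weights_gb = params_billion * bytes_per_param
total_gb = weights_gb * overhead_factor
print(f"Weights ~{weights_gb:.1f} GB, estimated total ~{total_gb:.1f} GB, "
      f"headroom ~{gpu_vram_gb - total_gb:.1f} GB")
```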

The estimated tokens/second rate of 78 suggests efficient utilization of the H100's computational resources. This performance is influenced by factors such as the specific inference framework used and the level of optimization applied. The estimated batch size of 23 indicates the number of independent sequences that can be processed in parallel, further enhancing throughput. The H100's Tensor Cores are specifically designed to accelerate matrix multiplications, a core operation in deep learning, contributing to the overall performance gain. The combination of high VRAM, high memory bandwidth, and specialized hardware accelerators makes the H100 an excellent platform for deploying and running large language models like Phi-3 Medium 14B.
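
For intuition on where the ~78 tokens/second estimate sits, single-stream decoding is usually memory-bandwidth bound: each generated token has to stream the full weight set from HBM at least once, so a rough ceiling (an assumption-based estimate, not a benchmark) is bandwidth divided by weight size.

```python
# Rough bandwidth-bound ceiling for single-stream decode (not a benchmark).
bandwidth_gb_s = 2000.0   # H100 PCIe HBM bandwidth, ~2.0 TB/s
weights_gb = 14.0         # INT8 weights for a 14B-parameter model

ceiling_tok_s = bandwidth_gb_s / weights_gb   # ~143 tok/s upper bound
print(f"Single-stream ceiling: ~{ceiling_tok_s:.0f} tok/s")
# An estimate of ~78 tok/s is roughly half this ceiling, which is plausible
# once attention, KV-cache reads, and kernel launch overhead are included.
# Batching amortizes the weight reads, so aggregate throughput scales well
# beyond the single-stream number.
```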

Given the substantial VRAM headroom, users can explore increasing the context length to fully leverage Phi-3 Medium's 128K-token capacity, which is useful for long-form content generation or analysis. Furthermore, optimizing the inference pipeline with techniques such as kernel fusion, or adopting checkpoints produced with quantization-aware training to preserve accuracy at low precision, can potentially push the tokens/second rate further. The H100's PCIe interface provides sufficient bandwidth for data transfer between the host system and the GPU, so host-to-device transfers are unlikely to be the bottleneck.
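
The main memory cost of pushing context toward 128K tokens is the KV cache rather than the weights. The sketch below uses commonly published Phi-3 Medium architecture values (40 layers, 10 grouped-query KV heads, head dimension 128); treat them as assumptions and verify against the model's config.json.

```python
# KV-cache sizing sketch for Phi-3 Medium (architecture values are assumed;
# confirm against the model's config.json before relying on them).
n_layers = 40        # assumed transformer layers
n_kv_heads = 10      # assumed grouped-query KV heads
head_dim = 128       # assumed head dimension
kv_bytes = 2         # FP16 keys and values

bytes_per_token = 2 * n_layers * n_kv_heads * head_dim * kv_bytes   # K + V
context_tokens = 128_000
cache_gb = bytes_per_token * context_tokens / 1e9
print(f"~{bytes_per_token / 1024:.0f} KiB per token, ~{cache_gb:.1f} GB for 128K context")
# Roughly 200 KiB/token and ~26 GB at full context under these assumptions,
# which still fits comfortably inside the ~66 GB of headroom.
```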

Recommendation

For optimal performance, start with an inference framework like vLLM or NVIDIA's TensorRT, as they are designed to maximize GPU utilization. Ensure that you are using the latest NVIDIA drivers and CUDA toolkit for the best compatibility and performance. Monitor GPU utilization and memory usage to fine-tune the batch size and context length for your specific workload. Experiment with different quantization levels to balance memory footprint and accuracy.
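
A minimal vLLM starting point along those lines might look like the sketch below. The model ID and settings are illustrative assumptions, and whether the weights are actually served in INT8 depends on the checkpoint and the vLLM version, so verify the quantization path separately.

```python
# Minimal vLLM sketch (illustrative settings; verify model ID, quantization
# support, and flags against the vLLM version you actually run).
from vllm import LLM, SamplingParams

llm = LLM(
    model="microsoft/Phi-3-medium-128k-instruct",  # assumed Hugging Face model ID
    max_model_len=32_768,          # start well below 128K, raise while watching VRAM
    gpu_memory_utilization=0.90,   # leave a safety margin on the 80 GB card
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Explain INT8 quantization in two sentences."], sampling)
print(outputs[0].outputs[0].text)
```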

Consider using techniques like speculative decoding or continuous batching to further improve throughput, especially in production environments. Profile your application to identify any potential bottlenecks and optimize accordingly. If you encounter performance issues, reduce the batch size or context length to alleviate memory pressure. Additionally, explore using distributed inference across multiple GPUs if you need to handle even larger models or higher throughput requirements.
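
To act on the advice about watching memory pressure while tuning batch size and context length, a small NVML loop is often enough. This sketch assumes the pynvml package is installed and samples the first GPU.

```python
# Lightweight GPU monitoring loop via NVML (assumes `pip install pynvml`).
# Run alongside your inference workload; stop with Ctrl+C.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; adjust index if needed
try:
    while True:
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        util = pynvml.nvmlDeviceGetUtilizationRates(handle)
        print(f"GPU util {util.gpu:3d}% | VRAM {mem.used / 1e9:5.1f} / {mem.total / 1e9:5.1f} GB")
        time.sleep(2)
finally:
    pynvml.nvmlShutdown()
```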

Recommended Settings

Batch size: 23 (start here, adjust based on memory usage)
Context length: Up to 128,000 tokens (depending on memory usage and performance)
Other settings: Use the latest NVIDIA drivers; enable CUDA graphs for reduced CPU overhead; profile the application for bottlenecks; experiment with optimization flags in the inference framework
Inference framework: vLLM or TensorRT
Quantization: INT8 (currently used and recommended)
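
If you prefer loading the model directly with Hugging Face Transformers rather than a serving framework, 8-bit weights can be requested at load time via bitsandbytes. The sketch below assumes the bitsandbytes and accelerate packages are installed; the model ID refers to the published Phi-3 Medium 128K instruct checkpoint.

```python
# Loading Phi-3 Medium with 8-bit weights via Transformers + bitsandbytes
# (illustrative; requires the bitsandbytes and accelerate packages).
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/Phi-3-medium-128k-instruct"  # assumed Hugging Face model ID
bnb_config = BitsAndBytesConfig(load_in_8bit=True)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",   # place the model on the available GPU
)

inputs = tokenizer("INT8 quantization reduces memory usage because", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```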

Frequently Asked Questions

Is Phi-3 Medium 14B (14.00B) compatible with NVIDIA H100 PCIe?
Yes, Phi-3 Medium 14B is fully compatible with the NVIDIA H100 PCIe, offering substantial VRAM headroom.
What VRAM is needed for Phi-3 Medium 14B (14.00B)?
When quantized to INT8, Phi-3 Medium 14B requires approximately 14GB of VRAM.
How fast will Phi-3 Medium 14B (14.00B) run on NVIDIA H100 PCIe?
You can expect an estimated throughput of around 78 tokens/second with optimized settings. Actual performance may vary based on the inference framework, batch size, and context length.