The NVIDIA H100 PCIe, with its 80GB of HBM2e VRAM and 2.0 TB/s of memory bandwidth, is exceptionally well-suited for running the Phi-3 Small 7B model. Even at unquantized FP16 precision, the model's weights require only about 14GB of VRAM, and quantized to q3_k_m the footprint shrinks to roughly 2.8GB. That leaves around 77.2GB of headroom for larger batch sizes, longer context lengths, and the KV cache. The H100 PCIe's 14,592 CUDA cores and 456 Tensor Cores provide more than enough compute to keep token throughput high.
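To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch in Python. The 3.2 bits-per-weight figure for q3_k_m is an approximation chosen to match the 2.8GB estimate above; real GGUF files vary slightly, and KV cache plus activations consume additional memory on top of the weights.

```python
# Rough VRAM estimate for model weights on an 80 GB H100 PCIe.
# Bits-per-weight values are approximations: q3_k_m mixes quantization
# types across tensors, so actual GGUF sizes differ slightly.

H100_PCIE_VRAM_GB = 80.0
PARAMS_BILLIONS = 7.0  # Phi-3 Small

def weight_vram_gb(params_b: float, bits_per_weight: float) -> float:
    """Estimate VRAM (in GB) needed just for the model weights."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

for label, bpw in [("FP16", 16.0), ("q3_k_m (approx.)", 3.2)]:
    used = weight_vram_gb(PARAMS_BILLIONS, bpw)
    headroom = H100_PCIE_VRAM_GB - used
    print(f"{label:18s} weights ~{used:4.1f} GB, headroom ~{headroom:4.1f} GB")
```

Running this reproduces the figures quoted above: about 14GB for FP16 weights and about 2.8GB for q3_k_m, leaving roughly 77GB free for batching and context.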
Given the H100's capabilities and the model's small size after quantization, you can push batch size aggressively to maximize throughput. Start with the recommended batch size of 32 and increase it incrementally until throughput stops improving or you hit memory limits (unlikely with this much headroom). Inference frameworks such as `vLLM` or `text-generation-inference`, which are optimized for high throughput and low latency, are worth evaluating; see the sketch below. Speculative decoding can raise the tokens/sec rate further. Finally, confirm that you are running a recent NVIDIA driver and CUDA toolkit to ensure optimal performance and compatibility.
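As a starting point, here is a minimal batched-inference sketch with vLLM. The Hugging Face model id, batch size, and memory settings are assumptions to adjust for your environment, and this loads the FP16 weights rather than the GGUF q3_k_m file, which the 80GB card handles comfortably.

```python
from vllm import LLM, SamplingParams

# Assumed model id and settings; tune max_num_seqs and
# gpu_memory_utilization for your workload.
llm = LLM(
    model="microsoft/Phi-3-small-8k-instruct",
    dtype="float16",
    trust_remote_code=True,      # Phi-3 Small may need custom model code
    max_num_seqs=32,             # upper bound on concurrent sequences per step
    gpu_memory_utilization=0.90, # fraction of the 80 GB vLLM may reserve
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)

# A batch of 32 identical prompts just to exercise throughput.
prompts = ["Explain KV-cache reuse in one paragraph."] * 32
outputs = llm.generate(prompts, sampling)

for out in outputs:
    print(out.outputs[0].text[:80])
```

To benchmark, time the `generate` call, divide total generated tokens by wall-clock seconds, and raise `max_num_seqs` until tokens/sec plateaus; `nvidia-smi` will confirm the driver version and show how little of the 80GB is actually in use.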