The NVIDIA H100 PCIe, with 80 GB of HBM2e VRAM and 2.0 TB/s of memory bandwidth, is well suited to running the Phi-3 Small 7B model. In its INT8 quantized form, Phi-3 Small needs only about 7 GB of VRAM for its weights, leaving roughly 73 GB of headroom for the KV cache, activations, and batching. That headroom enables large batch sizes and very long context lengths (up to 128,000 tokens with the 128k variant) without running into memory limits. The H100's 14,592 CUDA cores and 456 Tensor Cores accelerate the model's matrix operations, delivering high throughput and low latency during inference, and the Hopper architecture's Transformer Engine with FP8 support further speeds up the tensor math at the heart of transformer inference.
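To see why the headroom matters, a back-of-envelope KV-cache estimate helps. The sketch below is illustrative only: the layer count, KV-head count, and head dimension are assumed values, and the real numbers should be read from the model's config.json for the exact variant you deploy.

```python
# Back-of-envelope KV-cache sizing for long-context batching.
# The architecture numbers below are assumptions for illustration;
# read the real values from the model's config.json.
NUM_LAYERS = 32        # assumed transformer layer count
NUM_KV_HEADS = 8       # assumed grouped-query KV heads
HEAD_DIM = 128         # assumed per-head dimension
DTYPE_BYTES = 2        # FP16/BF16 KV cache

# Bytes per token = 2 (K and V) * layers * kv_heads * head_dim * dtype bytes
kv_bytes_per_token = 2 * NUM_LAYERS * NUM_KV_HEADS * HEAD_DIM * DTYPE_BYTES

context_len = 128_000          # full long-context window
headroom_gb = 73               # VRAM left after INT8 weights
per_seq_gb = kv_bytes_per_token * context_len / 1024**3

print(f"KV cache per token: {kv_bytes_per_token / 1024:.0f} KiB")
print(f"KV cache per 128k-token sequence: {per_seq_gb:.1f} GB")
print(f"Concurrent full-context sequences in headroom: {headroom_gb / per_seq_gb:.1f}")
```

Under these assumed numbers, each full 128k-token sequence would consume on the order of 16 GB of KV cache, so only a handful of full-context requests fit concurrently, while shorter prompts leave room for much larger batches.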
Given the H100's abundant VRAM and compute, focus on maximizing throughput with larger batch sizes. Start with a batch size of around 32 and increase it incrementally to find the best latency/throughput trade-off for your workload. Techniques such as continuous batching and speculative decoding can improve performance further, and profiling the model's execution will show where the remaining bottlenecks are so you can tune the configuration accordingly. The headroom also makes it practical to enable the full 128k context window and experiment with long-context applications; a configuration sketch follows below.
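The following is a minimal sketch of one way to wire these settings together, assuming a vLLM-based serving stack; the model ID, memory fraction, and batch cap are starting-point assumptions rather than tuned recommendations, and parameter names can vary between vLLM releases, so check the documentation for your version.

```python
# Hypothetical vLLM configuration for Phi-3 Small on an H100 PCIe.
# Values are starting points to profile from, not tuned recommendations.
from vllm import LLM, SamplingParams

llm = LLM(
    model="microsoft/Phi-3-small-128k-instruct",  # assumed 128k-context variant
    trust_remote_code=True,        # Phi-3 Small ships custom modeling code
    max_model_len=131072,          # expose the full long-context window
    gpu_memory_utilization=0.90,   # leave some VRAM for activations/overhead
    max_num_seqs=32,               # initial batch cap; raise and re-measure
)

sampling = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(
    ["Explain why memory bandwidth matters for LLM inference."],
    sampling,
)
print(outputs[0].outputs[0].text)
```

Continuous batching is vLLM's default scheduling behavior, so it needs no extra configuration; speculative decoding, where supported, is enabled through separate engine options. Whatever stack you use, re-profile after each change, since the optimal batch size depends heavily on the prompt and generation lengths in your traffic.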