Can I run BGE-Large-EN on NVIDIA H100 SXM?

Perfect
Yes, you can run this model!
GPU VRAM: 80.0GB
Required: 0.7GB
Headroom: +79.3GB

VRAM Usage

~1% used (0.7GB of 80.0GB)

Performance Estimate

Tokens/sec: ~135
Batch size: 32

Technical Analysis

The NVIDIA H100 SXM, with its massive 80GB of HBM3 memory and 3.35 TB/s memory bandwidth, is exceptionally well-suited for running the BGE-Large-EN embedding model. BGE-Large-EN, at only 0.33B parameters, requires a mere 0.7GB of VRAM in FP16 precision. This leaves a substantial 79.3GB of VRAM headroom on the H100, allowing for large batch sizes and concurrent execution of multiple model instances. The H100's Hopper architecture, featuring 16896 CUDA cores and 528 Tensor Cores, provides ample computational resources for accelerating the matrix multiplications and other operations inherent in embedding generation.
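The 0.7GB figure follows directly from the parameter count: FP16 stores each weight in 2 bytes. A quick sanity check in plain Python (weight memory only; activations and workspace buffers add a small amount on top):

```python
def fp16_weight_vram_gb(num_params: float) -> float:
    """Weight memory only: 2 bytes per parameter in FP16.
    Activations and workspace buffers add a little on top."""
    return num_params * 2 / 1e9

# BGE-Large-EN has roughly 0.33B parameters
print(fp16_weight_vram_gb(0.33e9))  # 0.66 -> rounds to the ~0.7GB quoted above
```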

Given the vast VRAM and computational resources available, throughput is more likely to be limited by memory bandwidth than by raw compute, and the H100's 3.35 TB/s of HBM3 bandwidth keeps that ceiling high. The estimated 135 tokens/second is a conservative baseline that can be raised significantly through optimization: the large VRAM capacity allows aggressive batching, and the Hopper architecture's Tensor Cores are purpose-built to accelerate the matrix operations that dominate embedding generation.

Recommendation

For optimal performance, leverage the H100's capabilities by maximizing batch size. Start with a batch size of 32 and experiment with larger values until you observe diminishing returns in throughput. Utilize a high-performance inference framework like vLLM or NVIDIA's TensorRT to further optimize performance. Consider using mixed-precision inference (e.g., FP16 or BF16) to potentially improve throughput without significant loss in accuracy. Monitor GPU utilization and memory usage to identify any bottlenecks and adjust settings accordingly.
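One way to run that batch-size experiment is a small sweep harness. The sketch below is an illustration, not part of any library, and is framework-agnostic: pass any `encode(sentences, batch_size=...)` callable, such as `SentenceTransformer.encode` from the sentence-transformers package, which is one common way to serve BGE models.

```python
import time

def throughput_sweep(encode, sentences, batch_sizes=(8, 16, 32, 64, 128)):
    """Return sentences/sec for each batch size; stop increasing the
    batch size once the numbers flatten (diminishing returns)."""
    results = {}
    for bs in batch_sizes:
        encode(sentences[:bs], batch_size=bs)  # warm-up / allocator priming
        start = time.perf_counter()
        encode(sentences, batch_size=bs)
        results[bs] = len(sentences) / (time.perf_counter() - start)
    return results
```

With sentence-transformers this would be called as `throughput_sweep(model.encode, corpus)`; pick the smallest batch size past which throughput stops improving.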

If you encounter performance limitations despite these optimizations, investigate potential CPU bottlenecks in data preprocessing or post-processing. Ensure that data loading and preprocessing are efficiently pipelined to keep the GPU fed with data. Finally, benchmark different inference frameworks and configurations to determine the optimal setup for your specific use case.
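Keeping the GPU fed can be as simple as overlapping preprocessing with encoding. A minimal pure-Python prefetch sketch, where the `preprocess` callable stands in for tokenization or file I/O (names are illustrative):

```python
import queue
import threading

def prefetch(batches, preprocess, depth=4):
    """Run preprocessing in a background thread so the consumer
    (e.g. the GPU encode call) never waits on tokenization or I/O."""
    q = queue.Queue(maxsize=depth)
    SENTINEL = object()

    def producer():
        for batch in batches:
            q.put(preprocess(batch))
        q.put(SENTINEL)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is SENTINEL:
            break
        yield item
```

A bounded queue depth keeps memory in check while still hiding preprocessing latency behind the GPU work.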

Recommended Settings

Batch size: 32
Context length: 512
Other settings: enable CUDA graphs; use asynchronous data loading; profile performance with Nsight Systems
Inference framework: vLLM
Suggested quantization: FP16
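The settings above can be applied in a few lines with sentence-transformers (a sketch, assuming that package; it requires a CUDA GPU and downloads the model on first run; "BAAI/bge-large-en" is the model's Hugging Face identifier):

```python
from sentence_transformers import SentenceTransformer

# Load BGE-Large-EN onto the GPU and apply the recommended settings.
model = SentenceTransformer("BAAI/bge-large-en", device="cuda")
model.half()                # FP16, per the quantization suggestion
model.max_seq_length = 512  # context length from the table

embeddings = model.encode(
    ["an example passage to embed"],
    batch_size=32,              # starting batch size from the table
    normalize_embeddings=True,  # BGE embeddings are typically compared by cosine
)
```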

Frequently Asked Questions

Is BGE-Large-EN compatible with NVIDIA H100 SXM?
Yes, BGE-Large-EN is fully compatible with the NVIDIA H100 SXM.
What VRAM is needed for BGE-Large-EN?
BGE-Large-EN requires approximately 0.7GB of VRAM in FP16 precision.
How fast will BGE-Large-EN run on NVIDIA H100 SXM?
You can expect excellent performance, potentially around 135 tokens/sec, which can be further optimized with larger batch sizes and optimized inference frameworks.