Mistral Large Language Models

Mistral Large 2 (123B)

Parameters      123B
VRAM (FP16)     246 GB
VRAM (INT4)     62 GB
Context         128,000 tokens
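The VRAM figures above follow from a simple weights-only estimate: parameter count times bytes per weight. A minimal sketch of that arithmetic (it ignores activation memory, KV cache, and runtime overhead, which add to the real requirement):

```python
def estimate_vram_gb(params_billions: float, bits_per_weight: int) -> float:
    """Weights-only VRAM estimate in decimal GB: params x bytes per weight."""
    bytes_per_weight = bits_per_weight / 8
    return params_billions * 1e9 * bytes_per_weight / 1e9

# 123B parameters at FP16 (16 bits) and INT4 (4 bits)
print(estimate_vram_gb(123, 16))  # 246.0 GB, matching the FP16 figure
print(estimate_vram_gb(123, 4))   # 61.5 GB, rounded up to 62 GB above
```

The INT4 figure is one quarter of FP16 because each weight shrinks from 16 bits to 4.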

Quantization Options

Quantization    VRAM Required    Min GPU
No quantization options available

Model Details

Family          Mistral
Category        Large Language Models
Parameters      123B
Context Length  128,000 tokens