Introducing Mistral Large 2: Vertex AI's Latest Breakthrough in Large Language Models
Mistral Large 2 (mistral-large@2407) is the latest iteration of Mistral AI's flagship large language model, offering significant advancements over its predecessor. This post explores the key features and benefits of Mistral Large 2, now available on Vertex AI.
Model Specifications
Context Window: 128k tokens.
Parameters: 123 billion parameters, optimized for single-node inference with high throughput.
Languages: Supports dozens of languages including English, French, German, Spanish, Italian, Portuguese, Arabic, Hindi, Russian, Chinese, Japanese, and Korean.
Coding Languages: Trained on over 80 coding languages such as Python, Java, C, C++, JavaScript, and Bash.
Performance and Capabilities
Reasoning and Problem-Solving: Enhanced reasoning capabilities with improved performance on mathematical benchmarks. The model is trained to minimize "hallucinations" and to admit when it lacks sufficient information.
Code Generation: Excels in code generation, mathematics, and reasoning.
Instruction Following: Improved instruction-following and conversational abilities, particularly in following precise instructions and handling long multi-turn conversations.
Multilingual Support: Performs well in multilingual tasks, outperforming its predecessor and other models like Llama 3.1 in multilingual benchmarks.
Availability and Access
Vertex AI: Available as a managed API on Vertex AI, so users can call the model through a Vertex AI endpoint without provisioning infrastructure.
Other Platforms: Also available on Azure AI Studio, Amazon Bedrock, and IBM watsonx.ai.
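As an illustrative sketch of calling the managed API, the snippet below assembles a request against Vertex AI's publisher-model `rawPredict` endpoint for `mistral-large@2407`. The project ID and region are placeholders, the chat-style payload shape is an assumption, and authentication (a Bearer token obtained via `google-auth`) is omitted for brevity.

```python
import json

# Placeholder values -- replace with your own project and region.
PROJECT_ID = "my-gcp-project"
REGION = "europe-west4"
MODEL = "mistral-large@2407"

def build_request(prompt: str) -> tuple[str, str]:
    """Build the rawPredict URL and JSON payload for Mistral Large 2.

    The URL follows Vertex AI's publisher-model pattern; the payload uses
    a chat-style messages list (an assumption for illustration). The actual
    HTTP call, with an Authorization header, is left out.
    """
    url = (
        f"https://{REGION}-aiplatform.googleapis.com/v1"
        f"/projects/{PROJECT_ID}/locations/{REGION}"
        f"/publishers/mistralai/models/{MODEL}:rawPredict"
    )
    payload = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, payload

url, payload = build_request("Write a Python function that reverses a string.")
```

From here, a POST with `requests` or any HTTP client (plus a valid access token) would return the model's completion.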
Licensing
Research License: Released under the Mistral Research License for research and non-commercial use.
Commercial License: Requires a Mistral Commercial License for commercial use.
Benchmarks
MMLU: Achieves an accuracy of 84.0% on the MMLU benchmark, setting a new performance/cost frontier among open models.
Other Benchmarks: Performs well on benchmarks like MT-Bench, Wild Bench, Arena Hard, GSM8K, and HumanEval.
Usage
Streaming Responses: Supports streaming responses using server-sent events (SSE) to reduce latency.
Cost Efficiency: Designed to be cost-efficient, with a focus on generating concise responses to facilitate faster interactions and reduce inference costs.
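Streaming responses arrive as server-sent events, where each event line carries an incremental chunk of text. A minimal parser for that wire format might look like the sketch below; the sample stream is fabricated for illustration, and the chunk schema (`choices[0].delta.content`, terminated by a `[DONE]` sentinel) is an assumption modeled on common chat-streaming formats, not an official contract.

```python
import json

def parse_sse_stream(lines):
    """Yield text deltas from an SSE response body, line by line.

    Assumes each event line has the form 'data: {json}' and that the
    stream ends with the sentinel 'data: [DONE]'.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data: "):]
        if data == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta

# Fabricated sample stream, for illustration only:
sample = [
    'data: {"choices": [{"delta": {"content": "Bonjour"}}]}',
    'data: {"choices": [{"delta": {"content": " !"}}]}',
    "data: [DONE]",
]
text = "".join(parse_sse_stream(sample))  # "Bonjour !"
```

In a real client the lines would come from the HTTP response body as it streams in, letting the application render tokens as they arrive rather than waiting for the full completion.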
Mistral Large 2 represents a significant advancement in AI capabilities, particularly in multilingual support, code generation, and reasoning. Explore its potential by integrating it into your projects through Vertex AI or other supported platforms.