Introducing Mistral.mistral-large-2407-v1:0 on Amazon Bedrock

The Mistral AI Large 2 (24.07) model, known as mistral.mistral-large-2407-v1:0, is a powerful new addition to Amazon Bedrock. It is available in the us-west-2 AWS Region through Bedrock's fully managed, serverless API, so there is no infrastructure to provision.

Capabilities and Improvements

  • Multilingual Support: The model supports dozens of languages, including English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, Polish, Arabic, and Hindi.
  • Reasoning and Knowledge: Enhanced reasoning capabilities and training to minimize hallucinations provide more reliable and accurate outputs.
  • Coding Capabilities: Proficient in over 80 programming languages, such as Python, Java, C, C++, JavaScript, Bash, Swift, and Fortran.
  • Agentic Capabilities: Can natively call functions and output JSON, enabling seamless interaction with external systems, APIs, and tools.
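Function calling is exposed through the Converse API's toolConfig parameter. The sketch below builds a minimal tool definition for a hypothetical get_weather tool; the tool name and input schema are illustrative choices, not part of the Bedrock API itself:

```python
def build_tool_config():
    """Return a Converse API toolConfig describing one hypothetical tool."""
    return {
        'tools': [
            {
                'toolSpec': {
                    'name': 'get_weather',  # illustrative tool name
                    'description': 'Get the current weather for a city.',
                    'inputSchema': {
                        'json': {
                            'type': 'object',
                            'properties': {
                                'city': {'type': 'string'}
                            },
                            'required': ['city']
                        }
                    }
                }
            }
        ]
    }

# The config is passed alongside the messages, e.g.:
# bedrock.converse(modelId='mistral.mistral-large-2407-v1:0',
#                  messages=..., toolConfig=build_tool_config())
```

When the model decides to use the tool, the response contains a toolUse content block with the generated arguments, which your application executes and returns in a follow-up message.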

Technical Specifications

  • Context Window: Supports a context window of 128,000 tokens, significantly larger than the previous version (32,000 tokens).
  • JSON Output: Offers a native JSON output mode, making it easier to integrate responses into applications.
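Even with a native JSON mode, it is prudent to validate the model's reply before handing it to downstream code. A minimal helper of my own (not part of any SDK) that tolerates a markdown-fenced reply:

```python
import json

def parse_model_json(text):
    """Parse a model reply that should contain a JSON object.

    Strips a surrounding markdown code fence if present; raises
    ValueError if the remaining text is not valid JSON.
    """
    cleaned = text.strip()
    if cleaned.startswith('```'):
        # Keep only the content between the opening and closing fences
        cleaned = cleaned.split('```')[1]
        if cleaned.startswith('json'):
            cleaned = cleaned[len('json'):]
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError as exc:
        raise ValueError(f'Model reply is not valid JSON: {exc}')
```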

Getting Started

To use the model, you need an AWS account, an IAM principal with sufficient permissions, and a local code environment set up with the AWS CLI and boto3 library. You must request access to the model through the Amazon Bedrock console and configure your authentication credentials properly.
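Once access is granted, you can confirm that the model shows up in your account's model list. The helper below filters the modelSummaries list returned by the Bedrock control-plane list_foundation_models call; the helper itself is an illustrative sketch, not an AWS API:

```python
def find_model(summaries, model_id):
    """Return the summary dict matching model_id, or None if absent."""
    for summary in summaries:
        if summary.get('modelId') == model_id:
            return summary
    return None

# Typical use (requires AWS credentials). Note that the 'bedrock'
# control-plane client lists models, while 'bedrock-runtime' invokes them:
# import boto3
# client = boto3.client('bedrock', region_name='us-west-2')
# summaries = client.list_foundation_models()['modelSummaries']
# print(find_model(summaries, 'mistral.mistral-large-2407-v1:0'))
```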

Usage Example

The model can be accessed using the Amazon Bedrock Converse API. Below is an example of how to invoke the model using Python:

import boto3

# Create a Bedrock Runtime client in the Region where the model is available
bedrock = boto3.client('bedrock-runtime', region_name='us-west-2')

response = bedrock.converse(
    modelId='mistral.mistral-large-2407-v1:0',
    messages=[{'role': 'user', 'content': [{'text': 'Which LLM are you?'}]}]
)

# The Converse API returns a parsed dict; the reply text is nested
# under output -> message -> content
print(response['output']['message']['content'][0]['text'])
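For longer generations, Bedrock also provides a streaming variant, converse_stream, whose response events carry incremental text deltas. A small helper to reassemble the streamed text (the event shape follows the Converse stream API; error-event handling is omitted for brevity):

```python
def collect_stream_text(events):
    """Concatenate the text deltas from a Converse stream's events."""
    parts = []
    for event in events:
        delta = event.get('contentBlockDelta', {}).get('delta', {})
        if 'text' in delta:
            parts.append(delta['text'])
    return ''.join(parts)

# Typical use (requires AWS credentials and model access):
# response = bedrock.converse_stream(
#     modelId='mistral.mistral-large-2407-v1:0',
#     messages=[{'role': 'user', 'content': [{'text': 'Hello'}]}]
# )
# print(collect_stream_text(response['stream']))
```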

Read more

Introducing Featherless AI's Qwerky-QwQ-32B: A Powerful New Reasoning-Focused LLM
