Introducing Meta Llama 3.1 70B Instruct: A New Era of Large Language Models

We are thrilled to announce the launch of Meta Llama 3.1 70B Instruct, a cutting-edge large language model developed by Meta and supported by Databricks. Starting July 23, 2024, this model will replace Meta-Llama-3-70B-Instruct in Foundation Model APIs pay-per-token endpoints.

Model Details:

  • Parameters: 70 billion.
  • Context Length: 128,000 tokens.
  • Languages: Supports eight languages.
  • Optimization: Designed for dialogue use cases, aligning with human preferences for helpfulness and safety.

Pricing:

  • Input Price: $1.00 per 1M tokens.
  • Output Price: $3.00 per 1M tokens.
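To make the pay-per-token pricing concrete, here is a minimal cost estimator. It assumes the listed rates of roughly $1.00 per 1M input tokens and $3.00 per 1M output tokens; adjust the constants if pricing changes.

```python
# Sketch: estimate the USD cost of one request at the listed per-token rates.
INPUT_PRICE_PER_1M = 1.00   # USD per 1M input tokens (from the pricing above)
OUTPUT_PRICE_PER_1M = 3.00  # USD per 1M output tokens (from the pricing above)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost for a single request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_1M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_1M

# Example: a 2,000-token prompt that produces a 500-token reply
print(estimate_cost(2_000, 500))
```

At these rates a typical chat turn costs a fraction of a cent, which is why output tokens (billed at three times the input rate) tend to dominate the bill for long generations.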

Usage:

Meta Llama 3.1 70B Instruct is available via Databricks' Foundation Model APIs, enabling easy deployment and integration into a wide range of applications. Enterprises can customize the model with their proprietary data using Databricks Model Serving and Mosaic AI.
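As a rough sketch of calling the model over the Foundation Model APIs' REST interface: the endpoint name, workspace host, and token below are placeholder assumptions, not values stated in this post; check your own workspace for the exact endpoint name.

```python
import json

# Assumed pay-per-token endpoint name -- verify in your workspace.
ENDPOINT = "databricks-meta-llama-3-1-70b-instruct"

def build_chat_payload(messages: list[dict], max_tokens: int = 256) -> dict:
    """Assemble a chat-completions request body for the serving endpoint."""
    return {"messages": messages, "max_tokens": max_tokens}

payload = build_chat_payload(
    [{"role": "user", "content": "Summarize Llama 3.1 70B Instruct in one line."}]
)

# Sending the request requires a workspace URL and access token, e.g.:
# import requests
# resp = requests.post(
#     f"https://<workspace-host>/serving-endpoints/{ENDPOINT}/invocations",
#     headers={"Authorization": "Bearer <DATABRICKS_TOKEN>"},
#     json=payload,
# )
print(json.dumps(payload, indent=2))
```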

Performance and Safety:

  • Accuracy: While highly accurate, the model may occasionally omit facts or produce false information. For scenarios requiring high accuracy, we recommend using retrieval augmented generation (RAG).
  • Safety: The model aligns with human preferences for safety, but users should be cautious about potential biases or offensive outputs.
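The RAG pattern recommended above can be sketched in a few lines: retrieve the most relevant passage from a corpus and prepend it to the prompt so the model answers from grounded context. The toy word-overlap scorer here stands in for a real vector index, which a production system would use instead.

```python
# Toy RAG sketch: ground the prompt in retrieved context before generation.
def retrieve(query: str, corpus: list[str]) -> str:
    """Return the passage sharing the most words with the query (naive scorer)."""
    q = set(query.lower().split())
    return max(corpus, key=lambda doc: len(q & set(doc.lower().split())))

corpus = [
    "Customers have 30 days to return an item for a full refund.",
    "Standard shipping takes 5 to 7 business days.",
]
query = "How many days do I have to return an item"
context = retrieve(query, corpus)

# The grounded prompt constrains the model to the retrieved facts,
# reducing the chance of fabricated answers.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)
```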

Licensing:

Meta Llama 3.1 is licensed under the LLAMA 3.1 Community License. Customers are responsible for ensuring compliance with this license.

Integration and Tools:

  • Databricks Integration: Integrates with Databricks tooling for retrieval augmented generation (RAG) and synthetic data generation.
  • SQL Invocation: Invoke the model directly from SQL using the ai_query SQL function.
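For example, a query using `ai_query` might look like the following. The endpoint name and the `product_reviews` table are illustrative assumptions; substitute the endpoint and table names from your own workspace.

```sql
-- Sketch: batch-summarize rows with the model directly from SQL.
SELECT
  ai_query(
    'databricks-meta-llama-3-1-70b-instruct',
    'Summarize this review in one sentence: ' || review_text
  ) AS summary
FROM product_reviews;
```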

Community and Development:

Future versions of the model will be improved based on community feedback to enhance safety and performance. The model is designed to help enterprises build high-quality GenAI applications without sacrificing ownership and customization.
