Exploring the Latest Perplexity AI Model: Llama-3.1-Sonar-Small-128K-Online

In July 2024, Perplexity AI introduced Llama-3.1-Sonar-Small-128K-Online, part of its new Sonar model series. The series targets online, search-grounded question answering and ships in several configurations, including small and large variants.

Model Specifications

The model's name encodes its key specifications: it is built on Llama 3.1, supports a 128K-token context window, and the "Online" suffix indicates that responses are grounded in real-time web search. "Small" denotes the smaller of the available parameter sizes in the Sonar series, trading some capability for lower cost and latency.

How to Use the Model

To use this model, you will need a Perplexity AI API key. The following Python snippet queries the model through the LlamaIndex Perplexity integration (installed via pip install llama-index-llms-perplexity):

from llama_index.core.llms import ChatMessage
from llama_index.llms.perplexity import Perplexity

pplx_api_key = "your-perplexity-api-key"
llm = Perplexity(api_key=pplx_api_key, model="llama-3.1-sonar-small-128k-online")

messages_dict = [
    {"role": "system", "content": "Provide a concise summary of the given topic."},
    {"role": "user", "content": "Tell me about the latest Perplexity models."}
]

messages = [ChatMessage(**msg) for msg in messages_dict]
response = llm.chat(messages)
print(str(response))

This example demonstrates how to generate a concise summary using the model.
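If you prefer not to depend on LlamaIndex, the same request can be made against Perplexity's OpenAI-compatible chat/completions REST endpoint directly. The sketch below only assembles the request with the standard library; the API key is a placeholder, and the actual network call is left commented out.

```python
import json
import urllib.request

def build_request(api_key: str, model: str, messages: list[dict]) -> urllib.request.Request:
    """Assemble a POST request for Perplexity's chat/completions endpoint."""
    payload = {"model": model, "messages": messages}
    return urllib.request.Request(
        "https://api.perplexity.ai/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request(
    "your-perplexity-api-key",
    "llama-3.1-sonar-small-128k-online",
    [
        {"role": "system", "content": "Provide a concise summary of the given topic."},
        {"role": "user", "content": "Tell me about the latest Perplexity models."},
    ],
)
# with urllib.request.urlopen(req) as resp:  # uncomment with a real key
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI chat format, the official openai client library can also be pointed at it by overriding the base URL.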

Pricing Structure

The pricing model for Llama-3.1-Sonar-Small-128K-Online combines a fixed price per request with a variable fee based on the number of input and output tokens. This structure lets users estimate and manage costs based on their actual usage.
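The fixed-plus-variable structure can be sketched as a simple cost estimator. Note that the rates below are placeholders, not actual Perplexity prices; substitute the current figures from the official pricing page.

```python
def estimate_cost(
    n_requests: int,
    input_tokens: int,
    output_tokens: int,
    per_request: float = 0.005,        # hypothetical $ per request
    per_million_tokens: float = 0.20,  # hypothetical $ per 1M tokens
) -> float:
    """Total cost = fixed per-request fee + per-token fee on all tokens."""
    token_fee = (input_tokens + output_tokens) / 1_000_000 * per_million_tokens
    return n_requests * per_request + token_fee

# e.g. 1,000 requests averaging 500 input and 200 output tokens each:
cost = estimate_cost(1_000, 500_000, 200_000)
```

With these placeholder rates, the token fee is small relative to the per-request fee, which is why batching related questions into a single request can reduce cost for this pricing shape.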

Integration Capabilities

The model can be seamlessly integrated into various applications via the Perplexity API. Users have successfully implemented it in Retrieval-Augmented Generation (RAG) solutions, showcasing its versatility and efficiency.
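A common RAG pattern with a chat model is to stuff retrieved snippets into the system message before querying. The sketch below shows only that prompt-assembly step; the retrieval itself (vector search, reranking, etc.) is assumed to happen upstream, and the function name is illustrative.

```python
def build_rag_messages(snippets: list[str], question: str) -> list[dict]:
    """Assemble chat messages that ground the answer in retrieved context."""
    context = "\n\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return [
        {
            "role": "system",
            "content": "Answer using only the context below.\n\n" + context,
        },
        {"role": "user", "content": question},
    ]

messages = build_rag_messages(
    ["Perplexity released the Sonar model series in July 2024."],
    "When was the Sonar series released?",
)
# `messages` can then be passed to the chat call shown earlier.
```

Since the "Online" variant already performs its own web retrieval, this pattern is most useful when you need answers grounded in private documents the model cannot reach via search.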

For more detailed information, including current pricing and supported parameters, refer to the official Perplexity AI documentation.
