Introducing the Llama 3.2 1B Model on Fireworks AI

Fireworks AI is excited to announce the availability of the Llama 3.2 1B (text-only) model. This new language model is designed for a wide range of tasks, including retrieval, summarization, personal information management, multilingual knowledge retrieval, and rewriting.

Model Availability and Capabilities

The Llama 3.2 1B model is now accessible on Fireworks AI, giving developers a lightweight, efficient option for latency-sensitive workloads. With throughput of approximately 500 tokens per second, it is well suited to real-time applications.
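As a rough back-of-envelope illustration of what 500 tokens per second means in practice (the throughput figure comes from this post; the helper below is our own sketch and ignores network and prompt-processing latency):

```python
def estimated_generation_seconds(output_tokens: int, tokens_per_second: float = 500.0) -> float:
    """Rough decode time for a completion at a given throughput."""
    return output_tokens / tokens_per_second

# A 100-token reply at ~500 tok/s takes roughly 0.2 seconds of decode time.
print(estimated_generation_seconds(100))  # → 0.2
```

At that speed, even multi-hundred-token responses stream back in well under a second, which is what makes the model practical for interactive use.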

Fine-Tuning and Customization

Developers can fine-tune the Llama 3.2 1B model on Fireworks AI. Because Meta released it under an open, permissive license, it can be customized extensively to meet specific use-case requirements.
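Fine-tuning services of this kind typically consume chat-format JSONL training data, where each line is a standalone JSON object with a `messages` array mirroring the chat completions API. The sketch below shows that general shape with made-up example data; confirm the exact schema against Fireworks' fine-tuning documentation:

```python
import json

# Hypothetical training records in chat-style format; the exact field
# requirements should be verified against Fireworks' fine-tuning docs.
examples = [
    {"messages": [
        {"role": "user", "content": "Summarize: Llama 3.2 1B is a small text-only model."},
        {"role": "assistant", "content": "Llama 3.2 1B is a compact, text-only language model."},
    ]},
]

# Write one JSON object per line (JSONL).
with open("train.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")
```

Keeping each example as an independent line makes the dataset easy to stream, shuffle, and validate record by record.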

Multimodal Capabilities

Although the Llama 3.2 1B model is text-only, Fireworks AI also supports multimodal models like Llama 3.2 11B Vision and Llama 3.2 90B Vision. These models, which will be available for fine-tuning soon, extend capabilities to include image understanding and visual reasoning tasks such as image captioning, visual question answering, and document visual analysis.
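Requests to the vision models use the same chat format, with image content supplied alongside text. As a hedged sketch, the snippet below builds (but does not send) such a payload; the `image_url` content-part shape follows the common OpenAI-compatible convention, and the model identifier and field names here are assumptions to check against the Fireworks API docs:

```python
# Construct a chat payload that pairs a text question with an image reference.
payload = {
    "model": "accounts/fireworks/models/llama-v3p2-11b-vision-instruct",  # assumed model id
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
}

content_types = [part["type"] for part in payload["messages"][0]["content"]]
print(content_types)  # → ['text', 'image_url']
```

Mixing typed content parts in a single user message is what lets one request combine visual input with a textual instruction.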

Deployment and Pricing

Fireworks AI offers flexible deployment options, including serverless, on-demand, and enterprise reserved configurations. Pricing is competitive, with both text-only and multimodal models priced at $0.10 per 1M tokens. Images are billed as text tokens, typically around 6,400 tokens per image depending on resolution and model.
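Using the figures above ($0.10 per 1M tokens, roughly 6,400 tokens per image), a quick cost estimate can be sketched; the helper is ours, and actual per-image token counts vary with resolution and model:

```python
PRICE_PER_MILLION_TOKENS = 0.10  # USD, from the pricing above
TOKENS_PER_IMAGE = 6400          # typical figure quoted above; varies by resolution and model

def estimated_cost(text_tokens: int, images: int = 0) -> float:
    """Estimated request cost in USD, counting each image as TOKENS_PER_IMAGE text tokens."""
    total_tokens = text_tokens + images * TOKENS_PER_IMAGE
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# One image plus ~600 text tokens ≈ 7,000 tokens ≈ $0.0007.
print(estimated_cost(600, images=1))  # → 0.0007
```

Even image-heavy requests stay well under a cent at this rate, which keeps experimentation with the vision models inexpensive.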

Integration and Usage

Getting started is simple. Developers can sign up for an account on Fireworks AI, obtain an API key, and use the Fireworks AI Python package. Below is an example of how to instantiate the Fireworks client and use the chat completions API to call the Llama 3.2 model:

pip install --upgrade fireworks-ai

from fireworks.client import Fireworks

client = Fireworks(api_key="your_api_key")
response = client.chat.completions.create(
    model="accounts/fireworks/models/llama-v3p2-1b-instruct",
    messages=[{"role": "user", "content": "Hello, Llama!"}],
)
print(response.choices[0].message.content)

Tools and Safety

The Llama 3.2 models ship with tooling for building custom agents and new agentic behaviors. Fireworks AI prioritizes security and safety measures, continuing to champion openness in AI and to foster innovation and safety across the ecosystem.