Introducing Together AI's Latest LLM: together-ai-81.1b-110b
Together AI continues to push the boundaries of generative artificial intelligence with the introduction of its new large language model, the together-ai-81.1b-110b. This state-of-the-art model is designed to deliver high-quality, efficient, and cost-effective AI services across a variety of applications.
Key Features and Pricing
- Input and Output Price: $1.80 per 1M tokens
- Max Tokens: (not listed)
- Mode: chat
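At the listed rate of $1.80 per 1M tokens, per-request cost is straightforward to estimate. The sketch below assumes the same rate applies to both input and output tokens, as the pricing line suggests:

```python
# Cost estimate for together-ai-81.1b-110b at the listed rate of
# $1.80 per 1M tokens (assumed to apply to input and output alike).
PRICE_PER_MILLION_TOKENS = 1.80

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated request cost in US dollars."""
    total_tokens = input_tokens + output_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

# Example: a request with 2,000 prompt tokens and 500 completion tokens.
print(f"${estimate_cost(2_000, 500):.4f}")  # → $0.0045
```

For comparison shopping, the same function can be rerun with another model's per-million-token rate.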
Together AI Platform
Together AI specializes in providing comprehensive solutions for building, training, and running AI models. Their platform offers access to private data and dedicated GPU clusters, allowing for large-scale training and custom model development.
Model and Inference Support
The Together AI platform supports a wide variety of models, including leading families like LLaMA. They offer multiple inference endpoints, such as Together Reference, Together Turbo, and Together Lite, which deliver different balances of performance, quality, and cost.
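In practice, these endpoints are typically reached through an OpenAI-compatible chat completions API. The sketch below only builds the JSON request body; the URL, model name, and parameters are illustrative assumptions rather than confirmed details for this model:

```python
import json

# Hypothetical request body for an OpenAI-compatible chat completions
# endpoint; the model name and max_tokens value are placeholders.
def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build the JSON body for a single-turn chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

body = build_chat_request("together-ai-81.1b-110b", "Summarize FlashAttention-3.")
print(json.dumps(body, indent=2))
```

Switching between Reference, Turbo, and Lite tiers would then be a matter of changing the model identifier, not the request shape.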
Enterprise Solutions
With the Together Enterprise Platform, businesses can train, fine-tune, and run inference on any model in any environment, including virtual private clouds (VPC) and on-premise infrastructure. This platform provides 2-3x faster inference speeds, up to 50% lower operational costs, and continuous model optimization.
Inference Engine 2.0
The Together Inference Engine 2.0 includes new Turbo and Lite endpoints, capable of processing over 400 tokens per second on Meta LLaMA 3 8B. It incorporates advanced features such as FlashAttention-3, faster GEMM & MHA kernels, and speculative decoding, outperforming other commercial solutions.
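The quoted 400 tokens-per-second figure translates directly into wall-clock generation time. A back-of-the-envelope sketch, assuming sustained throughput at that rate:

```python
# Generation-time estimate from the quoted 400 tokens/second figure
# for Meta LLaMA 3 8B on the Turbo endpoint (assumed sustained rate,
# ignoring network latency and time-to-first-token).
THROUGHPUT_TOKENS_PER_SEC = 400

def generation_time_seconds(output_tokens: int) -> float:
    """Estimated wall-clock time to stream the given number of tokens."""
    return output_tokens / THROUGHPUT_TOKENS_PER_SEC

print(generation_time_seconds(1_000))  # → 2.5
```

By this estimate, a 1,000-token response streams in about 2.5 seconds, which is the kind of margin the Turbo endpoint claims over slower commercial alternatives.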
Conclusion
The together-ai-81.1b-110b model is a testament to Together AI's commitment to innovation and excellence in the field of generative AI. With competitive pricing and advanced features, it is poised to meet the diverse needs of businesses and developers alike.