Introducing Vertex AI's New LLM: Text-Bison-32k

Google Cloud's Vertex AI has recently unveiled its latest large language model (LLM), the Vertex AI PaLM 2 for Text 32k (text-bison-32k). This state-of-the-art model is a game-changer in the world of natural language processing, offering advanced capabilities for various language tasks such as classification, summarization, and extraction.

Model Specifications

The text-bison-32k model boasts impressive specifications, making it a powerful tool for developers and data scientists:

  • Maximum Tokens (input + output): 32,768 (a shared budget; see the sketch after this list)
  • Maximum Output Tokens: 8,192
  • Training Data: Up to August 2023
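
Because the 32,768-token limit covers the prompt and the response together, the room left for input shrinks as you request more output. A minimal back-of-the-envelope sketch, assuming the limit is shared exactly as the list above indicates (the constant names are purely illustrative):

TOTAL_TOKEN_LIMIT = 32768    # input + output combined
MAX_OUTPUT_TOKENS = 8192     # hard cap on the response

# If you request the full 8,192 output tokens, the prompt can use at most:
max_prompt_tokens = TOTAL_TOKEN_LIMIT - MAX_OUTPUT_TOKENS   # 24,576 tokens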

Usage and Importing

The text-bison-32k model is now available through the PaLM API and is designed to handle a wide range of natural language processing tasks. Because the model is currently in preview, it is imported from the preview namespace of the Python SDK:

from vertexai.preview.language_models import TextGenerationModel
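
For context, here is a minimal end-to-end sketch of calling the model with the preview SDK. The project ID and prompt are placeholders, and the generation parameters shown are just one reasonable choice, not a recommendation:

import vertexai
from vertexai.preview.language_models import TextGenerationModel

# Initialize the SDK with your own project and region (placeholders here).
vertexai.init(project="your-project-id", location="us-central1")

# Load the 32k-context text model and send a prompt.
model = TextGenerationModel.from_pretrained("text-bison-32k")
response = model.predict(
    "Summarize the key differences between supervised tuning and RLHF in two sentences.",
    max_output_tokens=1024,   # up to 8,192 for this model
    temperature=0.2,
)
print(response.text)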

Capabilities

The text-bison-32k model is fine-tuned to follow natural language instructions and supports supervised tuning, so you can adapt it to your own labeled examples. It does not, however, support reinforcement learning from human feedback (RLHF) tuning or distillation. This keeps it a straightforward yet powerful tool for your language processing needs.
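
As a rough illustration of what supervised tuning can look like with the preview SDK, the sketch below uses the tune_model method; the Cloud Storage path, step count, and regions are placeholder assumptions rather than recommendations, and the exact options accepted may differ while the model is in preview:

from vertexai.preview.language_models import TextGenerationModel

# Load the base model to tune.
model = TextGenerationModel.from_pretrained("text-bison-32k")

# Training data is a JSONL file of {"input_text": ..., "output_text": ...} records
# stored in Cloud Storage; the URI below is a placeholder.
model.tune_model(
    training_data="gs://your-bucket/tuning-data.jsonl",
    train_steps=100,
    tuning_job_location="europe-west4",
    tuned_model_location="us-central1",
)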

Conclusion

The Vertex AI PaLM 2 for Text 32k model is an exciting addition to the Vertex AI lineup, offering substantial enhancements in natural language processing capabilities. Whether you're working on classification, summarization, or extraction tasks, this model provides the tools you need to achieve high-quality results. For more detailed information, refer to the Vertex AI documentation and the specific model details provided in the PaLM API models section.
