Introducing Vertex AI's New LLM: Text-Bison-32k

Google Cloud's Vertex AI has unveiled a new addition to its PaLM family of models: the Vertex AI PaLM 2 for Text 32k (text-bison-32k). This model is designed to handle a variety of language tasks, including classification, summarization, and extraction, all while following natural language instructions with precision.

Key Features

  • Maximum Tokens: text-bison-32k supports up to 32,768 tokens combined across input and output, with output capped at 8,192 tokens, making it suited to tasks with long prompts or long responses (see the sketch after this list for how the two limits interact).
  • Training Data: the model was trained on data up to August 2023, so its knowledge reflects information available up to that cutoff.
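
Because the 32,768-token window is shared between the prompt and the response, a long prompt leaves less room for the answer. The short sketch below (plain arithmetic, not an SDK call) illustrates how the two limits interact:

CONTEXT_LIMIT = 32_768      # combined input + output budget
MAX_OUTPUT_TOKENS = 8_192   # hard cap on the response alone

def available_output_tokens(input_tokens: int) -> int:
    """Largest response the model can return for a prompt of this size."""
    return max(0, min(MAX_OUTPUT_TOKENS, CONTEXT_LIMIT - input_tokens))

print(available_output_tokens(10_000))  # 8192 -> the output cap is the binding limit
print(available_output_tokens(30_000))  # 2768 -> a long prompt squeezes the response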

Tuning and Usage

The model supports supervised tuning, allowing you to fine-tune it for specific tasks. However, it does not support reinforcement learning from human feedback (RLHF) or distillation at this stage.
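
As a rough sketch of what a supervised tuning call can look like with the Python SDK (the project ID, regions, and Cloud Storage path below are placeholders, and the exact tune_model arguments may differ between SDK versions):

import vertexai
from vertexai.preview.language_models import TextGenerationModel

# Placeholders: substitute your own project, regions, and training dataset.
vertexai.init(project="your-project-id", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison-32k")

# The training data is a JSONL file in Cloud Storage; each line holds an
# "input_text" / "output_text" pair describing the desired behavior.
tuning_job = model.tune_model(
    training_data="gs://your-bucket/tuning_data.jsonl",
    train_steps=100,
    tuning_job_location="europe-west4",
    tuned_model_location="us-central1",
    model_display_name="text-bison-32k-tuned",
)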

To use this model via the Python SDK, load it through the TextGenerationModel class in the vertexai.preview.language_models module:

from vertexai.preview.language_models import TextGenerationModel
model = TextGenerationModel.from_pretrained("text-bison-32k")

Importing from the preview namespace (rather than vertexai.language_models) is crucial to avoid errors related to the model's preview status.
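
Putting it together, a minimal prediction call might look like the sketch below; the project ID, region, and prompt are placeholders, and the parameter values are only illustrative:

import vertexai
from vertexai.preview.language_models import TextGenerationModel

# Placeholders: substitute your own project ID and a supported region.
vertexai.init(project="your-project-id", location="us-central1")

model = TextGenerationModel.from_pretrained("text-bison-32k")
response = model.predict(
    "Classify the sentiment of this review as positive or negative: "
    "'The battery life is excellent, but the screen scratches easily.'",
    max_output_tokens=256,   # can go up to 8,192 for long-form answers
    temperature=0.2,
)
print(response.text)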

Important Considerations

The model is currently in preview, which can mean limited stability and support across environments. It is not classified as a legacy model, but it belongs to a model family that could eventually be retired in favor of newer, stable releases.

Conclusion

Vertex AI's text-bison-32k offers a robust solution for various language tasks, backed by the latest training data and capable of handling large token counts. For more detailed guidance, refer to the official Vertex AI documentation and tutorials.
