Unlock the Power of Text Generation with Vertex AI's Text-Bison Model
Vertex AI's Generative AI offering includes text-bison, a foundation model optimized for a variety of text generation tasks. It can be fine-tuned for specific needs using Parameter-Efficient Fine-Tuning (PEFT) techniques, making it a versatile addition to your AI toolkit.
Training and Fine-Tuning
Text-Bison is fine-tuned with PEFT techniques, which add small adapter layers to the model rather than updating all of its weights. This enables task-specific tuning without rebuilding the entire model; for instance, a 300-epoch fine-tuning run on 8 A100 80GB GPUs completes in approximately 40 minutes.
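The adapter idea behind PEFT can be illustrated with a minimal sketch. The sizes, zero-initialization, and bottleneck structure below are illustrative assumptions (a generic bottleneck adapter, not Text-Bison's actual internals): a small down-projection and up-projection are inserted around a frozen layer, and only those small matrices are trained.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN = 1024    # hypothetical transformer hidden size
BOTTLENECK = 16  # small adapter bottleneck dimension

# Stand-in for a frozen base layer's activations: a batch of 4 token vectors.
x = rng.standard_normal((4, HIDDEN))

# Adapter weights: down-project, nonlinearity, up-project, residual add.
W_down = rng.standard_normal((HIDDEN, BOTTLENECK)) * 0.01
W_up = np.zeros((BOTTLENECK, HIDDEN))  # zero-init: adapter starts as identity

def adapter(h):
    # Residual connection keeps the frozen model's behavior as the baseline.
    return h + np.maximum(h @ W_down, 0.0) @ W_up

out = adapter(x)

# Only these parameters would be trained; the base model stays frozen.
adapter_params = W_down.size + W_up.size  # 32,768, vs ~1M for one dense layer
```

Because the up-projection is zero-initialized, the adapter is a no-op before training, so tuning starts from the base model's behavior and only gradually specializes it.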
Resource Usage
Tuning jobs for Text-Bison can run on either 8 A100 80GB GPUs in us-central1 or a 64-core TPU v3 pod in europe-west4 (available upon request). Estimated costs are around $40.22/hour for the 8 GPUs and $64/hour for the 64-core TPU v3.
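Putting the numbers above together, a rough back-of-the-envelope estimate for the 40-minute GPU tuning run mentioned earlier:

```python
# Rough cost estimate for the ~40-minute, 8x A100 tuning job described above.
GPU_RATE_PER_HOUR = 40.22  # estimated rate for 8 A100 80GB GPUs in us-central1
JOB_HOURS = 40 / 60        # ~40 minutes

gpu_cost = GPU_RATE_PER_HOUR * JOB_HOURS
print(f"${gpu_cost:.2f}")  # roughly $26.81 per tuning run
```

Actual billing depends on current pricing and exact job duration, so treat this as an order-of-magnitude estimate only.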
Model Specifications
Text-Bison is part of the broader set of models available in Vertex AI's Generative AI. These models typically have adjustable parameters such as maximum input and output tokens, allowing for flexible deployment based on your requirements.
Deployment
Once fine-tuned, the adapter weights are uploaded to a bucket and deployed to an endpoint in the customer's project. Importantly, only the adapter weights are loaded at runtime, not the entire model, ensuring efficient resource utilization.
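The efficiency of shipping only adapter weights can be sketched as follows. The weight names, sizes, and in-memory "bucket" below are illustrative assumptions, not the actual artifact format Vertex AI uses:

```python
import io
import numpy as np

rng = np.random.default_rng(0)
HIDDEN, BOTTLENECK = 1024, 16  # hypothetical sizes

# The tuned adapter weights are the only per-task artifact; the frozen base
# model is shared by the service across all tuned variants.
adapter_weights = {
    "W_down": rng.standard_normal((HIDDEN, BOTTLENECK)).astype(np.float32),
    "W_up": rng.standard_normal((BOTTLENECK, HIDDEN)).astype(np.float32),
}

# In-memory stand-in for uploading to / downloading from a storage bucket.
buf = io.BytesIO()
np.savez(buf, **adapter_weights)
buf.seek(0)
restored = {name: arr for name, arr in np.load(buf).items()}

# Total artifact size: kilobytes, versus gigabytes for the full base model.
adapter_bytes = sum(arr.nbytes for arr in restored.values())
```

Loading a few hundred kilobytes of adapter weights per endpoint, instead of a full copy of the base model, is what keeps serving many tuned variants cheap.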
Usage Examples
To use Text-Bison, send POST requests to the model endpoint with parameters such as temperature, max output tokens, and top-k/top-p values; this lets you tailor text generation to your specific needs.
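A minimal sketch of such a request body is shown below. The project and location values are hypothetical placeholders, and the field names follow the Vertex AI REST predict API as documented at the time of writing; verify them against the current docs before use.

```python
import json

PROJECT, LOCATION = "my-project", "us-central1"  # hypothetical values
endpoint = (
    f"https://{LOCATION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{LOCATION}/publishers/google/models/text-bison:predict"
)

body = {
    "instances": [{"prompt": "Write a short product description for a mug."}],
    "parameters": {
        "temperature": 0.2,      # lower values give more deterministic output
        "maxOutputTokens": 256,  # cap on the number of generated tokens
        "topK": 40,              # sample only from the 40 most likely tokens
        "topP": 0.95,            # nucleus-sampling probability threshold
    },
}
payload = json.dumps(body)
```

The `payload` would then be POSTed to `endpoint` with an OAuth 2.0 bearer token in the `Authorization` header, for example via `requests.post(endpoint, json=body, headers=...)`.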