Getting Started with the `code-bison@002` Model in Vertex AI
The `code-bison@002` model, part of the Codey family of models in Vertex AI, is a pre-trained large language model for code generation. Released on December 6, 2023, and supported until October 9, 2024, it generates code from natural-language prompts.
How to Use the `code-bison@002` Model
Using the `code-bison@002` model is straightforward: send a POST request to the Vertex AI API with the appropriate request body. Here's an example:
```json
{
  "instances": [
    {
      "prefix": "Write a function that checks if a year is a leap year."
    }
  ],
  "parameters": {
    "temperature": 0.5,
    "maxOutputTokens": 256,
    "candidateCount": 1
  }
}
```
You can send this JSON via `curl` or any other HTTP client to get the generated code.
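As a minimal sketch, the same request body can be assembled in Python and serialized before you post it with `curl` or any HTTP library. The endpoint URL below follows the standard Vertex AI `predict` URL pattern; `PROJECT_ID` and the region are placeholders you would substitute for your own project:

```python
import json

# Request body mirroring the JSON example above.
payload = {
    "instances": [
        {"prefix": "Write a function that checks if a year is a leap year."}
    ],
    "parameters": {
        "temperature": 0.5,
        "maxOutputTokens": 256,
        "candidateCount": 1,
    },
}

# Placeholder endpoint; substitute your project ID and region.
ENDPOINT = (
    "https://us-central1-aiplatform.googleapis.com/v1/projects/"
    "PROJECT_ID/locations/us-central1/publishers/google/models/"
    "code-bison@002:predict"
)

# Serialize the body for posting (e.g. with curl's -d flag or requests).
body = json.dumps(payload)
print(body)
```

An actual request would also need an OAuth access token (for example from `gcloud auth print-access-token`) in the `Authorization` header.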
Fine-Tuning the `code-bison@002` Model
For more specialized tasks, you can fine-tune the `code-bison@002` model on your own dataset using the Vertex AI SDK for Python. Here's sample code for fine-tuning the model:
```python
from vertexai.language_models import CodeGenerationModel

# Cloud Storage URI of the tuning dataset (JSON Lines format).
training_data = "gs://training/sample_data.jsonl"

model = CodeGenerationModel.from_pretrained("code-bison@002")
model = model.tune_model(
    training_data=training_data,
    train_steps=100,
    tuning_job_location="<LOCATION>",
    tuned_model_location="<LOCATION>",
    model_display_name="custom-code-gen-model",
    accelerator_type="GPU",
)
```
This process involves specifying the training data, the number of training steps, and other parameters.
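The training data referenced above (`gs://training/sample_data.jsonl`) is a JSON Lines file. As a sketch, assuming the supervised-tuning format of one `input_text`/`output_text` pair per line, such a file could be produced locally before uploading it to Cloud Storage:

```python
import json

# Hypothetical tuning examples: each line pairs a prompt with the code
# the tuned model should produce (field names assume the supervised
# tuning dataset format).
examples = [
    {
        "input_text": "Write a function that checks if a year is a leap year.",
        "output_text": (
            "def is_leap_year(year):\n"
            "    return year % 4 == 0 and "
            "(year % 100 != 0 or year % 400 == 0)"
        ),
    },
]

# Write one JSON object per line, then upload the file to Cloud Storage.
with open("sample_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

A real dataset would of course contain many such pairs; 100 training steps over a single example would not produce a useful model.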
Model Lifecycle
Always refer to the model documentation for the latest availability and lifecycle information; `code-bison@002` has a fixed discontinuation date, and newer models may offer improved performance. Techniques such as LoRA can also make fine-tuning more efficient.
Accessing the Model
The `code-bison@002` model can be accessed and fine-tuned using the Vertex AI SDK for Python, as well as through HTTP requests using tools like `curl` or PowerShell.
This guide should help you get started with using and fine-tuning the `code-bison@002` model in Vertex AI.