airflow.providers.google.cloud.hooks.vertex_ai.generative_model

This module contains a Google Cloud Vertex AI Generative Model hook.

Classes

GenerativeModelHook

Hook for Google Cloud Vertex AI Generative Model APIs.

Module Contents

class airflow.providers.google.cloud.hooks.vertex_ai.generative_model.GenerativeModelHook(gcp_conn_id='google_cloud_default', impersonation_chain=None, **kwargs)[source]

Bases: airflow.providers.google.common.hooks.base_google.GoogleBaseHook

Hook for Google Cloud Vertex AI Generative Model APIs.

get_text_embedding_model(pretrained_model)[source]

Return a Model Garden model object for text embedding.

get_generative_model(pretrained_model, system_instruction=None, generation_config=None, safety_settings=None, tools=None)[source]

Return a Generative Model object.
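As a sketch of how the optional arguments are shaped, `generation_config` and `safety_settings` are plain dictionaries passed through to the Vertex AI SDK. The keys follow the SDK's `GenerationConfig` fields; the specific values, the string-based safety settings, and the model name in the commented call are illustrative assumptions, not defaults of this hook:

```python
# Illustrative generation parameters; keys follow the Vertex AI
# GenerationConfig fields, values here are assumptions.
generation_config = {
    "temperature": 0.2,        # lower values make output more deterministic
    "max_output_tokens": 256,  # cap on the length of the generated response
    "top_p": 0.95,             # nucleus-sampling probability mass
}

# Safety settings map harm categories to blocking thresholds
# (string form shown for illustration).
safety_settings = {
    "HARM_CATEGORY_HARASSMENT": "BLOCK_MEDIUM_AND_ABOVE",
    "HARM_CATEGORY_HATE_SPEECH": "BLOCK_MEDIUM_AND_ABOVE",
}

# The hook call itself needs Google Cloud credentials, so it is only
# sketched here:
# hook = GenerativeModelHook(gcp_conn_id="google_cloud_default")
# model = hook.get_generative_model(
#     pretrained_model="gemini-1.5-pro",  # assumed model name
#     generation_config=generation_config,
#     safety_settings=safety_settings,
# )
```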

get_eval_task(dataset, metrics, experiment)[source]

Return an EvalTask object.

get_cached_context_model(cached_content_name)[source]

Return a Generative Model that uses the given cached context.

run_evaluation(pretrained_model, eval_dataset, metrics, experiment_name, experiment_run_name, prompt_template, location, generation_config=None, safety_settings=None, system_instruction=None, tools=None, project_id=PROVIDE_PROJECT_ID)[source]

Use the Rapid Evaluation API to evaluate a model.

Parameters:
  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.

  • location (str) – Required. The ID of the Google Cloud location that the service belongs to.

  • pretrained_model (str) – Required. A pre-trained model optimized for performing natural language tasks such as classification, summarization, extraction, content creation, and ideation.

  • eval_dataset (dict) – Required. A fixed dataset to evaluate the model against. Must adhere to the Rapid Evaluation API format.

  • metrics (list) – Required. A list of evaluation metrics to be used in the experiment. Must adhere to the Rapid Evaluation API format.

  • experiment_name (str) – Required. The name of the evaluation experiment.

  • experiment_run_name (str) – Required. The specific run name or ID for this experiment.

  • prompt_template (str) – Required. The template used to format the model’s prompts during evaluation. Must adhere to the Rapid Evaluation API format.

  • generation_config (dict | None) – Optional. A dictionary containing generation parameters for the model.

  • safety_settings (dict | None) – Optional. A dictionary specifying harm category thresholds for blocking model outputs.

  • system_instruction (str | None) – Optional. An instruction given to the model to guide its behavior.

  • tools (list | None) – Optional. A list of tools available to the model during evaluation, such as a data store.
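The dataset and metric parameters above can be sketched as plain Python structures. The column names, metric identifiers, model name, and project values below are illustrative assumptions about the Rapid Evaluation API inputs, not values prescribed by this hook:

```python
# Illustrative eval_dataset: each key is a column, each value is one
# entry per evaluation example (column names are assumptions).
eval_dataset = {
    "context": ["Article text one...", "Article text two..."],
    "instruction": ["Summarize the article."] * 2,
    "reference": ["Gold summary one.", "Gold summary two."],
}

metrics = ["rouge_l_sum", "fluency"]  # assumed metric identifiers

# The prompt template interpolates dataset columns into each prompt.
prompt_template = "{instruction}\n\n{context}"

# The evaluation call needs Google Cloud credentials, so it is only
# sketched here:
# hook.run_evaluation(
#     pretrained_model="gemini-1.5-pro",  # assumed model name
#     eval_dataset=eval_dataset,
#     metrics=metrics,
#     experiment_name="summarization-eval",
#     experiment_run_name="run-001",
#     prompt_template=prompt_template,
#     location="us-central1",
#     project_id="my-project",
# )

# Sanity check: every dataset column has one entry per example.
column_lengths = {len(values) for values in eval_dataset.values()}
assert len(column_lengths) == 1
```

Keeping the dataset columnar (one list per field) makes the sanity check above trivial and mirrors how the prompt template references columns by name.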
