airflow.providers.google.cloud.hooks.vertex_ai.generative_model

This module contains a Google Cloud Vertex AI Generative Model hook.

Module Contents

Classes

GenerativeModelHook

Hook for Google Cloud Vertex AI Generative Model APIs.

class airflow.providers.google.cloud.hooks.vertex_ai.generative_model.GenerativeModelHook(gcp_conn_id='google_cloud_default', impersonation_chain=None, **kwargs)[source]

Bases: airflow.providers.google.common.hooks.base_google.GoogleBaseHook

Hook for Google Cloud Vertex AI Generative Model APIs.
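A minimal sketch of instantiating the hook, assuming a configured "google_cloud_default" Airflow connection (the default gcp_conn_id); project and location are supplied per call to the individual methods:

    from airflow.providers.google.cloud.hooks.vertex_ai.generative_model import (
        GenerativeModelHook,
    )

    # Uses the default Google Cloud connection; pass impersonation_chain to act
    # as a different service account than the connection's own credentials.
    hook = GenerativeModelHook(gcp_conn_id="google_cloud_default")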

get_text_generation_model(pretrained_model)[source]

Return a Model Garden Model object based on Text Generation.

get_text_embedding_model(pretrained_model)[source]

Return a Model Garden Model object based on Text Embedding.

get_generative_model(pretrained_model)[source]

Return a Generative Model object.

get_generative_model_part(content_gcs_path, content_mime_type=None)[source]

Return a Generative Model Part object.
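A minimal sketch of the lower-level getters, continuing with the hook instance from the example above; the model name and GCS path are placeholder values:

    # Build a Gemini model object and a media Part from a file stored in GCS.
    model = hook.get_generative_model(pretrained_model="gemini-pro-vision")
    part = hook.get_generative_model_part(
        content_gcs_path="gs://my-bucket/images/diagram.png",  # placeholder path
        content_mime_type="image/png",
    )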

prompt_language_model(prompt, pretrained_model, temperature, max_output_tokens, top_p, top_k, location, project_id=PROVIDE_PROJECT_ID)[source]

Use the Vertex AI PaLM API to generate natural language text.

Parameters
  • prompt (str) – Required. Inputs or queries that a user or a program gives to the Vertex AI PaLM API, in order to elicit a specific response.

  • pretrained_model (str) – A pre-trained model optimized for performing natural language tasks such as classification, summarization, extraction, content creation, and ideation.

  • temperature (float) – Temperature controls the degree of randomness in token selection.

  • max_output_tokens (int) – Token limit determines the maximum amount of text output.

  • top_p (float) – Tokens are selected from most probable to least until the sum of their probabilities equals the top_p value. Defaults to 0.8.

  • top_k (int) – A top_k of 1 means the selected token is the most probable among all tokens.

  • location (str) – Required. The ID of the Google Cloud location that the service belongs to.

  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.
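A minimal sketch of a call to prompt_language_model, continuing with the hook instance from the example above; the model name, project ID, and sampling values are placeholders to adapt to your environment:

    # "text-bison" is an example PaLM text model name; adjust to the model you use.
    response = hook.prompt_language_model(
        prompt="Summarize the benefits of workflow orchestration.",
        pretrained_model="text-bison",
        temperature=0.2,
        max_output_tokens=256,
        top_p=0.8,
        top_k=40,
        location="us-central1",
        project_id="my-gcp-project",  # placeholder project ID
    )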

generate_text_embeddings(prompt, pretrained_model, location, project_id=PROVIDE_PROJECT_ID)[source]

Use the Vertex AI PaLM API to generate text embeddings.

Parameters
  • prompt (str) – Required. Inputs or queries that a user or a program gives to the Vertex AI PaLM API, in order to elicit a specific response.

  • pretrained_model (str) – A pre-trained model optimized for generating text embeddings.

  • location (str) – Required. The ID of the Google Cloud location that the service belongs to.

  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.
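A minimal sketch of a call to generate_text_embeddings, continuing with the same hook instance; the embedding model name and project ID are placeholder values:

    embeddings = hook.generate_text_embeddings(
        prompt="Airflow orchestrates data pipelines.",
        pretrained_model="textembedding-gecko",  # example embedding model name
        location="us-central1",
        project_id="my-gcp-project",
    )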

prompt_multimodal_model(prompt, location, pretrained_model='gemini-pro', project_id=PROVIDE_PROJECT_ID)[source]

Use the Vertex AI Gemini Pro foundation model to generate natural language text.

Parameters
  • prompt (str) – Required. Inputs or queries that a user or a program gives to the Multi-modal model, in order to elicit a specific response.

  • pretrained_model (str) – By default uses the pre-trained model gemini-pro, supporting prompts with text-only input, including natural language tasks, multi-turn text and code chat, and code generation. It can output text and code.

  • location (str) – Required. The ID of the Google Cloud location that the service belongs to.

  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.
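A minimal sketch of a text-only Gemini call via prompt_multimodal_model, continuing with the same hook instance; the project ID is a placeholder:

    answer = hook.prompt_multimodal_model(
        prompt="Explain the difference between a DAG and a task in Airflow.",
        location="us-central1",
        pretrained_model="gemini-pro",
        project_id="my-gcp-project",
    )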

prompt_multimodal_model_with_media(prompt, location, media_gcs_path, mime_type, pretrained_model='gemini-pro-vision', project_id=PROVIDE_PROJECT_ID)[source]

Use the Vertex AI Gemini Pro Vision foundation model to generate natural language text from a prompt that includes a media file.

Parameters
  • prompt (str) – Required. Inputs or queries that a user or a program gives to the Multi-modal model, in order to elicit a specific response.

  • pretrained_model (str) – By default uses the pre-trained model gemini-pro-vision, supporting prompts that combine text with image or video input. It can output text and code.

  • media_gcs_path (str) – A GCS path to a content file such as an image or a video. Can be passed to the multi-modal model as part of the prompt. Used with vision models.

  • mime_type (str) – The MIME type of the media file referenced by media_gcs_path.

  • location (str) – Required. The ID of the Google Cloud location that the service belongs to.

  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.
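A minimal sketch of a multimodal call with a media file via prompt_multimodal_model_with_media, continuing with the same hook instance; the bucket path and project ID are placeholders:

    description = hook.prompt_multimodal_model_with_media(
        prompt="Describe what is shown in this image.",
        location="us-central1",
        media_gcs_path="gs://my-bucket/images/architecture.png",  # placeholder path
        mime_type="image/png",
        pretrained_model="gemini-pro-vision",
        project_id="my-gcp-project",
    )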
