airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job

This module contains Google Vertex AI operators.

Module Contents

Classes

CreateHyperparameterTuningJobOperator

Create Hyperparameter Tuning job.

GetHyperparameterTuningJobOperator

Gets a HyperparameterTuningJob.

DeleteHyperparameterTuningJobOperator

Deletes a HyperparameterTuningJob.

ListHyperparameterTuningJobOperator

Lists HyperparameterTuningJobs in a Location.

class airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job.CreateHyperparameterTuningJobOperator(*, project_id, region, display_name, metric_spec, parameter_spec, max_trial_count, parallel_trial_count, worker_pool_specs, base_output_dir=None, custom_job_labels=None, custom_job_encryption_spec_key_name=None, staging_bucket=None, max_failed_trial_count=0, search_algorithm=None, measurement_selection='best', hyperparameter_tuning_job_labels=None, hyperparameter_tuning_job_encryption_spec_key_name=None, service_account=None, network=None, timeout=None, restart_job_on_worker_restart=False, enable_web_access=False, tensorboard=None, sync=True, gcp_conn_id='google_cloud_default', impersonation_chain=None, deferrable=conf.getboolean('operators', 'default_deferrable', fallback=False), poll_interval=10, **kwargs)[source]

Bases: airflow.providers.google.cloud.operators.cloud_base.GoogleCloudBaseOperator

Create Hyperparameter Tuning job.

Parameters
  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.

  • region (str) – Required. The ID of the Google Cloud region that the service belongs to.

  • display_name (str) – Required. The user-defined name of the HyperparameterTuningJob. The name can be up to 128 characters long and can consist of any UTF-8 characters.

  • metric_spec (dict[str, str]) – Required. Dictionary representing metrics to optimize. The dictionary key is the metric_id, which is reported by your training job, and the dictionary value is the optimization goal of the metric ('minimize' or 'maximize'). Example: metric_spec = {'loss': 'minimize', 'accuracy': 'maximize'}.

  • parameter_spec (dict[str, google.cloud.aiplatform.hyperparameter_tuning._ParameterSpec]) – Required. Dictionary representing parameters to optimize. The dictionary key is the parameter name, which is passed into your training job as a command-line keyword argument, and the dictionary value is the parameter specification for that parameter (see the usage sketch after this parameter list).

  • max_trial_count (int) – Required. The desired total number of Trials.

  • parallel_trial_count (int) – Required. The desired number of Trials to run in parallel.

  • worker_pool_specs (list[dict] | list[google.cloud.aiplatform.gapic.WorkerPoolSpec]) – Required. The spec of the worker pools including machine type and Docker image. Can be provided as a list of dictionaries or list of WorkerPoolSpec proto messages.

  • base_output_dir (str | None) – Optional. GCS output directory of the job. If not provided, a timestamped directory in the staging directory will be used.

  • custom_job_labels (dict[str, str] | None) – Optional. The labels with user-defined metadata to organize CustomJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

  • custom_job_encryption_spec_key_name (str | None) – Optional. Customer-managed encryption key name for a CustomJob. If this is set, then all resources created by the CustomJob will be encrypted with the provided encryption key.

  • staging_bucket (str | None) – Optional. Bucket for produced custom job artifacts. Overrides staging_bucket set in aiplatform.init.

  • max_failed_trial_count (int) – Optional. The number of failed Trials that need to be seen before failing the HyperparameterTuningJob. If set to 0, Vertex AI decides how many Trials must fail before the whole job fails.

  • search_algorithm (str | None) –

    The search algorithm specified for the Study. Accepts one of the following:

    None - If you do not specify an algorithm, your job uses the default Vertex AI algorithm. The default algorithm applies Bayesian optimization to arrive at the optimal solution with a more effective search over the parameter space.

    'grid' - A simple grid search within the feasible space. This option is particularly useful if you want to specify a quantity of trials that is greater than the number of points in the feasible space. In such cases, if you do not specify a grid search, the Vertex AI default algorithm may generate duplicate suggestions. To use grid search, all parameter specs must be of type IntegerParameterSpec, CategoricalParameterSpec, or DiscreteParameterSpec.

    'random' - A simple random search within the feasible space.

  • measurement_selection (str | None) – This indicates which measurement to use if/when the service automatically selects the final measurement from previously reported intermediate measurements. Accepts: 'best', 'last'. Choose this based on two considerations: A) Do you expect your measurements to monotonically improve? If so, choose 'last'. On the other hand, if you're in a situation where your system can "over-train" and you expect the performance to get better for a while but then start declining, choose 'best'. B) Are your measurements significantly noisy and/or irreproducible? If so, 'best' will tend to be over-optimistic, and it may be better to choose 'last'. If both or neither of (A) and (B) apply, it doesn't matter which selection type is chosen.

  • hyperparameter_tuning_job_labels (dict[str, str] | None) – Optional. The labels with user-defined metadata to organize HyperparameterTuningJobs. Label keys and values can be no longer than 64 characters (Unicode codepoints), can only contain lowercase letters, numeric characters, underscores and dashes. International characters are allowed. See https://goo.gl/xmQnxf for more information and examples of labels.

  • hyperparameter_tuning_job_encryption_spec_key_name (str | None) – Optional. Customer-managed encryption key options for a HyperparameterTuningJob. If this is set, then all resources created by the HyperparameterTuningJob will be encrypted with the provided encryption key.

  • service_account (str | None) – Optional. Specifies the service account for workload run-as account. Users submitting jobs must have act-as permission on this run-as account.

  • network (str | None) – Optional. The full name of the Compute Engine network to which the job should be peered. For example, projects/12345/global/networks/myVPC. Private services access must already be configured for the network. If left unspecified, the job is not peered with any network.

  • timeout (int | None) – The maximum job running time in seconds. The default is 7 days.

  • restart_job_on_worker_restart (bool) – Restarts the entire CustomJob if a worker gets restarted. This feature can be used by distributed training jobs that are not resilient to workers leaving and joining a job.

  • enable_web_access (bool) – Whether you want Vertex AI to enable interactive shell access to training containers. https://cloud.google.com/vertex-ai/docs/training/monitor-debug-interactive-shell

  • tensorboard (str | None) – Optional. The name of a Vertex AI [Tensorboard][google.cloud.aiplatform.v1beta1.Tensorboard] resource to which this CustomJob will upload TensorBoard logs. Format: projects/{project}/locations/{location}/tensorboards/{tensorboard} The training script should write TensorBoard logs to the path given by the AIP_TENSORBOARD_LOG_DIR Vertex AI environment variable. service_account is required when tensorboard is provided. For more information on configuring your service account please visit: https://cloud.google.com/vertex-ai/docs/experiments/tensorboard-training

  • sync (bool) – (Deprecated) Whether to execute this method synchronously. If False, this method will unblock, and it will be executed in a concurrent Future.

  • gcp_conn_id (str) – The connection ID to use connecting to Google Cloud.

  • impersonation_chain (str | Sequence[str] | None) – Optional service account to impersonate using short-term credentials, or chained list of accounts required to get the access_token of the last account in the list, which will be impersonated in the request. If set as a string, the account must grant the originating account the Service Account Token Creator IAM role. If set as a sequence, the identities from the list must grant Service Account Token Creator IAM role to the directly preceding identity, with first account from the list granting this role to the originating account (templated).

  • deferrable (bool) – Run operator in the deferrable mode.

  • poll_interval (int) – Interval size which defines how often job status is checked in deferrable mode.
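
For orientation, the following is a minimal usage sketch of this operator, assuming Airflow 2.4+ and the google-cloud-aiplatform client library. The DAG id, project, region, bucket, container image, and hyperparameter names are placeholders, and the parameter_spec values are built with the google.cloud.aiplatform.hyperparameter_tuning helpers; adapt all of them to your own training setup.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job import (
    CreateHyperparameterTuningJobOperator,
)
from google.cloud.aiplatform import hyperparameter_tuning as hpt

with DAG(
    dag_id="example_vertex_ai_hp_tuning",  # placeholder DAG id
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
):
    create_hp_tuning_job = CreateHyperparameterTuningJobOperator(
        task_id="create_hp_tuning_job",
        project_id="my-project",                  # placeholder project
        region="us-central1",                     # placeholder region
        display_name="example-hp-tuning-job",
        staging_bucket="gs://my-staging-bucket",  # placeholder bucket
        # Keys must match the metric IDs reported by the training code.
        metric_spec={"loss": "minimize"},
        # Keys are passed to the training container as command-line arguments.
        parameter_spec={
            "learning_rate": hpt.DoubleParameterSpec(min=1e-4, max=1e-1, scale="log"),
            "batch_size": hpt.DiscreteParameterSpec(values=[16, 32, 64], scale="linear"),
        },
        max_trial_count=15,
        parallel_trial_count=3,
        # A single worker pool running a custom training container (placeholder image).
        worker_pool_specs=[
            {
                "machine_spec": {"machine_type": "n1-standard-4"},
                "replica_count": 1,
                "container_spec": {"image_uri": "gcr.io/my-project/trainer:latest"},
            }
        ],
    )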

template_fields = ['region', 'project_id', 'impersonation_chain'][source]
execute(context)[source]

This is the main method to derive when creating an operator.

Context is the same dictionary used when rendering Jinja templates.

Refer to get_template_context for more context.

on_kill()[source]

Act as a callback called when the operator is killed; cancel any running job.

execute_complete(context, event)[source]
class airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job.GetHyperparameterTuningJobOperator(*, region, project_id, hyperparameter_tuning_job_id, retry=DEFAULT, timeout=None, metadata=(), gcp_conn_id='google_cloud_default', impersonation_chain=None, **kwargs)[source]

Bases: airflow.providers.google.cloud.operators.cloud_base.GoogleCloudBaseOperator

Gets a HyperparameterTuningJob.

Parameters
  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.

  • region (str) – Required. The ID of the Google Cloud region that the service belongs to.

  • hyperparameter_tuning_job_id (str) – Required. The name of the HyperparameterTuningJob resource.

  • retry (google.api_core.retry.Retry | google.api_core.gapic_v1.method._MethodDefault) – Designation of what errors, if any, should be retried.

  • timeout (float | None) – The timeout for this request.

  • metadata (Sequence[tuple[str, str]]) – Strings which should be sent along with the request as metadata.

  • gcp_conn_id (str) – The connection ID to use connecting to Google Cloud.

  • impersonation_chain (str | Sequence[str] | None) – Optional service account to impersonate using short-term credentials, or chained list of accounts required to get the access_token of the last account in the list, which will be impersonated in the request. If set as a string, the account must grant the originating account the Service Account Token Creator IAM role. If set as a sequence, the identities from the list must grant Service Account Token Creator IAM role to the directly preceding identity, with first account from the list granting this role to the originating account (templated).
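
A minimal usage sketch; the DAG wrapper, project, region, and numeric job ID are placeholders. In practice the job ID would typically come from the output of a preceding CreateHyperparameterTuningJobOperator task.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job import (
    GetHyperparameterTuningJobOperator,
)

with DAG(dag_id="example_get_hp_tuning_job", start_date=datetime(2024, 1, 1), schedule=None, catchup=False):
    get_hp_tuning_job = GetHyperparameterTuningJobOperator(
        task_id="get_hp_tuning_job",
        project_id="my-project",                             # placeholder project
        region="us-central1",                                # placeholder region
        hyperparameter_tuning_job_id="1234567890123456789",  # placeholder job ID
    )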

template_fields = ('region', 'hyperparameter_tuning_job_id', 'project_id', 'impersonation_chain')[source]
execute(context)[source]

This is the main method to derive when creating an operator.

Context is the same dictionary used when rendering Jinja templates.

Refer to get_template_context for more context.

class airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job.DeleteHyperparameterTuningJobOperator(*, hyperparameter_tuning_job_id, region, project_id, retry=DEFAULT, timeout=None, metadata=(), gcp_conn_id='google_cloud_default', impersonation_chain=None, **kwargs)[source]

Bases: airflow.providers.google.cloud.operators.cloud_base.GoogleCloudBaseOperator

Deletes a HyperparameterTuningJob.

Parameters
  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.

  • region (str) – Required. The ID of the Google Cloud region that the service belongs to.

  • hyperparameter_tuning_job_id (str) – Required. The name of the HyperparameterTuningJob resource to be deleted.

  • retry (google.api_core.retry.Retry | google.api_core.gapic_v1.method._MethodDefault) – Designation of what errors, if any, should be retried.

  • timeout (float | None) – The timeout for this request.

  • metadata (Sequence[tuple[str, str]]) – Strings which should be sent along with the request as metadata.
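
A minimal usage sketch; the DAG wrapper, project, region, and numeric job ID are placeholders. In practice the ID of the job to delete would typically be taken from the output of an earlier create or list task.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job import (
    DeleteHyperparameterTuningJobOperator,
)

with DAG(dag_id="example_delete_hp_tuning_job", start_date=datetime(2024, 1, 1), schedule=None, catchup=False):
    delete_hp_tuning_job = DeleteHyperparameterTuningJobOperator(
        task_id="delete_hp_tuning_job",
        project_id="my-project",                             # placeholder project
        region="us-central1",                                # placeholder region
        hyperparameter_tuning_job_id="1234567890123456789",  # placeholder job ID
    )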

template_fields = ('region', 'project_id', 'hyperparameter_tuning_job_id', 'impersonation_chain')[source]
execute(context)[source]

This is the main method to derive when creating an operator.

Context is the same dictionary used when rendering Jinja templates.

Refer to get_template_context for more context.

class airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job.ListHyperparameterTuningJobOperator(*, region, project_id, page_size=None, page_token=None, filter=None, read_mask=None, retry=DEFAULT, timeout=None, metadata=(), gcp_conn_id='google_cloud_default', impersonation_chain=None, **kwargs)[source]

Bases: airflow.providers.google.cloud.operators.cloud_base.GoogleCloudBaseOperator

Lists HyperparameterTuningJobs in a Location.

Parameters
  • project_id (str) – Required. The ID of the Google Cloud project that the service belongs to.

  • region (str) – Required. The ID of the Google Cloud region that the service belongs to.

  • filter (str | None) – The standard list filter. Supported fields:
    - display_name supports = and !=.
    - state supports = and !=.
    - model_display_name supports = and !=.
    Some examples of using the filter are:
    - state="JOB_STATE_SUCCEEDED" AND display_name="my_job"
    - state="JOB_STATE_RUNNING" OR display_name="my_job"
    - NOT display_name="my_job"
    - state="JOB_STATE_FAILED"

  • page_size (int | None) – The standard list page size.

  • page_token (str | None) – The standard list page token.

  • read_mask (str | None) – Mask specifying which fields to read.

  • retry (google.api_core.retry.Retry | google.api_core.gapic_v1.method._MethodDefault) – Designation of what errors, if any, should be retried.

  • timeout (float | None) – The timeout for this request.

  • metadata (Sequence[tuple[str, str]]) – Strings which should be sent along with the request as metadata.
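
A minimal usage sketch; the DAG wrapper, project, and region are placeholders. The filter string is taken from the examples in the filter parameter description above.

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.vertex_ai.hyperparameter_tuning_job import (
    ListHyperparameterTuningJobOperator,
)

with DAG(dag_id="example_list_hp_tuning_jobs", start_date=datetime(2024, 1, 1), schedule=None, catchup=False):
    list_hp_tuning_jobs = ListHyperparameterTuningJobOperator(
        task_id="list_hp_tuning_jobs",
        project_id="my-project",  # placeholder project
        region="us-central1",     # placeholder region
        # Restrict results to succeeded jobs with a given display name.
        filter='state="JOB_STATE_SUCCEEDED" AND display_name="my_job"',
        page_size=50,
    )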

template_fields = ['region', 'project_id', 'impersonation_chain'][source]
execute(context)[source]

This is the main method to derive when creating an operator.

Context is the same dictionary used when rendering Jinja templates.

Refer to get_template_context for more context.
