airflow.providers.google.cloud.hooks.dataproc¶
This module contains a Google Cloud Dataproc hook.
Module Contents¶
Classes¶
DataProcJobBuilder -- A helper class for building Dataproc jobs.
DataprocHook -- Hook for Google Cloud Dataproc APIs.
- class airflow.providers.google.cloud.hooks.dataproc.DataProcJobBuilder(project_id, task_id, cluster_name, job_type, properties=None)[source]¶
A helper class for building Dataproc jobs. A usage sketch follows the method listing below.
- add_labels(self, labels=None)[source]¶
Set labels for the Dataproc job.
- Parameters
labels (Optional[dict]) -- Labels for the job query.
- add_variables(self, variables=None)[source]¶
Set variables for the Dataproc job.
- Parameters
variables (Optional[Dict]) -- Variables for the job query.
- add_args(self, args=None)[source]¶
Set args for the Dataproc job.
- Parameters
args (Optional[List[str]]) -- Args for the job query.
- add_query(self, query)[source]¶
Set the query for the Dataproc job.
- Parameters
query (str) -- query for the job.
- add_query_uri(self, query_uri)[source]¶
Set the query URI for the Dataproc job.
- Parameters
query_uri (str) -- URI for the job query.
- add_jar_file_uris(self, jars=None)[source]¶
Set JAR file URIs for the Dataproc job.
- Parameters
jars (Optional[List[str]]) -- List of JAR file URIs.
- add_archive_uris(self, archives=None)[source]¶
Set archive URIs for the Dataproc job.
- Parameters
archives (Optional[List[str]]) -- List of archive URIs.
- add_file_uris(self, files=None)[source]¶
Set file URIs for the Dataproc job.
- Parameters
files (Optional[List[str]]) -- List of file URIs.
- add_python_file_uris(self, pyfiles=None)[source]¶
Set Python file URIs for the Dataproc job.
- Parameters
pyfiles (Optional[List[str]]) -- List of Python file URIs.
- set_python_main(self, main)[source]¶
Set the URI of the main Python file for the Dataproc job.
- Parameters
main (str) -- URI of the main Python file.
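A minimal sketch of assembling a PySpark job with the builder. The project, bucket, and cluster names are placeholders, and the "pyspark_job" job_type value and the final build() call are assumptions not shown in the listing above; verify them against the provider version you run:

    from airflow.providers.google.cloud.hooks.dataproc import DataProcJobBuilder

    # Assemble a PySpark job description targeting an existing cluster.
    builder = DataProcJobBuilder(
        project_id="my-project",          # placeholder
        task_id="example_pyspark_task",
        cluster_name="my-cluster",        # placeholder
        job_type="pyspark_job",           # assumed job-type key
        properties={"spark.executor.memory": "2g"},
    )
    builder.set_python_main("gs://my-bucket/jobs/main.py")
    builder.add_args(["--date", "{{ ds }}"])
    builder.add_python_file_uris(["gs://my-bucket/jobs/helpers.py"])
    builder.add_labels({"team": "data-platform"})

    # build() is assumed to return the assembled job description,
    # suitable for passing to DataprocHook.submit_job().
    job = builder.build()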
- class airflow.providers.google.cloud.hooks.dataproc.DataprocHook(gcp_conn_id='google_cloud_default', delegate_to=None, impersonation_chain=None)[source]¶
Bases: airflow.providers.google.common.hooks.base_google.GoogleBaseHook
Hook for Google Cloud Dataproc APIs.
All hook methods that take a project_id must be called with keyword arguments rather than positionally.
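A minimal sketch of instantiating the hook and calling a method with keyword arguments, as required above; the connection id value is the default and the project, region, and cluster names are placeholders:

    from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

    hook = DataprocHook(gcp_conn_id="google_cloud_default")

    # project_id (and the other arguments) are passed as keywords, never positionally.
    cluster = hook.get_cluster(
        project_id="my-project",   # placeholder
        region="europe-west1",
        cluster_name="my-cluster",
    )
    print(cluster.status.state)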
- wait_for_operation(self, operation, timeout=None)[source]¶
Waits for a long-running operation to complete.
- create_cluster(self, region, project_id, cluster_name, cluster_config=None, virtual_cluster_config=None, labels=None, request_id=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Creates a cluster in a project.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
cluster_name (str) -- Name of the cluster to create
labels (Optional[Dict[str, str]]) -- Labels that will be assigned to created cluster
cluster_config (Union[Dict, google.cloud.dataproc_v1.Cluster, None]) -- Required. The cluster config to create. If a dict is provided, it must be of the same form as the protobuf message ClusterConfig.
virtual_cluster_config (Optional[Dict]) -- Optional. The virtual cluster config, used when creating a Dataproc cluster that does not directly control the underlying compute resources, for example, when creating a Dataproc-on-GKE cluster (see VirtualClusterConfig).
request_id (Optional[str]) -- Optional. A unique id used to identify the request. If the server receives two CreateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
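A hedged sketch of creating a small cluster and blocking until the long-running operation finishes; the project, region, and cluster names are placeholders and the cluster_config shown is illustrative rather than a recommended configuration:

    from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

    hook = DataprocHook(gcp_conn_id="google_cloud_default")

    operation = hook.create_cluster(
        project_id="my-project",        # placeholder
        region="europe-west1",
        cluster_name="my-cluster",
        cluster_config={
            "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4"},
            "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"},
        },
        labels={"env": "dev"},
    )

    # create_cluster returns a long-running operation; either call
    # operation.result() or use the hook's helper to wait for it.
    cluster = hook.wait_for_operation(operation)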
- delete_cluster(self, region, cluster_name, project_id, cluster_uuid=None, request_id=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Deletes a cluster in a project.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
cluster_name (str) -- Required. The cluster name.
cluster_uuid (Optional[str]) -- Optional. Specifying the cluster_uuid means the RPC should fail if a cluster with the specified UUID does not exist.
request_id (Optional[str]) -- Optional. A unique id used to identify the request. If the server receives two DeleteClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- diagnose_cluster(self, region, cluster_name, project_id, retry=DEFAULT, timeout=None, metadata=())[source]¶
Gets cluster diagnostic information. After the operation completes, the GCS URI of the diagnostic output is returned.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
cluster_name (str) -- Required. The cluster name.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- get_cluster(self, region, cluster_name, project_id, retry=DEFAULT, timeout=None, metadata=())[source]¶
Gets the resource representation for a cluster in a project.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
cluster_name (str) -- Required. The cluster name.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- list_clusters(self, region, filter_, project_id, page_size=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Lists all regions/{region}/clusters in a project.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
filter_ -- Optional. A filter constraining the clusters to list. Filters are case-sensitive.
page_size (Optional[int]) -- The maximum number of resources contained in the underlying API response. If page streaming is performed per- resource, this parameter does not affect the return value. If page streaming is performed per-page, this determines the maximum number of resources in a page.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
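A brief sketch of listing clusters with a status filter; the filter string follows the Dataproc clusters.list filter syntax and the project and region values are placeholders:

    from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

    hook = DataprocHook(gcp_conn_id="google_cloud_default")

    # The result can be iterated directly; page_size only controls how many
    # clusters each underlying API call fetches.
    clusters = hook.list_clusters(
        project_id="my-project",        # placeholder
        region="europe-west1",
        filter_="status.state = ACTIVE",
        page_size=100,
    )
    for cluster in clusters:
        print(cluster.cluster_name, cluster.status.state)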
- update_cluster(self, cluster_name, cluster, update_mask, project_id, region, graceful_decommission_timeout=None, request_id=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Updates a cluster in a project.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
cluster_name (str) -- Required. The cluster name.
cluster (Union[Dict, google.cloud.dataproc_v1.Cluster]) -- Required. The changes to the cluster. If a dict is provided, it must be of the same form as the protobuf message Cluster.
update_mask (Union[Dict, google.protobuf.field_mask_pb2.FieldMask]) -- Required. Specifies the path, relative to Cluster, of the field to update. For example, to change the number of workers in a cluster to 5, the update_mask parameter would be specified as config.worker_config.num_instances, and the PATCH request body would specify the new value, as follows: { "config": { "workerConfig": { "numInstances": "5" } } }. Similarly, to change the number of preemptible workers in a cluster to 5, the update_mask parameter would be config.secondary_worker_config.num_instances, and the PATCH request body would be set as follows: { "config": { "secondaryWorkerConfig": { "numInstances": "5" } } }. If a dict is provided, it must be of the same form as the protobuf message FieldMask.
graceful_decommission_timeout (Optional[Union[Dict, google.protobuf.duration_pb2.Duration]]) -- Optional. Timeout for graceful YARN decommissioning. Graceful decommissioning allows removing nodes from the cluster without interrupting jobs in progress. Timeout specifies how long to wait for jobs in progress to finish before forcefully removing nodes (and potentially interrupting jobs). Default timeout is 0 (for forceful decommission), and the maximum allowed timeout is 1 day. Only supported on Dataproc image versions 1.2 and higher. If a dict is provided, it must be of the same form as the protobuf message Duration.
request_id (Optional[str]) -- Optional. A unique id used to identify the request. If the server receives two UpdateClusterRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
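A hedged sketch of resizing the worker pool to 5 instances, mirroring the update_mask example above; the project, region, and cluster names are placeholders and the graceful decommission timeout is optional:

    from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

    hook = DataprocHook(gcp_conn_id="google_cloud_default")

    operation = hook.update_cluster(
        project_id="my-project",        # placeholder
        region="europe-west1",
        cluster_name="my-cluster",
        cluster={"config": {"worker_config": {"num_instances": 5}}},
        update_mask={"paths": ["config.worker_config.num_instances"]},
        graceful_decommission_timeout={"seconds": 600},  # optional, 10 minutes
    )
    hook.wait_for_operation(operation)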
- create_workflow_template(self, template, project_id, region, retry=DEFAULT, timeout=None, metadata=())[source]¶
Creates a new workflow template.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
template (Union[Dict, google.cloud.dataproc_v1.WorkflowTemplate]) -- The Dataproc workflow template to create. If a dict is provided, it must be of the same form as the protobuf message WorkflowTemplate.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- instantiate_workflow_template(self, template_name, project_id, region, version=None, request_id=None, parameters=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Instantiates a template and begins execution.
- Parameters
template_name (str) -- Name of template to instantiate.
project_id (str) -- Required. The ID of the Google Cloud project the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
version (Optional[int]) -- Optional. The version of workflow template to instantiate. If specified, the workflow will be instantiated only if the current version of the workflow template has the supplied version. This option cannot be used to instantiate a previous version of workflow template.
request_id (Optional[str]) -- Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.
parameters (Optional[Dict[str, str]]) -- Optional. Map from parameter names to values that should be used for those parameters. Values may not exceed 100 characters.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
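A short sketch of instantiating an existing workflow template by name and waiting for the resulting workflow to finish; the template name, parameters, project, and region are placeholders:

    from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

    hook = DataprocHook(gcp_conn_id="google_cloud_default")

    operation = hook.instantiate_workflow_template(
        project_id="my-project",              # placeholder
        region="europe-west1",
        template_name="my-workflow-template",  # placeholder
        parameters={"INPUT_PATH": "gs://my-bucket/input/"},
    )
    # The operation completes when the instantiated workflow finishes.
    hook.wait_for_operation(operation)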
- instantiate_inline_workflow_template(self, template, project_id, region, request_id=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Instantiates a template and begins execution.
- Parameters
template (Union[Dict, google.cloud.dataproc_v1.WorkflowTemplate]) -- The workflow template to instantiate. If a dict is provided, it must be of the same form as the protobuf message WorkflowTemplate
project_id (str) -- Required. The ID of the Google Cloud project the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
request_id (Optional[str]) -- Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- wait_for_job(self, job_id, project_id, region, wait_time=10, timeout=None)[source]¶
Helper method that polls a job to check whether it has finished.
- Parameters
job_id (str) -- Id of the Dataproc job
project_id (str) -- Required. The ID of the Google Cloud project the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
wait_time (int) -- Number of seconds between checks
timeout (Optional[int]) -- How many seconds to wait for the job to be ready. Used only if asynchronous is False.
- get_job(self, job_id, project_id, region, retry=DEFAULT, timeout=None, metadata=())[source]¶
Gets the resource representation for a job in a project.
- Parameters
job_id (str) -- Id of the Dataproc job
project_id (str) -- Required. The ID of the Google Cloud project the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- submit_job(self, job, project_id, region, request_id=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Submits a job to a cluster.
- Parameters
job (Union[dict, google.cloud.dataproc_v1.Job]) -- The job resource. If a dict is provided, it must be of the same form as the protobuf message Job
project_id (str) -- Required. The ID of the Google Cloud project the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
request_id (Optional[str]) -- Optional. A tag that prevents multiple concurrent workflow instances with the same tag from running. This mitigates risk of concurrent instances started due to retries.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
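A hedged sketch of submitting a PySpark job and then polling for completion with wait_for_job; the job dict follows the Job protobuf shape, and the project, region, cluster, and bucket names are placeholders:

    from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

    hook = DataprocHook(gcp_conn_id="google_cloud_default")

    job = hook.submit_job(
        project_id="my-project",        # placeholder
        region="europe-west1",
        job={
            "placement": {"cluster_name": "my-cluster"},
            "pyspark_job": {"main_python_file_uri": "gs://my-bucket/jobs/main.py"},
        },
    )

    # submit_job returns the created Job; poll it until it reaches a terminal state.
    hook.wait_for_job(
        job_id=job.reference.job_id,
        project_id="my-project",
        region="europe-west1",
        wait_time=10,
    )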
- cancel_job(self, job_id, project_id, region=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Starts a job cancellation request.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project that the job belongs to.
region (Optional[str]) -- Required. The Cloud Dataproc region in which to handle the request.
job_id (str) -- Required. The job ID.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- create_batch(self, region, project_id, batch, batch_id=None, request_id=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Creates a batch workload.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
batch (Union[Dict, google.cloud.dataproc_v1.Batch]) -- Required. The batch to create.
batch_id (Optional[str]) -- Optional. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
request_id (Optional[str]) -- Optional. A unique id used to identify the request. If the server receives two CreateBatchRequest requests with the same id, then the second request will be ignored and the first google.longrunning.Operation created and stored in the backend is returned.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
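A brief sketch of creating a serverless PySpark batch workload; the batch dict follows the Batch protobuf shape, and the project, region, bucket, and batch ids are placeholders:

    from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

    hook = DataprocHook(gcp_conn_id="google_cloud_default")

    operation = hook.create_batch(
        project_id="my-project",        # placeholder
        region="europe-west1",
        batch={
            "pyspark_batch": {"main_python_file_uri": "gs://my-bucket/jobs/main.py"},
        },
        batch_id="example-batch-0001",  # 4-63 chars: lowercase letters, digits, hyphens
    )

    # create_batch returns a long-running operation whose result is the Batch resource.
    batch = hook.wait_for_operation(operation)
    print(batch.state)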
- delete_batch(self, batch_id, region, project_id, retry=DEFAULT, timeout=None, metadata=())[source]¶
Deletes the batch workload resource.
- Parameters
batch_id (str) -- Required. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- get_batch(self, batch_id, region, project_id, retry=DEFAULT, timeout=None, metadata=())[source]¶
Gets the batch workload resource representation.
- Parameters
batch_id (str) -- Required. The ID to use for the batch, which will become the final component of the batch's resource name. This value must be 4-63 characters. Valid characters are /[a-z][0-9]-/.
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
- list_batches(self, region, project_id, page_size=None, page_token=None, retry=DEFAULT, timeout=None, metadata=())[source]¶
Lists batch workloads.
- Parameters
project_id (str) -- Required. The ID of the Google Cloud project that the cluster belongs to.
region (str) -- Required. The Cloud Dataproc region in which to handle the request.
page_size (Optional[int]) -- Optional. The maximum number of batches to return in each response. The service may return fewer than this value. The default page size is 20; the maximum page size is 1000.
page_token (Optional[str]) -- Optional. A page token received from a previous ListBatches call. Provide this token to retrieve the subsequent page.
retry (Union[google.api_core.retry.Retry, google.api_core.gapic_v1.method._MethodDefault]) -- A retry object used to retry requests. If None is specified, requests will not be retried.
timeout (Optional[float]) -- The amount of time, in seconds, to wait for the request to complete. Note that if retry is specified, the timeout applies to each individual attempt.
metadata (Sequence[Tuple[str, str]]) -- Additional metadata that is provided to the method.
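A short sketch of iterating over batch workloads; the pager returned by the underlying client follows page tokens automatically when iterated, and the project and region values are placeholders:

    from airflow.providers.google.cloud.hooks.dataproc import DataprocHook

    hook = DataprocHook(gcp_conn_id="google_cloud_default")

    batches = hook.list_batches(
        project_id="my-project",        # placeholder
        region="europe-west1",
        page_size=50,
    )
    for batch in batches:
        print(batch.name, batch.state)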