airflow.contrib.operators.dataproc_operator

This module contains Google Dataproc operators.

Module Contents

class airflow.contrib.operators.dataproc_operator.DataprocOperationBaseOperator(project_id, region='global', gcp_conn_id='google_cloud_default', delegate_to=None, *args, **kwargs)[source]

Bases: airflow.models.BaseOperator

The base class for operators that poll on a Dataproc Operation.

execute(self, context)[source]
start(self, context)[source]
class airflow.contrib.operators.dataproc_operator.DataprocClusterCreateOperator(project_id, cluster_name, num_workers, zone=None, network_uri=None, subnetwork_uri=None, internal_ip_only=None, tags=None, storage_bucket=None, init_actions_uris=None, init_action_timeout='10m', metadata=None, custom_image=None, custom_image_project_id=None, image_version=None, autoscaling_policy=None, properties=None, optional_components=None, num_masters=1, master_machine_type='n1-standard-4', master_disk_type='pd-standard', master_disk_size=500, worker_machine_type='n1-standard-4', worker_disk_type='pd-standard', worker_disk_size=500, num_preemptible_workers=0, labels=None, region='global', service_account=None, service_account_scopes=None, idle_delete_ttl=None, auto_delete_time=None, auto_delete_ttl=None, customer_managed_key=None, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataprocOperationBaseOperator

Create a new cluster on Google Cloud Dataproc. The operator will wait until the creation is successful or an error occurs in the creation process.

The operator's parameters configure the cluster. Please refer to

https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters

for a detailed explanation of the different parameters. Most of the configuration options described there are available as parameters to this operator.
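
Example (a minimal usage sketch; the project ID, cluster name and zone below are placeholder values, and dag is assumed to be an existing DAG object):

create_cluster = DataprocClusterCreateOperator(
        task_id='create_dataproc_cluster',
        project_id='my-project',
        cluster_name='cluster-1',
        num_workers=2,
        zone='europe-west1-b',
        master_machine_type='n1-standard-4',
        worker_machine_type='n1-standard-4',
        dag=dag)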

Parameters
  • cluster_name (str) – The name of the DataProc cluster to create. (templated)

  • project_id (str) – The ID of the google cloud project in which to create the cluster. (templated)

  • num_workers (int) – The number of workers to spin up. If set to zero, the cluster will be created in single-node mode.

  • storage_bucket (str) – The storage bucket to use; setting it to None lets Dataproc generate a custom one for you

  • init_actions_uris (list[str]) – List of GCS URIs containing Dataproc initialization scripts

  • init_action_timeout (str) – Amount of time the executable scripts in init_actions_uris have to complete

  • metadata (dict) – dict of key-value google compute engine metadata entries to add to all instances

  • image_version (str) – the version of software inside the Dataproc cluster

  • custom_image (str) – custom Dataproc image; for more info see https://cloud.google.com/dataproc/docs/guides/dataproc-images

  • custom_image_project_id (str) – project id for the custom Dataproc image, for more info see https://cloud.google.com/dataproc/docs/guides/dataproc-images

  • autoscaling_policy (str) – The autoscaling policy used by the cluster. Only resource names including projectid and location (region) are valid. Example: projects/[projectId]/locations/[dataproc_region]/autoscalingPolicies/[policy_id]

  • properties (dict) – dict of properties to set on config files (e.g. spark-defaults.conf), see https://cloud.google.com/dataproc/docs/reference/rest/v1/projects.regions.clusters#SoftwareConfig

  • optional_components (list[str]) – List of optional cluster components, for more info see https://cloud.google.com/dataproc/docs/reference/rest/v1/ClusterConfig#Component

  • num_masters (int) – The number of master nodes to spin up

  • master_machine_type (str) – Compute engine machine type to use for the master node

  • master_disk_type (str) – Type of the boot disk for the master node (default is pd-standard). Valid values: pd-ssd (Persistent Disk Solid State Drive) or pd-standard (Persistent Disk Hard Disk Drive).

  • master_disk_size (int) – Disk size for the master node, in GB

  • worker_machine_type (str) – Compute engine machine type to use for the worker nodes

  • worker_disk_type (str) – Type of the boot disk for the worker node (default is pd-standard). Valid values: pd-ssd (Persistent Disk Solid State Drive) or pd-standard (Persistent Disk Hard Disk Drive).

  • worker_disk_size (int) – Disk size for the worker nodes, in GB

  • num_preemptible_workers (int) – The number of preemptible worker nodes to spin up

  • labels (dict) – dict of labels to add to the cluster

  • zone (str) – The zone where the cluster will be located. Set to None to auto-zone. (templated)

  • network_uri (str) – The network uri to be used for machine communication, cannot be specified with subnetwork_uri

  • subnetwork_uri (str) – The subnetwork uri to be used for machine communication, cannot be specified with network_uri

  • internal_ip_only (bool) – If true, all instances in the cluster will only have internal IP addresses. This can only be enabled for subnetwork-enabled networks

  • tags (list[str]) – The GCE tags to add to all instances

  • region (str) – leave as ‘global’, might become relevant in the future. (templated)

  • gcp_conn_id (str) – The connection ID to use connecting to Google Cloud Platform.

  • delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

  • service_account (str) – The service account of the dataproc instances.

  • service_account_scopes (list[str]) – The URIs of service account scopes to be included.

  • idle_delete_ttl (int) – The longest duration the cluster will stay alive while idle; once this threshold is exceeded, the cluster is auto-deleted. A duration in seconds.

  • auto_delete_time (datetime.datetime) – The time when cluster will be auto-deleted.

  • auto_delete_ttl (int) – The lifetime of the cluster; the cluster will be auto-deleted at the end of this duration. A duration in seconds. (If auto_delete_time is set, this parameter is ignored.)

  • customer_managed_key (str) – The customer-managed key used for disk encryption, e.g. projects/[PROJECT_STORING_KEYS]/locations/[LOCATION]/keyRings/[KEY_RING_NAME]/cryptoKeys/[KEY_NAME]

template_fields = ['cluster_name', 'project_id', 'zone', 'region'][source]
_get_init_action_timeout(self)[source]
_build_gce_cluster_config(self, cluster_data)[source]
_build_lifecycle_config(self, cluster_data)[source]
_build_cluster_data(self)[source]
start(self)[source]

Create a new cluster on Google Cloud Dataproc.

class airflow.contrib.operators.dataproc_operator.DataprocClusterScaleOperator(cluster_name, project_id, region='global', num_workers=2, num_preemptible_workers=0, graceful_decommission_timeout=None, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataprocOperationBaseOperator

Scale a cluster on Google Cloud Dataproc up or down. The operator will wait until the cluster is re-scaled.

Example:

t1 = DataprocClusterScaleOperator(
        task_id='dataproc_scale',
        project_id='my-project',
        cluster_name='cluster-1',
        num_workers=10,
        num_preemptible_workers=10,
        graceful_decommission_timeout='1h',
        dag=dag)

See also

For more detail about scaling clusters, have a look at the reference: https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/scaling-clusters

Parameters
  • cluster_name (str) – The name of the cluster to scale. (templated)

  • project_id (str) – The ID of the google cloud project in which the cluster runs. (templated)

  • region (str) – The region for the dataproc cluster. (templated)

  • gcp_conn_id (str) – The connection ID to use connecting to Google Cloud Platform.

  • num_workers (int) – The new number of workers

  • num_preemptible_workers (int) – The new number of preemptible workers

  • graceful_decommission_timeout (str) – Timeout for graceful YARN decommissioning. Maximum value is 1d

  • delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

template_fields = ['cluster_name', 'project_id', 'region'][source]
_build_scale_cluster_data(self)[source]
static _get_graceful_decommission_timeout(timeout)[source]
start(self)[source]

Scale a cluster on Google Cloud Dataproc up or down.

class airflow.contrib.operators.dataproc_operator.DataprocClusterDeleteOperator(cluster_name, project_id, region='global', *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataprocOperationBaseOperator

Delete a cluster on Google Cloud Dataproc. The operator will wait until the cluster is destroyed.
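
Example (a minimal usage sketch; the project ID and cluster name are placeholders, and dag is assumed to be an existing DAG object):

delete_cluster = DataprocClusterDeleteOperator(
        task_id='delete_dataproc_cluster',
        project_id='my-project',
        cluster_name='cluster-1',
        region='global',
        dag=dag)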

Parameters
  • cluster_name (str) – The name of the cluster to delete. (templated)

  • project_id (str) – The ID of the google cloud project in which the cluster runs. (templated)

  • region (str) – leave as ‘global’, might become relevant in the future. (templated)

  • gcp_conn_id (str) – The connection ID to use connecting to Google Cloud Platform.

  • delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

template_fields = ['cluster_name', 'project_id', 'region'][source]
start(self)[source]

Delete a cluster on Google Cloud Dataproc.

class airflow.contrib.operators.dataproc_operator.DataProcJobBaseOperator(job_name='{{task.task_id}}_{{ds_nodash}}', cluster_name='cluster-1', dataproc_properties=None, dataproc_jars=None, gcp_conn_id='google_cloud_default', delegate_to=None, labels=None, region='global', job_error_states=None, *args, **kwargs)[source]

Bases: airflow.models.BaseOperator

The base class for operators that launch jobs on DataProc.

Parameters
  • job_name (str) – The job name used in the DataProc cluster. This name by default is the task_id appended with the execution date, but can be templated. The name will always be appended with a random number to avoid name clashes.

  • cluster_name (str) – The name of the DataProc cluster.

  • dataproc_properties (dict) – Map of properties for the job, passed to the underlying Dataproc job configuration. Ideal to put in default arguments (templated)

  • dataproc_jars (list) – HCFS URIs of jar files to add to the job’s CLASSPATH. Ideal to put in default arguments (templated)

  • gcp_conn_id (str) – The connection ID to use connecting to Google Cloud Platform.

  • delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

  • labels (dict) – The labels to associate with this job. Label keys must contain 1 to 63 characters, and must conform to RFC 1035. Label values may be empty, but, if present, must contain 1 to 63 characters, and must conform to RFC 1035. No more than 32 labels can be associated with a job.

  • region (str) – The specified region where the dataproc cluster is created.

  • job_error_states (set) – Job states that should be considered error states. Any states in this set will result in an error being raised and failure of the task. Eg, if the CANCELLED state should also be considered a task failure, pass in {'ERROR', 'CANCELLED'}. Possible values are currently only 'ERROR' and 'CANCELLED', but could change in the future. Defaults to {'ERROR'}.

Variables

dataproc_job_id (str) – The actual “jobId” as submitted to the Dataproc API. This is useful for identifying or linking to the job in the Google Cloud Console Dataproc UI, as the actual “jobId” submitted to the Dataproc API is appended with an 8 character random string.

job_type =[source]
create_job_template(self)[source]

Initialize self.job_template with default values

execute(self, context)[source]

Build self.job based on the job template, and submit it. Raises AirflowException if no template has been initialized (see create_job_template).

on_kill(self)[source]

Callback called when the operator is killed. Cancel any running job.

class airflow.contrib.operators.dataproc_operator.DataProcPigOperator(query=None, query_uri=None, variables=None, dataproc_pig_properties=None, dataproc_pig_jars=None, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataProcJobBaseOperator

Start a Pig query Job on a Cloud DataProc cluster. The parameters of the operation will be passed to the cluster.

It is good practice to define dataproc_* parameters, such as the cluster name and UDF jars, in the DAG’s default_args:

default_args = {
    'cluster_name': 'cluster-1',
    'dataproc_pig_jars': [
        'gs://example/udf/jar/datafu/1.2.0/datafu.jar',
        'gs://example/udf/jar/gpig/1.2/gpig.jar'
    ]
}

You can pass a Pig script as a string or as a file reference. Use variables to pass parameters to the Pig script to be resolved on the cluster, or use them as template parameters that are resolved in the script itself.

Example:

t1 = DataProcPigOperator(
        task_id='dataproc_pig',
        query='a_pig_script.pig',
        variables={'out': 'gs://example/output/{{ds}}'},
        dag=dag)

See also

For more detail about job submission, have a look at the reference: https://cloud.google.com/dataproc/reference/rest/v1/projects.regions.jobs

Parameters
  • query (str) – The query or reference to the query file (pg or pig extension). (templated)

  • query_uri (str) – The HCFS URI of the script that contains the Pig queries.

  • variables (dict) – Map of named parameters for the query. (templated)

  • dataproc_pig_properties (dict) – Map for the Pig properties. Ideal to put in default arguments (templated)

  • dataproc_pig_jars (list) – HCFS URIs of jar files to add to the CLASSPATH of the Pig Client and Hadoop MapReduce (MR) tasks. Can contain Pig UDFs. (templated)

template_fields = ['query', 'variables', 'job_name', 'cluster_name', 'region', 'dataproc_jars', 'dataproc_properties'][source]
template_ext = ['.pg', '.pig'][source]
ui_color = #0273d4[source]
job_type = pigJob[source]
execute(self, context)[source]
class airflow.contrib.operators.dataproc_operator.DataProcHiveOperator(query=None, query_uri=None, variables=None, dataproc_hive_properties=None, dataproc_hive_jars=None, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataProcJobBaseOperator

Start a Hive query Job on a Cloud DataProc cluster.
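
Example (a minimal usage sketch in the same style as the Pig example above; the script name, output bucket and cluster name are placeholders, and dag is an existing DAG object):

hive_task = DataProcHiveOperator(
        task_id='dataproc_hive',
        query='a_hive_script.hql',
        variables={'out': 'gs://example/output/{{ds}}'},
        cluster_name='cluster-1',
        dag=dag)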

Parameters
  • query (str) – The query or reference to the query file (q extension).

  • query_uri (str) – The HCFS URI of the script that contains the Hive queries.

  • variables (dict) – Map of named parameters for the query.

  • dataproc_hive_properties (dict) – Map for the Hive properties. Ideal to put in default arguments (templated)

  • dataproc_hive_jars (list) – HCFS URIs of jar files to add to the CLASSPATH of the Hive server and Hadoop MapReduce (MR) tasks. Can contain Hive SerDes and UDFs. (templated)

template_fields = ['query', 'variables', 'job_name', 'cluster_name', 'region', 'dataproc_jars', 'dataproc_properties'][source]
template_ext = ['.q', '.hql'][source]
ui_color = #0273d4[source]
job_type = hiveJob[source]
execute(self, context)[source]
class airflow.contrib.operators.dataproc_operator.DataProcSparkSqlOperator(query=None, query_uri=None, variables=None, dataproc_spark_properties=None, dataproc_spark_jars=None, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataProcJobBaseOperator

Start a Spark SQL query Job on a Cloud DataProc cluster.
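
Example (a minimal usage sketch; the query and cluster name are placeholders, and dag is an existing DAG object):

spark_sql_task = DataProcSparkSqlOperator(
        task_id='dataproc_spark_sql',
        query='SHOW DATABASES;',
        cluster_name='cluster-1',
        dag=dag)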

Parameters
  • query (str) – The query or reference to the query file (q extension). (templated)

  • query_uri (str) – The HCFS URI of the script that contains the SQL queries.

  • variables (dict) – Map of named parameters for the query. (templated)

  • dataproc_spark_properties (dict) – Map for the Spark SQL properties. Ideal to put in default arguments (templated)

  • dataproc_spark_jars (list) – HCFS URIs of jar files to be added to the Spark CLASSPATH. (templated)

template_fields = ['query', 'variables', 'job_name', 'cluster_name', 'region', 'dataproc_jars', 'dataproc_properties'][source]
template_ext = ['.q'][source]
ui_color = #0273d4[source]
job_type = sparkSqlJob[source]
execute(self, context)[source]
class airflow.contrib.operators.dataproc_operator.DataProcSparkOperator(main_jar=None, main_class=None, arguments=None, archives=None, files=None, dataproc_spark_properties=None, dataproc_spark_jars=None, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataProcJobBaseOperator

Start a Spark Job on a Cloud DataProc cluster.
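
Example (a minimal usage sketch; the class name, jar URI and cluster name are placeholder values, and dag is an existing DAG object):

spark_task = DataProcSparkOperator(
        task_id='dataproc_spark',
        main_class='org.apache.spark.examples.SparkPi',
        dataproc_spark_jars=['file:///usr/lib/spark/examples/jars/spark-examples.jar'],
        arguments=['1000'],
        cluster_name='cluster-1',
        dag=dag)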

Parameters
  • main_jar (str) – The HCFS URI of the jar file that contains the main class (use this or the main_class, not both together).

  • main_class (str) – Name of the job class. (use this or the main_jar, not both together).

  • arguments (list) – Arguments for the job. (templated)

  • archives (list) – List of archived files that will be unpacked in the work directory. Should be stored in Cloud Storage.

  • files (list) – List of files to be copied to the working directory

  • dataproc_spark_properties (dict) – Map for the Spark properties. Ideal to put in default arguments (templated)

  • dataproc_spark_jars (list) – HCFS URIs of jar files to add to the CLASSPATHs of the Spark driver and tasks. (templated)

template_fields = ['arguments', 'job_name', 'cluster_name', 'region', 'dataproc_jars', 'dataproc_properties'][source]
ui_color = #0273d4[source]
job_type = sparkJob[source]
execute(self, context)[source]
class airflow.contrib.operators.dataproc_operator.DataProcHadoopOperator(main_jar=None, main_class=None, arguments=None, archives=None, files=None, dataproc_hadoop_properties=None, dataproc_hadoop_jars=None, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataProcJobBaseOperator

Start a Hadoop Job on a Cloud DataProc cluster.
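
Example (a minimal usage sketch; the jar URI, arguments and cluster name are placeholders, and dag is an existing DAG object):

hadoop_task = DataProcHadoopOperator(
        task_id='dataproc_hadoop',
        main_jar='gs://example/jars/my-hadoop-job.jar',
        arguments=['gs://example/input/', 'gs://example/output/'],
        cluster_name='cluster-1',
        dag=dag)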

Parameters
  • main_jar (str) – The HCFS URI of the jar file containing the main class (use this or the main_class, not both together).

  • main_class (str) – Name of the job class. (use this or the main_jar, not both together).

  • arguments (list) – Arguments for the job. (templated)

  • archives (list) – List of archived files that will be unpacked in the work directory. Should be stored in Cloud Storage.

  • files (list) – List of files to be copied to the working directory

  • dataproc_hadoop_properties (dict) – Map for the Hadoop properties. Ideal to put in default arguments (templated)

  • dataproc_hadoop_jars (list) – Jar file URIs to add to the CLASSPATHs of the Hadoop driver and tasks. (templated)

template_fields = ['arguments', 'job_name', 'cluster_name', 'region', 'dataproc_jars', 'dataproc_properties'][source]
ui_color = #0273d4[source]
job_type = hadoopJob[source]
execute(self, context)[source]
class airflow.contrib.operators.dataproc_operator.DataProcPySparkOperator(main, arguments=None, archives=None, pyfiles=None, files=None, dataproc_pyspark_properties=None, dataproc_pyspark_jars=None, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataProcJobBaseOperator

Start a PySpark Job on a Cloud DataProc cluster.
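
Example (a minimal usage sketch; the main file URI and cluster name are placeholders, and dag is an existing DAG object):

pyspark_task = DataProcPySparkOperator(
        task_id='dataproc_pyspark',
        main='gs://example/pyspark/my_job.py',
        arguments=['--date', '{{ds}}'],
        cluster_name='cluster-1',
        dag=dag)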

Parameters
  • main (str) – [Required] The Hadoop Compatible Filesystem (HCFS) URI of the main Python file to use as the driver. Must be a .py file.

  • arguments (list) – Arguments for the job. (templated)

  • archives (list) – List of archived files that will be unpacked in the work directory. Should be stored in Cloud Storage.

  • files (list) – List of files to be copied to the working directory

  • pyfiles (list) – List of Python files to pass to the PySpark framework. Supported file types: .py, .egg, and .zip

  • dataproc_pyspark_properties (dict) – Map for the PySpark properties. Ideal to put in default arguments (templated)

  • dataproc_pyspark_jars (list) – HCFS URIs of jar files to add to the CLASSPATHs of the Python driver and tasks. (templated)

template_fields = ['arguments', 'job_name', 'cluster_name', 'region', 'dataproc_jars', 'dataproc_properties'][source]
ui_color = #0273d4[source]
job_type = pysparkJob[source]
static _generate_temp_filename(filename)[source]
_upload_file_temp(self, bucket, local_file)[source]

Upload a local file to a Google Cloud Storage bucket.

execute(self, context)[source]
class airflow.contrib.operators.dataproc_operator.DataprocWorkflowTemplateInstantiateOperator(template_id, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataprocOperationBaseOperator

Instantiate a WorkflowTemplate on Google Cloud Dataproc. The operator will wait until the WorkflowTemplate is finished executing.
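
Example (a minimal usage sketch; the template ID and project are placeholders, and the workflow template is assumed to already exist in that project and region):

instantiate_template = DataprocWorkflowTemplateInstantiateOperator(
        task_id='instantiate_workflow_template',
        template_id='my-workflow-template',
        project_id='my-project',
        region='global',
        dag=dag)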

Parameters
  • template_id (str) – The id of the template. (templated)

  • project_id (str) – The ID of the google cloud project in which the template runs

  • region (str) – leave as ‘global’, might become relevant in the future

  • gcp_conn_id (str) – The connection ID to use connecting to Google Cloud Platform.

  • delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

template_fields = ['template_id'][source]
start(self)[source]

Instantiate a WorkflowTemplate on Google Cloud Dataproc.

class airflow.contrib.operators.dataproc_operator.DataprocWorkflowTemplateInstantiateInlineOperator(template, *args, **kwargs)[source]

Bases: airflow.contrib.operators.dataproc_operator.DataprocOperationBaseOperator

Instantiate a WorkflowTemplate Inline on Google Cloud Dataproc. The operator will wait until the WorkflowTemplate is finished executing.
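
Example (a minimal usage sketch; the inline template below is a bare-bones skeleton following the WorkflowTemplates REST schema, with placeholder names, not a complete configuration):

template = {
    'placement': {
        'managedCluster': {
            'clusterName': 'cluster-1',
            'config': {},
        },
    },
    'jobs': [
        {
            'stepId': 'say_hello',
            'pigJob': {'queryList': {'queries': ["sh echo 'hello'"]}},
        },
    ],
}

instantiate_inline = DataprocWorkflowTemplateInstantiateInlineOperator(
        task_id='instantiate_inline_workflow_template',
        template=template,
        project_id='my-project',
        dag=dag)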

Parameters
  • template (map) – The template contents. (templated)

  • project_id (str) – The ID of the google cloud project in which the template runs

  • region (str) – leave as ‘global’, might become relevant in the future

  • gcp_conn_id (str) – The connection ID to use connecting to Google Cloud Platform.

  • delegate_to (str) – The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

template_fields = ['template'][source]
start(self)[source]

Instantiate a WorkflowTemplate Inline on Google Cloud Dataproc.
