Google Cloud Dataproc Operators

Dataproc is a managed Apache Spark and Apache Hadoop service that lets you take advantage of open source data tools for batch processing, querying, streaming and machine learning. Dataproc automation helps you create clusters quickly, manage them easily, and save money by turning clusters off when you don’t need them.

For more information about the service, visit the Dataproc product documentation.

Create a Cluster

Before you create a Dataproc cluster, you need to define a cluster configuration. It describes the identifying information, configuration, and status of a cluster of Compute Engine instances. For more information about the available fields to pass when creating a cluster, visit the Dataproc create cluster API.

A cluster configuration can look as follows:

airflow/providers/google/cloud/example_dags/example_dataproc.py


CLUSTER_CONFIG = {
    "master_config": {
        "num_instances": 1,
        "machine_type_uri": "n1-standard-4",
        "disk_config": {"boot_disk_type": "pd-standard", "boot_disk_size_gb": 1024},
    },
    "worker_config": {
        "num_instances": 2,
        "machine_type_uri": "n1-standard-4",
        "disk_config": {"boot_disk_type": "pd-standard", "boot_disk_size_gb": 1024},
    },
}

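The same configuration can also be built programmatically, which is handy when the cluster size varies between environments. A minimal sketch, assuming a helper of our own (make_cluster_config is illustrative and not part of the provider):

```python
def make_cluster_config(num_workers, machine_type="n1-standard-4", disk_size_gb=1024):
    """Build a Dataproc cluster config dict; master and workers share the node shape."""

    def node_config(num_instances):
        return {
            "num_instances": num_instances,
            "machine_type_uri": machine_type,
            "disk_config": {"boot_disk_type": "pd-standard", "boot_disk_size_gb": disk_size_gb},
        }

    return {
        "master_config": node_config(1),  # a single master node
        "worker_config": node_config(num_workers),
    }


CLUSTER_CONFIG = make_cluster_config(num_workers=2)
```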

With this configuration we can create the cluster using DataprocCreateClusterOperator:

airflow/providers/google/cloud/example_dags/example_dataproc.py

create_cluster = DataprocCreateClusterOperator(
    task_id="create_cluster",
    project_id=PROJECT_ID,
    cluster_config=CLUSTER_CONFIG,
    region=REGION,
    cluster_name=CLUSTER_NAME,
)

Update a cluster

You can scale the cluster up or down by providing a cluster config and an updateMask. In the updateMask argument you specify the path, relative to Cluster, of the field to update. For more information on updateMask and other parameters, take a look at the Dataproc update cluster API.

An example of a new cluster config and the updateMask:

airflow/providers/google/cloud/example_dags/example_dataproc.py

CLUSTER_UPDATE = {
    "config": {"worker_config": {"num_instances": 3}, "secondary_worker_config": {"num_instances": 3}}
}
UPDATE_MASK = {
    "paths": ["config.worker_config.num_instances", "config.secondary_worker_config.num_instances"]
}
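To see what the mask does, here is a rough sketch of the field-mask semantics: only the dotted paths listed in the mask are copied from the new config onto the current cluster, and everything else is left untouched. The apply_update_mask helper is purely illustrative (the real merge happens server-side via protobuf FieldMask):

```python
import copy


def apply_update_mask(current, update, paths):
    """Copy only the masked dotted paths from `update` into a copy of `current`
    (a simplification of protobuf FieldMask semantics)."""
    result = copy.deepcopy(current)
    for path in paths:
        keys = path.split(".")
        src, dst = update, result
        for key in keys[:-1]:
            src = src[key]
            dst = dst.setdefault(key, {})
        dst[keys[-1]] = src[keys[-1]]
    return result


cluster = {"config": {"worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4"}}}
update = {
    "config": {"worker_config": {"num_instances": 3}, "secondary_worker_config": {"num_instances": 3}}
}
mask = ["config.worker_config.num_instances", "config.secondary_worker_config.num_instances"]
patched = apply_update_mask(cluster, update, mask)
# worker count is updated; the machine type, not named in the mask, is preserved
```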

To update a cluster you can use: DataprocUpdateClusterOperator

airflow/providers/google/cloud/example_dags/example_dataproc.py

scale_cluster = DataprocUpdateClusterOperator(
    task_id="scale_cluster",
    cluster_name=CLUSTER_NAME,
    cluster=CLUSTER_UPDATE,
    update_mask=UPDATE_MASK,
    graceful_decommission_timeout=TIMEOUT,
    project_id=PROJECT_ID,
    location=REGION,
)

Deleting a cluster

To delete a cluster you can use:

DataprocDeleteClusterOperator.

airflow/providers/google/cloud/example_dags/example_dataproc.py

delete_cluster = DataprocDeleteClusterOperator(
    task_id="delete_cluster", project_id=PROJECT_ID, cluster_name=CLUSTER_NAME, region=REGION
)

Submit a job to a cluster

Dataproc supports submitting jobs of different big data components. The list currently includes Spark, Hadoop, Pig and Hive. For more information on versions and images, take a look at the Cloud Dataproc Image version list.

To submit a job to the cluster you need to provide a job source file. The job source file can be on GCS, on the cluster, or on your local file system. You can specify a file:/// path to refer to a local file on a cluster’s master node.
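A quick way to sanity-check where a job source URI will be resolved is to look at its scheme. The helper below is illustrative only (Dataproc performs this resolution server-side):

```python
from urllib.parse import urlparse


def source_location(uri):
    """Classify a job source URI: a GCS object, a file local to the
    cluster's master node, or an unrecognized scheme."""
    scheme = urlparse(uri).scheme
    if scheme == "gs":
        return "gcs"
    if scheme == "file":
        return "cluster-master"
    return "unknown"
```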

The job configuration can be submitted by using: DataprocSubmitJobOperator.

airflow/providers/google/cloud/example_dags/example_dataproc.py

pyspark_task = DataprocSubmitJobOperator(
    task_id="pyspark_task", job=PYSPARK_JOB, location=REGION, project_id=PROJECT_ID
)

Examples of job configurations to submit

We have provided an example for every framework below. There are more arguments to provide in the jobs than the examples show. For the complete list of arguments, take a look at the DataProc Job arguments.

Example of the configuration for a PySpark Job:

airflow/providers/google/cloud/example_dags/example_dataproc.py

PYSPARK_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "pyspark_job": {"main_python_file_uri": PYSPARK_URI},
}

Example of the configuration for a SparkSQL Job:

airflow/providers/google/cloud/example_dags/example_dataproc.py

SPARK_SQL_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "spark_sql_job": {"query_list": {"queries": ["SHOW DATABASES;"]}},
}

Example of the configuration for a Spark Job:

airflow/providers/google/cloud/example_dags/example_dataproc.py

SPARK_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "spark_job": {
        "jar_file_uris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
        "main_class": "org.apache.spark.examples.SparkPi",
    },
}

Example of the configuration for a Hive Job:

airflow/providers/google/cloud/example_dags/example_dataproc.py

HIVE_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "hive_job": {"query_list": {"queries": ["SHOW DATABASES;"]}},
}

Example of the configuration for a Hadoop Job:

airflow/providers/google/cloud/example_dags/example_dataproc.py

HADOOP_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "hadoop_job": {
        "main_jar_file_uri": "file:///usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar",
        "args": ["wordcount", "gs://pub/shakespeare/rose.txt", OUTPUT_PATH],
    },
}

Example of the configuration for a Pig Job:

airflow/providers/google/cloud/example_dags/example_dataproc.py

PIG_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "pig_job": {"query_list": {"queries": ["define sin HiveUDF('sin');"]}},
}

Example of the configuration for a SparkR Job:

airflow/providers/google/cloud/example_dags/example_dataproc.py

SPARKR_JOB = {
    "reference": {"project_id": PROJECT_ID},
    "placement": {"cluster_name": CLUSTER_NAME},
    "spark_r_job": {"main_r_file_uri": SPARKR_URI},
}

Working with workflow templates

Dataproc supports creating workflow templates that can be triggered later on.

A workflow template can be created using: DataprocCreateWorkflowTemplateOperator.

airflow/providers/google/cloud/example_dags/example_dataproc.py

create_workflow_template = DataprocCreateWorkflowTemplateOperator(
    task_id="create_workflow_template",
    template=WORKFLOW_TEMPLATE,
    project_id=PROJECT_ID,
    location=REGION,
)
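The WORKFLOW_TEMPLATE dict passed above is not shown in this snippet. A minimal sketch of what such a template can look like, following the Dataproc WorkflowTemplate API structure (the template id, cluster name, and job step are illustrative values, and the cluster config is abbreviated):

```python
WORKFLOW_TEMPLATE = {
    "id": "sparkpi-workflow",  # illustrative template id
    "placement": {
        # A managed cluster is created for the workflow run and deleted afterwards.
        "managed_cluster": {
            "cluster_name": "workflow-cluster",
            "config": {
                "master_config": {"num_instances": 1},  # abbreviated cluster config
                "worker_config": {"num_instances": 2},
            },
        }
    },
    "jobs": [
        {
            # Each job in the workflow is identified by a step_id.
            "step_id": "sparkpi",
            "spark_job": {
                "jar_file_uris": ["file:///usr/lib/spark/examples/jars/spark-examples.jar"],
                "main_class": "org.apache.spark.examples.SparkPi",
            },
        }
    ],
}
```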

Once a workflow is created users can trigger it using DataprocInstantiateWorkflowTemplateOperator:

airflow/providers/google/cloud/example_dags/example_dataproc.py

trigger_workflow = DataprocInstantiateWorkflowTemplateOperator(
    task_id="trigger_workflow", region=REGION, project_id=PROJECT_ID, template_id=WORKFLOW_NAME
)
