Amazon EMR on EKS Operators

Amazon EMR on EKS provides a deployment option for Amazon EMR that allows you to run open-source big data frameworks on Amazon Elastic Kubernetes Service (Amazon EKS).

Airflow provides the EMRContainerOperator to submit Spark jobs to your EMR on EKS virtual cluster.

Prerequisite Tasks

To use these operators, you must create the necessary AWS resources (using the AWS Console or the AWS CLI), install the Amazon provider package (for example, pip install 'apache-airflow[amazon]'), and set up an AWS connection in Airflow.

This example assumes that you already have an EMR on EKS virtual cluster configured. See the EMR on EKS Getting Started guide for more information.
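
If you are not sure which virtual cluster ID to use, you can look it up with the boto3 emr-containers client. The snippet below is a minimal sketch, not part of the bundled example DAG; it assumes your AWS credentials and region are already configured for boto3.

import boto3

# Illustration only: list RUNNING virtual clusters and print their IDs so you
# can pick the one to pass to the operator later in this guide.
client = boto3.client("emr-containers")
response = client.list_virtual_clusters(states=["RUNNING"])
for cluster in response["virtualClusters"]:
    print(cluster["id"], cluster["name"])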

Run a Spark job on EMR on EKS

Purpose

The EMRContainerOperator will submit a new job to an EMR on EKS virtual cluster and wait for the job to complete. The example job below calculates the mathematical constant Pi; its progress can also be monitored with the EMRContainerSensor. In a production job, you would usually refer to a Spark script on Amazon S3.

Job configuration

To create a job for EMR on EKS, you need to specify your virtual cluster ID, the release of EMR you want to use, your IAM execution role, and Spark submit parameters.

You can also optionally provide configuration overrides, such as Spark, Hive, or Log4j properties, as well as a monitoring configuration that sends Spark logs to S3 or CloudWatch.

In the example, we show how to add an applicationConfiguration to use the AWS Glue Data Catalog and a monitoringConfiguration to send logs to the /aws/emr-eks-spark log group in CloudWatch. Refer to the EMR on EKS guide for more details on job configuration.

airflow/providers/amazon/aws/example_dags/example_emr_eks_job.py

JOB_DRIVER_ARG = {
    "sparkSubmitJobDriver": {
        "entryPoint": "local:///usr/lib/spark/examples/src/main/python/pi.py",
        "sparkSubmitParameters": "--conf spark.executors.instances=2 --conf spark.executors.memory=2G --conf spark.executor.cores=2 --conf spark.driver.cores=1",  # noqa: E501
    }
}

CONFIGURATION_OVERRIDES_ARG = {
    "applicationConfiguration": [
        {
            "classification": "spark-defaults",
            "properties": {
                "spark.hadoop.hive.metastore.client.factory.class": "com.amazonaws.glue.catalog.metastore.AWSGlueDataCatalogHiveClientFactory",  # noqa: E501
            },
        }
    ],
    "monitoringConfiguration": {
        "cloudWatchMonitoringConfiguration": {
            "logGroupName": "/aws/emr-eks-spark",
            "logStreamNamePrefix": "airflow",
        }
    },
}
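
The example above ships the Spark logs to CloudWatch. If you would rather archive them on Amazon S3 (or do both), the monitoringConfiguration block also accepts an s3MonitoringConfiguration. The snippet below is a hedged sketch; the bucket and prefix in logUri are placeholders you would replace with your own.

# Illustration only: an alternative monitoring configuration that writes the
# Spark driver and executor logs to an S3 location instead of CloudWatch.
S3_CONFIGURATION_OVERRIDES_ARG = {
    "monitoringConfiguration": {
        "s3MonitoringConfiguration": {
            "logUri": "s3://your-log-bucket/emr-eks-logs/",
        }
    },
}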

We pass the virtual_cluster_id and execution_role_arn values as operator parameters, but you can store them in a connection or provide them in the DAG. Your AWS region should be defined either in the aws_default connection, with {"region_name": "us-east-1"} in its extras, or in a custom connection whose name you pass to the operator with the aws_conn_id parameter (a sketch of the latter follows the example below).

airflow/providers/amazon/aws/example_dags/example_emr_eks_job.py

job_starter = EMRContainerOperator(
    task_id="start_job",
    virtual_cluster_id=VIRTUAL_CLUSTER_ID,
    execution_role_arn=JOB_ROLE_ARN,
    release_label="emr-6.3.0-latest",
    job_driver=JOB_DRIVER_ARG,
    configuration_overrides=CONFIGURATION_OVERRIDES_ARG,
    name="pi.py",
)
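
If your region and credentials live in a custom connection rather than aws_default, the only change is to pass that connection's name with aws_conn_id. This is a sketch, not part of the bundled example DAG, and the connection name aws_us_east_1 is hypothetical.

# Illustration only: the same job submission, but using a custom AWS connection.
job_starter = EMRContainerOperator(
    task_id="start_job",
    virtual_cluster_id=VIRTUAL_CLUSTER_ID,
    execution_role_arn=JOB_ROLE_ARN,
    release_label="emr-6.3.0-latest",
    job_driver=JOB_DRIVER_ARG,
    configuration_overrides=CONFIGURATION_OVERRIDES_ARG,
    name="pi.py",
    aws_conn_id="aws_us_east_1",  # hypothetical connection with region_name in its extras
)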

The EMRContainerOperator waits until the job completes successfully, or raises an AirflowException if there is an error. The operator returns the Job ID of the job run.
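
If you prefer to decouple job submission from monitoring, you can feed that Job ID into the EMRContainerSensor mentioned earlier. The sketch below is not part of the bundled example DAG; it assumes the emr_containers import path used by this provider version and relies on the operator's XCom output, so adjust the import if your provider release exposes the sensor elsewhere.

from airflow.providers.amazon.aws.sensors.emr_containers import EMRContainerSensor

# Illustration only: poll the job submitted above until it reaches a terminal state.
# job_starter.output resolves at runtime to the Job ID returned by EMRContainerOperator.
job_waiter = EMRContainerSensor(
    task_id="watch_job",
    virtual_cluster_id=VIRTUAL_CLUSTER_ID,
    job_id=job_starter.output,
)

job_starter >> job_waiter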

Reference

For further information, see the Boto3 documentation for the EMR Containers service and the Amazon EMR on EKS Development Guide.
