AWS Glue Operators

AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all the capabilities needed for data integration so that you can start analyzing your data and putting it to use in minutes instead of months.

Prerequisite Tasks

To use these operators, you must first complete the standard setup for the Amazon provider: create the necessary AWS resources, install the Amazon provider package, and configure an AWS connection in Airflow.

AWS Glue Crawler Operator

AWS Glue Crawlers scan data in various data sources and populate the AWS Glue Data Catalog with metadata tables. To create a new AWS Glue Crawler or run an existing one, you can use GlueCrawlerOperator.

airflow/providers/amazon/aws/example_dags/example_glue.py[source]

crawl_s3 = GlueCrawlerOperator(
    task_id='crawl_s3',
    config=GLUE_CRAWLER_CONFIG,
    wait_for_completion=False,
)

Note that the AWS IAM role included in the config needs read access to the source data location (e.g. s3:GetObject access if the data is stored in Amazon S3) as well as the AWSGlueServiceRole managed policy. See the Reference section below for a link to more details.
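The GLUE_CRAWLER_CONFIG dictionary passed above is defined elsewhere in the example DAG. A minimal sketch of such a config might look like the following; the bucket, role ARN, and database name are placeholder assumptions, and the keys mirror the shape of the AWS Glue CreateCrawler API:

```python
# Hypothetical crawler config; all resource names below are placeholders.
# The keys (Name, Role, DatabaseName, Targets) follow the AWS Glue
# CreateCrawler API shape.
GLUE_CRAWLER_ROLE = 'arn:aws:iam::123456789012:role/GlueCrawlerRole'  # placeholder
GLUE_CRAWLER_NAME = 'example_crawler'

GLUE_CRAWLER_CONFIG = {
    'Name': GLUE_CRAWLER_NAME,
    'Role': GLUE_CRAWLER_ROLE,
    'DatabaseName': 'example_database',
    # Crawl a single S3 prefix; multiple targets (S3, JDBC, ...) are possible.
    'Targets': {
        'S3Targets': [
            {'Path': 's3://example-bucket/input/'},
        ]
    },
}
```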

AWS Glue Crawler Sensor

To wait on the state of an AWS Glue Crawler execution until it reaches a terminal state, you can use GlueCrawlerSensor.

airflow/providers/amazon/aws/example_dags/example_glue.py[source]

wait_for_crawl = GlueCrawlerSensor(task_id='wait_for_crawl', crawler_name=GLUE_CRAWLER_NAME)

AWS Glue Job Operator

To submit a new AWS Glue Job you can use GlueJobOperator.

airflow/providers/amazon/aws/example_dags/example_glue.py[source]

job_name = 'example_glue_job'
submit_glue_job = GlueJobOperator(
    task_id='submit_glue_job',
    job_name=job_name,
    wait_for_completion=False,
    script_location=f's3://{GLUE_EXAMPLE_S3_BUCKET}/etl_script.py',
    s3_bucket=GLUE_EXAMPLE_S3_BUCKET,
    iam_role_name=GLUE_CRAWLER_ROLE.split('/')[-1],
    create_job_kwargs={'GlueVersion': '3.0', 'NumberOfWorkers': 2, 'WorkerType': 'G.1X'},
)

Note that the same AWS IAM role used for the Crawler can be used here as well, but it will need policies to provide access to the output location for result data.
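Note also that the iam_role_name parameter expects the bare role name rather than a full ARN, which is why the snippet above splits the crawler role ARN on '/' and keeps the last segment:

```python
# Extracting a role name from a role ARN (placeholder account and role values).
GLUE_CRAWLER_ROLE = 'arn:aws:iam::123456789012:role/GlueCrawlerRole'

iam_role_name = GLUE_CRAWLER_ROLE.split('/')[-1]
print(iam_role_name)  # GlueCrawlerRole
```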

AWS Glue Job Sensor

To wait on the state of an AWS Glue Job until it reaches a terminal state, you can use GlueJobSensor.

airflow/providers/amazon/aws/example_dags/example_glue.py[source]

wait_for_job = GlueJobSensor(
    task_id='wait_for_job',
    job_name=job_name,
    # Job run ID pulled via XCom from the submit_glue_job task
    run_id=submit_glue_job.output,
)
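Conceptually, the sensor polls the job run's state and succeeds once the run reaches a terminal state, failing on error states. A rough, simplified sketch of that loop is shown below; get_job_run_state is a hypothetical stand-in for a boto3 call such as client.get_job_run(...)['JobRun']['JobRunState'], and the state sets are an assumption based on the Glue GetJobRun API:

```python
import time

# Terminal Glue job run states (assumed, per the AWS Glue GetJobRun API).
SUCCESS_STATES = {'SUCCEEDED'}
FAILURE_STATES = {'FAILED', 'STOPPED', 'TIMEOUT', 'ERROR'}


def wait_for_glue_job(get_job_run_state, poke_interval=5.0):
    """Poll until a Glue job run reaches a terminal state.

    get_job_run_state is any zero-argument callable returning the current
    JobRunState string; in practice this would wrap a boto3 get_job_run call.
    """
    while True:
        state = get_job_run_state()
        if state in SUCCESS_STATES:
            return state
        if state in FAILURE_STATES:
            raise RuntimeError(f'Glue job run ended in state {state}')
        time.sleep(poke_interval)


# Usage: simulate a run that succeeds on the third poll.
states = iter(['RUNNING', 'RUNNING', 'SUCCEEDED'])
final = wait_for_glue_job(lambda: next(states), poke_interval=0)
print(final)  # SUCCEEDED
```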

Reference

For further information, look at:
