AWS Glue¶
AWS Glue is a serverless data integration service that makes it easy to discover, prepare, and combine data for analytics, machine learning, and application development. AWS Glue provides all the capabilities needed for data integration so that you can start analyzing your data and putting it to use in minutes instead of months.
Prerequisite Tasks¶
To use these operators, you must do a few things:
Create necessary resources using AWS Console or AWS CLI.
Install API libraries via pip.
pip install 'apache-airflow[amazon]'

Detailed information is available in Installation of Airflow®.
Generic Parameters¶
- aws_conn_id
  Reference to the Amazon Web Services Connection ID. If this parameter is set to None then the default boto3 behaviour is used without a connection lookup. Otherwise use the credentials stored in the Connection. Default: aws_default
- region_name
  AWS Region name. If this parameter is set to None or omitted then region_name from the AWS Connection extra parameters will be used. Otherwise use the specified value instead of the connection value. Default: None
- verify
  Whether or not to verify SSL certificates.
  - False - Do not validate SSL certificates.
  - path/to/cert/bundle.pem - A filename of the CA cert bundle to use. You can specify this argument if you want to use a different CA cert bundle than the one used by botocore.
  If this parameter is set to None or is omitted then verify from the AWS Connection extra parameters will be used. Otherwise use the specified value instead of the connection value. Default: None
- botocore_config
  The provided dictionary is used to construct a botocore.config.Config. This configuration can be used, for example, to avoid throttling exceptions or to set connection timeouts.
{ "signature_version": "unsigned", "s3": { "us_east_1_regional_endpoint": True, }, "retries": { "mode": "standard", "max_attempts": 10, }, "connect_timeout": 300, "read_timeout": 300, "tcp_keepalive": True, }
  If this parameter is set to None or omitted then config_kwargs from the AWS Connection extra parameters will be used. Otherwise use the specified value instead of the connection value. Default: None
Note
Specifying an empty dictionary, {}, will overwrite the connection configuration for botocore.config.Config.
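All operators and sensors on this page accept these generic parameters. A minimal sketch of passing them explicitly to a Glue operator (the connection ID, region, and retry values below are illustrative assumptions, not recommendations):

from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

submit_with_overrides = GlueJobOperator(
    task_id="submit_with_overrides",
    job_name="my-glue-job",  # hypothetical job name
    script_location="s3://my-bucket/etl_script.py",  # hypothetical script path
    iam_role_name="GlueJobRole",  # hypothetical role name
    aws_conn_id="aws_default",  # which Airflow connection supplies credentials
    region_name="eu-west-1",  # overrides the region stored in the connection
    verify=True,  # validate SSL certificates
    botocore_config={"retries": {"mode": "standard", "max_attempts": 10}},
)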
Operators¶
Create an AWS Glue crawler¶
AWS Glue crawlers scan data in various data sources and populate the AWS Glue Data Catalog with the tables they discover.
To create a new AWS Glue crawler or run an existing one you can use GlueCrawlerOperator.

crawl_s3 = GlueCrawlerOperator(
    task_id="crawl_s3",
    config=glue_crawler_config,
)
Note
The AWS IAM role included in the config needs access to the source data location (e.g. s3:GetObject access if the data is stored in Amazon S3) as well as the AWSGlueServiceRole policy. See the References section below for a link to more details.
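The config dictionary is passed through to the Glue CreateCrawler API, so it follows that API's shape. A minimal sketch of what glue_crawler_config might look like (the crawler name, role ARN, database, and S3 path are all hypothetical):

glue_crawler_config = {
    "Name": "example-crawler",  # hypothetical crawler name
    "Role": "arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical role ARN
    "DatabaseName": "example_db",  # Data Catalog database that receives the tables
    "Targets": {"S3Targets": [{"Path": "s3://my-bucket/input/"}]},  # hypothetical source
}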
Submit an AWS Glue job¶
To submit a new AWS Glue job you can use GlueJobOperator.

submit_glue_job = GlueJobOperator(
    task_id="submit_glue_job",
    job_name=glue_job_name,
    script_location=f"s3://{bucket_name}/etl_script.py",
    s3_bucket=bucket_name,
    iam_role_name=role_name,
    create_job_kwargs={"GlueVersion": "3.0", "NumberOfWorkers": 2, "WorkerType": "G.1X"},
)
Note
The same AWS IAM role used for the crawler can be used here as well, but it will need policies to provide access to the output location for result data.
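The file referenced by script_location is an ordinary Glue job script that Glue runs on your behalf. A minimal sketch of such a script for a Glue Spark job (the input and output paths are hypothetical):

# etl_script.py - a minimal Glue Spark job sketch; paths are hypothetical.
import sys

from awsglue.context import GlueContext
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read raw CSV input, keep one category, and write the result as Parquet.
df = spark.read.csv("s3://my-bucket/input/", header=True)
df.filter(df["category"] == "mixed").write.parquet("s3://my-bucket/output/")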
Create an AWS Glue Data Quality ruleset¶
AWS Glue Data Quality allows you to measure and monitor the quality
of your data so that you can make good business decisions.
To create a new AWS Glue Data Quality ruleset or update an existing one you can use GlueDataQualityOperator.

create_rule_set = GlueDataQualityOperator(
    task_id="create_rule_set",
    name=rule_set_name,
    ruleset=RULE_SET,
    data_quality_ruleset_kwargs={
        "TargetTable": {
            "TableName": athena_table,
            "DatabaseName": athena_database,
        }
    },
)
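Rulesets are written in Data Quality Definition Language (DQDL). A small sketch of what the RULE_SET string might contain (the rules and column name are illustrative assumptions):

# A hypothetical DQDL ruleset: the table must be non-empty and the
# "category" column must contain no null values.
RULE_SET = """
Rules = [
    RowCount > 0,
    IsComplete "category"
]
"""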
Start an AWS Glue Data Quality Evaluation Run¶
To start an AWS Glue Data Quality ruleset evaluation run you can use GlueDataQualityRuleSetEvaluationRunOperator.

start_evaluation_run = GlueDataQualityRuleSetEvaluationRunOperator(
    task_id="start_evaluation_run",
    datasource={
        "GlueTable": {
            "TableName": athena_table,
            "DatabaseName": athena_database,
        }
    },
    role=test_context[ROLE_ARN_KEY],
    rule_set_names=[rule_set_name],
)
Start an AWS Glue Data Quality Recommendation Run¶
To start an AWS Glue Data Quality rule recommendation run you can use GlueDataQualityRuleRecommendationRunOperator.

recommendation_run = GlueDataQualityRuleRecommendationRunOperator(
    task_id="recommendation_run",
    datasource={
        "GlueTable": {
            "TableName": athena_table,
            "DatabaseName": athena_database,
        }
    },
    role=test_context[ROLE_ARN_KEY],
    recommendation_run_kwargs={"CreatedRulesetName": rule_set_name},
)
Sensors¶
Wait on an AWS Glue crawler state¶
To wait on the state of an AWS Glue crawler execution until it reaches a terminal state you can use GlueCrawlerSensor.

wait_for_crawl = GlueCrawlerSensor(
    task_id="wait_for_crawl",
    crawler_name=glue_crawler_name,
)
Wait on an AWS Glue job state¶
To wait on the state of an AWS Glue job until it reaches a terminal state you can use GlueJobSensor.

wait_for_job = GlueJobSensor(
    task_id="wait_for_job",
    job_name=glue_job_name,
    # Job run ID extracted from the previous GlueJobOperator task
    run_id=submit_glue_job.output,
    verbose=True,  # prints the Glue job logs in the Airflow task logs
)
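Because run_id=submit_glue_job.output pulls the job run ID from XCom, the sensor already depends on the operator. To make the full ordering explicit you can chain the tasks from the snippets above; a sketch:

from airflow.models.baseoperator import chain

# Crawl the source data, wait for the crawler, then submit the job and wait on it.
chain(crawl_s3, wait_for_crawl, submit_glue_job, wait_for_job)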
Wait on an AWS Glue Data Quality Evaluation Run¶
To wait on the state of an AWS Glue Data Quality ruleset evaluation run until it reaches a terminal state you can use GlueDataQualityRuleSetEvaluationRunSensor.

await_evaluation_run_sensor = GlueDataQualityRuleSetEvaluationRunSensor(
    task_id="await_evaluation_run_sensor",
    evaluation_run_id=start_evaluation_run.output,
)
Wait on an AWS Glue Data Quality Recommendation Run¶
To wait on the state of an AWS Glue Data Quality recommendation run until it reaches a terminal state you can use GlueDataQualityRuleRecommendationRunSensor.

await_recommendation_run_sensor = GlueDataQualityRuleRecommendationRunSensor(
    task_id="await_recommendation_run_sensor",
    recommendation_run_id=recommendation_run.output,
)
Wait on an AWS Glue Catalog Partition¶
To wait for a partition to show up in the AWS Glue Data Catalog you can use GlueCatalogPartitionSensor.

wait_for_catalog_partition = GlueCatalogPartitionSensor(
    task_id="wait_for_catalog_partition",
    table_name="input",
    database_name=glue_db_name,
    expression="category='mixed'",
)
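The expression uses the partition filter syntax of the Glue GetPartitions API. Like any Airflow sensor, the polling cadence can be tuned through the base sensor parameters; a sketch with illustrative values:

wait_for_catalog_partition_tuned = GlueCatalogPartitionSensor(
    task_id="wait_for_catalog_partition_tuned",
    table_name="input",
    database_name=glue_db_name,
    expression="category='mixed' AND year='2024'",  # hypothetical compound filter
    poke_interval=60,  # check for the partition once a minute
    timeout=60 * 60,  # fail the task if it has not appeared within an hour
)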