airflow.providers.amazon.aws.triggers.batch¶
Module Contents¶
Classes¶
| BatchOperatorTrigger | Asynchronously poll the boto3 API and wait for the Batch job to be in the SUCCEEDED state. |
| BatchSensorTrigger | Checks for the status of a submitted job_id to AWS Batch until it reaches a failure or a success state. |
| BatchJobTrigger | Checks for the status of a submitted job_id to AWS Batch until it reaches a failure or a success state. |
| BatchCreateComputeEnvironmentTrigger | Asynchronously poll the boto3 API and wait for the compute environment to be ready. |
- class airflow.providers.amazon.aws.triggers.batch.BatchOperatorTrigger(job_id=None, max_retries=10, aws_conn_id='aws_default', region_name=None, poll_interval=30)[source]¶
- Bases: airflow.triggers.base.BaseTrigger
- Asynchronously poll the boto3 API and wait for the Batch job to be in the SUCCEEDED state. A usage sketch follows the parameter list below.
- Parameters
- job_id (str | None) – The unique identifier of the submitted Batch job to poll. 
- max_retries (int) – The maximum number of attempts to be made. 
- aws_conn_id (str | None) – The Airflow connection used for AWS credentials. 
- region_name (str | None) – AWS region name to use in the AWS Hook. 
- poll_interval (int) – The amount of time in seconds to wait between attempts. 
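For orientation, here is a minimal sketch of deferring a custom operator onto this trigger. The operator class, the placeholder job ID, and the assumed "status"/"job_id" keys in the event payload are illustrative and not part of this module.

```python
from airflow.models.baseoperator import BaseOperator
from airflow.providers.amazon.aws.triggers.batch import BatchOperatorTrigger


class ExampleDeferringBatchOperator(BaseOperator):
    """Hypothetical operator: the Batch job is submitted elsewhere, then we defer until it succeeds."""

    def execute(self, context):
        job_id = "00000000-0000-0000-0000-000000000000"  # placeholder; assume the job was already submitted
        self.defer(
            trigger=BatchOperatorTrigger(
                job_id=job_id,
                max_retries=10,
                aws_conn_id="aws_default",
                region_name="us-east-1",
                poll_interval=30,
            ),
            method_name="execute_complete",
        )

    def execute_complete(self, context, event=None):
        # Assumes the trigger event carries a "status" key; adjust to the actual payload.
        if not event or event.get("status") != "success":
            raise RuntimeError(f"Batch job did not succeed: {event}")
        return event.get("job_id")
```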
 
 - async run()[source]¶
- Runs the trigger in an asynchronous context.
- The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.
- If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).
- In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed. 
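To illustrate that contract, a minimal single-event trigger could look like the following sketch; this is a generic example, not part of this module, and the class path returned by serialize() is a placeholder.

```python
import asyncio

from airflow.triggers.base import BaseTrigger, TriggerEvent


class OneShotTrigger(BaseTrigger):
    """Hypothetical single-event trigger: waits a fixed delay, fires once, then finishes."""

    def __init__(self, delay: int = 10):
        super().__init__()
        self.delay = delay

    def serialize(self):
        # Triggers must be serializable so the triggerer can re-instantiate them.
        return ("path.to.OneShotTrigger", {"delay": self.delay})

    async def run(self):
        await asyncio.sleep(self.delay)            # non-blocking wait in the triggerer's event loop
        yield TriggerEvent({"status": "success"})  # fire exactly one event, then return
```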
 
- class airflow.providers.amazon.aws.triggers.batch.BatchSensorTrigger(job_id, region_name, aws_conn_id='aws_default', poke_interval=5)[source]¶
- Bases: airflow.triggers.base.BaseTrigger
- Checks for the status of a submitted job_id to AWS Batch until it reaches a failure or a success state.
- BatchSensorTrigger is fired as a deferred class with params to poll the job state in the Triggerer. A sketch of driving it by hand follows the parameter list below.
- Parameters
- job_id (str) – the job ID to poll for completion 
- region_name (str | None) – AWS region name to use. Overrides the region_name in the connection (if provided) 
- aws_conn_id (str | None) – connection ID for AWS credentials / region name. If None, the default boto3 credential strategy is used 
- poke_interval (float) – polling period in seconds to check for the status of the job 
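In normal use the triggerer consumes this trigger after a sensor defers onto it, but it can also be driven by hand, for example in a test. A rough sketch under that assumption; the job ID and region are placeholders, and the event payload shape is assumed.

```python
import asyncio

from airflow.providers.amazon.aws.triggers.batch import BatchSensorTrigger


async def wait_for_batch_job(job_id: str, region_name: str) -> dict:
    # Drive the trigger directly; in production the triggerer's event loop does this.
    trigger = BatchSensorTrigger(
        job_id=job_id,
        region_name=region_name,
        aws_conn_id="aws_default",
        poke_interval=30,
    )
    async for event in trigger.run():
        # Single-event trigger: the first TriggerEvent carries the final status.
        return event.payload
    raise RuntimeError("Trigger finished without emitting an event")


if __name__ == "__main__":
    result = asyncio.run(wait_for_batch_job("00000000-0000-0000-0000-000000000000", "us-east-1"))
    print(result)
```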
 
 
- class airflow.providers.amazon.aws.triggers.batch.BatchJobTrigger(job_id, region_name=None, aws_conn_id='aws_default', waiter_delay=5, waiter_max_attempts=720)[source]¶
- Bases: airflow.providers.amazon.aws.triggers.base.AwsBaseWaiterTrigger
- Checks for the status of a submitted job_id to AWS Batch until it reaches a failure or a success state. A configuration sketch follows the parameter list below.
- Parameters
- job_id (str | None) – the job ID to poll for completion 
- region_name (str | None) – AWS region name to use. Overrides the region_name in the connection (if provided) 
- aws_conn_id (str | None) – connection ID for AWS credentials / region name. If None, the default boto3 credential strategy is used 
- waiter_delay (int) – polling period in seconds to check for the status of the job 
- waiter_max_attempts (int) – The maximum number of attempts to be made. 
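Since this trigger delegates to a waiter, the total wait budget is roughly waiter_delay * waiter_max_attempts seconds. A small configuration sketch; the job ID, region, and numbers are illustrative.

```python
from airflow.providers.amazon.aws.triggers.batch import BatchJobTrigger

# Poll every 30 seconds, up to 240 attempts: roughly a two-hour budget.
trigger = BatchJobTrigger(
    job_id="00000000-0000-0000-0000-000000000000",  # placeholder job ID
    region_name="us-east-1",
    aws_conn_id="aws_default",
    waiter_delay=30,
    waiter_max_attempts=240,
)
# A deferrable operator would then hand the trigger to the triggerer, e.g.:
# self.defer(trigger=trigger, method_name="execute_complete")
```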
 
 
- class airflow.providers.amazon.aws.triggers.batch.BatchCreateComputeEnvironmentTrigger(compute_env_arn, waiter_delay=30, waiter_max_attempts=10, aws_conn_id='aws_default', region_name=None)[source]¶
- Bases: airflow.providers.amazon.aws.triggers.base.AwsBaseWaiterTrigger
- Asynchronously poll the boto3 API and wait for the compute environment to be ready. A usage sketch follows the parameter list below.
- Parameters
- compute_env_arn (str) – The ARN of the compute environment. 
- waiter_max_attempts (int) – The maximum number of attempts to be made. 
- aws_conn_id (str | None) – The Airflow connection used for AWS credentials. 
- region_name (str | None) – AWS region name to use in the AWS Hook. 
- waiter_delay (int) – The amount of time in seconds to wait between attempts. 
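A rough sketch of pairing this trigger with a compute-environment creation call; the BatchClientHook usage, the environment settings, and the operator class are assumptions for illustration, not part of this module.

```python
from airflow.models.baseoperator import BaseOperator
from airflow.providers.amazon.aws.hooks.batch_client import BatchClientHook
from airflow.providers.amazon.aws.triggers.batch import BatchCreateComputeEnvironmentTrigger


class ExampleCreateComputeEnvOperator(BaseOperator):
    """Hypothetical operator: creates a Batch compute environment, then defers until it is ready."""

    def execute(self, context):
        hook = BatchClientHook(aws_conn_id="aws_default", region_name="us-east-1")
        # hook.conn is the boto3 Batch client; computeResources, serviceRole, etc. omitted for brevity.
        response = hook.conn.create_compute_environment(
            computeEnvironmentName="example-env",
            type="MANAGED",
        )
        self.defer(
            trigger=BatchCreateComputeEnvironmentTrigger(
                compute_env_arn=response["computeEnvironmentArn"],
                waiter_delay=30,
                waiter_max_attempts=10,
                aws_conn_id="aws_default",
                region_name="us-east-1",
            ),
            method_name="execute_complete",
        )

    def execute_complete(self, context, event=None):
        # Assumes the event reports success once the environment is ready.
        return event
```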
 
 
