airflow.providers.amazon.aws.triggers.ec2

Module Contents

Classes

EC2StateSensorTrigger

Poll the EC2 instance and yield a TriggerEvent once the state of the instance matches the target_state.

class airflow.providers.amazon.aws.triggers.ec2.EC2StateSensorTrigger(instance_id, target_state, aws_conn_id='aws_default', region_name=None, poll_interval=60)[source]

Bases: airflow.triggers.base.BaseTrigger

Poll the EC2 instance and yield a TriggerEvent once the state of the instance matches the target_state.

Parameters
  • instance_id (str) – ID of the AWS EC2 instance to poll

  • target_state (str) – target state of the instance (for example, running or stopped)

  • aws_conn_id (str | None) – The Airflow connection used for AWS credentials. If this is None or empty, the default boto3 behaviour is used; when running Airflow in a distributed manner, that default boto3 configuration must be maintained on each worker node.

  • region_name (str | None) – (optional) AWS region name associated with the client

  • poll_interval (int) – number of seconds to wait before attempting the next poll
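The parameters and their defaults can be illustrated with a minimal stand-in class. This is not the real trigger, only a sketch mirroring the constructor signature documented above:

```python
from dataclasses import dataclass
from typing import Optional

# Minimal stand-in mirroring the trigger's constructor signature and
# defaults, for illustration only (not the real Airflow class).
@dataclass
class EC2StateSensorTriggerSketch:
    instance_id: str
    target_state: str
    aws_conn_id: Optional[str] = "aws_default"
    region_name: Optional[str] = None
    poll_interval: int = 60

# Only instance_id and target_state are required; the rest fall back
# to the defaults shown in the signature.
t = EC2StateSensorTriggerSketch(
    instance_id="i-0123456789abcdef0",  # hypothetical instance ID
    target_state="running",
)
print(t.poll_interval)  # 60
```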

serialize()[source]

Return the information needed to reconstruct this Trigger.

Returns

Tuple of (class path, keyword arguments needed to re-instantiate).

Return type

tuple[str, dict[str, Any]]
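As a hedged sketch of this contract, a plain function can show the shape of the returned tuple: the importable class path plus the keyword arguments needed to re-instantiate the trigger. The kwarg names here assume the constructor parameters documented above:

```python
# Standalone sketch of the (classpath, kwargs) tuple that serialize()
# returns; illustrative only, not the provider's implementation.
def serialize(instance_id, target_state, aws_conn_id="aws_default",
              region_name=None, poll_interval=60):
    return (
        "airflow.providers.amazon.aws.triggers.ec2.EC2StateSensorTrigger",
        {
            "instance_id": instance_id,
            "target_state": target_state,
            "aws_conn_id": aws_conn_id,
            "region_name": region_name,
            "poll_interval": poll_interval,
        },
    )

classpath, kwargs = serialize("i-0123456789abcdef0", "running")
print(classpath.rsplit(".", 1)[-1])  # EC2StateSensorTrigger
```

The triggerer uses the class path to import the trigger class and the kwargs to rebuild it, which is why every constructor argument must appear in the dictionary.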

hook()[source]
async run()[source]

Run the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.
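The single-event pattern described above (poll, yield once, return) can be sketched as a standalone async generator. Here get_state and the event dict are illustrative stand-ins for the EC2 hook call and TriggerEvent, not the provider's actual code:

```python
import asyncio

# Simplified, standalone sketch of a single-event trigger's run():
# poll until the target state is observed, yield one event, then return.
async def run(get_state, target_state, poll_interval=0):
    while True:
        if await get_state() == target_state:
            yield {"status": "success", "message": "target state met"}
            return
        await asyncio.sleep(poll_interval)

# Drive it with a fake state source that reaches "running" on the third poll.
states = iter(["pending", "pending", "running"])

async def fake_get_state():
    return next(states)

async def main():
    return [event async for event in run(fake_get_state, "running")]

events = asyncio.run(main())
print(events[0]["status"])  # success
```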
