airflow.providers.amazon.aws.triggers.redshift_data

Module Contents

Classes

RedshiftDataTrigger

RedshiftDataTrigger is fired as a deferred trigger class with parameters to run the task in the triggerer.

class airflow.providers.amazon.aws.triggers.redshift_data.RedshiftDataTrigger(statement_id, task_id, poll_interval, aws_conn_id='aws_default', region_name=None, verify=None, botocore_config=None)[source]

Bases: airflow.triggers.base.BaseTrigger

RedshiftDataTrigger is fired as a deferred trigger class with parameters to run the task in the triggerer.

Parameters
  • statement_id (str) – the UUID of the statement

  • task_id (str) – task ID of the DAG task that deferred this trigger

  • poll_interval (int) – polling period in seconds to check for the status

  • aws_conn_id (str | None) – AWS connection ID for Redshift

  • region_name (str | None) – AWS region to use

  • verify (bool | str | None) – whether or not to verify SSL certificates

  • botocore_config (dict | None) – configuration dictionary (key-values) for the botocore client

serialize()[source]

Serializes RedshiftDataTrigger arguments and classpath.
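The serialize contract for Airflow triggers is a tuple of the trigger's classpath and the keyword arguments needed to reconstruct it, so the triggerer process can re-instantiate the trigger from the metadata database. The sketch below is an illustrative stand-in, not the provider's actual implementation:

```python
class MiniRedshiftDataTrigger:
    """Illustrative stand-in for RedshiftDataTrigger (hypothetical, for demonstration only)."""

    def __init__(self, statement_id, task_id, poll_interval, aws_conn_id="aws_default"):
        self.statement_id = statement_id
        self.task_id = task_id
        self.poll_interval = poll_interval
        self.aws_conn_id = aws_conn_id

    def serialize(self):
        # Return (classpath, kwargs): everything the triggerer needs to
        # rebuild this trigger instance in another process.
        return (
            "airflow.providers.amazon.aws.triggers.redshift_data.RedshiftDataTrigger",
            {
                "statement_id": self.statement_id,
                "task_id": self.task_id,
                "poll_interval": self.poll_interval,
                "aws_conn_id": self.aws_conn_id,
            },
        )


classpath, kwargs = MiniRedshiftDataTrigger("abc-123", "run_sql", 10).serialize()
```

Because only the classpath and kwargs are persisted, trigger constructor arguments should be JSON-serializable values.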

hook()[source]
async run()[source]

Run the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.
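The yield-then-return pattern for a single-event trigger can be sketched as a plain asyncio generator that polls a stubbed status check until the statement reaches a terminal state. The hook class, method name, and status strings below are illustrative assumptions, not the provider's actual API:

```python
import asyncio

# Terminal statement states (illustrative; modeled on Redshift Data API statuses).
FINISHED_STATES = {"FINISHED", "FAILED", "ABORTED"}


class FakeRedshiftDataHook:
    """Stub hook: reports 'PICKED' twice, then 'FINISHED'."""

    def __init__(self):
        self.calls = 0

    async def get_statement_status(self, statement_id):
        self.calls += 1
        return "FINISHED" if self.calls >= 3 else "PICKED"


async def run(statement_id, poll_interval=0.01):
    # Single-event trigger: poll until terminal, yield one event, then return.
    hook = FakeRedshiftDataHook()
    while True:
        status = await hook.get_statement_status(statement_id)
        if status in FINISHED_STATES:
            yield {
                "status": "success" if status == "FINISHED" else "error",
                "statement_id": statement_id,
            }
            return
        await asyncio.sleep(poll_interval)


async def main():
    async for event in run("abc-123"):
        return event


event = asyncio.run(main())
```

The returned event dict is what the deferred operator receives in its resume method, where it decides whether the task succeeded or failed.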
