airflow.providers.amazon.aws.triggers.redshift_cluster

Module Contents

Classes

RedshiftCreateClusterTrigger

Trigger for RedshiftCreateClusterOperator.

RedshiftPauseClusterTrigger

Trigger for RedshiftPauseClusterOperator.

RedshiftCreateClusterSnapshotTrigger

Trigger for RedshiftCreateClusterSnapshotOperator.

RedshiftResumeClusterTrigger

Trigger for RedshiftResumeClusterOperator.

RedshiftDeleteClusterTrigger

Trigger for RedshiftDeleteClusterOperator.

class airflow.providers.amazon.aws.triggers.redshift_cluster.RedshiftCreateClusterTrigger(cluster_identifier, poll_interval, max_attempt, aws_conn_id)[source]

Bases: airflow.triggers.base.BaseTrigger

Trigger for RedshiftCreateClusterOperator.

The trigger asynchronously polls the Redshift API via boto3 and waits for the cluster to reach the available state.

Parameters
  • cluster_identifier (str) – A unique identifier for the cluster.

  • poll_interval (int) – The amount of time in seconds to wait between attempts.

  • max_attempt (int) – The maximum number of attempts to be made.

  • aws_conn_id (str) – The Airflow connection used for AWS credentials.
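
A minimal sketch of how a deferrable operator might hand polling off to this trigger; the operator class, timing values, and cluster identifier below are illustrative and not part of this module:

    from airflow.models import BaseOperator
    from airflow.providers.amazon.aws.triggers.redshift_cluster import (
        RedshiftCreateClusterTrigger,
    )


    class WaitForClusterOperator(BaseOperator):
        """Hypothetical operator that defers until the cluster is available."""

        def execute(self, context):
            # Hand polling off to the triggerer instead of blocking a worker slot.
            self.defer(
                trigger=RedshiftCreateClusterTrigger(
                    cluster_identifier="my-cluster",
                    poll_interval=15,
                    max_attempt=40,
                    aws_conn_id="aws_default",
                ),
                method_name="execute_complete",
            )

        def execute_complete(self, context, event=None):
            self.log.info("Trigger fired with event: %s", event)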

serialize()[source]

Returns the information needed to reconstruct this Trigger.

Returns

Tuple of (class path, keyword arguments needed to re-instantiate).

Return type

tuple[str, dict[str, Any]]
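
serialize() can also be called directly, which is occasionally useful in tests; the keyword payload mirrors the constructor arguments, though its exact types may vary by provider version:

    from airflow.providers.amazon.aws.triggers.redshift_cluster import (
        RedshiftCreateClusterTrigger,
    )

    trigger = RedshiftCreateClusterTrigger(
        cluster_identifier="my-cluster",  # illustrative value
        poll_interval=15,
        max_attempt=40,
        aws_conn_id="aws_default",
    )
    classpath, kwargs = trigger.serialize()
    # classpath is the dotted import path of this class; kwargs holds the
    # constructor arguments the triggerer uses to re-instantiate it.
    print(classpath, kwargs)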

hook()[source]

async run()[source]

Runs the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.
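
A minimal, hypothetical trigger illustrating the yield-then-return contract described above (OneShotTrigger is not part of this module):

    from airflow.triggers.base import BaseTrigger, TriggerEvent


    class OneShotTrigger(BaseTrigger):
        """Single-event trigger: yields once, then returns."""

        def serialize(self):
            return ("my_dags.triggers.OneShotTrigger", {})

        async def run(self):
            # Fire exactly one event and finish immediately.
            yield TriggerEvent({"status": "success"})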

class airflow.providers.amazon.aws.triggers.redshift_cluster.RedshiftPauseClusterTrigger(cluster_identifier, poll_interval, max_attempts, aws_conn_id)[source]

Bases: airflow.triggers.base.BaseTrigger

Trigger for RedshiftPauseClusterOperator.

The trigger asynchronously polls the Redshift API via boto3 and waits for the cluster to reach the paused state.

Parameters
  • cluster_identifier (str) – A unique identifier for the cluster.

  • poll_interval (int) – The amount of time in seconds to wait between attempts.

  • max_attempts (int) – The maximum number of attempts to be made.

  • aws_conn_id (str) – The Airflow connection used for AWS credentials.
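
In practice this trigger is usually created for you by the operator; a sketch assuming a provider version in which RedshiftPauseClusterOperator exposes a deferrable flag:

    from airflow.providers.amazon.aws.operators.redshift_cluster import (
        RedshiftPauseClusterOperator,
    )

    # In deferrable mode the operator defers to RedshiftPauseClusterTrigger
    # rather than polling from a worker; task_id and cluster_identifier
    # are placeholders.
    pause = RedshiftPauseClusterOperator(
        task_id="pause_cluster",
        cluster_identifier="my-cluster",
        deferrable=True,
    )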

serialize()[source]

Returns the information needed to reconstruct this Trigger.

Returns

Tuple of (class path, keyword arguments needed to re-instantiate).

Return type

tuple[str, dict[str, Any]]

hook()[source]

async run()[source]

Runs the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.

class airflow.providers.amazon.aws.triggers.redshift_cluster.RedshiftCreateClusterSnapshotTrigger(cluster_identifier, poll_interval, max_attempts, aws_conn_id)[source]

Bases: airflow.triggers.base.BaseTrigger

Trigger for RedshiftCreateClusterSnapshotOperator.

The trigger asynchronously polls the Redshift API via boto3 and waits for the cluster snapshot to reach the available state.

Parameters
  • cluster_identifier (str) – A unique identifier for the cluster.

  • poll_interval (int) – The amount of time in seconds to wait between attempts.

  • max_attempts (int) – The maximum number of attempts to be made.

  • aws_conn_id (str) – The Airflow connection used for AWS credentials.
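
Because run() is an async generator, the trigger can also be driven directly, e.g. for ad-hoc testing outside a triggerer; this sketch assumes valid AWS credentials on the named connection:

    import asyncio

    from airflow.providers.amazon.aws.triggers.redshift_cluster import (
        RedshiftCreateClusterSnapshotTrigger,
    )


    async def wait_for_snapshot():
        trigger = RedshiftCreateClusterSnapshotTrigger(
            cluster_identifier="my-cluster",  # illustrative value
            poll_interval=15,
            max_attempts=40,
            aws_conn_id="aws_default",
        )
        # Normally the triggerer iterates run(); here we do it by hand.
        async for event in trigger.run():
            print(event.payload)


    asyncio.run(wait_for_snapshot())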

serialize()[source]

Returns the information needed to reconstruct this Trigger.

Returns

Tuple of (class path, keyword arguments needed to re-instantiate).

Return type

tuple[str, dict[str, Any]]

hook()[source]

async run()[source]

Runs the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.

class airflow.providers.amazon.aws.triggers.redshift_cluster.RedshiftResumeClusterTrigger(cluster_identifier, poll_interval, max_attempts, aws_conn_id)[source]

Bases: airflow.triggers.base.BaseTrigger

Trigger for RedshiftResumeClusterOperator.

The trigger asynchronously polls the Redshift API via boto3 and waits for the cluster to reach the available state.

Parameters
  • cluster_identifier (str) – A unique identifier for the cluster.

  • poll_interval (int) – The amount of time in seconds to wait between attempts.

  • max_attempts (int) – The maximum number of attempts to be made.

  • aws_conn_id (str) – The Airflow connection used for AWS credentials.
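
On the operator side, the event yielded by this trigger is typically handled in a completion callback; the payload shape checked below ({"status": ...}) is an assumption and may vary by provider version:

    from airflow.exceptions import AirflowException
    from airflow.models import BaseOperator


    class ResumeAndWaitOperator(BaseOperator):
        """Hypothetical deferring operator; only the callback is shown."""

        def execute_complete(self, context, event=None):
            # Assumed payload shape; check the trigger source for your version.
            if not event or event.get("status") != "success":
                raise AirflowException(f"Error resuming cluster: {event}")
            self.log.info("Cluster resumed")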

serialize()[source]

Returns the information needed to reconstruct this Trigger.

Returns

Tuple of (class path, keyword arguments needed to re-instantiate).

Return type

tuple[str, dict[str, Any]]

hook()[source]

async run()[source]

Runs the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.

class airflow.providers.amazon.aws.triggers.redshift_cluster.RedshiftDeleteClusterTrigger(cluster_identifier, max_attempts=30, aws_conn_id='aws_default', poll_interval=30)[source]

Bases: airflow.triggers.base.BaseTrigger

Trigger for RedshiftDeleteClusterOperator.

The trigger asynchronously polls the Redshift API via boto3 and waits for the cluster to be deleted.

Parameters
  • cluster_identifier (str) – A unique identifier for the cluster.

  • max_attempts (int) – The maximum number of attempts to be made.

  • aws_conn_id (str) – The Airflow connection used for AWS credentials.

  • poll_interval (int) – The amount of time in seconds to wait between attempts.
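
Since the remaining arguments default as shown in the signature above, a minimal construction needs only the cluster identifier ("my-cluster" is a placeholder):

    from airflow.providers.amazon.aws.triggers.redshift_cluster import (
        RedshiftDeleteClusterTrigger,
    )

    # max_attempts=30, aws_conn_id="aws_default", and poll_interval=30
    # fall back to the defaults from the signature above.
    trigger = RedshiftDeleteClusterTrigger(cluster_identifier="my-cluster")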

serialize()[source]

Returns the information needed to reconstruct this Trigger.

Returns

Tuple of (class path, keyword arguments needed to re-instantiate).

Return type

tuple[str, dict[str, Any]]

hook()[source]

async run()[source]

Runs the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.
