airflow.providers.amazon.aws.triggers.sagemaker

Module Contents

Classes

SageMakerTrigger

SageMakerTrigger is fired as a deferred class with parameters to run the task in the triggerer.

SageMakerPipelineTrigger

Trigger to wait for a SageMaker pipeline execution to finish.

SageMakerTrainingPrintLogTrigger

SageMakerTrainingPrintLogTrigger is fired as a deferred class with parameters to run the task in the triggerer.

class airflow.providers.amazon.aws.triggers.sagemaker.SageMakerTrigger(job_name, job_type, poke_interval=30, max_attempts=480, aws_conn_id='aws_default')[source]

Bases: airflow.triggers.base.BaseTrigger

SageMakerTrigger is fired as a deferred class with parameters to run the task in the triggerer.

Parameters
  • job_name (str) – name of the job whose status to check

  • job_type (str) – type of the SageMaker job, either Transform or Training

  • poke_interval (int) – polling interval, in seconds, between status checks

  • max_attempts (int) – maximum number of times to poll for the job state before returning the current state; defaults to 480

  • aws_conn_id (str | None) – AWS connection ID for SageMaker

serialize()[source]

Serialize SageMakerTrigger arguments and classpath.

hook()[source]

async run()[source]

Run the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.
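The single-event polling loop described above can be sketched with plain asyncio. This is a simplified illustration, not the provider's implementation: the describe call, the status values, and the plain-dict event below are hypothetical stand-ins for the real async hook and Airflow's TriggerEvent.

```python
import asyncio

# Hypothetical set of terminal SageMaker job states for this sketch.
TERMINAL_STATES = {"Completed", "Failed", "Stopped"}

async def fake_describe_job(job_name, _state={"calls": 0}):
    """Stand-in for the async hook's describe call; finishes on the 3rd poll."""
    _state["calls"] += 1
    return "InProgress" if _state["calls"] < 3 else "Completed"

async def run_trigger(job_name, poke_interval, max_attempts):
    """Single-event trigger loop: poll the job, yield one event, then return."""
    for _ in range(max_attempts):
        status = await fake_describe_job(job_name)
        if status in TERMINAL_STATES:
            # A real trigger would yield an airflow TriggerEvent here.
            yield {"status": status, "job_name": job_name}
            return
        await asyncio.sleep(poke_interval)
    yield {"status": "timeout", "job_name": job_name}

async def main():
    async for event in run_trigger("my-training-job", poke_interval=0, max_attempts=10):
        return event

event = asyncio.run(main())
print(event)
```

Because the trigger yields exactly once and then returns, the triggerer can hand the event back to the deferred task and discard the trigger, matching the "yield and then immediately return" contract above.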

class airflow.providers.amazon.aws.triggers.sagemaker.SageMakerPipelineTrigger(waiter_type, pipeline_execution_arn, waiter_delay, waiter_max_attempts, aws_conn_id)[source]

Bases: airflow.triggers.base.BaseTrigger

Trigger to wait for a SageMaker pipeline execution to finish.

class Type[source]

Bases: enum.IntEnum

Type of waiter to use.

COMPLETE = 1[source]
STOPPED = 2[source]

serialize()[source]

Return the information needed to reconstruct this Trigger.

Returns

Tuple of (class path, keyword arguments needed to re-instantiate).

Return type

tuple[str, dict[str, Any]]

async run()[source]

Run the trigger in an asynchronous context.

The trigger should yield an Event whenever it wants to fire off an event, and return None if it is finished. Single-event triggers should thus yield and then immediately return.

If it yields, it is likely that it will be resumed very quickly, but it may not be (e.g. if the workload is being moved to another triggerer process, or a multi-event trigger was being used for a single-event task defer).

In either case, Trigger classes should assume they will be persisted, and then rely on cleanup() being called when they are no longer needed.
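The serialize() contract documented above, a tuple of the class path plus the keyword arguments needed to re-instantiate the trigger, can be illustrated with a minimal stand-in. The class below is hypothetical; only its constructor fields mirror the SageMakerPipelineTrigger signature shown above, and the waiter-type enum mirrors the nested Type enum:

```python
from enum import IntEnum

class WaiterType(IntEnum):
    """Mirrors the trigger's nested Type enum: which waiter to use."""
    COMPLETE = 1
    STOPPED = 2

class PipelineTriggerSketch:
    """Minimal stand-in demonstrating the serialize() round-trip contract."""

    def __init__(self, waiter_type, pipeline_execution_arn, waiter_delay,
                 waiter_max_attempts, aws_conn_id):
        self.waiter_type = waiter_type
        self.pipeline_execution_arn = pipeline_execution_arn
        self.waiter_delay = waiter_delay
        self.waiter_max_attempts = waiter_max_attempts
        self.aws_conn_id = aws_conn_id

    def serialize(self):
        # Tuple of (class path, kwargs needed to re-instantiate).
        return (
            f"{type(self).__module__}.{type(self).__qualname__}",
            {
                "waiter_type": self.waiter_type,
                "pipeline_execution_arn": self.pipeline_execution_arn,
                "waiter_delay": self.waiter_delay,
                "waiter_max_attempts": self.waiter_max_attempts,
                "aws_conn_id": self.aws_conn_id,
            },
        )

trigger = PipelineTriggerSketch(
    waiter_type=WaiterType.COMPLETE,
    pipeline_execution_arn="arn:aws:sagemaker:us-east-1:123456789012:pipeline/demo",
    waiter_delay=30,
    waiter_max_attempts=60,
    aws_conn_id="aws_default",
)
classpath, kwargs = trigger.serialize()
# The triggerer process uses the classpath and kwargs to rebuild the trigger.
rebuilt = PipelineTriggerSketch(**kwargs)
```

The round trip matters because the triggerer runs in a separate process: the trigger object created by the operator is serialized to the database and reconstructed there, which is why every constructor argument must appear in the kwargs dict.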

class airflow.providers.amazon.aws.triggers.sagemaker.SageMakerTrainingPrintLogTrigger(job_name, poke_interval, aws_conn_id='aws_default')[source]

Bases: airflow.triggers.base.BaseTrigger

SageMakerTrainingPrintLogTrigger is fired as a deferred class with parameters to run the task in the triggerer.

Parameters
  • job_name (str) – name of the job whose status to check

  • poke_interval (float) – polling interval, in seconds, between status checks

  • aws_conn_id (str | None) – AWS connection ID for SageMaker

serialize()[source]

Serialize SageMakerTrainingPrintLogTrigger arguments and classpath.

hook()[source]

async run()[source]

Make an async connection to the SageMaker hook and get the job status for a job submitted by the operator.
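The behaviour of this run() method, streaming new training-log lines while polling the job status, can be sketched with asyncio. The log source and status function below are hypothetical stand-ins for the real async hook and CloudWatch log stream:

```python
import asyncio

async def fake_poll_job(job_name, _state={"tick": 0}):
    """Stand-in returning (new_log_lines, status); completes on the 3rd poll."""
    _state["tick"] += 1
    if _state["tick"] < 3:
        return [f"{job_name}: step {_state['tick']}"], "InProgress"
    return [f"{job_name}: training done"], "Completed"

async def print_log_trigger(job_name, poke_interval):
    """Print any new log lines each poll; yield one event on a terminal state."""
    while True:
        lines, status = await fake_poll_job(job_name)
        for line in lines:
            print(line)  # a real trigger would relay CloudWatch log events
        if status in {"Completed", "Failed", "Stopped"}:
            yield {"status": status, "job_name": job_name}
            return
        await asyncio.sleep(poke_interval)

async def main():
    async for event in print_log_trigger("train-job", poke_interval=0):
        return event

event = asyncio.run(main())
```

The structure is the same single-event loop as the other triggers; the only difference is the side effect of printing newly seen log lines on each poll before checking for a terminal state.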
