airflow.providers.amazon.aws.operators.ecs

Module Contents

Classes

EcsBaseOperator

This is the base operator for all Elastic Container Service operators.

EcsCreateClusterOperator

Creates an AWS ECS cluster.

EcsDeleteClusterOperator

Deletes an AWS ECS cluster.

EcsDeregisterTaskDefinitionOperator

Deregister a task definition on AWS ECS.

EcsRegisterTaskDefinitionOperator

Register a task definition on AWS ECS.

EcsRunTaskOperator

Execute a task on AWS ECS (Elastic Container Service).

class airflow.providers.amazon.aws.operators.ecs.EcsBaseOperator(*, aws_conn_id='aws_default', region_name=None, verify=None, botocore_config=None, **kwargs)[source]

Bases: airflow.providers.amazon.aws.operators.base_aws.AwsBaseOperator[airflow.providers.amazon.aws.hooks.ecs.EcsHook]

This is the base operator for all Elastic Container Service operators.

aws_hook_class[source]
client()[source]

Create and return the EcsHook’s client.

abstract execute(context)[source]

Must be overridden in child classes.

class airflow.providers.amazon.aws.operators.ecs.EcsCreateClusterOperator(*, cluster_name, create_cluster_kwargs=None, wait_for_completion=True, waiter_delay=15, waiter_max_attempts=60, deferrable=conf.getboolean('operators', 'default_deferrable', fallback=False), **kwargs)[source]

Bases: EcsBaseOperator

Creates an AWS ECS cluster.

See also

For more information on how to use this operator, take a look at the guide: Create an AWS ECS Cluster

Parameters
  • cluster_name (str) – The name of your cluster. If you don’t specify a name for your cluster, you create a cluster that’s named default.

  • create_cluster_kwargs (dict | None) – Extra arguments for Cluster Creation.

  • wait_for_completion (bool) – If True, waits for creation of the cluster to complete. (default: True)

  • waiter_delay (int) – The amount of time in seconds to wait between attempts. If not set, the default waiter value will be used.

  • waiter_max_attempts (int) – The maximum number of attempts to be made. If not set, the default waiter value will be used.

  • deferrable (bool) – If True, the operator will wait asynchronously for the job to complete. This implies waiting for completion. This mode requires aiobotocore module to be installed. (default: False)

template_fields: Sequence[str][source]
execute(context)[source]

Create the ECS cluster and, depending on wait_for_completion and deferrable, wait for or defer until creation completes.

class airflow.providers.amazon.aws.operators.ecs.EcsDeleteClusterOperator(*, cluster_name, wait_for_completion=True, waiter_delay=15, waiter_max_attempts=60, deferrable=conf.getboolean('operators', 'default_deferrable', fallback=False), **kwargs)[source]

Bases: EcsBaseOperator

Deletes an AWS ECS cluster.

See also

For more information on how to use this operator, take a look at the guide: Delete an AWS ECS Cluster

Parameters
  • cluster_name (str) – The short name or full Amazon Resource Name (ARN) of the cluster to delete.

  • wait_for_completion (bool) – If True, waits for deletion of the cluster to complete. (default: True)

  • waiter_delay (int) – The amount of time in seconds to wait between attempts. If not set, the default waiter value will be used.

  • waiter_max_attempts (int) – The maximum number of attempts to be made. If not set, the default waiter value will be used.

  • deferrable (bool) – If True, the operator will wait asynchronously for the job to complete. This implies waiting for completion. This mode requires aiobotocore module to be installed. (default: False)

template_fields: Sequence[str] = ('cluster_name', 'wait_for_completion', 'deferrable')[source]
execute(context)[source]

Delete the ECS cluster and, depending on wait_for_completion and deferrable, wait for or defer until deletion completes.

class airflow.providers.amazon.aws.operators.ecs.EcsDeregisterTaskDefinitionOperator(*, task_definition, **kwargs)[source]

Bases: EcsBaseOperator

Deregister a task definition on AWS ECS.

See also

For more information on how to use this operator, take a look at the guide: Deregister a Task Definition

Parameters

task_definition (str) – The family and revision (family:revision) or full Amazon Resource Name (ARN) of the task definition to deregister. If you use a family name, you must specify a revision.

template_fields: Sequence[str] = ('task_definition',)[source]
execute(context)[source]

Deregister the task definition on ECS.

class airflow.providers.amazon.aws.operators.ecs.EcsRegisterTaskDefinitionOperator(*, family, container_definitions, register_task_kwargs=None, **kwargs)[source]

Bases: EcsBaseOperator

Register a task definition on AWS ECS.

See also

For more information on how to use this operator, take a look at the guide: Register a Task Definition

Parameters
  • family (str) – The family name of a task definition to create.

  • container_definitions (list[dict]) – A list of container definitions in JSON format that describe the different containers that make up your task.

  • register_task_kwargs (dict | None) – Extra arguments for Register Task Definition.

template_fields: Sequence[str] = ('family', 'container_definitions', 'register_task_kwargs')[source]
execute(context)[source]

Register the task definition on ECS.

class airflow.providers.amazon.aws.operators.ecs.EcsRunTaskOperator(*, task_definition, cluster, overrides, launch_type='EC2', capacity_provider_strategy=None, group=None, placement_constraints=None, placement_strategy=None, platform_version=None, network_configuration=None, tags=None, awslogs_group=None, awslogs_region=None, awslogs_stream_prefix=None, awslogs_fetch_interval=timedelta(seconds=30), propagate_tags=None, quota_retry=None, reattach=False, number_logs_exception=10, wait_for_completion=True, waiter_delay=6, waiter_max_attempts=1000000, deferrable=conf.getboolean('operators', 'default_deferrable', fallback=False), **kwargs)[source]

Bases: EcsBaseOperator

Execute a task on AWS ECS (Elastic Container Service).

See also

For more information on how to use this operator, take a look at the guide: Run a Task Definition

Parameters
  • task_definition (str) – the task definition name on Elastic Container Service

  • cluster (str) – the cluster name on Elastic Container Service

  • overrides (dict) – the same parameter that boto3 will receive (templated): https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/ecs.html#ECS.Client.run_task

  • aws_conn_id – connection id of AWS credentials / region name. If None, the default boto3 credential strategy will be used (https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html).

  • region – region name to use in the AWS Hook. Overrides the region in the connection (if provided).

  • launch_type (str) – the launch type on which to run your task (‘EC2’, ‘EXTERNAL’, or ‘FARGATE’)

  • capacity_provider_strategy (list | None) – the capacity provider strategy to use for the task. When capacity_provider_strategy is specified, the launch_type parameter is omitted. If no capacity_provider_strategy or launch_type is specified, the default capacity provider strategy for the cluster is used.

  • group (str | None) – the name of the task group associated with the task

  • placement_constraints (list | None) – an array of placement constraint objects to use for the task

  • placement_strategy (list | None) – an array of placement strategy objects to use for the task

  • platform_version (str | None) – the platform version on which your task is running

  • network_configuration (dict | None) – the network configuration for the task

  • tags (dict | None) – a dictionary of tags in the form of {‘tagKey’: ‘tagValue’}.

  • awslogs_group (str | None) – the CloudWatch group where your ECS container logs are stored. Only required if you want logs to be shown in the Airflow UI after your job has finished.

  • awslogs_region (str | None) – the region in which your CloudWatch logs are stored. If None, this is the same as the region parameter. If that is also None, this is the default AWS region based on your connection settings.

  • awslogs_stream_prefix (str | None) – the stream prefix that is used for the CloudWatch logs. This is usually based on some custom name combined with the name of the container. Only required if you want logs to be shown in the Airflow UI after your job has finished.

  • awslogs_fetch_interval (datetime.timedelta) – the interval the ECS task log fetcher should wait between each CloudWatch log fetch. If deferrable is set to True, this parameter is ignored and waiter_delay is used instead.

  • quota_retry (dict | None) – Configuration for whether and how to retry the launch of a new ECS task, to handle transient errors.

  • reattach (bool) – If set to True, will check if the task previously launched by the task_instance is already running. If so, the operator will attach to it instead of starting a new task. This is to avoid relaunching a new task when the connection drops between Airflow and ECS while the task is running (when the Airflow worker is restarted for example).

  • number_logs_exception (int) – Number of lines from the last Cloudwatch logs to return in the AirflowException if an ECS task is stopped (to receive Airflow alerts with the logs of what failed in the code running in ECS).

  • wait_for_completion (bool) – If True, waits for the task to complete. (default: True)

  • waiter_delay (int) – The amount of time in seconds to wait between attempts. If not set, the default waiter value will be used.

  • waiter_max_attempts (int) – The maximum number of attempts to be made. If not set, the default waiter value will be used.

  • deferrable (bool) – If True, the operator will wait asynchronously for the job to complete. This implies waiting for completion. This mode requires aiobotocore module to be installed. (default: False)

ui_color = '#f0ede4'[source]
template_fields: Sequence[str] = ('task_definition', 'cluster', 'overrides', 'launch_type', 'capacity_provider_strategy',...[source]
template_fields_renderers[source]
execute(context)[source]

Run the ECS task and, depending on wait_for_completion and deferrable, wait for or defer until the task completes.

execute_complete(context, event=None)[source]
on_kill()[source]

Override this method to clean up subprocesses when a task instance gets killed.

Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up, or it will leave ghost processes behind.
