airflow.executors.base_executor

Base executor - this is the base class for all the implemented executors.

Module Contents

Classes

BaseExecutor

Class to derive in order to interface with executor-type systems

Attributes

PARALLELISM

NOT_STARTED_MESSAGE

QUEUEING_ATTEMPTS

CommandType

QueuedTaskInstanceType

EventBufferValueType

TaskTuple

airflow.executors.base_executor.PARALLELISM :int[source]
airflow.executors.base_executor.NOT_STARTED_MESSAGE = The executor should be started first![source]
airflow.executors.base_executor.QUEUEING_ATTEMPTS = 5[source]
airflow.executors.base_executor.CommandType[source]
airflow.executors.base_executor.QueuedTaskInstanceType[source]
airflow.executors.base_executor.EventBufferValueType[source]
airflow.executors.base_executor.TaskTuple[source]
class airflow.executors.base_executor.BaseExecutor(parallelism=PARALLELISM)[source]

Bases: airflow.utils.log.logging_mixin.LoggingMixin

Class to derive in order to interface with executor-type systems like Celery, Kubernetes, Local, Sequential and the likes.

Parameters

parallelism (int) -- how many jobs should run at one time. Set to 0 for infinity
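
The queue/heartbeat/async-execution lifecycle that the methods below describe can be sketched with a simplified stand-in class. This is not the real Airflow class and imports nothing from Airflow; the names mirror `BaseExecutor` but every detail here is illustrative.

```python
# Illustrative sketch of the BaseExecutor contract; NOT the real Airflow
# class, just a minimal stand-in showing the queue -> heartbeat flow.
class MiniExecutor:
    def __init__(self, parallelism=32):
        self.parallelism = parallelism   # 0 would mean "no limit"
        self.queued_tasks = {}           # key -> (command, priority, queue)
        self.running = set()
        self.event_buffer = {}           # key -> (state, info)

    def queue_command(self, key, command, priority=1, queue=None):
        self.queued_tasks[key] = (command, priority, queue)

    def heartbeat(self):
        # Open slots = parallelism minus tasks already running.
        open_slots = self.parallelism - len(self.running)
        # Highest priority first, like order_queued_tasks_by_priority().
        ordered = sorted(self.queued_tasks.items(),
                         key=lambda kv: kv[1][1], reverse=True)
        for key, (command, _priority, queue) in ordered[:open_slots]:
            del self.queued_tasks[key]
            self.running.add(key)
            self.execute_async(key, command, queue)
        self.sync()

    def execute_async(self, key, command, queue=None):
        # A real executor hands the command to Celery, Kubernetes, a
        # subprocess, etc.; here we pretend it finished instantly.
        self.event_buffer[key] = ("success", None)

    def sync(self):
        # Gather statuses: move finished tasks out of the running set.
        for key in list(self.event_buffer):
            self.running.discard(key)
```

In the real class, `queue_task_instance` builds the command from the `TaskInstance`, and `heartbeat` is invoked periodically by the scheduler job rather than called by hand.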

job_id :Union[None, int, str][source]
callback_sink :Optional[airflow.callbacks.base_callback_sink.BaseCallbackSink][source]
__repr__(self)[source]

Return repr(self).

start(self)[source]

Executors may need to get things started.

queue_command(self, task_instance, command, priority=1, queue=None)[source]

Queues a command for the given task instance.

queue_task_instance(self, task_instance, mark_success=False, pickle_id=None, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, pool=None, cfg_path=None)[source]

Queues task instance.

has_task(self, task_instance)[source]

Checks if a task is either queued or running in this executor.

Parameters

task_instance (airflow.models.taskinstance.TaskInstance) -- TaskInstance

Returns

True if the task is known to this executor

Return type

bool

sync(self)[source]

Sync will get called periodically by the heartbeat method. Executors should override this to gather the statuses of running tasks.

heartbeat(self)[source]

Heartbeat sent to trigger new jobs.

order_queued_tasks_by_priority(self)[source]

Orders the queued tasks by priority.

Returns

List of tuples from the queued_tasks according to the priority.

Return type

List[Tuple[airflow.models.taskinstance.TaskInstanceKey, QueuedTaskInstanceType]]
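
A minimal illustration of that ordering, assuming each queued value is a tuple whose second element is the priority (as in the `QueuedTaskInstanceType` values above); the string keys stand in for `TaskInstanceKey`:

```python
# Queued values mimic (command, priority, queue, task_instance); only the
# priority at index 1 matters for ordering.
queued_tasks = {
    "task_a": (["airflow", "tasks", "run"], 1, None, None),
    "task_b": (["airflow", "tasks", "run"], 5, None, None),
    "task_c": (["airflow", "tasks", "run"], 3, None, None),
}

# Higher priority runs first, so sort descending on index 1.
ordered = sorted(queued_tasks.items(), key=lambda kv: kv[1][1], reverse=True)
print([key for key, _ in ordered])  # ['task_b', 'task_c', 'task_a']
```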

trigger_tasks(self, open_slots)[source]

Initiates async execution of the queued tasks, up to the number of available slots.

Parameters

open_slots (int) -- Number of open slots

change_state(self, key, state, info=None)[source]

Changes state of the task.

Parameters

  • key (airflow.models.taskinstance.TaskInstanceKey) -- Unique key for the task instance

  • state (str) -- State to set for the task

  • info -- Executor information for the task instance

fail(self, key, info=None)[source]

Set fail state for the event.

Parameters

  • key (airflow.models.taskinstance.TaskInstanceKey) -- Unique key for the task instance

  • info -- Executor information for the task instance

success(self, key, info=None)[source]

Set success state for the event.

Parameters

  • key (airflow.models.taskinstance.TaskInstanceKey) -- Unique key for the task instance

  • info -- Executor information for the task instance

get_event_buffer(self, dag_ids=None)[source]

Returns and flushes the event buffer. If dag_ids is specified, only events for those dag_ids are returned and flushed; otherwise all events are returned and flushed.

Parameters

dag_ids -- the dag_ids to return events for; returns all if given None.

Returns

a dict of events

Return type

Dict[airflow.models.taskinstance.TaskInstanceKey, EventBufferValueType]
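
The return-and-flush behaviour can be sketched over a plain dict, with tuple keys standing in for `TaskInstanceKey` (whose first field is the dag_id). The helper below is an illustration of the contract, not Airflow's implementation:

```python
# Keys stand in for TaskInstanceKey: (dag_id, task_id, run_id, try_number).
event_buffer = {
    ("dag_1", "t1", "run", 1): ("success", None),
    ("dag_2", "t2", "run", 1): ("failed", None),
}

def get_event_buffer(buffer, dag_ids=None):
    """Return and flush events, optionally restricted to dag_ids (sketch)."""
    if dag_ids is None:
        cleared = dict(buffer)
        buffer.clear()
        return cleared
    # Pop only the events belonging to the requested dags.
    return {k: buffer.pop(k) for k in list(buffer) if k[0] in dag_ids}

events = get_event_buffer(event_buffer, dag_ids=["dag_1"])
# events now holds only dag_1's event; dag_2's event stays buffered
```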

abstract execute_async(self, key, command, queue=None, executor_config=None)[source]

This method will execute the command asynchronously.

Parameters
  • key (airflow.models.taskinstance.TaskInstanceKey) -- Unique key for the task instance

  • command (CommandType) -- Command to run

  • queue (Optional[str]) -- name of the queue

  • executor_config (Optional[Any]) -- Configuration passed to the executor.
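
The key property of `execute_async` is that it must return immediately, leaving `sync()` to discover completion later. A hypothetical backend using local subprocesses shows the launch-now, poll-later shape; the class and its behaviour are illustrative, not how any particular Airflow executor is implemented:

```python
import subprocess
import sys

class SubprocessBackend:
    """Hypothetical async backend: launch now, poll for completion later."""

    def __init__(self):
        self.handles = {}  # key -> Popen handle

    def execute_async(self, key, command):
        # Popen returns immediately; the task runs in the background.
        self.handles[key] = subprocess.Popen(command)

    def sync(self):
        # Poll without blocking; report tasks that have finished.
        finished = {}
        for key, proc in list(self.handles.items()):
            if proc.poll() is not None:
                finished[key] = "success" if proc.returncode == 0 else "failed"
                del self.handles[key]
        return finished

backend = SubprocessBackend()
backend.execute_async("k1", [sys.executable, "-c", "pass"])
```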

abstract end(self)[source]

This method is called when the caller is done submitting jobs and wants to wait synchronously for all previously submitted jobs to finish.

abstract terminate(self)[source]

This method is called when the daemon receives a SIGTERM.

try_adopt_task_instances(self, tis)[source]

Try to adopt running task instances that have been abandoned by a SchedulerJob dying.

Anything that is not adopted will be cleared by the scheduler (and then become eligible for re-scheduling).

Returns

any TaskInstances that were unable to be adopted

Return type

list[airflow.models.TaskInstance]

property slots_available(self)[source]

Number of new tasks this executor instance can accept.

static validate_command(command)[source]

Check whether the command to execute is an airflow command.
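
In practice this check amounts to verifying that the command starts with `airflow tasks run`. A sketch of that rule (the function body and error message here are illustrative, not copied from Airflow):

```python
def validate_command(command):
    """Raise if command is not an 'airflow tasks run' invocation (sketch)."""
    if command[0:3] != ["airflow", "tasks", "run"]:
        raise ValueError("The command must start with ['airflow', 'tasks', 'run'].")

validate_command(["airflow", "tasks", "run", "my_dag", "my_task", "run_1"])  # ok
```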

debug_dump(self)[source]

Called in response to SIGUSR2 by the scheduler.

send_callback(self, request)[source]

Sends callback for execution.

Provides a default implementation which sends the callback to the callback_sink object.

Parameters

request (airflow.callbacks.callback_requests.CallbackRequest) -- Callback request to be executed.
