airflow.executors.base_executor

Base executor - this is the base class for all the implemented executors.

Module Contents

Classes

BaseExecutor

Class to derive in order to interface with executor-type systems

Attributes

PARALLELISM

NOT_STARTED_MESSAGE

QUEUEING_ATTEMPTS

CommandType

QueuedTaskInstanceType

EventBufferValueType

TaskTuple

airflow.executors.base_executor.PARALLELISM :int[source]
airflow.executors.base_executor.NOT_STARTED_MESSAGE = The executor should be started first![source]
airflow.executors.base_executor.QUEUEING_ATTEMPTS = 5[source]
airflow.executors.base_executor.CommandType[source]
airflow.executors.base_executor.QueuedTaskInstanceType[source]
airflow.executors.base_executor.EventBufferValueType[source]
airflow.executors.base_executor.TaskTuple[source]
class airflow.executors.base_executor.BaseExecutor(parallelism=PARALLELISM)[source]

Bases: airflow.utils.log.logging_mixin.LoggingMixin

Class to derive in order to interface with executor-type systems like Celery, Kubernetes, Local, Sequential and the like.

Parameters

parallelism (int) – how many jobs should run at one time. Set to 0 for unlimited parallelism.
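As a rough orientation, here is a minimal sketch of such a derived class; the class name is invented and the method bodies are deliberately left empty, since the required behaviour is described under the individual methods below.

```python
from __future__ import annotations

from typing import Any

from airflow.executors.base_executor import BaseExecutor, CommandType
from airflow.models.taskinstance import TaskInstanceKey


class MyExecutor(BaseExecutor):
    """Skeleton of a custom executor; the backend-specific logic is omitted."""

    def execute_async(
        self,
        key: TaskInstanceKey,
        command: CommandType,
        queue: str | None = None,
        executor_config: Any | None = None,
    ) -> None:
        # Hand `command` off to the backend without blocking.
        ...

    def sync(self) -> None:
        # Poll the backend and record outcomes via self.success(key) / self.fail(key).
        ...

    def end(self) -> None:
        # Block until everything submitted so far has finished.
        ...

    def terminate(self) -> None:
        # Clean up when the daemon receives SIGTERM.
        ...
```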

job_id :None | int | str[source]
callback_sink :BaseCallbackSink | None[source]
__repr__()[source]

Return repr(self).

start()[source]

Executors may need to get things started.

queue_command(task_instance, command, priority=1, queue=None)[source]

Queues a command for a task instance.

queue_task_instance(task_instance, mark_success=False, pickle_id=None, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, pool=None, cfg_path=None)[source]

Queues task instance.

has_task(task_instance)[source]

Checks if a task is either queued or running in this executor.

Parameters

task_instance (airflow.models.taskinstance.TaskInstance) – TaskInstance

Returns

True if the task is known to this executor

Return type

bool

sync()[source]

Sync is called periodically by the heartbeat method. Executors should override this to gather the statuses of previously submitted tasks.

heartbeat()[source]

Heartbeat sent to trigger new jobs.
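To illustrate how these pieces fit together, a hedged sketch of a scheduler-like caller driving an executor; in a real deployment the scheduler does this internally, and `some_task_instance` stands in for an actual TaskInstance that is not constructed here.

```python
import time

from airflow.executors.local_executor import LocalExecutor

executor = LocalExecutor(parallelism=4)
executor.start()

# Hand work to the executor (normally done by the scheduler).
executor.queue_task_instance(some_task_instance)

# Each heartbeat triggers queued tasks, up to the open slots, and then syncs.
while executor.has_task(some_task_instance):
    executor.heartbeat()
    time.sleep(1)

executor.end()
```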

order_queued_tasks_by_priority()[source]

Orders the queued tasks by priority.

Returns

List of tuples from the queued_tasks according to the priority.

Return type

list[tuple[airflow.models.taskinstance.TaskInstanceKey, QueuedTaskInstanceType]]
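A hedged sketch of the ordering, assuming each entry in queued_tasks maps a TaskInstanceKey to a tuple whose second element is the task's priority weight; continuing the caller-side sketch above:

```python
# Highest priority weight first; each item is (TaskInstanceKey, queued-task tuple).
ordered = sorted(
    executor.queued_tasks.items(),
    key=lambda item: item[1][1],  # assumed: the priority weight inside the queued tuple
    reverse=True,
)
```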

trigger_tasks(open_slots)[source]

Initiates async execution of the queued tasks, up to the number of available slots.

Parameters

open_slots (int) – Number of open slots
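A hedged, simplified approximation of what this amounts to; the real method also handles an edge case where a queued key still appears in the running set, retrying it up to QUEUEING_ATTEMPTS times before giving up.

```python
from airflow.executors.base_executor import BaseExecutor


def trigger_tasks_sketch(executor: BaseExecutor, open_slots: int) -> None:
    """Hedged approximation of the task triggering done during a heartbeat."""
    sorted_queue = executor.order_queued_tasks_by_priority()
    for _ in range(min(open_slots, len(sorted_queue))):
        key, (command, _, queue, ti) = sorted_queue.pop(0)
        executor.queued_tasks.pop(key)
        executor.running.add(key)
        executor.execute_async(
            key=key,
            command=command,
            queue=queue,
            executor_config=ti.executor_config,
        )
```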

change_state(key, state, info=None)[source]

Changes state of the task.

Parameters

  • key (airflow.models.taskinstance.TaskInstanceKey) – Unique key for the task instance

  • state – State to set for the task

  • info – Executor information for the task instance

fail(key, info=None)[source]

Set fail state for the event.

Parameters

  • key (airflow.models.taskinstance.TaskInstanceKey) – Unique key for the task instance

  • info – Executor information for the task instance

success(key, info=None)[source]

Set success state for the event.

Parameters

  • key (airflow.models.taskinstance.TaskInstanceKey) – Unique key for the task instance

  • info – Executor information for the task instance

get_event_buffer(dag_ids=None)[source]

Returns and flushes the event buffer. If dag_ids is specified, only events for those dag_ids are returned and flushed; otherwise all events are returned and flushed.

Parameters

dag_ids – the dag_ids to return events for; returns all if given None.

Returns

a dict of events

Return type

dict[airflow.models.taskinstance.TaskInstanceKey, EventBufferValueType]
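Continuing the caller-side sketch, a hedged example of draining the buffer; the dag id is made up, and each buffered value unpacks as a (state, info) pair per EventBufferValueType.

```python
events = executor.get_event_buffer(dag_ids=["example_dag"])
for ti_key, (state, info) in events.items():
    print(ti_key.dag_id, ti_key.task_id, state, info)
```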

abstract execute_async(key, command, queue=None, executor_config=None)[source]

This method will execute the command asynchronously.

Parameters
  • key (airflow.models.taskinstance.TaskInstanceKey) – Unique key for the task instance

  • command (CommandType) – Command to run

  • queue (str | None) – name of the queue

  • executor_config (Any | None) – Configuration passed to the executor.
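For a more concrete (but still hedged) illustration, a toy executor that runs each command as a local subprocess; SubprocessSketchExecutor and its procs mapping are invented for this example and gloss over everything a production executor must handle.

```python
from __future__ import annotations

import subprocess
from typing import Any

from airflow.executors.base_executor import BaseExecutor, CommandType
from airflow.models.taskinstance import TaskInstanceKey


class SubprocessSketchExecutor(BaseExecutor):
    """Illustrative executor that runs each command as a local subprocess."""

    def __init__(self, parallelism: int = 4):
        super().__init__(parallelism=parallelism)
        self.procs: dict[TaskInstanceKey, subprocess.Popen] = {}

    def execute_async(
        self,
        key: TaskInstanceKey,
        command: CommandType,
        queue: str | None = None,
        executor_config: Any | None = None,
    ) -> None:
        # Kick the command off without blocking; outcomes are collected in sync().
        self.procs[key] = subprocess.Popen(command)

    def sync(self) -> None:
        # Poll every tracked subprocess and report the finished ones.
        for key, proc in list(self.procs.items()):
            return_code = proc.poll()
            if return_code is None:
                continue  # still running
            del self.procs[key]
            if return_code == 0:
                self.success(key)
            else:
                self.fail(key)

    def end(self) -> None:
        # Wait synchronously for everything submitted so far to finish.
        for proc in list(self.procs.values()):
            proc.wait()
        self.sync()

    def terminate(self) -> None:
        # SIGTERM received: stop the workers.
        for proc in self.procs.values():
            proc.terminate()
```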

abstract end()[source]

This method is called when the caller is done submitting jobs and wants to wait synchronously for all previously submitted jobs to finish.

abstract terminate()[source]

This method is called when the daemon receives a SIGTERM.

try_adopt_task_instances(tis)[source]

Try to adopt running task instances that have been abandoned by a SchedulerJob dying.

Anything that is not adopted will be cleared by the scheduler (and then become eligible for re-scheduling).

Returns

any TaskInstances that were unable to be adopted

Return type

Sequence[airflow.models.TaskInstance]
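A hedged sketch of what an override might look like, reusing the toy class from the execute_async() sketch above; backend_still_tracks is a made-up helper, and external_executor_id is used on the assumption that the subclass recorded such an identifier when it launched the task.

```python
from typing import Sequence

from airflow.models.taskinstance import TaskInstance


class AdoptingExecutorSketch(SubprocessSketchExecutor):
    def try_adopt_task_instances(self, tis: Sequence[TaskInstance]) -> Sequence[TaskInstance]:
        not_adopted = []
        for ti in tis:
            # backend_still_tracks() is hypothetical: ask the backend whether it
            # still knows about the identifier we stored at launch time.
            if ti.external_executor_id and self.backend_still_tracks(ti.external_executor_id):
                self.running.add(ti.key)  # resume monitoring it from this executor
            else:
                not_adopted.append(ti)
        return not_adopted  # these will be cleared and rescheduled
```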

property slots_available[source]

Number of new tasks this executor instance can accept.

static validate_command(command)[source]

Backwards-compatibility method that checks whether the command to execute is an Airflow command.

Parameters

command (list[str]) – command to check

Returns

None

Return type

None

static validate_airflow_tasks_run_command(command)[source]

Checks if the command to execute is an Airflow command.

Returns a (dag_id, task_id) tuple retrieved from the command, with None values if they cannot be determined.
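A hedged usage example; the dag id, task id, and run id below are invented, and the command shape mirrors what the scheduler builds for `airflow tasks run`.

```python
from airflow.executors.base_executor import BaseExecutor

command = ["airflow", "tasks", "run", "example_dag", "example_task", "manual__2024-01-01T00:00:00+00:00"]

dag_id, task_id = BaseExecutor.validate_airflow_tasks_run_command(command)
print(dag_id, task_id)  # expected: example_dag example_task
```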

debug_dump()[source]

Called in response to SIGUSR2 by the scheduler.

send_callback(request)[source]

Sends callback for execution.

Provides a default implementation which sends the callback to the callback_sink object.

Parameters

request (airflow.callbacks.callback_requests.CallbackRequest) – Callback request to be executed.
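A hedged sketch of the wiring; in production the scheduler assigns a concrete sink before using the executor, and the in-memory sink and file path below are invented for illustration (this assumes an `executor` instance as in the earlier sketches).

```python
from __future__ import annotations

from airflow.callbacks.base_callback_sink import BaseCallbackSink
from airflow.callbacks.callback_requests import CallbackRequest


class InMemoryCallbackSink(BaseCallbackSink):
    """Made-up sink that just collects requests in a list."""

    def __init__(self) -> None:
        self.sent: list[CallbackRequest] = []

    def send(self, callback: CallbackRequest) -> None:
        self.sent.append(callback)


executor.callback_sink = InMemoryCallbackSink()
executor.send_callback(CallbackRequest(full_filepath="/dags/example.py"))
```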
