airflow.executors.local_executor

LocalExecutor runs tasks by spawning processes in a controlled fashion in different modes. BaseExecutor accepts a parallelism parameter that limits the number of processes spawned; when this parameter is 0, the number of processes that LocalExecutor can spawn is unlimited.

The following strategies are implemented:

1. Unlimited Parallelism (self.parallelism == 0): In this strategy, LocalExecutor spawns a process every time execute_async is called; that is, every task submitted to the LocalExecutor is executed in its own process. Once the task is executed and the result stored in the result_queue, the process terminates. There is no need for a task_queue in this approach, since as soon as a task is received a new process is allocated to it. Processes used in this strategy are of class LocalWorker.

2. Limited Parallelism (self.parallelism > 0): In this strategy, the LocalExecutor spawns a number of processes equal to the value of self.parallelism at start time, using a task_queue to coordinate the ingestion of tasks and the distribution of work among the workers, each of which takes a task as soon as it is ready. Throughout the lifecycle of the LocalExecutor, the worker processes keep running and wait for tasks; once the LocalExecutor receives the call to shut down, a poison token is sent to the workers to terminate them. Processes used in this strategy are of class QueuedLocalWorker.

Arguably, SequentialExecutor could be thought of as a LocalExecutor with limited parallelism of just 1 worker, i.e. self.parallelism = 1. This option could lead to the unification of the locally running executor implementations into just one LocalExecutor with multiple modes.
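To make the two modes concrete, here is a minimal, hypothetical sketch using plain multiprocessing rather than the actual Airflow classes: with parallelism == 0 every submitted task gets its own short-lived process, while with a positive parallelism a fixed pool of long-lived workers consumes a shared task queue. The task names and the run_task stand-in are illustrative only.

    import multiprocessing


    def run_task(key):
        # Stand-in for executing one airflow task command.
        print("running", key)


    def pooled_worker(task_queue):
        # Long-lived worker: keeps taking tasks as they appear on the queue.
        # (Airflow's QueuedLocalWorker additionally watches for a poison
        # token so it can shut down cleanly; see the entries below.)
        while True:
            key = task_queue.get()
            run_task(key)
            task_queue.task_done()


    if __name__ == "__main__":
        parallelism = 2  # set to 0 for the unlimited strategy

        if parallelism == 0:
            # Unlimited parallelism: a fresh process per submitted task, no queue.
            for key in ("task_a", "task_b", "task_c"):
                multiprocessing.Process(target=run_task, args=(key,)).start()
        else:
            # Limited parallelism: a fixed pool of workers fed through a queue.
            task_queue = multiprocessing.JoinableQueue()
            for _ in range(parallelism):
                multiprocessing.Process(
                    target=pooled_worker, args=(task_queue,), daemon=True
                ).start()
            for key in ("task_a", "task_b", "task_c"):
                task_queue.put(key)
            task_queue.join()  # wait until every queued task is acknowledged

The sketch relies on daemon workers so the script can exit; the executor documented below instead shuts its pooled workers down with an explicit poison token.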

Module Contents

class airflow.executors.local_executor.LocalWorker(result_queue)[source]

Bases: multiprocessing.Process, airflow.utils.log.logging_mixin.LoggingMixin

LocalWorker is a Process implementation that runs airflow commands. It executes the given command and puts the result into a result queue when done, then terminates.

execute_work(self, key, command)[source]

Executes the received command and stores the resulting state in the result queue.

Parameters
  • key (tuple(dag_id, task_id, execution_date)) – the key to identify the TI

  • command (str) – the command to execute

run(self)[source]
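As an illustration of the contract described for execute_work, the sketch below runs the command in a subprocess and reports a state for the task-instance key on the result queue. It is an approximation under stated assumptions, not the Airflow source: the state values and command handling here are simplified.

    import subprocess
    from multiprocessing import Process


    class LocalWorkerSketch(Process):
        # Illustrative approximation of LocalWorker, not the Airflow source.

        def __init__(self, result_queue):
            super().__init__()
            self.result_queue = result_queue
            self.key = None        # (dag_id, task_id, execution_date)
            self.command = None    # shell command to run

        def execute_work(self, key, command):
            if key is None:
                return
            try:
                subprocess.check_call(command, shell=True)
                state = "success"
            except subprocess.CalledProcessError:
                state = "failed"
            # Report the outcome so the executor can record the task state.
            self.result_queue.put((key, state))

        def run(self):
            # One-shot worker: execute the assigned command, then exit.
            self.execute_work(self.key, self.command)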
class airflow.executors.local_executor.QueuedLocalWorker(task_queue, result_queue)[source]

Bases: airflow.executors.local_executor.LocalWorker

LocalWorker implementation that waits for tasks from a queue and keeps executing commands as they become available in the queue. It terminates execution once the poison token is found.

run(self)[source]
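Building on the hypothetical LocalWorkerSketch above, the queue-consuming loop described here might take the following shape (again a simplified sketch rather than the actual source):

    class QueuedLocalWorkerSketch(LocalWorkerSketch):
        # Pulls (key, command) pairs from a shared task queue until it sees
        # the poison token (a None key).

        def __init__(self, task_queue, result_queue):
            super().__init__(result_queue)
            self.task_queue = task_queue

        def run(self):
            while True:
                key, command = self.task_queue.get()
                try:
                    if key is None:
                        # Poison token received: stop consuming and exit.
                        break
                    self.execute_work(key, command)
                finally:
                    # Always acknowledge the item so queue.join() can return.
                    self.task_queue.task_done()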
class airflow.executors.local_executor.LocalExecutor[source]

Bases: airflow.executors.base_executor.BaseExecutor

LocalExecutor executes tasks locally in parallel. It uses the multiprocessing Python library and queues to parallelize the execution of tasks.

class _UnlimitedParallelism(executor)[source]

Bases: object

Implements LocalExecutor with unlimited parallelism, starting one process per command to execute.

start(self)[source]
execute_async(self, key, command)[source]
Parameters
  • key (tuple(dag_id, task_id, execution_date)) – the key to identify the TI

  • command (str) – the command to execute

sync(self)[source]
end(self)[source]
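In this mode, execute_async plausibly amounts to starting one dedicated worker per call, along the lines of the sketch below. It builds on the hypothetical LocalWorkerSketch above; the real implementation does additional bookkeeping that this sketch omits.

    class UnlimitedParallelismSketch:
        # Illustrative sketch: one new worker process per submitted task.

        def __init__(self, executor):
            self.executor = executor  # the owning executor

        def start(self):
            pass  # nothing to pre-allocate in this mode

        def execute_async(self, key, command):
            # Spawn a dedicated worker for this task; it will put
            # (key, state) on the executor's result_queue and then exit.
            worker = LocalWorkerSketch(self.executor.result_queue)
            worker.key = key
            worker.command = command
            worker.start()

        def end(self):
            pass  # per-task workers exit on their own once their command is done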
class _LimitedParallelism(executor)[source]

Bases: object

Implements LocalExecutor with limited parallelism using a task queue to coordinate work distribution.

start(self)[source]
execute_async(self, key, command)[source]
Parameters
  • key (tuple(dag_id, task_id, execution_date)) – the key to identify the TI

  • command (str) – the command to execute

sync(self)[source]
end(self)[source]
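For the limited-parallelism mode, start and end could look roughly like the sketch below: start pre-allocates parallelism queued workers, execute_async only enqueues, and end sends one poison token per worker before waiting for the queue to drain. It reuses the hypothetical QueuedLocalWorkerSketch from above and is not the actual source.

    from multiprocessing import JoinableQueue


    class LimitedParallelismSketch:
        # Illustrative sketch: a fixed pool of queued workers.

        def __init__(self, executor):
            self.executor = executor
            self.queue = JoinableQueue()
            self.workers = []

        def start(self):
            # Pre-allocate exactly `parallelism` long-lived workers.
            self.workers = [
                QueuedLocalWorkerSketch(self.queue, self.executor.result_queue)
                for _ in range(self.executor.parallelism)
            ]
            for worker in self.workers:
                worker.start()

        def execute_async(self, key, command):
            # Just enqueue; the next idle worker will pick it up.
            self.queue.put((key, command))

        def end(self):
            # One poison token per worker, then wait for everything queued
            # to be acknowledged before returning.
            for _ in self.workers:
                self.queue.put((None, None))
            self.queue.join()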
start(self)[source]
execute_async(self, key, command, queue=None, executor_config=None)[source]
sync(self)[source]
end(self)[source]
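Putting the pieces together, the public methods listed above can be read as a thin delegation to whichever strategy object was chosen at start time, with sync draining the result queue so task states can be recorded. The sketch below is hypothetical and builds on the sketch classes earlier on this page; it is not the Airflow implementation.

    from multiprocessing import Queue


    class LocalExecutorSketch:
        # Hypothetical delegation layer, not the Airflow source.

        def __init__(self, parallelism=0):
            self.parallelism = parallelism
            self.result_queue = Queue()
            self.impl = None

        def start(self):
            # Choose the strategy once, based on the parallelism setting.
            if self.parallelism == 0:
                self.impl = UnlimitedParallelismSketch(self)
            else:
                self.impl = LimitedParallelismSketch(self)
            self.impl.start()

        def execute_async(self, key, command, queue=None, executor_config=None):
            self.impl.execute_async(key, command)

        def sync(self):
            # Drain finished results so the scheduler can record task states.
            while not self.result_queue.empty():
                key, state = self.result_queue.get()
                print("task", key, "finished with state", state)

        def end(self):
            self.impl.end()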
