airflow.executors.base_executor¶
Base executor - this is the base class for all the implemented executors.
Module Contents¶
Classes¶
| BaseExecutor | Class to derive in order to interface with executor-type systems |
Attributes¶
- airflow.executors.base_executor.NOT_STARTED_MESSAGE = 'The executor should be started first!'[source]¶
- class airflow.executors.base_executor.BaseExecutor(parallelism=PARALLELISM)[source]¶

  Bases: airflow.utils.log.logging_mixin.LoggingMixin

  Class to derive in order to interface with executor-type systems like Celery, Kubernetes, Local, Sequential and the like.

  Parameters
  - parallelism (int) – how many jobs should run at one time. Set to 0 for infinity.
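The methods below are the surface a concrete executor implements. As a rough illustration, here is a minimal sketch of a subclass that runs each command in a local subprocess. The class name and its process bookkeeping are invented for the example; a real executor would dispatch to a backing system such as Celery or Kubernetes.

```python
import subprocess
import time

from airflow.executors.base_executor import BaseExecutor


class SubprocessExecutor(BaseExecutor):
    """Hypothetical executor: runs each task command in a local subprocess."""

    def __init__(self, parallelism=4):
        super().__init__(parallelism=parallelism)
        self._processes = {}  # TaskInstanceKey -> subprocess.Popen

    def execute_async(self, key, command, queue=None, executor_config=None):
        # `command` is the CLI invocation built by the scheduler
        # (e.g. ["airflow", "tasks", "run", ...]); start it without blocking.
        self._processes[key] = subprocess.Popen(command)

    def sync(self):
        # Called on every heartbeat: reap finished processes and report
        # their terminal state via the success()/fail() helpers below.
        for key, proc in list(self._processes.items()):
            returncode = proc.poll()
            if returncode is None:
                continue  # still running
            del self._processes[key]
            if returncode == 0:
                self.success(key)
            else:
                self.fail(key, info=f"exit code {returncode}")

    def end(self):
        # Wait synchronously for everything submitted so far to finish.
        while self._processes:
            self.sync()
            time.sleep(1)
```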
  - queue_task_instance(task_instance, mark_success=False, pickle_id=None, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, pool=None, cfg_path=None)[source]¶

    Queues a task instance for execution.
  - has_task(task_instance)[source]¶

    Checks if a task is either queued or running in this executor.

    Parameters
    - task_instance (airflow.models.taskinstance.TaskInstance) – TaskInstance to check

    Returns
    - True if the task is known to this executor

    Return type
    - bool
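For illustration, a sketch of how a caller might combine the two methods above; `executor` is assumed to be a started BaseExecutor subclass and `ti` a TaskInstance fetched from the metadata database.

```python
# Hypothetical usage: queue a task instance, then confirm the
# executor is tracking it (queued or running).
executor.queue_task_instance(ti, pool="default_pool")
assert executor.has_task(ti)
```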
  - sync()[source]¶

    Sync will get called periodically by the heartbeat method. Executors should override this to gather the statuses of running tasks.
  - order_queued_tasks_by_priority()[source]¶

    Orders the queued tasks by priority.

    Returns
    - List of tuples from queued_tasks, sorted by priority

    Return type
    - List[Tuple[airflow.models.taskinstance.TaskInstanceKey, QueuedTaskInstanceType]]
  - trigger_tasks(open_slots)[source]¶

    Initiates async execution of the queued tasks, up to the number of available slots.

    Parameters
    - open_slots (int) – Number of open slots
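As a sketch of the surrounding slot arithmetic, a heartbeat-style caller would do roughly the following; the queued_tasks and running attributes are assumed here to be the internal bookkeeping BaseExecutor maintains.

```python
# Rough sketch of how a heartbeat derives open slots before triggering;
# parallelism == 0 is documented above as "infinity".
if executor.parallelism == 0:
    open_slots = len(executor.queued_tasks)
else:
    open_slots = executor.parallelism - len(executor.running)

executor.trigger_tasks(open_slots)  # dispatches highest-priority tasks first
executor.sync()                     # then gather statuses
```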
  - change_state(key, state, info=None)[source]¶

    Changes the state of the task.

    Parameters
    - key (airflow.models.taskinstance.TaskInstanceKey) – Unique key for the task instance
    - state (str) – State to set for the task
    - info – Executor information for the task instance
  - fail(key, info=None)[source]¶

    Set the fail state for the event.

    Parameters
    - key (airflow.models.taskinstance.TaskInstanceKey) – Unique key for the task instance
    - info – Executor information for the task instance
  - success(key, info=None)[source]¶

    Set the success state for the event.

    Parameters
    - key (airflow.models.taskinstance.TaskInstanceKey) – Unique key for the task instance
    - info – Executor information for the task instance
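fail() and success() read as convenience wrappers around change_state(); under that assumption, the following pairs of calls would be equivalent in effect.

```python
from airflow.utils.state import State

# Assuming fail()/success() delegate to change_state() with a
# terminal state, each pair below behaves the same:
executor.fail(key, info="worker lost")
executor.change_state(key, State.FAILED, info="worker lost")

executor.success(key)
executor.change_state(key, State.SUCCESS)
```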
  - get_event_buffer(dag_ids=None)[source]¶

    Returns and flushes the event buffer. If dag_ids is specified, only events for those dag_ids are returned and flushed; otherwise all events are returned and flushed.

    Parameters
    - dag_ids – the dag_ids to return events for; returns all if given None

    Returns
    - a dict of events

    Return type
    - Dict[airflow.models.taskinstance.TaskInstanceKey, EventBufferValueType]
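A sketch of a polling loop that drains the buffer; the (state, info) unpacking assumes EventBufferValueType is a two-tuple, which holds in recent Airflow releases.

```python
# Hypothetical monitoring loop: heartbeat, then drain and inspect events.
executor.heartbeat()  # triggers queued tasks and calls sync()

for key, (state, info) in executor.get_event_buffer().items():
    # The buffer is flushed by the call above, so each event is
    # observed exactly once; `key` identifies the task instance.
    print(f"{key.dag_id}.{key.task_id} -> {state} ({info})")
```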
  - abstract execute_async(key, command, queue=None, executor_config=None)[source]¶

    This method will execute the command asynchronously.

    Parameters
    - key (airflow.models.taskinstance.TaskInstanceKey) – Unique key for the task instance
    - command (CommandType) – Command to run
    - queue (Optional[str]) – name of the queue
    - executor_config (Optional[Any]) – Configuration passed to the executor
  - abstract end()[source]¶

    This method is called when the caller is done submitting jobs and wants to wait synchronously for all previously submitted jobs to finish.
  - try_adopt_task_instances(tis)[source]¶

    Try to adopt running task instances that have been abandoned by a SchedulerJob dying.

    Anything that is not adopted will be cleared by the scheduler (and then become eligible for re-scheduling).

    Returns
    - any TaskInstances that were unable to be adopted

    Return type
    - list[airflow.models.TaskInstance]
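A sketch of what an override might look like for an executor that tracks work externally; the _known_jobs bookkeeping and the use of external_executor_id are assumptions for the example, not part of the base class contract.

```python
from airflow.executors.base_executor import BaseExecutor


class ExternallyTrackedExecutor(BaseExecutor):
    """Hypothetical executor that can re-attach to externally tracked jobs."""

    def try_adopt_task_instances(self, tis):
        not_adopted = []
        for ti in tis:
            # `external_executor_id` is assumed to have been recorded at
            # dispatch time; `self._known_jobs` is invented bookkeeping.
            if ti.external_executor_id in self._known_jobs:
                self.running.add(ti.key)  # resume tracking this task
            else:
                not_adopted.append(ti)
        # Whatever we return is cleared by the scheduler and rescheduled.
        return not_adopted
```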
