airflow.executors.celery_kubernetes_executor
Module Contents
Classes
CeleryKubernetesExecutor | CeleryKubernetesExecutor consists of CeleryExecutor and KubernetesExecutor.
- class airflow.executors.celery_kubernetes_executor.CeleryKubernetesExecutor(celery_executor, kubernetes_executor)[source]
Bases: airflow.utils.log.logging_mixin.LoggingMixin
CeleryKubernetesExecutor consists of CeleryExecutor and KubernetesExecutor. It chooses an executor to use based on the queue defined on the task. When the queue is the value of kubernetes_queue in the [celery_kubernetes_executor] section of the configuration (default value: kubernetes), KubernetesExecutor is selected to run the task; otherwise, CeleryExecutor is used.
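In practice, a DAG author selects the executor per task through the task's queue. A minimal sketch, assuming the scheduler is configured with CeleryKubernetesExecutor and the default kubernetes_queue value of kubernetes (the DAG and task names below are illustrative):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(dag_id="mixed_executor_example", start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    # No queue set: the task goes to the default Celery queue and is run by CeleryExecutor.
    on_celery = BashOperator(task_id="on_celery", bash_command="echo run on celery")

    # queue matches [celery_kubernetes_executor] kubernetes_queue, so KubernetesExecutor runs it.
    on_kubernetes = BashOperator(
        task_id="on_kubernetes",
        bash_command="echo run on kubernetes",
        queue="kubernetes",
    )

    on_celery >> on_kubernetes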
- property queued_tasks(self) → Dict[airflow.models.taskinstance.TaskInstanceKey, airflow.executors.base_executor.QueuedTaskInstanceType][source]
Return queued tasks from both the celery and kubernetes executors.
- property running(self) → Set[airflow.models.taskinstance.TaskInstanceKey][source]
Return running tasks from both the celery and kubernetes executors.
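The combined view these properties expose is simply the union of the two underlying executors' state. An illustrative sketch of that merge, written as free functions over the two wrapped executors (the function names are assumptions, not Airflow API):

def combined_queued_tasks(celery_executor, kubernetes_executor):
    # Merge both queued-task dicts; keys are TaskInstanceKey.
    queued = dict(celery_executor.queued_tasks)
    queued.update(kubernetes_executor.queued_tasks)
    return queued


def combined_running(celery_executor, kubernetes_executor):
    # Union of both running-task sets.
    return set(celery_executor.running) | set(kubernetes_executor.running)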
- property job_id(self)[source]
This is a class attribute in BaseExecutor, but since this class is not really an executor, only a wrapper around executors, it is implemented as a property so that a custom setter can be used.
- queue_command(self, task_instance: airflow.models.taskinstance.TaskInstance, command: airflow.executors.base_executor.CommandType, priority: int = 1, queue: Optional[str] = None)[source]
Queues a command via the celery or kubernetes executor.
- queue_task_instance(self, task_instance: airflow.models.taskinstance.TaskInstance, mark_success: bool = False, pickle_id: Optional[str] = None, ignore_all_deps: bool = False, ignore_depends_on_past: bool = False, ignore_task_deps: bool = False, ignore_ti_state: bool = False, pool: Optional[str] = None, cfg_path: Optional[str] = None) → None[source]
Queues a task instance via the celery or kubernetes executor.
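Both queue_command and queue_task_instance pick the underlying executor from the task instance's queue attribute, as described in the class docstring. A hedged sketch of that routing decision (the helper name and free-function form are assumptions, not the exact Airflow source):

from airflow.configuration import conf

# Value of [celery_kubernetes_executor] kubernetes_queue, "kubernetes" by default.
KUBERNETES_QUEUE = conf.get("celery_kubernetes_executor", "kubernetes_queue", fallback="kubernetes")


def pick_executor(task_instance, celery_executor, kubernetes_executor):
    # Tasks whose queue matches kubernetes_queue go to KubernetesExecutor;
    # everything else is handed to CeleryExecutor.
    if task_instance.queue == KUBERNETES_QUEUE:
        return kubernetes_executor
    return celery_executor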
- has_task(self, task_instance: airflow.models.taskinstance.TaskInstance) → bool[source]
Checks whether a task is either queued or running in either the celery or kubernetes executor.
- Parameters
task_instance – TaskInstance
- Returns
True if the task is known to this executor
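In other words, the wrapper considers a task known if either wrapped executor knows it. A minimal sketch of that check (free-function form; not the exact Airflow source):

def wrapper_has_task(task_instance, celery_executor, kubernetes_executor):
    # Known to the wrapper if queued or running in either executor.
    return celery_executor.has_task(task_instance) or kubernetes_executor.has_task(task_instance)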
- get_event_buffer(self, dag_ids=None) → Dict[airflow.models.taskinstance.TaskInstanceKey, airflow.executors.base_executor.EventBufferValueType][source]
Returns and flushes the event buffer from both the celery and kubernetes executors.
- Parameters
dag_ids – the dag_ids to return events for; if None, returns events for all DAGs
- Returns
a dict of events
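A sketch of the combined flush described above, again assuming simple delegation to both wrapped executors (not the exact Airflow source):

def combined_event_buffer(celery_executor, kubernetes_executor, dag_ids=None):
    # Each call below also flushes the respective executor's buffer.
    events = celery_executor.get_event_buffer(dag_ids)
    events.update(kubernetes_executor.get_event_buffer(dag_ids))
    return events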
- try_adopt_task_instances(self, tis: List[airflow.models.taskinstance.TaskInstance]) → List[airflow.models.taskinstance.TaskInstance][source]
Try to adopt running task instances that have been abandoned by a dying SchedulerJob.
Anything that is not adopted will be cleared by the scheduler (and then become eligible for re-scheduling).
- Returns
any TaskInstances that were unable to be adopted
- Return type
list[airflow.models.TaskInstance]
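Adoption follows the same queue-based split: each wrapped executor is asked to adopt the task instances that belong to it, and whatever comes back unadopted is reported to the scheduler. A hedged sketch under that assumption (free-function form; not the exact Airflow source):

def adopt_abandoned_tis(tis, celery_executor, kubernetes_executor, kubernetes_queue="kubernetes"):
    # Partition by queue, let each executor try to adopt its share,
    # and return everything that could not be adopted.
    celery_tis = [ti for ti in tis if ti.queue != kubernetes_queue]
    kubernetes_tis = [ti for ti in tis if ti.queue == kubernetes_queue]
    not_adopted = []
    not_adopted.extend(celery_executor.try_adopt_task_instances(celery_tis))
    not_adopted.extend(kubernetes_executor.try_adopt_task_instances(kubernetes_tis))
    return not_adopted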