This is a provider package for the cncf.kubernetes provider. All classes for this provider package are in the airflow.providers.cncf.kubernetes python package.
You can install this package on top of an existing Airflow 2.1+ installation via
pip install apache-airflow-providers-cncf-kubernetes
Fix: Exception when parsing log #20966 (#23301)
Fixed Kubernetes Operator large xcom content Defect (#23490)
Clarify 'reattach_on_restart' behavior (#23377)
Add k8s container's error message in airflow exception (#22871)
KubernetesHook should try incluster first when not otherwise configured (#23126)
KubernetesPodOperator should patch "already checked" always (#22734)
Delete old Spark Application in SparkKubernetesOperator (#21092)
Cleanup dup code now that k8s provider requires 2.3.0+ (#22845)
Fix ''KubernetesPodOperator'' with ''KubernetesExecutor'' on 2.3.0 (#23371)
Fix KPO to have hyphen instead of period (#22982)
Fix new MyPy errors in main (#22884)
The provider in version 4.0.0 only works with Airflow 2.3+. Please upgrade Airflow to 2.3 if you want to use the features or fixes in the 4.* line of the provider.
The main reason for the incompatibility is the use of the latest Kubernetes libraries: the cncf.kubernetes provider requires a newer version of these libraries than Airflow 2.1 and 2.2 used for the Kubernetes Executor, and that makes the provider incompatible with those Airflow versions.
Log traceback only on ''DEBUG'' for KPO logs read interruption (#22595)
Update our approach for executor-bound dependencies (#22573)
Optionally not follow logs in KPO pod_manager (#22412)
Stop crashing when empty logs are received from kubernetes client (#22566)
Fix mistakenly added install_requires for all providers (#22382)
Fix "run_id" k8s and elasticsearch compatibility with Airflow 2.1 (#22385)
Remove RefreshConfiguration workaround for K8s token refreshing (#20759)
Add map_index label to mapped KubernetesPodOperator (#21916)
Change KubePodOperator labels from execution_date to run_id (#21960)
Support for Python 3.10
Fix Kubernetes example with wrong operator casing (#21898)
Remove types from KPO docstring (#21826)
Parameter is_delete_operator_pod default is changed to True (#20575)
Simplify KubernetesPodOperator (#19572)
Move pod_mutation_hook call from PodManager to KubernetesPodOperator (#20596)
Rename ''PodLauncher'' to ''PodManager'' (#20576)
Parameter is_delete_operator_pod has new default
Previously, the default for param is_delete_operator_pod was False, which means that after a task runs, its pod is not deleted by the operator and remains on the cluster indefinitely. With this release, we change the default to True.
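The effect of the new default can be illustrated with a minimal sketch. Note this uses a stand-in stub, not the real KubernetesPodOperator, so the class and function here are illustrative assumptions:

```python
class PodStub:
    """Stand-in for a Kubernetes pod record (not the real client object)."""
    def __init__(self):
        self.deleted = False

def cleanup(pod, is_delete_operator_pod=True):
    # New default: True -> the operator removes the pod after the task runs.
    if is_delete_operator_pod:
        pod.deleted = True
    return pod

# Default behavior now deletes the pod after task completion.
assert cleanup(PodStub()).deleted is True

# Pass is_delete_operator_pod=False explicitly to keep the old behavior
# (the pod stays on the cluster, e.g. for debugging).
assert cleanup(PodStub(), is_delete_operator_pod=False).deleted is False
```

To retain the pre-4.0 behavior on real tasks, pass is_delete_operator_pod=False to the operator explicitly.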
Notes on changes KubernetesPodOperator and PodLauncher
Many methods in KubernetesPodOperator and PodLauncher have been renamed. If you have subclassed KubernetesPodOperator you will need to update your subclass to reflect the new structure. Additionally, the PodStatus enum has been renamed to PodPhase.
Generally speaking, if you did not subclass KubernetesPodOperator and you didn't use the PodLauncher class directly, then you don't need to worry about this change. If however you have subclassed KubernetesPodOperator, what follows are some notes on the changes in this release.
One of the principal goals of the refactor is to clearly separate the "get or create pod" and
"wait for pod completion" phases. Previously the "wait for pod completion" logic would be invoked differently depending on whether the operator was to "attach to an existing pod" (e.g. after a worker failure) or "create a new pod", and this resulted in some code duplication and a bit more nesting of logic. With this refactor we encapsulate the "get or create" step in KubernetesPodOperator.get_or_create_pod, and pull the monitoring and XCom logic up into the top level of execute, because it can be the same for "attached" pods and "new" pods.
KubernetesPodOperator.get_or_create_pod tries first to find an existing pod using labels specific to the task instance (see _get_ti_pod_labels below). If one does not exist, it creates a pod (PodManager.create_pod).
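The get-or-create step can be sketched with plain Python stubs. The names mirror the description above but the signatures are assumptions, not the provider's real API:

```python
def get_or_create_pod(existing_pods, ti_labels):
    """Sketch: reattach to a matching pod if one exists, else create one."""
    # Try first to find an existing pod matching the TI-specific labels
    # (e.g. a pod left running after an airflow worker failure)...
    for pod in existing_pods:
        if pod["labels"] == ti_labels:
            return pod, False  # found: reattach, nothing created
    # ...otherwise create a new pod carrying those labels.
    return {"labels": ti_labels, "phase": "Pending"}, True

labels = {"dag_id": "d", "task_id": "t", "run_id": "r"}
pod, created = get_or_create_pod([], labels)
assert created is True                      # no match -> a new pod is created
found, created_again = get_or_create_pod([pod], labels)
assert created_again is False and found is pod   # match -> reattached
```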
The "waiting" part of execution has three components. The first step is to wait for the pod to leave the Pending phase (KubernetesPodOperator.await_pod_start). Next, if configured to do so, the operator will follow the base container logs and forward these logs to the task logger until the base container is done. If not configured to harvest the logs, the operator will instead simply wait for the container to finish; either way, we must await container completion before harvesting XCom. After (optionally) extracting the XCom value from the base container, we await pod completion (PodManager.await_pod_completion).
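The three waiting phases described above can be sketched as a linear sequence. These are stub functions with hypothetical bodies; the real implementations live on the PodManager and operator classes named in this section:

```python
def await_pod_start(pod):
    # Phase 1: block until the pod leaves the Pending phase (stubbed).
    pod["phase"] = "Running"
    return pod

def fetch_container_logs(pod, follow=True):
    # Phase 2 (optional): follow base-container logs and forward them
    # to the task logger until the base container is done (stubbed).
    return ["log line 1", "log line 2"] if follow else []

def await_pod_completion(pod):
    # Phase 3: block until the pod reaches a terminal phase (stubbed).
    pod["phase"] = "Succeeded"
    return pod

pod = {"phase": "Pending"}
pod = await_pod_start(pod)
logs = fetch_container_logs(pod, follow=True)  # skipped if not harvesting logs
pod = await_pod_completion(pod)                # always required before XCom push
assert pod["phase"] == "Succeeded"
assert len(logs) == 2
```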
Previously, depending on whether the pod was "reattached to" (e.g. after a worker failure) or created anew, the waiting logic may have occurred in either execute or handle_pod_overlap.
After the pod terminates, we execute different cleanup tasks depending on whether the pod terminated successfully.
If the pod terminates unsuccessfully, we attempt to log the pod events. If additionally the task is configured not to delete the pod after termination, we apply a label indicating that the pod failed and should not be "reattached to" in a retry. If the task is configured to delete its pod, we delete it. Finally, we raise an AirflowException to fail the task instance.
If the pod terminates successfully, we delete the pod
(if configured to delete the pod) and push XCom (if configured to push XCom).
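The cleanup branching above can be summarized in a small sketch. This is stand-in code; the real operator talks to the Kubernetes API and raises AirflowException, while this stub only records which actions would be taken:

```python
def cleanup(pod, is_delete_operator_pod=True):
    """Sketch: return the cleanup actions taken after the pod terminates."""
    actions = []
    if pod["phase"] != "Succeeded":
        actions.append("log pod events")
        if is_delete_operator_pod:
            actions.append("delete pod")
        else:
            # Label the pod so a retry will not reattach to it.
            actions.append("apply 'already checked' label")
        actions.append("raise AirflowException")
    else:
        if is_delete_operator_pod:
            actions.append("delete pod")
        actions.append("push XCom")
    return actions

# Failed pod: events are logged and the task instance fails.
assert "raise AirflowException" in cleanup({"phase": "Failed"})
# Failed pod kept for debugging: it is labeled instead of deleted.
assert "apply 'already checked' label" in cleanup(
    {"phase": "Failed"}, is_delete_operator_pod=False)
# Successful pod: deleted (by default) and XCom pushed.
assert cleanup({"phase": "Succeeded"}) == ["delete pod", "push XCom"]
```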
Details on method renames, refactors, and deletions
Method create_pod_launcher is converted to a cached property.
Construction of the k8s CoreV1Api client is now encapsulated within a cached property.
Logic to search for an existing pod (e.g. after an airflow worker failure) is moved out of execute and into its own method.
Method handle_pod_overlap is removed. Previously it monitored a "found" pod until completion. With this change the pod monitoring (and log following) is orchestrated directly from execute, and it is the same whether it's a "found" pod or a "new" pod. See methods await_pod_start and await_pod_completion.
Method build_pod_request_obj now takes argument context in order to add TI-specific pod labels; previously they were added after return.
Method _get_ti_pod_labels doesn't return all labels, but only those specific to the TI. We also add parameter include_try_number to control the inclusion of this label instead of possibly filtering it out later.
Method create_new_pod_for_operator is removed. Previously it would mutate the labels on self.pod, launch the pod, monitor the pod to completion, etc. Now this logic is in part handled by get_or_create_pod, where a new pod will be created if necessary. The monitoring etc. is now orchestrated directly from execute. Again, see the calls to methods await_pod_start and await_pod_completion.
Method start_pod is removed and split into two methods: create_pod and await_pod_start.
Method monitor_pod is removed and split into methods for following container logs, awaiting container completion, and awaiting pod completion.
Methods such as _task_status are removed. These were needed due to the way in which pod phase was mapped to task instance states; but we no longer do such a mapping and instead deal with pod phases directly and untransformed.
Method read_pod_logs now takes a new kwarg.
Other changes in PodManager (formerly PodLauncher): the PodStatus enum is renamed PodPhase, and the values are no longer lower-cased.
airflow.settings.pod_mutation_hook is no longer called in PodManager; mutation now occurs in KubernetesPodOperator (see #20596).
Parameter is_delete_operator_pod default is changed to True so that pods are deleted after task completion and not left to accumulate. In practice it seems more common to disable pod deletion only on a temporary basis for debugging purposes, and therefore pod deletion is the more sensible default.
Add params config, in_cluster, and cluster_context to KubernetesHook (#19695)
Implement dry_run for KubernetesPodOperator (#20573)
Clarify docstring for ''build_pod_request_obj'' in K8s providers (#20574)
Fix Volume/VolumeMount KPO DeprecationWarning (#19726)
Added namespace as a template field in the KPO. (#19718)
Decouple name randomization from name kwarg (#19398)
Checking event.status.container_statuses before filtering (#19713)
Coalesce 'extra' params to None in KubernetesHook (#19694)
Change to correct type in KubernetesPodOperator (#19459)
Add more type hints to PodLauncher (#18928)
Add more information to PodLauncher timeout error (#17953)
Fix KubernetesPodOperator reattach when not deleting pods (#18070)
Make Kubernetes job description fit on one log line (#18377)
Do not fail KubernetesPodOperator tasks if log reading fails (#17649)
Fix using XCom with ''KubernetesPodOperator'' (#17760)
Import Hooks lazily individually in providers manager (#17682)
Enable using custom pod launcher in Kubernetes Pod Operator (#16945)
BugFix: Using 'json' string in template_field causes issue with K8s Operators (#16930)
Auto-apply apply_default decorator (#15667)
Due to the apply_default decorator removal, this version of the provider requires Airflow 2.1.0+.
If your Airflow version is < 2.1.0, and you want to install this provider version, first upgrade Airflow to at least version 2.1.0. Otherwise your Airflow package version will be upgraded automatically and you will have to manually run
airflow upgrade db to complete the migration.
Add 'KubernetesPodOperator' 'pod-template-file' jinja template support (#15942)
Save pod name to xcom for KubernetesPodOperator (#15755)
Bug Fix Pod-Template Affinity Ignored due to empty Affinity K8S Object (#15787)
Bug Pod Template File Values Ignored (#16095)
Fix issue with parsing error logs in the KPO (#15638)
Fix unsuccessful KubernetesPod final_state call when 'is_delete_operator_pod=True' (#15490)
Require 'name' with KubernetesPodOperator (#15373)
Change KPO node_selectors warning to proper deprecationwarning (#15507)
Fix timeout when using XCom with KubernetesPodOperator (#15388)
Fix labels on the pod created by ''KubernetesPodOperator'' (#15492)
Separate Kubernetes pod_launcher from core airflow (#15165)
Add ability to specify api group and version for Spark operators (#14898)
Use libyaml C library when available. (#14577)
Allow pod name override in KubernetesPodOperator if pod_template is used. (#14186)
Allow users of the KPO to *actually* template environment variables (#14083)
Updated documentation and readme files.
Pass image_pull_policy in KubernetesPodOperator correctly (#13289)
Initial version of the provider.