:mod:`airflow.contrib.operators.gcp_container_operator`
========================================================

.. py:module:: airflow.contrib.operators.gcp_container_operator


Module Contents
---------------

.. py:class:: GKEClusterDeleteOperator(project_id, name, location, gcp_conn_id='google_cloud_default', api_version='v2', *args, **kwargs)

   Bases: :class:`airflow.models.BaseOperator`

   Deletes the cluster, including the Kubernetes endpoint and all worker nodes.

   To delete a certain cluster, you must specify the ``project_id``, the ``name``
   of the cluster, the ``location`` that the cluster is in, and the ``task_id``.

   **Operator Creation**: ::

       operator = GKEClusterDeleteOperator(
                   task_id='cluster_delete',
                   project_id='my-project',
                   location='cluster-location',
                   name='cluster-name')

   .. seealso::
      For more detail about deleting clusters have a look at the reference:
      https://google-cloud-python.readthedocs.io/en/latest/container/gapic/v1/api.html#google.cloud.container_v1.ClusterManagerClient.delete_cluster

   :param project_id: The Google Developers Console [project ID or project number]
   :type project_id: str
   :param name: The name of the resource to delete, in this case cluster name
   :type name: str
   :param location: The name of the Google Compute Engine zone in which the
       cluster resides.
   :type location: str
   :param gcp_conn_id: The connection ID to use connecting to Google Cloud Platform.
   :type gcp_conn_id: str
   :param api_version: The API version to use.
   :type api_version: str

   .. attribute:: template_fields
      :annotation: = ['project_id', 'gcp_conn_id', 'name', 'location', 'api_version']

   .. method:: _check_input(self)

   .. method:: execute(self, context)


.. py:class:: GKEClusterCreateOperator(project_id, location, body=None, gcp_conn_id='google_cloud_default', api_version='v2', *args, **kwargs)

   Bases: :class:`airflow.models.BaseOperator`

   Create a Google Kubernetes Engine Cluster of specified dimensions. The
   operator will wait until the cluster is created.

   The **minimum** required to define a cluster to create is:

   ``dict()`` ::

       cluster_def = {'name': 'my-cluster-name',
                      'initial_node_count': 1}

   or ``Cluster`` proto ::

       from google.cloud.container_v1.types import Cluster

       cluster_def = Cluster(name='my-cluster-name', initial_node_count=1)

   **Operator Creation**: ::

       operator = GKEClusterCreateOperator(
                   task_id='cluster_create',
                   project_id='my-project',
                   location='my-location',
                   body=cluster_def)

   .. seealso::
      For more detail about creating clusters have a look at the reference:
      :class:`google.cloud.container_v1.types.Cluster`

   :param project_id: The Google Developers Console [project ID or project number]
   :type project_id: str
   :param location: The name of the Google Compute Engine zone in which the
       cluster resides.
   :type location: str
   :param body: The Cluster definition to create; can be a protobuf message or a
       Python dict. If a dict, it must match the protobuf message ``Cluster``.
   :type body: dict or google.cloud.container_v1.types.Cluster
   :param gcp_conn_id: The connection ID to use connecting to Google Cloud Platform.
   :type gcp_conn_id: str
   :param api_version: The API version to use.
   :type api_version: str

   .. attribute:: template_fields
      :annotation: = ['project_id', 'gcp_conn_id', 'location', 'api_version', 'body']

   .. method:: _check_input(self)

   .. method:: execute(self, context)


.. data:: KUBE_CONFIG_ENV_VAR
   :annotation: = KUBECONFIG

.. data:: G_APP_CRED
   :annotation: = GOOGLE_APPLICATION_CREDENTIALS
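A minimal sketch of how the two cluster operators documented above might be
wired together in a DAG. The project id, zone, cluster name, and dag/task ids
here are all illustrative placeholders, not values from this module: ::

    from airflow import DAG
    from airflow.contrib.operators.gcp_container_operator import (
        GKEClusterCreateOperator, GKEClusterDeleteOperator)
    from airflow.utils.dates import days_ago

    with DAG(dag_id='gke_cluster_lifecycle',
             schedule_interval=None,
             start_date=days_ago(1)) as dag:
        # Create the cluster from the minimal dict definition shown above.
        create_cluster = GKEClusterCreateOperator(
            task_id='cluster_create',
            project_id='my-project',
            location='us-central1-a',
            body={'name': 'my-cluster-name', 'initial_node_count': 1})

        # Tear the cluster down once the create task has finished.
        delete_cluster = GKEClusterDeleteOperator(
            task_id='cluster_delete',
            project_id='my-project',
            location='us-central1-a',
            name='my-cluster-name')

        create_cluster >> delete_cluster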
.. py:class:: GKEPodOperator(project_id, location, cluster_name, gcp_conn_id='google_cloud_default', *args, **kwargs)

   Bases: :class:`airflow.contrib.operators.kubernetes_pod_operator.KubernetesPodOperator`

   Executes a task in a Kubernetes pod in the specified Google Kubernetes
   Engine cluster.

   This Operator assumes that the system has gcloud installed and either has
   working default application credentials or has configured a connection id
   with a service account.

   The **minimum** required to define a pod to run are the variables
   ``task_id``, ``project_id``, ``location``, ``cluster_name``, ``name``,
   ``namespace``, and ``image``.

   **Operator Creation**: ::

       operator = GKEPodOperator(task_id='pod_op',
                                 project_id='my-project',
                                 location='us-central1-a',
                                 cluster_name='my-cluster-name',
                                 name='task-name',
                                 namespace='default',
                                 image='perl')

   .. seealso::
      For more detail about application authentication have a look at the reference:
      https://cloud.google.com/docs/authentication/production#providing_credentials_to_your_application

   :param project_id: The Google Developers Console project id
   :type project_id: str
   :param location: The name of the Google Kubernetes Engine zone in which the
       cluster resides, e.g. 'us-central1-a'
   :type location: str
   :param cluster_name: The name of the Google Kubernetes Engine cluster the pod
       should be spawned in
   :type cluster_name: str
   :param gcp_conn_id: The google cloud connection id to use. This allows for
       users to specify a service account.
   :type gcp_conn_id: str

   .. attribute:: template_fields

   .. method:: execute(self, context)

   .. method:: _set_env_from_extras(self, extras)

      Sets the environment variable ``GOOGLE_APPLICATION_CREDENTIALS`` with
      either:

      - The path to the keyfile from the specified connection id
      - A generated file's path if the user specified JSON in the connection
        id. The file is assumed to be deleted after the process dies due to
        how mkstemp() works.

      The environment variable is used inside the gcloud command to determine
      the correct service account to use.

   .. method:: _get_field(self, extras, field, default=None)

      Fetches a field from extras, and returns it. This is some Airflow magic.
      The google_cloud_platform hook type adds custom UI elements to the hook
      page, which allow admins to specify service_account, key_path, etc. They
      get formatted as shown below.
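      The "formatting" referred to is a long prefixed key in the connection's
      ``extra`` dict. A minimal sketch of the lookup this method performs,
      assuming the hook's ``extra__google_cloud_platform__`` prefix; the
      sample extras dict and keyfile path are illustrative only: ::

          def _get_field(extras, field, default=None):
              # The hook UI saves each custom field under a prefixed key,
              # e.g. 'key_path' is stored as
              # 'extra__google_cloud_platform__key_path'.
              long_f = 'extra__google_cloud_platform__{}'.format(field)
              if long_f in extras:
                  return extras[long_f]
              return default

          # Illustrative extras as stored on a google_cloud_platform
          # connection; the path below is a placeholder.
          extras = {'extra__google_cloud_platform__key_path': '/keys/sa.json'}
          assert _get_field(extras, 'key_path') == '/keys/sa.json'
          assert _get_field(extras, 'scopes') is None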