:mod:`airflow.contrib.operators.azure_container_instances_operator`
====================================================================

.. py:module:: airflow.contrib.operators.azure_container_instances_operator


Module Contents
---------------

.. data:: Volume

.. data:: DEFAULT_ENVIRONMENT_VARIABLES
   :annotation: :Dict[str, str]

.. data:: DEFAULT_SECURED_VARIABLES
   :annotation: :Sequence[str] = []

.. data:: DEFAULT_VOLUMES
   :annotation: :Sequence[Volume] = []

.. data:: DEFAULT_MEMORY_IN_GB
   :annotation: = 2.0

.. data:: DEFAULT_CPU
   :annotation: = 1.0

.. py:class:: AzureContainerInstancesOperator(ci_conn_id, registry_conn_id, resource_group, name, image, region, environment_variables=None, secured_variables=None, volumes=None, memory_in_gb=None, cpu=None, gpu=None, command=None, remove_on_error=True, fail_if_exists=True, *args, **kwargs)

   Bases: :class:`airflow.models.BaseOperator`

   Start a container on Azure Container Instances

   :param ci_conn_id: connection id of a service principal which will be used
       to start the container instance
   :type ci_conn_id: str
   :param registry_conn_id: connection id of a user which can login to a
       private docker registry. If None, we assume a public registry
   :type registry_conn_id: str
   :param resource_group: name of the resource group wherein this container
       instance should be started
   :type resource_group: str
   :param name: name of this container instance. Please note this name has
       to be unique in order to run containers in parallel.
   :type name: str
   :param image: the docker image to be used
   :type image: str
   :param region: the region wherein this container instance should be started
   :type region: str
   :param environment_variables: key, value pairs containing environment
       variables which will be passed to the running container
   :type environment_variables: dict
   :param secured_variables: names of environment variables that should not
       be exposed outside the container (typically passwords).
   :type secured_variables: [str]
   :param volumes: list of ``Volume`` tuples to be mounted to the container.
       Currently only Azure Fileshares are supported.
   :type volumes: list[]
   :param memory_in_gb: the amount of memory to allocate to this container
   :type memory_in_gb: double
   :param cpu: the number of cpus to allocate to this container
   :type cpu: double
   :param gpu: GPU Resource for the container.
   :type gpu: azure.mgmt.containerinstance.models.GpuResource
   :param command: the command to run inside the container
   :type command: [str]
   :param container_timeout: max time allowed for the execution of
       the container instance.
   :type container_timeout: datetime.timedelta

   **Example**::

       AzureContainerInstancesOperator(
           "azure_service_principal",
           "azure_registry_user",
           "my-resource-group",
           "my-container-name-{{ ds }}",
           "myprivateregistry.azurecr.io/my_container:latest",
           "westeurope",
           {"MODEL_PATH": "my_value",
            "POSTGRES_LOGIN": "{{ macros.connection('postgres_default').login }}",
            "POSTGRES_PASSWORD": "{{ macros.connection('postgres_default').password }}",
            "JOB_GUID": "{{ ti.xcom_pull(task_ids='task1', key='guid') }}"},
           ['POSTGRES_PASSWORD'],
           [("azure_wasb_conn_id",
             "my_storage_container",
             "my_fileshare",
             "/input-data",
             True)],
           memory_in_gb=14.0,
           cpu=4.0,
           gpu=GpuResource(count=1, sku='K80'),
           command=["/bin/echo", "world"],
           container_timeout=timedelta(hours=2),
           task_id="start_container"
       )

   .. attribute:: template_fields
      :annotation: = ['name', 'image', 'command', 'environment_variables']

   .. method:: execute(self, context)

   .. method:: on_kill(self)

   .. method:: _monitor_logging(self, ci_hook, resource_group, name)

   .. method:: _log_last(self, logs, last_line_logged)

   .. staticmethod:: _check_name(name)
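
   The example above shows the operator's arguments in isolation. The following is a
   minimal sketch of how the operator might be wired into a DAG, using the contrib
   import path documented by this module; the connection id, resource group, region,
   and image names are hypothetical placeholders, not values defined by this module::

       from datetime import timedelta

       import airflow
       from airflow import DAG
       from airflow.contrib.operators.azure_container_instances_operator import (
           AzureContainerInstancesOperator,
       )

       default_args = {
           "owner": "airflow",
           "start_date": airflow.utils.dates.days_ago(1),
       }

       with DAG(
           dag_id="aci_example",
           default_args=default_args,
           schedule_interval="@daily",
       ) as dag:
           run_container = AzureContainerInstancesOperator(
               # "azure_default", the resource group, region and image below
               # are placeholders for illustration only.
               ci_conn_id="azure_default",
               registry_conn_id=None,  # public image, so no registry login needed
               resource_group="my-resource-group",
               name="aci-task-{{ ds }}",
               image="python:3.7-slim",
               region="westeurope",
               environment_variables={"GREETING": "hello"},
               command=["python", "-c", "import os; print(os.environ['GREETING'])"],
               memory_in_gb=2.0,
               cpu=1.0,
               task_id="run_aci_container",
           )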