Run ephemeral Docker Swarm services
Execute a command as an ephemeral Docker Swarm service.
- class airflow.providers.docker.operators.docker_swarm.DockerSwarmOperator(*, image, enable_logging=True, configs=None, secrets=None, mode=None, networks=None, placement=None, **kwargs)¶
Execute a command as an ephemeral Docker Swarm service. Example use case: using Docker Swarm orchestration to make one-time scripts highly available.
A temporary directory is created on the host and mounted into the container, allowing storage of files that together exceed the default container disk size of 10GB. The path to the mounted directory can be accessed via the environment variable AIRFLOW_TMP_DIR.
If a login to a private registry is required prior to pulling the image, a Docker connection needs to be configured in Airflow and the connection ID provided with the parameter docker_conn_id.
image (str) -- Docker image from which to create the container. If image tag is omitted, "latest" will be used.
api_version -- Remote API version. Set to "auto" to automatically detect the server's version.
auto_remove -- Auto-removal of the container on daemon side when the container's process exits. The default is False.
command -- Command to be run in the container. (templated)
docker_url -- URL of the host running the docker daemon. Default is unix://var/run/docker.sock
environment -- Environment variables to set in the container. (templated)
force_pull -- Pull the docker image on every run. Default is False.
mem_limit -- Maximum amount of memory the container can use. Either a float value, which represents the limit in bytes, or a string with a units identification char, such as "128m" or "1g".
tls_ca_cert -- Path to a PEM-encoded certificate authority to secure the docker connection.
tls_client_cert -- Path to the PEM-encoded certificate used to authenticate docker client.
tls_client_key -- Path to the PEM-encoded key used to authenticate docker client.
tls_hostname -- Hostname to match against the docker server certificate or False to disable the check.
tls_ssl_version -- Version of SSL to use when communicating with docker daemon.
tmp_dir -- Mount point inside the container to a temporary directory created on the host by the operator. The path is also made available via the environment variable AIRFLOW_TMP_DIR inside the container.
user -- Default user inside the docker container.
docker_conn_id -- The ID of the Docker connection configured in Airflow; required when pulling the image from a private registry.
tty -- Allocate a pseudo-TTY to the container of this service. This needs to be set to see the logs of the Docker container / service.
enable_logging (bool) -- Show the application's logs in the operator's logs. Supported only if the Docker engine uses the json-file or journald logging driver. The tty parameter should be set to use this with Python applications.
configs (Optional[List[docker.types.ConfigReference]]) -- List of docker configs to be exposed to the containers of the swarm service. The configs are ConfigReference objects as per the docker create_service API: https://docker-py.readthedocs.io/en/stable/services.html#docker.models.services.ServiceCollection.create
secrets (Optional[List[docker.types.SecretReference]]) -- List of docker secrets to be exposed to the containers of the swarm service. The secrets are SecretReference objects as per the docker create_service API: https://docker-py.readthedocs.io/en/stable/services.html#docker.models.services.ServiceCollection.create
mode (Optional[docker.types.ServiceMode]) -- Indicate whether a service should be deployed as a replicated or global service, and associated parameters.
placement (Optional[Union[docker.types.Placement, List[docker.types.Placement]]]) -- Placement instructions for the scheduler. If a list is passed instead, it is assumed to be a list of constraints as part of a Placement object.
- execute(self, context)¶
This is the main method to derive when creating an operator. Context is the same dictionary used as when rendering jinja templates.
Refer to get_template_context for more context.
- on_kill(self)¶
Override this method to clean up subprocesses when a task instance gets killed. Any use of the threading, subprocess, or multiprocessing modules within an operator needs to be cleaned up, or it will leave ghost processes behind.
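The subprocess-cleanup requirement above can be sketched as a plain-Python pattern, without Airflow imports. ExampleOperator and the long-running "sleep" command are hypothetical; the point is that any process handle created in execute() must be reachable from on_kill():

```python
# Minimal sketch of the cleanup pattern: keep a handle to every
# subprocess so on_kill() can terminate it when the task is killed.
import subprocess


class ExampleOperator:
    def execute(self, context):
        # Store the handle on self so on_kill() can reach it later.
        self.sub_process = subprocess.Popen(["sleep", "300"])
        self.sub_process.wait()

    def on_kill(self):
        # Called when the task instance is externally killed; without
        # this, the child would survive as a ghost process.
        if getattr(self, "sub_process", None) is not None:
            self.sub_process.terminate()
```

The same pattern applies to threads and multiprocessing workers: keep a reference during execute() and tear it down in on_kill().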