Creating Custom @task Decorators

As of Airflow 2.2 it is possible to add custom decorators to the TaskFlow interface from within a provider package and have those decorators appear natively as part of the @task.____ design.

For example, let's say you want an easier mechanism to run Python functions as "foo" tasks. The steps to create and register @task.foo are:

  1. Create a FooDecoratedOperator

    In this case, we assume that you have an existing FooOperator that takes a Python function as an argument. By creating a FooDecoratedOperator that inherits from FooOperator and airflow.decorators.base.DecoratedOperator, Airflow will supply much of the functionality required to treat your new class as a native TaskFlow class.
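    The shape of this step can be sketched with self-contained stand-in classes. In real code, DecoratedOperator comes from airflow.decorators.base and FooOperator is your existing operator; the stand-ins below only mimic their cooperative __init__ pattern:

```python
class FooOperator:
    """Stand-in for your existing operator that runs a Python function."""

    def __init__(self, *, python_callable, **kwargs):
        self.python_callable = python_callable

    def execute(self, context=None):
        return self.python_callable()


class DecoratedOperator:
    """Stand-in for airflow.decorators.base.DecoratedOperator."""

    def __init__(self, *, python_callable, op_args=None, op_kwargs=None, **kwargs):
        self.op_args = op_args or ()
        self.op_kwargs = op_kwargs or {}
        # Cooperative init: pass the callable on to FooOperator via the MRO.
        super().__init__(python_callable=python_callable, **kwargs)


class FooDecoratedOperator(DecoratedOperator, FooOperator):
    """Gets TaskFlow handling from DecoratedOperator, execution from FooOperator."""
```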

  2. Create a foo_task function

    Once you have your decorated class, create a function that takes python_callable, multiple_outputs, and kwargs as arguments. This function uses the airflow.decorators.base.task_decorator_factory function to convert the new FooDecoratedOperator into a TaskFlow function decorator.

    from typing import Callable, Optional

    from airflow.decorators.base import task_decorator_factory

    def foo_task(
        python_callable: Optional[Callable] = None,
        multiple_outputs: Optional[bool] = None,
        **kwargs
    ):
        return task_decorator_factory(
            python_callable=python_callable,
            multiple_outputs=multiple_outputs,
            decorated_operator_class=FooDecoratedOperator,
            **kwargs,
        )
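    For intuition, the factory is what lets the resulting decorator be used both bare (@task.foo) and with arguments (@task.foo(multiple_outputs=True)). A self-contained sketch of that dual-use pattern (illustrative only, not the real task_decorator_factory implementation):

```python
import functools


def foo_task(python_callable=None, **kwargs):
    """Toy version of the dual-use pattern behind task_decorator_factory."""
    if python_callable is not None:
        # Used bare: @foo_task -- we already have the function.
        return _wrap(python_callable, **kwargs)

    # Used with arguments: @foo_task(...) -- return a decorator awaiting the function.
    def decorator(fn):
        return _wrap(fn, **kwargs)

    return decorator


def _wrap(fn, **options):
    @functools.wraps(fn)
    def wrapper(*args, **kw):
        return fn(*args, **kw)

    wrapper.options = options  # the real factory builds an operator instead
    return wrapper
```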
    
  3. Register your new decorator in get_provider_info of your provider

    Finally, add a task-decorators key to the dict returned from the provider entrypoint. Its value should be a list, with each item containing name and class-name keys. When Airflow starts, the ProviderManager class will automatically import this value and task.foo will work as a new decorator!

    def get_provider_info():
        return {
            "package-name": "foo-provider-airflow",
            "name": "Foo",
            "task-decorators": [
                {
                    "name": "foo",
                    # Import path and function name of the `foo_task` function
                    "class-name": "name.of.python.package.foo_task",
                }
            ],
            # ...
        }
    

    Please note that the name must be a valid Python identifier.
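    Because the registered name becomes an attribute access like task.foo, a quick check with str.isidentifier can catch invalid names before release:

```python
# Decorator names become attribute accesses like `task.foo`, so they must
# be valid Python identifiers (no dashes, no leading digits).
candidates = ["foo", "foo_v2", "foo-bar", "2foo"]
valid = [name for name in candidates if name.isidentifier()]
```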

(Optional) Adding IDE auto-completion support

Note

This section mostly applies to the apache-airflow managed providers. We have not decided if we will allow third-party providers to register auto-completion in this way.

For better or worse, Python IDEs cannot auto-complete dynamically generated methods (see JetBrains' write-up on the subject).

To get around this, we had to find the best possible compromise. IDEs will only pick up typing information from stub files or real code, but we wanted to avoid any situation where a user updates their provider and the auto-complete falls out of sync with the provider's actual parameters.

To hack around this problem, we found that you could extend the _TaskDecorator class in the airflow/decorators/__init__.py inside an if TYPE_CHECKING block and the correct auto-complete will show up in the IDE.

The first step is to create a Mixin class for your decorator.

Mixin classes are Python classes that supply extra methods to other classes through multiple inheritance. Because they do not depend on other classes, mixin classes are well suited to multiple inheritance.
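A toy illustration of the pattern (unrelated to Airflow's actual classes):

```python
class GreetingMixin:
    """Adds a method via multiple inheritance; depends on no other class."""

    def greet(self):
        return f"hello from {type(self).__name__}"


class Worker:
    pass


class FriendlyWorker(Worker, GreetingMixin):
    # Picks up greet() from the mixin without changing Worker.
    pass
```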

In the DockerDecorator we created a Mixin class that looks like this:

airflow/providers/docker/decorators/docker.py[source]

from typing import Dict, Iterable, List, Optional, Union


class DockerDecoratorMixin:
    """
    Helper class for inheritance. This class is only used during type checking or auto-completion

    :meta private:
    """

    def docker(
        self,
        multiple_outputs: Optional[bool] = None,
        use_dill: bool = False,
        image: str = "",
        api_version: Optional[str] = None,
        container_name: Optional[str] = None,
        cpus: float = 1.0,
        docker_url: str = 'unix://var/run/docker.sock',
        environment: Optional[Dict] = None,
        private_environment: Optional[Dict] = None,
        force_pull: bool = False,
        mem_limit: Optional[Union[float, str]] = None,
        host_tmp_dir: Optional[str] = None,
        network_mode: Optional[str] = None,
        tls_ca_cert: Optional[str] = None,
        tls_client_cert: Optional[str] = None,
        tls_client_key: Optional[str] = None,
        tls_hostname: Optional[Union[str, bool]] = None,
        tls_ssl_version: Optional[str] = None,
        tmp_dir: str = '/tmp/airflow',
        user: Optional[Union[str, int]] = None,
        mounts: Optional[List[str]] = None,
        working_dir: Optional[str] = None,
        xcom_all: bool = False,
        docker_conn_id: Optional[str] = None,
        dns: Optional[List[str]] = None,
        dns_search: Optional[List[str]] = None,
        auto_remove: bool = False,
        shm_size: Optional[int] = None,
        tty: bool = False,
        privileged: bool = False,
        cap_add: Optional[Iterable[str]] = None,
        extra_hosts: Optional[Dict[str, str]] = None,
        **kwargs,
    ):
        """
        :param python_callable: A python function with no references to outside variables,
            defined with def, which will be run in a virtualenv
        :type python_callable: function
        :param multiple_outputs: if set, function return value will be
            unrolled to multiple XCom values. List/Tuples will unroll to xcom values
            with index as key. Dict will unroll to xcom values with keys as XCom keys.
            Defaults to False.
        :type multiple_outputs: bool
        :param use_dill: Whether to use dill or pickle for serialization
        :type use_dill: bool
        :param image: Docker image from which to create the container.
            If image tag is omitted, "latest" will be used.
        :type image: str
        :param api_version: Remote API version. Set to ``auto`` to automatically
            detect the server's version.
        :type api_version: str
        :param container_name: Name of the container. Optional (templated)
        :type container_name: str or None
        :param cpus: Number of CPUs to assign to the container.
            This value gets multiplied with 1024. See
            https://docs.docker.com/engine/reference/run/#cpu-share-constraint
        :type cpus: float
        :param docker_url: URL of the host running the docker daemon.
            Default is unix://var/run/docker.sock
        :type docker_url: str
        :param environment: Environment variables to set in the container. (templated)
        :type environment: dict
        :param private_environment: Private environment variables to set in the container.
            These are not templated, and hidden from the website.
        :type private_environment: dict
        :param force_pull: Pull the docker image on every run. Default is False.
        :type force_pull: bool
        :param mem_limit: Maximum amount of memory the container can use.
            Either a float value, which represents the limit in bytes,
            or a string like ``128m`` or ``1g``.
        :type mem_limit: float or str
        :param host_tmp_dir: Specify the location of the temporary directory on the host which will
            be mapped to tmp_dir. If not provided defaults to using the standard system temp directory.
        :type host_tmp_dir: str
        :param network_mode: Network mode for the container.
        :type network_mode: str
        :param tls_ca_cert: Path to a PEM-encoded certificate authority
            to secure the docker connection.
        :type tls_ca_cert: str
        :param tls_client_cert: Path to the PEM-encoded certificate
            used to authenticate docker client.
        :type tls_client_cert: str
        :param tls_client_key: Path to the PEM-encoded key used to authenticate docker client.
        :type tls_client_key: str
        :param tls_hostname: Hostname to match against
            the docker server certificate or False to disable the check.
        :type tls_hostname: str or bool
        :param tls_ssl_version: Version of SSL to use when communicating with docker daemon.
        :type tls_ssl_version: str
        :param tmp_dir: Mount point inside the container to
            a temporary directory created on the host by the operator.
            The path is also made available via the environment variable
            ``AIRFLOW_TMP_DIR`` inside the container.
        :type tmp_dir: str
        :param user: Default user inside the docker container.
        :type user: int or str
        :param mounts: List of mounts to mount into the container, e.g.
            ``['/host/path:/container/path', '/host/path2:/container/path2:ro']``.
        :type mounts: list
        :param working_dir: Working directory to
            set on the container (equivalent to the -w switch the docker client)
        :type working_dir: str
        :param xcom_all: Push all the stdout or just the last line.
            The default is False (last line).
        :type xcom_all: bool
        :param docker_conn_id: ID of the Airflow connection to use
        :type docker_conn_id: str
        :param dns: Docker custom DNS servers
        :type dns: list[str]
        :param dns_search: Docker custom DNS search domain
        :type dns_search: list[str]
        :param auto_remove: Auto-removal of the container on daemon side when the
            container's process exits.
            The default is False.
        :type auto_remove: bool
        :param shm_size: Size of ``/dev/shm`` in bytes. The size must be
            greater than 0. If omitted uses system default.
        :type shm_size: int
        :param tty: Allocate pseudo-TTY to the container.
            This needs to be set to see logs of the Docker container.
        :type tty: bool
        :param privileged: Give extended privileges to this container.
        :type privileged: bool
        :param cap_add: Include container capabilities
        :type cap_add: list[str]
        """
        ...

Notice that the function does not actually need to return anything, as we only use this class for type checking. Sadly, you will have to duplicate the args, defaults, and types from your real FooOperator in order for them to show up in auto-completion prompts.

Once you have your Mixin class ready, go to airflow/decorators/__init__.py and add a section similar to this:

airflow/decorators/__init__.py[source]

if TYPE_CHECKING:
    try:
        from airflow.providers.docker.decorators.docker import DockerDecoratorMixin

        class _DockerTask(_TaskDecorator, DockerDecoratorMixin):
            pass

        _TaskDecorator = _DockerTask
    except ImportError:
        pass

The if TYPE_CHECKING guard means that this code will only be used for type checking (such as mypy) or generating IDE auto-completion. Catching the ImportError is important, as the provider may not be installed; in that case the mixin is simply skipped.
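At runtime, typing.TYPE_CHECKING is simply False, so nothing inside the guard is executed or imported:

```python
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Seen only by static analyzers (mypy, IDE indexers); never runs.
    import this_module_need_not_exist  # hypothetical name, never imported

runtime_value = TYPE_CHECKING
```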

Once the change is merged and the next Airflow (minor or patch) release comes out, users will be able to see your decorator in IDE auto-complete. This auto-complete will change based on the version of the provider that the user has installed.

Please note that this step is not required to create a working decorator but does create a better experience for users of the provider.
