airflow.providers.common.ai.decorators.llm

TaskFlow decorator for general-purpose LLM calls.

The user writes a function that returns the prompt string. The decorator handles hook creation, agent configuration, LLM call, and output serialization. When output_type is a Pydantic BaseModel, the result is serialized via model_dump() for XCom.

Functions

llm_task([python_callable])

Wrap a function that returns a prompt into a general-purpose LLM task.

Module Contents

airflow.providers.common.ai.decorators.llm.llm_task(python_callable=None, **kwargs)

Wrap a function that returns a prompt into a general-purpose LLM task.

The decorated function's body constructs the prompt (it can use the Airflow context, XCom, etc.). The decorator handles hook creation, agent configuration, the LLM call, and output serialization.
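Conceptually, the division of labor described above can be sketched in plain Python. This is a minimal illustrative sketch, not the provider's actual implementation: `llm_task_sketch`, `call_llm`, and `fake_llm` are hypothetical names, and the real decorator additionally resolves the connection, configures the agent, and integrates with TaskFlow.

```python
def llm_task_sketch(call_llm, output_type=None):
    """Illustrative stand-in for the decorator: the wrapped function only
    builds the prompt; the wrapper performs the call and serializes output."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            prompt = fn(*args, **kwargs)        # user code returns the prompt string
            result = call_llm(prompt)           # the decorator performs the LLM call
            if output_type is not None and hasattr(result, "model_dump"):
                return result.model_dump()      # Pydantic model -> dict for XCom
            return result
        return wrapper
    return decorator


def fake_llm(prompt):
    # Stand-in for the real hook + model call
    return f"LLM output for: {prompt}"


@llm_task_sketch(fake_llm)
def summarize(text):
    return f"Summarize this article: {text}"


result = summarize("some text")
```

The key point the sketch captures: the function body never talks to the model; it only returns the prompt, and everything after that is the decorator's job.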

Usage:

@task.llm(
    llm_conn_id="openai_default",
    system_prompt="Summarize concisely.",
)
def summarize(text: str):
    return f"Summarize this article: {text}"

With structured output:

@task.llm(
    llm_conn_id="openai_default",
    system_prompt="Extract named entities.",
    output_type=Entities,
)
def extract(text: str):
    return f"Extract entities from: {text}"
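The Entities type in the example above is assumed to be a user-defined Pydantic BaseModel; its fields here are a hypothetical sketch. Because the decorator serializes structured output via model_dump(), the value pushed to XCom is a plain JSON-serializable dict:

```python
from pydantic import BaseModel


class Entities(BaseModel):
    # Hypothetical schema for the structured-output example; define
    # whatever fields your extraction task needs.
    people: list[str]
    organizations: list[str]


e = Entities(people=["Ada Lovelace"], organizations=["Apache Airflow"])
payload = e.model_dump()  # plain dict, suitable for XCom
```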

Parameters:

python_callable (collections.abc.Callable | None) – Function to decorate.