Apache Druid Operators

Prerequisite

To use DruidOperator, you must configure a Druid Connection first.
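
One possible way to supply this connection is through Airflow's environment-variable connection backend (AIRFLOW_CONN_<CONN_ID>). The sketch below is only an illustration: the host, port and endpoint are placeholders for your Overlord, and druid_ingest_default is the default connection id used by DruidOperator.

# A minimal sketch, assuming an Overlord reachable at druid-overlord.example.com:8081.
# In practice this variable would usually be exported in the environment of the
# scheduler and workers; it is shown in Python only for illustration.
import os

os.environ["AIRFLOW_CONN_DRUID_INGEST_DEFAULT"] = (
    "druid://druid-overlord.example.com:8081"    # placeholder host and port
    "?endpoint=druid%2Findexer%2Fv1%2Ftask"      # Overlord task endpoint, stored as a connection extra
)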

DruidOperator

To submit a task directly to Druid, you need to provide the file path to the Druid index specification (json_index_file) and the connection id of the Druid Overlord that accepts index jobs (druid_ingest_conn_id), configured in Airflow Connections.

Example content of a Druid ingestion specification is also shown below.

For parameter definitions, take a look at DruidOperator.

Using the operator

airflow/providers/apache/druid/example_dags/example_druid_dag.py[source]

# Submit the ingestion spec stored in json_index.json to the Druid Overlord.
submit_job = DruidOperator(task_id='druid_ingest_job', json_index_file='json_index.json')
# Example content of json_index.json:
JSON_INDEX_STR = """
    {
        "type": "index_hadoop",
        "datasource": "datasource_prd",
        "spec": {
            "dataSchema": {
                "granularitySpec": {
                    "intervals": ["2021-09-01/2021-09-02"]
                }
            }
        }
    }
"""

Reference

For more information, please refer to the Apache Druid Ingestion spec reference.
