airflow.providers.apache.druid.transfers.hive_to_druid

This module contains operator to move data from Hive to Druid.

Module Contents

Classes

HiveToDruidOperator

Moves data from Hive to Druid.

Attributes

LOAD_CHECK_INTERVAL

DEFAULT_TARGET_PARTITION_SIZE

airflow.providers.apache.druid.transfers.hive_to_druid.LOAD_CHECK_INTERVAL = 5[source]
airflow.providers.apache.druid.transfers.hive_to_druid.DEFAULT_TARGET_PARTITION_SIZE = 5000000[source]
class airflow.providers.apache.druid.transfers.hive_to_druid.HiveToDruidOperator(*, sql, druid_datasource, ts_dim, metric_spec=None, hive_cli_conn_id='hive_cli_default', druid_ingest_conn_id='druid_ingest_default', metastore_conn_id='metastore_default', hadoop_dependency_coordinates=None, intervals=None, num_shards=-1, target_partition_size=-1, query_granularity='NONE', segment_granularity='DAY', hive_tblproperties=None, job_properties=None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Moves data from Hive to Druid. Note that for now the data is loaded into memory before being pushed to Druid, so this operator should be used for smallish amounts of data. A usage sketch follows the parameter list below.

Parameters
  • sql (str) -- SQL query to execute against the Hive database. (templated)

  • druid_datasource (str) -- the datasource you want to ingest into in druid

  • ts_dim (str) -- the timestamp dimension

  • metric_spec (Optional[List[Any]]) -- the metrics you want to define for your data

  • hive_cli_conn_id (str) -- the hive connection id

  • druid_ingest_conn_id (str) -- the druid ingest connection id

  • metastore_conn_id (str) -- the metastore connection id

  • hadoop_dependency_coordinates (Optional[List[str]]) -- list of coordinates to squeeze into the ingest JSON

  • intervals (Optional[List[Any]]) -- list of time intervals that define the segments; this is passed as-is to the JSON object. (templated)

  • num_shards (float) -- Directly specify the number of shards to create.

  • target_partition_size (int) -- Target number of rows to include in a partition.

  • query_granularity (str) -- The minimum granularity to be able to query results at and the granularity of the data inside the segment. E.g. a value of "minute" will mean that data is aggregated at minutely granularity. That is, if there are collisions in the tuple (minute(timestamp), dimensions), then it will aggregate values together using the aggregators instead of storing individual rows. A granularity of 'NONE' means millisecond granularity.

  • segment_granularity (str) -- The granularity to create time chunks at. Multiple segments can be created per time chunk. For example, with 'DAY' segmentGranularity, the events of the same day fall into the same time chunk which can be optionally further partitioned into multiple segments based on other configurations and input size.

  • hive_tblproperties (Optional[Dict[Any, Any]]) -- additional properties for tblproperties in hive for the staging table

  • job_properties (Optional[Dict[Any, Any]]) -- additional properties for the ingestion job
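
A minimal usage sketch. This is hedged: the DAG id, schedule, table, column, and metric names below are illustrative and not taken from this page; the connection id arguments are omitted so the operator defaults (hive_cli_default, druid_ingest_default, metastore_default) apply.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.druid.transfers.hive_to_druid import HiveToDruidOperator

    # Illustrative DAG; all names are hypothetical.
    with DAG(
        dag_id="hive_to_druid_example",
        start_date=datetime(2021, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:
        load_segments = HiveToDruidOperator(
            task_id="hive_to_druid",
            sql="SELECT ts, country, clicks FROM staging.events",  # hypothetical query
            druid_datasource="events",
            ts_dim="ts",
            metric_spec=[{"type": "longSum", "name": "clicks", "fieldName": "clicks"}],
            intervals=["2021-01-01/2021-01-02"],
            segment_granularity="DAY",
            query_granularity="NONE",
        )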

template_fields: Sequence[str] = ['sql', 'intervals'][source]
template_ext: Sequence[str] = ['.sql'][source]
template_fields_renderers[source]
execute(self, context)[source]

This is the main method to derive when creating an operator. Context is the same dictionary used when rendering Jinja templates.

Refer to get_template_context for more context.

construct_ingest_query(self, static_path, columns)[source]

Builds an ingest query for an HDFS TSV load.

Parameters
  • static_path (str) -- The path on HDFS where the data is located

  • columns (List[str]) -- List of all the columns that are available
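
A hedged sketch of inspecting the generated spec outside of a DAG run. The operator arguments, HDFS path, and column list are illustrative, and the exact layout of the returned dictionary depends on the provider version.

    import json

    from airflow.providers.apache.druid.transfers.hive_to_druid import HiveToDruidOperator

    op = HiveToDruidOperator(
        task_id="inspect_spec",
        sql="SELECT ts, country, clicks FROM staging.events",  # illustrative query
        druid_datasource="events",
        ts_dim="ts",
        metric_spec=[{"type": "longSum", "name": "clicks", "fieldName": "clicks"}],
    )

    # execute() calls construct_ingest_query internally; calling it directly just
    # builds the ingestion spec (a dict) without submitting anything to Druid.
    spec = op.construct_ingest_query(
        static_path="/tmp/airflow_hive_to_druid_staging",  # hypothetical HDFS path
        columns=["ts", "country", "clicks"],
    )
    print(json.dumps(spec, indent=2))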
