airflow.operators.hive_operator

Module Contents
class airflow.operators.hive_operator.HiveOperator(hql, hive_cli_conn_id='hive_cli_default', schema='default', hiveconfs=None, hiveconf_jinja_translate=False, script_begin_tag=None, run_as_owner=False, mapred_queue=None, mapred_queue_priority=None, mapred_job_name=None, *args, **kwargs)[source]

Bases: airflow.models.BaseOperator
Executes HQL code or a Hive script in a specific Hive database.
Parameters
hql (str) – the hql to be executed. Note that you may also use a relative path from the dag file of a (template) hive script. (templated)
hive_cli_conn_id (str) – reference to the Hive database. (templated)
hiveconfs (dict) – if defined, these key/value pairs will be passed to Hive as -hiveconf "key"="value"
hiveconf_jinja_translate (bool) – when True, hiveconf-style templating ${var} and ${hiveconf:var} gets translated into Jinja-style templating {{ var }}. Note that you may want to use this along with the DAG(user_defined_macros=myargs) parameter. View the DAG object documentation for more details.
script_begin_tag (str) – if defined, the operator strips the part of the script before the first occurrence of script_begin_tag
mapred_queue (str) – queue used by the Hadoop CapacityScheduler. (templated)
mapred_queue_priority (str) – priority within CapacityScheduler queue. Possible settings include: VERY_HIGH, HIGH, NORMAL, LOW, VERY_LOW
mapred_job_name (str) – this name will appear in the JobTracker, which can make monitoring easier.
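A minimal sketch of wiring the operator into a DAG. The DAG id, schedule, connection id, script path, and queue name below are hypothetical, chosen only to illustrate the parameters documented above.

```python
# Hypothetical DAG configuration sketch; names and paths are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.hive_operator import HiveOperator

with DAG(
    dag_id="example_hive",            # hypothetical DAG id
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",
) as dag:
    load_partition = HiveOperator(
        task_id="load_partition",
        hql="hql/load_partition.hql",  # relative, templated Hive script
        hive_cli_conn_id="hive_cli_default",
        schema="default",
        hiveconfs={"ds": "{{ ds }}"},  # passed as -hiveconf "ds"="..."
        mapred_queue="etl",            # hypothetical CapacityScheduler queue
        mapred_queue_priority="NORMAL",
        mapred_job_name="airflow_load_partition_{{ ds }}",
    )
```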
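To make the hiveconf_jinja_translate behavior concrete, the sketch below reimplements the rewrite as a standalone regex substitution. This is an illustrative approximation, not Airflow's actual implementation; the function name translate_hiveconf is an assumption.

```python
import re

def translate_hiveconf(hql):
    """Illustrative sketch of the rewrite that hiveconf_jinja_translate=True
    performs: ${var} and ${hiveconf:var} both become {{ var }}."""
    return re.sub(r"\$\{(?:hiveconf:)?\s*([A-Za-z0-9_]+)\s*\}", r"{{ \1 }}", hql)

print(translate_hiveconf("SELECT * FROM t WHERE ds = '${ds}'"))
# -> SELECT * FROM t WHERE ds = '{{ ds }}'
print(translate_hiveconf("SET x = ${hiveconf:my_var};"))
# -> SET x = {{ my_var }};
```

After translation, the resulting {{ ds }} placeholders are rendered by Airflow's Jinja templating, which is why DAG(user_defined_macros=...) pairs naturally with this flag.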