airflow.providers.apache.spark.hooks.spark_sql

Module Contents

Classes

SparkSqlHook

This hook is a wrapper around the spark-sql binary; it requires the "spark-sql" binary to be in the PATH.

class airflow.providers.apache.spark.hooks.spark_sql.SparkSqlHook(sql, conf=None, conn_id=default_conn_name, total_executor_cores=None, executor_cores=None, executor_memory=None, keytab=None, principal=None, master=None, name='default-name', num_executors=None, verbose=True, yarn_queue=None)

Bases: airflow.hooks.base.BaseHook

This hook is a wrapper around the spark-sql binary; it requires the "spark-sql" binary to be in the PATH. A minimal usage sketch follows the parameter list below.

Parameters
  • sql (str) – The SQL query to execute

  • conf (str | None) – Arbitrary Spark configuration property

  • conn_id (str) – The connection ID string

  • total_executor_cores (int | None) – (Standalone & Mesos only) Total cores for all executors (Default: all the available cores on the worker)

  • executor_cores (int | None) – (Standalone & YARN only) Number of cores per executor (Default: 2)

  • executor_memory (str | None) – Memory per executor (e.g. 1000M, 2G) (Default: 1G)

  • keytab (str | None) – Full path to the file that contains the keytab

  • principal (str | None) – The name of the Kerberos principal used for the keytab

  • master (str | None) – spark://host:port, mesos://host:port, yarn, or local (Default: The host and port set in the Connection, or "yarn")

  • name (str) – Name of the job.

  • num_executors (int | None) – Number of executors to launch

  • verbose (bool) – Whether to pass the verbose flag to spark-sql

  • yarn_queue (str | None) – The YARN queue to submit to (Default: The queue value set in the Connection, or "default")
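
For orientation, here is a minimal usage sketch (not part of the official API docs): it assumes a configured spark_sql_default connection and the spark-sql binary on the PATH; the table name and resource settings are illustrative.

    from airflow.providers.apache.spark.hooks.spark_sql import SparkSqlHook

    # Illustrative values; "my_table" is a placeholder table name.
    hook = SparkSqlHook(
        sql="SELECT COUNT(*) FROM my_table",
        conn_id="spark_sql_default",
        executor_memory="2G",
        num_executors=4,
        yarn_queue="default",
    )
    hook.run_query()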

conn_name_attr = 'conn_id'
default_conn_name = 'spark_sql_default'
conn_type = 'spark_sql'
hook_name = 'Spark SQL'
classmethod get_ui_field_behaviour()

Return custom UI field behaviour for Spark SQL connection.

classmethod get_connection_form_widgets()

Return connection widgets to add to Spark SQL connection form.

get_conn()

Return connection for the hook.

run_query(cmd='', **kwargs)

Execute the spark-sql query in a subprocess (remote Popen). A usage sketch follows the parameters below.

Parameters
  • cmd (str) – Command to append to the spark-sql command

  • kwargs (Any) – Extra arguments to pass to subprocess.Popen
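
Because kwargs are forwarded to subprocess.Popen, the call can carry standard Popen keyword arguments. A hedged sketch, reusing the hook from the sketch above (the extra CLI flag and environment override are illustrative):

    import os

    # Append a spark-sql CLI argument via cmd; env is a standard
    # subprocess.Popen keyword argument forwarded through **kwargs.
    hook.run_query(
        cmd="--hiveconf hive.exec.dynamic.partition=true",
        env={**os.environ, "SPARK_MAJOR_VERSION": "3"},
    )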

kill()

Kill Spark job.
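
A sketch of a typical cleanup pattern, assuming the hook from the first sketch; it ensures the job is stopped even if the query fails:

    try:
        hook.run_query()
    finally:
        hook.kill()  # stop the spark-sql job if it is still running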
