airflow.providers.apache.spark.hooks.spark_submit

Module Contents

class airflow.providers.apache.spark.hooks.spark_submit.SparkSubmitHook(conf: Optional[Dict[str, Any]] = None, conn_id: str = 'spark_default', files: Optional[str] = None, py_files: Optional[str] = None, archives: Optional[str] = None, driver_class_path: Optional[str] = None, jars: Optional[str] = None, java_class: Optional[str] = None, packages: Optional[str] = None, exclude_packages: Optional[str] = None, repositories: Optional[str] = None, total_executor_cores: Optional[int] = None, executor_cores: Optional[int] = None, executor_memory: Optional[str] = None, driver_memory: Optional[str] = None, keytab: Optional[str] = None, principal: Optional[str] = None, proxy_user: Optional[str] = None, name: str = 'default-name', num_executors: Optional[int] = None, status_poll_interval: int = 1, application_args: Optional[List[Any]] = None, env_vars: Optional[Dict[str, Any]] = None, verbose: bool = False, spark_binary: Optional[str] = None)[source]

Bases: airflow.hooks.base.BaseHook, airflow.utils.log.logging_mixin.LoggingMixin

This hook is a wrapper around the spark-submit binary to kick off a spark-submit job. It requires that the "spark-submit" binary is in the PATH or that spark-home is set in the extra field of the connection. A brief usage sketch follows the parameter list below.

Parameters
  • conf (dict) -- Arbitrary Spark configuration properties

  • conn_id (str) -- The Spark connection id as configured in Airflow administration. When an invalid connection_id is supplied, it will default to yarn.

  • files (str) -- Upload additional files to the executor running the job, separated by a comma. Files will be placed in the working directory of each executor. For example, serialized objects.

  • py_files (str) -- Additional python files used by the job, can be .zip, .egg or .py.

  • driver_class_path (str) -- Additional, driver-specific, classpath settings.

  • jars (str) -- Submit additional jars to upload and place them in executor classpath.

  • java_class (str) -- the main class of the Java application

  • packages (str) -- Comma-separated list of maven coordinates of jars to include on the driver and executor classpaths

  • exclude_packages (str) -- Comma-separated list of maven coordinates of jars to exclude while resolving the dependencies provided in 'packages'

  • repositories (str) -- Comma-separated list of additional remote repositories to search for the maven coordinates given with 'packages'

  • total_executor_cores (int) -- (Standalone & Mesos only) Total cores for all executors (Default: all the available cores on the worker)

  • executor_cores (int) -- (Standalone, YARN and Kubernetes only) Number of cores per executor (Default: 2)

  • executor_memory (str) -- Memory per executor (e.g. 1000M, 2G) (Default: 1G)

  • driver_memory (str) -- Memory allocated to the driver (e.g. 1000M, 2G) (Default: 1G)

  • keytab (str) -- Full path to the file that contains the keytab

  • principal (str) -- The name of the kerberos principal used for keytab

  • proxy_user (str) -- User to impersonate when submitting the application

  • name (str) -- Name of the job (default: 'default-name')

  • num_executors (int) -- Number of executors to launch

  • status_poll_interval (int) -- Seconds to wait between polls of driver status in cluster mode (Default: 1)

  • application_args (list) -- Arguments for the application being submitted

  • env_vars (dict) -- Environment variables for spark-submit. Supported in yarn and Kubernetes (k8s) modes as well.

  • verbose (bool) -- Whether to pass the verbose flag to spark-submit process for debugging

  • spark_binary (str) -- The command to use for spark submit. Some distros may use spark2-submit.

  • archives (str) -- Archives that spark should unzip (and possibly tag with #ALIAS) into the application working directory.
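A minimal instantiation sketch, assuming a Spark connection named spark_default already exists in Airflow; the Spark configuration, resource sizes and job name below are illustrative assumptions, not values taken from this page.

    from airflow.providers.apache.spark.hooks.spark_submit import SparkSubmitHook

    # Illustrative values only -- adapt the connection id, Spark conf and
    # resource sizes to your own deployment.
    hook = SparkSubmitHook(
        conn_id="spark_default",                 # existing Spark connection
        conf={"spark.executor.instances": "2"},  # arbitrary Spark properties
        executor_memory="2G",
        driver_memory="1G",
        name="example-spark-job",                # hypothetical job name
        verbose=True,
    )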

conn_name_attr = 'conn_id'[source]
default_conn_name = 'spark_default'[source]
conn_type = 'spark'[source]
hook_name = 'Spark'[source]
static get_ui_field_behaviour()[source]

Returns custom field behaviour

get_conn(self)[source]
submit(self, application: str = '', **kwargs)[source]

Remote Popen to execute the spark-submit job

Parameters
  • application (str) -- Submitted application, jar or py file

  • kwargs -- extra arguments to Popen (see subprocess.Popen)
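A hedged sketch of calling submit(); the application path and the extra keyword argument are assumptions for illustration, and any extra keyword arguments are passed through to subprocess.Popen as described above.

    from airflow.providers.apache.spark.hooks.spark_submit import SparkSubmitHook

    hook = SparkSubmitHook(conn_id="spark_default", name="example-spark-job")

    # The application path is hypothetical; extra keyword arguments are
    # forwarded to subprocess.Popen, e.g. a working directory for the process.
    hook.submit(
        application="/opt/jobs/wordcount.py",
        cwd="/opt/jobs",
    )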

on_kill(self)[source]

Kill the spark-submit command
