airflow.providers.microsoft.azure.operators.synapse

Module Contents

Classes

AzureSynapseRunSparkBatchOperator

Executes a Spark job on Azure Synapse.

class airflow.providers.microsoft.azure.operators.synapse.AzureSynapseRunSparkBatchOperator(*, azure_synapse_conn_id=AzureSynapseHook.default_conn_name, wait_for_termination=True, spark_pool='', payload, timeout=60 * 60 * 24 * 7, check_interval=60, **kwargs)

Bases: airflow.models.BaseOperator

Executes a Spark job on Azure Synapse.

Parameters
  • azure_synapse_conn_id (str) – The connection identifier for connecting to Azure Synapse.

  • wait_for_termination (bool) – Flag to wait on a job run’s termination.

  • spark_pool (str) – The name of the target Synapse Spark pool to which the job is submitted.

  • payload (azure.synapse.spark.models.SparkBatchJobOptions) – Livy-compatible payload that defines the Spark job to submit.

  • timeout (int) – Time in seconds to wait for a job to reach a terminal status for non-asynchronous waits. Used only if wait_for_termination is True.

  • check_interval (int) – Time in seconds to check on a job run’s status for non-asynchronous waits. Used only if wait_for_termination is True.
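
A minimal usage sketch follows. The DAG id, Spark pool name, storage paths, and payload values are all illustrative, and Airflow 2.4+ is assumed for the schedule argument. The payload is written here as a plain Livy-style dict for brevity; the documented parameter type is azure.synapse.spark.models.SparkBatchJobOptions:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.microsoft.azure.operators.synapse import (
        AzureSynapseRunSparkBatchOperator,
    )

    # Livy-style batch definition; every value below is a placeholder.
    SPARK_JOB_PAYLOAD = {
        "name": "wordcount",
        "file": "abfss://spark@myaccount.dfs.core.windows.net/wordcount.py",
        "args": ["abfss://spark@myaccount.dfs.core.windows.net/input.txt"],
        "driverMemory": "4g",
        "driverCores": 2,
        "executorMemory": "4g",
        "executorCores": 2,
        "numExecutors": 2,
    }

    with DAG(
        dag_id="example_synapse_spark_job",
        start_date=datetime(2024, 1, 1),
        schedule=None,
        catchup=False,
    ) as dag:
        run_spark_job = AzureSynapseRunSparkBatchOperator(
            task_id="run_spark_job",
            # azure_synapse_conn_id defaults to "azure_synapse_default"
            spark_pool="mysparkpool",   # target Synapse Spark pool
            payload=SPARK_JOB_PAYLOAD,  # job definition submitted via Livy
        )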

template_fields: Sequence[str] = ('azure_synapse_conn_id', 'spark_pool')
template_fields_renderers
ui_color = '#0678d4'
execute(context)

Derive this method when creating an operator; it contains the task's main logic. The context argument is the same dictionary used when rendering Jinja templates.

Refer to get_template_context for more context.
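
For illustration, a minimal sketch of a custom operator deriving execute() (the class and its logic are hypothetical, not part of this provider):

    from airflow.models import BaseOperator


    class EchoOperator(BaseOperator):
        """Hypothetical operator that logs its logical date."""

        def execute(self, context):
            # `context` is the same dict available when rendering Jinja
            # templates; `ds` is the logical date stamp, for example.
            self.log.info("Running for logical date %s", context["ds"])
            return "done"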

on_kill()

Override this method to clean up subprocesses when a task instance gets killed. Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up, or it will leave ghost processes behind.
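
As a sketch of this cleanup pattern (the operator and its subprocess handling below are illustrative assumptions, not this operator's actual implementation, which cancels the remote Synapse job run):

    import signal
    import subprocess

    from airflow.models import BaseOperator


    class ShellSleepOperator(BaseOperator):
        """Hypothetical operator that runs a long shell command."""

        def execute(self, context):
            self.proc = subprocess.Popen(["sleep", "300"])
            self.proc.wait()

        def on_kill(self) -> None:
            # Terminate the child process so no ghost process is left
            # behind when the task instance is killed.
            if getattr(self, "proc", None) and self.proc.poll() is None:
                self.proc.send_signal(signal.SIGTERM)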
