airflow.providers.databricks.hooks.databricks_sql

Module Contents

Classes

DatabricksSqlHook

Hook to interact with Databricks SQL.

Attributes

LIST_SQL_ENDPOINTS_ENDPOINT

airflow.providers.databricks.hooks.databricks_sql.LIST_SQL_ENDPOINTS_ENDPOINT = ('GET', 'api/2.0/sql/endpoints')
class airflow.providers.databricks.hooks.databricks_sql.DatabricksSqlHook(databricks_conn_id=BaseDatabricksHook.default_conn_name, http_path=None, sql_endpoint_name=None, session_configuration=None, http_headers=None, catalog=None, schema=None, caller='DatabricksSqlHook', **kwargs)

Bases: airflow.providers.databricks.hooks.databricks_base.BaseDatabricksHook, airflow.providers.common.sql.hooks.sql.DbApiHook

Hook to interact with Databricks SQL.

Parameters
  • databricks_conn_id (str) – Reference to the Databricks connection.

  • http_path (str | None) – Optional string specifying HTTP path of Databricks SQL Endpoint or cluster. If not specified, it should be either specified in the Databricks connection’s extra parameters, or sql_endpoint_name must be specified.

  • sql_endpoint_name (str | None) – Optional name of Databricks SQL Endpoint. If not specified, http_path must be provided as described above.

  • session_configuration (dict[str, str] | None) – An optional dictionary of Spark session parameters. Defaults to None. If not specified here, it can be specified in the Databricks connection’s extra parameters.

  • http_headers (list[tuple[str, str]] | None) – An optional list of (k, v) pairs that will be set as HTTP headers on every request.

  • catalog (str | None) – An optional initial catalog to use. Requires DBR version 9.0+.

  • schema (str | None) – An optional initial schema to use. Requires DBR version 9.0+.

  • kwargs – Additional parameters passed through to the Databricks SQL Connector.
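
A minimal construction sketch. The HTTP path, session setting, and header value below are illustrative placeholders, not defaults; substitute values from your own workspace:

    from airflow.providers.databricks.hooks.databricks_sql import DatabricksSqlHook

    # All literal values below are placeholders for illustration only.
    hook = DatabricksSqlHook(
        databricks_conn_id="databricks_default",
        http_path="/sql/1.0/endpoints/1234567890abcdef",  # or pass sql_endpoint_name instead
        session_configuration={"spark.sql.shuffle.partitions": "8"},
        http_headers=[("X-Request-Source", "airflow")],
        catalog="main",    # requires DBR 9.0+
        schema="default",  # requires DBR 9.0+
    )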

hook_name = 'Databricks SQL'
get_conn()

Returns a Databricks SQL connection object.
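
Because the connection follows the Python DB API and the Databricks SQL Connector supports the context-manager protocol, it can be used directly with a cursor. A short sketch, assuming a hook constructed as above:

    # Open a connection and cursor, run a trivial query, and print the result.
    with hook.get_conn() as conn:
        with conn.cursor() as cur:
            cur.execute("SELECT 1")
            print(cur.fetchone())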

run(sql, autocommit=False, parameters=None, handler=None, split_statements=True, return_last=True)

Runs a command or a list of commands. Pass a list of SQL statements to the sql parameter to have them executed sequentially.

Parameters
  • sql (str | Iterable[str]) – the SQL statement to be executed (str) or a list of SQL statements to execute

  • autocommit (bool) – What to set the connection’s autocommit setting to before executing the query. Note that currently there is no commit functionality in Databricks SQL, so this flag has no effect.

  • parameters (Iterable | Mapping | None) – The parameters to render the SQL query with.

  • handler (Callable | None) – The result handler which is called with the result of each statement.

  • split_statements (bool) – Whether to split a single SQL string into statements and run them separately

  • return_last (bool) – Whether to return the result of only the last statement or of all statements after the split

Returns

The result of only the last SQL statement if handler is provided, unless return_last is set to False, in which case the results of all statements are returned as a list; None if no handler is provided.

Return type

Any | list[Any] | None
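
A sketch of executing several statements and collecting every result, assuming the fetch_all_handler helper from the common SQL provider; the statements themselves are illustrative:

    from airflow.providers.common.sql.hooks.sql import fetch_all_handler

    # With return_last=False, the results of all statements are returned
    # as a list rather than only the result of the last one.
    results = hook.run(
        sql=["SELECT 1", "SELECT current_timestamp()"],
        handler=fetch_all_handler,
        return_last=False,
    )
    for rows in results:
        print(rows)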

abstract bulk_dump(table, tmp_file)

Dumps a database table into a tab-delimited file.

Parameters
  • table – The name of the source table

  • tmp_file – The path of the target file

abstract bulk_load(table, tmp_file)

Loads a tab-delimited file into a database table.

Parameters
  • table – The name of the target table

  • tmp_file – The path of the file to load into the table
