airflow.providers.databricks.hooks.databricks_sql

Module Contents

Classes

- DatabricksSqlHook – Hook to interact with Databricks SQL.

Attributes
- airflow.providers.databricks.hooks.databricks_sql.LIST_SQL_ENDPOINTS_ENDPOINT = ('GET', 'api/2.0/sql/endpoints')
- class airflow.providers.databricks.hooks.databricks_sql.DatabricksSqlHook(databricks_conn_id=BaseDatabricksHook.default_conn_name, http_path=None, sql_endpoint_name=None, session_configuration=None, http_headers=None, catalog=None, schema=None, caller='DatabricksSqlHook', return_tuple=False, **kwargs)

  Bases: airflow.providers.databricks.hooks.databricks_base.BaseDatabricksHook, airflow.providers.common.sql.hooks.sql.DbApiHook
Hook to interact with Databricks SQL.
- Parameters
  databricks_conn_id (str) – Reference to the Databricks connection.
  http_path (str | None) – Optional string specifying the HTTP path of the Databricks SQL endpoint or cluster. If not specified, it must either be set in the Databricks connection's extra parameters, or sql_endpoint_name must be specified.
  sql_endpoint_name (str | None) – Optional name of the Databricks SQL endpoint. If not specified, http_path must be provided as described above.
  session_configuration (dict[str, str] | None) – An optional dictionary of Spark session parameters. Defaults to None. If not specified, it can be set in the Databricks connection's extra parameters.
  http_headers (list[tuple[str, str]] | None) – An optional list of (k, v) pairs that will be set as HTTP headers on every request.
  catalog (str | None) – An optional initial catalog to use. Requires DBR version 9.0+.
  schema (str | None) – An optional initial schema to use. Requires DBR version 9.0+.
  return_tuple (bool) – Return a namedtuple object instead of a databricks.sql.Row object. Defaults to False. In a future release of the provider this will become True by default; the parameter exists for backward compatibility during the transition to common tuple objects for all hooks based on DbApiHook, and the flag will be removed in a future release.
  kwargs – Additional parameters passed through to the Databricks SQL Connector.
- run(sql: str | Iterable[str], autocommit: bool = ..., parameters: Iterable | Mapping[str, Any] | None = ..., handler: None = ..., split_statements: bool = ..., return_last: bool = ...) → None
- run(sql: str | Iterable[str], autocommit: bool = ..., parameters: Iterable | Mapping[str, Any] | None = ..., handler: Callable[[Any], T] = ..., split_statements: bool = ..., return_last: bool = ...) → tuple | list[tuple] | list[list[tuple] | tuple] | None
Run a command or a list of commands.
Pass a list of SQL statements to the sql parameter to have them executed sequentially.
- Parameters
sql – the SQL statement to be executed (str) or a list of SQL statements to execute
autocommit – What to set the connection's autocommit setting to before executing the query. Note that currently there is no commit functionality in Databricks SQL, so this flag has no effect.
parameters – The parameters to render the SQL query with.
handler – The result handler, which is called with the result of each statement.
split_statements – Whether to split a single SQL string into statements and run them separately.
return_last – Whether to return the result of only the last statement, or of all statements after the split.
- Returns
  Only the result of the last SQL expression if a handler was provided, unless return_last is set to False, in which case the results of all statements are returned. If no handler was provided, None is returned.
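A hedged sketch of calling run with a result handler; fetch_all_handler comes from the common SQL provider, and the connection id, catalog, table, and statements are illustrative assumptions:

```python
from airflow.providers.common.sql.hooks.sql import fetch_all_handler
from airflow.providers.databricks.hooks.databricks_sql import DatabricksSqlHook

# Illustrative connection id; http_path may instead come from the
# connection's extra parameters, as described above.
hook = DatabricksSqlHook(databricks_conn_id="databricks_default")

# The statements run sequentially; fetch_all_handler collects the rows
# of each one. With return_last=True (the default), only the rows of
# the final SELECT are returned.
rows = hook.run(
    sql=[
        "USE CATALOG main",
        "SELECT * FROM default.some_table LIMIT 10",
    ],
    handler=fetch_all_handler,
    return_last=True,
)
```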