Databricks Connection

The Databricks connection type enables the Databricks and Databricks SQL integrations.

Authenticating to Databricks

There are several ways to connect to Databricks using Airflow.

  1. Use a Personal Access Token (PAT), i.e. add a token to the Airflow connection. This is the recommended method.

  2. Use Databricks login credentials, i.e. add the username and password used to log in to the Databricks account to the Airflow connection. Note that username/password authentication is discouraged and is not supported by DatabricksSqlOperator.

  3. Use an Azure Active Directory (AAD) token generated from an Azure Service Principal's ID and secret (only on Azure Databricks). The service principal can be defined as a user inside the workspace, or outside the workspace with Owner or Contributor permissions.

  4. Use an Azure Active Directory (AAD) token obtained for an Azure managed identity, when Airflow runs on an Azure VM with an assigned managed identity (system-assigned or user-assigned).
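The four methods above differ only in which Airflow connection fields they populate. A minimal sketch of that mapping (every value is a placeholder, not a real credential):

```python
# A sketch of which Airflow connection fields each authentication
# method uses. Every value below is a placeholder, not a real credential.
auth_methods = {
    # 1. Personal Access Token (recommended)
    "pat": {"login": "token", "password": "<personal-access-token>"},
    # 2. Databricks login credentials (discouraged)
    "login_password": {"login": "<username>", "password": "<password>"},
    # 3. AAD token from a Service Principal (Azure Databricks only)
    "aad_service_principal": {
        "login": "<service-principal-id>",
        "password": "<service-principal-secret>",
        "extra": {"azure_tenant_id": "<tenant-id>"},
    },
    # 4. AAD token for an Azure managed identity
    "aad_managed_identity": {
        "extra": {"use_azure_managed_identity": True},
    },
}
```

The sections below describe each of these fields in detail.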

Default Connection IDs

Hooks and operators related to Databricks use databricks_default by default.

Configuring the Connection

Host (required)

Specify the Databricks workspace URL.

Login (optional)
  • If authentication with Databricks login credentials is used, specify the username used to log in to Databricks.

  • If authentication with an Azure Service Principal is used, specify the ID of the Azure Service Principal.

Password (optional)
  • If authentication with Databricks login credentials is used, specify the password used to log in to Databricks.

  • If authentication with an Azure Service Principal is used, specify the secret of the Azure Service Principal.

  • If authentication with a PAT is used, specify the PAT here and set the Login to token (recommended).

Extra (optional)

Specify the extra parameters (as a JSON dictionary) that can be used in the Databricks connection.

The following parameter should be used with the PAT authentication method:

  • token: Specify the PAT to use. Note, the PAT can be specified either in the Password field or here as the token value in Extra.
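For illustration, the Extra field for PAT authentication could look like the sketch below (the token value is a placeholder):

```python
import json

# Placeholder PAT -- in practice, prefer putting the token in the
# Password field so it is stored as a secret.
extra = json.dumps({"token": "dapiXXXXXXXXXXXXXXXX"})
print(extra)
```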

The following parameters are necessary when authenticating with an AAD token:

  • azure_tenant_id: ID of the Azure Active Directory tenant.

  • azure_resource_id: optional Resource ID of the Azure Databricks workspace (required if the Service Principal isn't a user inside the workspace).

  • azure_ad_endpoint: optional host name of the Azure AD endpoint if you're using a sovereign Azure cloud (GovCloud, China, Germany). The value must contain a protocol. For example: https://login.microsoftonline.de.
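Putting those parameters together, the Extra JSON for Service Principal authentication might look like this sketch (all identifiers are placeholders):

```python
import json

# All identifiers below are placeholders.
extra = {
    "azure_tenant_id": "00000000-0000-0000-0000-000000000000",
    # Needed only if the Service Principal is not a user inside the workspace:
    "azure_resource_id": "/subscriptions/<sub-id>/resourceGroups/<rg>"
                         "/providers/Microsoft.Databricks/workspaces/<ws-name>",
    # Needed only for sovereign Azure clouds; must include the protocol:
    "azure_ad_endpoint": "https://login.microsoftonline.de",
}
print(json.dumps(extra))
```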

The following parameters are necessary when authenticating with an AAD token for an Azure managed identity:

  • use_azure_managed_identity: required boolean flag specifying that a managed identity should be used instead of a service principal.

  • azure_resource_id: optional Resource ID of the Azure Databricks workspace (required if the managed identity isn't a user inside the workspace).
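A sketch of the corresponding Extra JSON (the resource ID is a placeholder):

```python
import json

extra = {
    "use_azure_managed_identity": True,
    # Needed only if the managed identity is not a user inside the workspace:
    "azure_resource_id": "/subscriptions/<sub-id>/resourceGroups/<rg>"
                         "/providers/Microsoft.Databricks/workspaces/<ws-name>",
}
print(json.dumps(extra))
```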

The following parameters can be set when using DatabricksSqlOperator:

  • http_path: optional HTTP path of the Databricks SQL endpoint or Databricks cluster.

  • session_configuration: optional map containing Spark session configuration parameters.

  • other keyword arguments accepted by the Connection object of the databricks-sql-connector package.
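As an illustration, an Extra JSON for DatabricksSqlOperator might look like the sketch below (the warehouse path and session settings are hypothetical):

```python
import json

extra = {
    # Hypothetical SQL warehouse path:
    "http_path": "/sql/1.0/warehouses/<warehouse-id>",
    # Hypothetical Spark session settings:
    "session_configuration": {"spark.sql.shuffle.partitions": "8"},
}
print(json.dumps(extra))
```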

When specifying the connection in an environment variable, use URI syntax.

Note that all components of the URI should be URL-encoded.

For example:

export AIRFLOW_CONN_DATABRICKS_DEFAULT='databricks://@host-url?token=yourtoken'
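Since reserved characters in any component must be percent-encoded, the URI can also be built programmatically; a sketch with placeholder values:

```python
from urllib.parse import quote

# Placeholder workspace URL and token (the token deliberately contains
# reserved characters to show the encoding).
host = "dbc-1234abcd-5678.cloud.databricks.com"
token = "dapi//abc+def"

# URL-encode the token before placing it in the URI query string.
uri = f"databricks://@{host}?token={quote(token, safe='')}"
print(uri)
# databricks://@dbc-1234abcd-5678.cloud.databricks.com?token=dapi%2F%2Fabc%2Bdef
```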
