airflow.providers.amazon.aws.operators.redshift_data

Module Contents

Classes

RedshiftDataOperator

Executes SQL statements against an Amazon Redshift cluster using the Amazon Redshift Data API.

class airflow.providers.amazon.aws.operators.redshift_data.RedshiftDataOperator(database, sql, cluster_identifier=None, db_user=None, parameters=None, secret_arn=None, statement_name=None, with_event=False, await_result=True, poll_interval=10, aws_conn_id='aws_default', region=None, **kwargs)

Bases: airflow.models.BaseOperator

Executes SQL statements against an Amazon Redshift cluster using the Amazon Redshift Data API.

See also

For more information on how to use this operator, take a look at the guide: Execute a statement on an Amazon Redshift cluster

Parameters
  • database (str) – the name of the database

  • sql (str | list) – the SQL statement or list of SQL statements to run

  • cluster_identifier (str | None) – unique identifier of a cluster

  • db_user (str | None) – the database username

  • parameters (list | None) – the parameters for the SQL statement

  • secret_arn (str | None) – the name or ARN of the secret that enables database access

  • statement_name (str | None) – the name of the SQL statement

  • with_event (bool) – indicates whether to send an event to EventBridge

  • await_result (bool) – whether to wait for the statement to finish; if True, poll until the query completes, if False, return as soon as the statement is submitted

  • poll_interval (int) – how often in seconds to check the query status

  • aws_conn_id (str) – the Airflow connection ID used for AWS credentials

  • region (str | None) – the AWS region to use
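A minimal usage sketch (not from the upstream guide): the cluster identifier, database name, db user, and table below are hypothetical placeholders, and aws_default is Airflow's stock AWS connection ID. The parameters list uses the Redshift Data API name/value format, with each parameter referenced as :name inside the SQL.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.amazon.aws.operators.redshift_data import RedshiftDataOperator

    with DAG(
        dag_id="redshift_data_example",  # hypothetical DAG id
        start_date=datetime(2023, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:
        create_table = RedshiftDataOperator(
            task_id="create_table",
            cluster_identifier="my-redshift-cluster",  # hypothetical cluster
            database="dev",
            db_user="awsuser",
            sql="CREATE TABLE IF NOT EXISTS fruit (fruit_id INT, name VARCHAR);",
            aws_conn_id="aws_default",
            poll_interval=10,  # check the statement status every 10 seconds
        )

        insert_row = RedshiftDataOperator(
            task_id="insert_row",
            cluster_identifier="my-redshift-cluster",
            database="dev",
            db_user="awsuser",
            sql="INSERT INTO fruit VALUES (:id, :name);",
            parameters=[
                {"name": "id", "value": "1"},
                {"name": "name", "value": "banana"},
            ],
        )

        create_table >> insert_row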

template_fields = ['cluster_identifier', 'database', 'sql', 'db_user', 'parameters', 'statement_name',...
template_ext = ['.sql']
template_fields_renderers
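Because '.sql' is in template_ext, the sql argument may also be a path to a Jinja-templated SQL file resolved against the DAG's template search path. A sketch, inside a DAG context as above, with a hypothetical file path:

    run_from_file = RedshiftDataOperator(
        task_id="run_from_file",
        cluster_identifier="my-redshift-cluster",  # hypothetical cluster
        database="dev",
        sql="sql/daily_rollup.sql",  # hypothetical file; rendered with Jinja at runtime
    )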
hook()

Create and return a RedshiftDataHook.

execute_query()

Submit the SQL statement via the Redshift Data API and return the statement id.

execute_batch_query()

Submit the list of SQL statements as a batch via the Redshift Data API and return the statement id.

wait_for_results(statement_id)

Poll the status of the submitted statement every poll_interval seconds until it finishes, raising an exception if it fails or is aborted.
execute(context)

Execute a statement against Amazon Redshift.

on_kill()

Cancel the submitted Redshift query.
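With await_result=False, execute() returns right after submission instead of polling via wait_for_results(). A fire-and-forget sketch, assuming execute() returns the statement id (in which case it is available downstream via XCom under the default return_value key); the task id and stored procedure are hypothetical:

    submit_only = RedshiftDataOperator(
        task_id="submit_only",
        cluster_identifier="my-redshift-cluster",  # hypothetical cluster
        database="dev",
        sql="CALL refresh_reporting_tables();",  # hypothetical stored procedure
        await_result=False,  # return immediately after submission, without polling
    )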
