airflow.providers.yandex.operators.yandexcloud_dataproc

Module Contents

class airflow.providers.yandex.operators.yandexcloud_dataproc.DataprocCreateClusterOperator(*, folder_id: Optional[str] = None, cluster_name: Optional[str] = None, cluster_description: Optional[str] = '', cluster_image_version: Optional[str] = None, ssh_public_keys: Optional[Union[str, Iterable[str]]] = None, subnet_id: Optional[str] = None, services: Iterable[str] = ('HDFS', 'YARN', 'MAPREDUCE', 'HIVE', 'SPARK'), s3_bucket: Optional[str] = None, zone: str = 'ru-central1-b', service_account_id: Optional[str] = None, masternode_resource_preset: Optional[str] = None, masternode_disk_size: Optional[int] = None, masternode_disk_type: Optional[str] = None, datanode_resource_preset: Optional[str] = None, datanode_disk_size: Optional[int] = None, datanode_disk_type: Optional[str] = None, datanode_count: int = 1, computenode_resource_preset: Optional[str] = None, computenode_disk_size: Optional[int] = None, computenode_disk_type: Optional[str] = None, computenode_count: int = 0, computenode_max_hosts_count: Optional[int] = None, computenode_measurement_duration: Optional[int] = None, computenode_warmup_duration: Optional[int] = None, computenode_stabilization_duration: Optional[int] = None, computenode_preemptible: bool = False, computenode_cpu_utilization_target: Optional[int] = None, computenode_decommission_timeout: Optional[int] = None, connection_id: Optional[str] = None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Creates a Yandex.Cloud Data Proc cluster. A usage sketch follows the parameter list.

Parameters
  • folder_id (Optional[str]) -- ID of the folder in which the cluster should be created.

  • cluster_name (Optional[str]) -- Cluster name. Must be unique inside the folder.

  • cluster_description (str) -- Cluster description.

  • cluster_image_version (str) -- Cluster image version. If not specified, the default image version is used.

  • ssh_public_keys (Optional[Union[str, Iterable[str]]]) -- List of SSH public keys that will be deployed to created compute instances.

  • subnet_id (str) -- ID of the subnetwork. All Data Proc cluster nodes will use one subnetwork.

  • services (Iterable[str]) -- List of services that will be installed on the cluster. Possible options: HDFS, YARN, MAPREDUCE, HIVE, TEZ, ZOOKEEPER, HBASE, SQOOP, FLUME, SPARK, ZEPPELIN, OOZIE

  • s3_bucket (Optional[str]) -- Yandex.Cloud S3 bucket to store cluster logs. Jobs will not work if the bucket is not specified.

  • zone (str) -- Availability zone to create cluster in. Currently there are ru-central1-a, ru-central1-b and ru-central1-c.

  • service_account_id (Optional[str]) -- Service account ID for the cluster. The service account can be created inside the folder.

  • masternode_resource_preset (str) -- Resources preset (CPU+RAM configuration) for the primary node of the cluster.

  • masternode_disk_size (int) -- Masternode storage size in GiB.

  • masternode_disk_type (str) -- Masternode storage type. Possible options: network-ssd, network-hdd.

  • datanode_resource_preset (str) -- Resources preset (CPU+RAM configuration) for the data nodes of the cluster.

  • datanode_disk_size (int) -- Datanodes storage size in GiB.

  • datanode_disk_type (str) -- Datanodes storage type. Possible options: network-ssd, network-hdd.

  • computenode_resource_preset (str) -- Resources preset (CPU+RAM configuration) for the compute nodes of the cluster.

  • computenode_disk_size (int) -- Computenodes storage size in GiB.

  • computenode_disk_type (str) -- Computenodes storage type. Possible options: network-ssd, network-hdd.

  • connection_id (Optional[str]) -- ID of the Yandex.Cloud Airflow connection.

  • computenode_max_hosts_count (int) -- Maximum number of nodes in the compute autoscaling subcluster.

  • computenode_warmup_duration (int) -- The warmup time of the instance in seconds. During this time, traffic is sent to the instance, but instance metrics are not collected.

  • computenode_stabilization_duration (int) -- Minimum amount of time in seconds for monitoring before Instance Groups can reduce the number of instances in the group. During this time, the group size doesn't decrease, even if the new metric values indicate that it should.

  • computenode_preemptible (bool) -- Preemptible instances are stopped at least once every 24 hours, and can be stopped at any time if their resources are needed by Compute.

  • computenode_cpu_utilization_target (int) -- Defines an autoscaling rule based on the average CPU utilization of the instance group, as a percentage (10-100). If not set, the default autoscaling strategy is used.

  • computenode_decommission_timeout (int) -- Timeout to gracefully decommission nodes during downscaling. In seconds.

execute(self, context)[source]
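
The example below is a minimal usage sketch, not part of the module itself: the folder, subnet, service-account, bucket and SSH-key values are placeholders, and connection_id is assumed to be the provider's default connection name.

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.yandex.operators.yandexcloud_dataproc import (
        DataprocCreateClusterOperator,
    )

    with DAG(
        dag_id="example_yandexcloud_dataproc",
        start_date=datetime(2021, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:
        create_cluster = DataprocCreateClusterOperator(
            task_id="create_cluster",
            folder_id="your-folder-id",                    # placeholder
            cluster_name="airflow-dataproc",
            subnet_id="your-subnet-id",                    # placeholder
            zone="ru-central1-b",
            s3_bucket="your-log-bucket",                   # required for jobs to run
            service_account_id="your-service-account-id",  # placeholder
            ssh_public_keys="ssh-rsa AAAA... user@host",   # placeholder key
            datanode_count=1,
            computenode_count=0,
            connection_id="yandexcloud_default",           # assumed default connection ID
        )
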
class airflow.providers.yandex.operators.yandexcloud_dataproc.DataprocDeleteClusterOperator(*, connection_id: Optional[str] = None, cluster_id: Optional[str] = None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Deletes a Yandex.Cloud Data Proc cluster. A usage sketch follows the parameter list.

Parameters
  • connection_id (Optional[str]) -- ID of the Yandex.Cloud Airflow connection.

  • cluster_id (Optional[str]) -- ID of the cluster to remove. (templated)

template_fields = ['cluster_id'][source]
execute(self, context)[source]
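
Below is a minimal deletion sketch; the cluster ID and connection ID are placeholders (cluster_id is templated, so a Jinja expression works as well).

    from airflow.providers.yandex.operators.yandexcloud_dataproc import (
        DataprocDeleteClusterOperator,
    )

    # Inside the same DAG as the create_cluster task above.
    delete_cluster = DataprocDeleteClusterOperator(
        task_id="delete_cluster",
        connection_id="yandexcloud_default",  # assumed default connection ID
        cluster_id="your-cluster-id",         # placeholder; templated field
    )
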
class airflow.providers.yandex.operators.yandexcloud_dataproc.DataprocCreateHiveJobOperator(*, query: Optional[str] = None, query_file_uri: Optional[str] = None, script_variables: Optional[Dict[str, str]] = None, continue_on_failure: bool = False, properties: Optional[Dict[str, str]] = None, name: str = 'Hive job', cluster_id: Optional[str] = None, connection_id: Optional[str] = None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Runs a Hive job in a Data Proc cluster. A usage sketch follows the parameter list.

Parameters
  • query (Optional[str]) -- Hive query.

  • query_file_uri (Optional[str]) -- URI of the script that contains Hive queries. Can be placed in HDFS or S3.

  • properties (Optional[Dict[str, str]]) -- A mapping of property names to values, used to configure Hive.

  • script_variables (Optional[Dict[str, str]]) -- Mapping of query variable names to values.

  • continue_on_failure (bool) -- Whether to continue executing queries if a query fails.

  • name (str) -- Name of the job. Used for labeling.

  • cluster_id (Optional[str]) -- ID of the cluster to run the job in. If not specified, the operator will try to take the ID from the Dataproc Hook object. (templated)

  • connection_id (Optional[str]) -- ID of the Yandex.Cloud Airflow connection.

template_fields = ['cluster_id'][source]
execute(self, context)[source]
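
The sketch below shows both ways of submitting a Hive job; the S3 URIs, variables and properties are placeholders. cluster_id is omitted, so the operator falls back to the cluster created earlier in the same DAG, as described above.

    from airflow.providers.yandex.operators.yandexcloud_dataproc import (
        DataprocCreateHiveJobOperator,
    )

    # Inside the same DAG as the create_cluster task above.
    hive_query = DataprocCreateHiveJobOperator(
        task_id="hive_query",
        query="SHOW DATABASES;",  # inline query
    )

    hive_script = DataprocCreateHiveJobOperator(
        task_id="hive_script",
        query_file_uri="s3a://your-bucket/scripts/query.sql",          # placeholder URI
        script_variables={"CITIES_URI": "s3a://your-bucket/cities/"},  # placeholder
        properties={"hive.exec.dynamic.partition.mode": "nonstrict"},
        continue_on_failure=False,
    )
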
class airflow.providers.yandex.operators.yandexcloud_dataproc.DataprocCreateMapReduceJobOperator(*, main_class: Optional[str] = None, main_jar_file_uri: Optional[str] = None, jar_file_uris: Optional[Iterable[str]] = None, archive_uris: Optional[Iterable[str]] = None, file_uris: Optional[Iterable[str]] = None, args: Optional[Iterable[str]] = None, properties: Optional[Dict[str, str]] = None, name: str = 'Mapreduce job', cluster_id: Optional[str] = None, connection_id: Optional[str] = None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Runs a MapReduce job in a Data Proc cluster. A usage sketch follows the parameter list.

Parameters
  • main_jar_file_uri (Optional[str]) -- URI of the JAR file with the job. Can be placed in HDFS or S3. Can be specified instead of main_class.

  • main_class (Optional[str]) -- Name of the main class of the job. Can be specified instead of main_jar_file_uri.

  • file_uris (Optional[Iterable[str]]) -- URIs of files used in the job. Can be placed in HDFS or S3.

  • archive_uris (Optional[Iterable[str]]) -- URIs of archive files used in the job. Can be placed in HDFS or S3.

  • jar_file_uris (Optional[Iterable[str]]) -- URIs of JAR files used in the job. Can be placed in HDFS or S3.

  • properties (Optional[Dict[str, str]]) -- Properties for the job.

  • args (Optional[Iterable[str]]) -- Arguments to be passed to the job.

  • name (str) -- Name of the job. Used for labeling.

  • cluster_id (Optional[str]) -- ID of the cluster to run the job in. If not specified, the operator will try to take the ID from the Dataproc Hook object. (templated)

  • connection_id (Optional[str]) -- ID of the Yandex.Cloud Airflow connection.

template_fields = ['cluster_id'][source]
execute(self, context)[source]
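
A hedged sketch of a Hadoop Streaming job submitted through this operator; all URIs and the mapper/reducer scripts are placeholders.

    from airflow.providers.yandex.operators.yandexcloud_dataproc import (
        DataprocCreateMapReduceJobOperator,
    )

    # Inside the same DAG as the create_cluster task above.
    mapreduce_job = DataprocCreateMapReduceJobOperator(
        task_id="mapreduce_job",
        main_class="org.apache.hadoop.streaming.HadoopStreaming",
        file_uris=[
            "s3a://your-bucket/scripts/mapper.py",   # placeholder URIs
            "s3a://your-bucket/scripts/reducer.py",
        ],
        args=[
            "-mapper", "mapper.py",
            "-reducer", "reducer.py",
            "-input", "s3a://your-bucket/input/",
            "-output", "s3a://your-bucket/output/{{ ds }}",
        ],
        properties={"mapreduce.job.maps": "2"},
    )
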
class airflow.providers.yandex.operators.yandexcloud_dataproc.DataprocCreateSparkJobOperator(*, main_class: Optional[str] = None, main_jar_file_uri: Optional[str] = None, jar_file_uris: Optional[Iterable[str]] = None, archive_uris: Optional[Iterable[str]] = None, file_uris: Optional[Iterable[str]] = None, args: Optional[Iterable[str]] = None, properties: Optional[Dict[str, str]] = None, name: str = 'Spark job', cluster_id: Optional[str] = None, connection_id: Optional[str] = None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Runs a Spark job in a Data Proc cluster. A usage sketch follows the parameter list.

Parameters
  • main_jar_file_uri (Optional[str]) -- URI of the JAR file with the job. Can be placed in HDFS or S3.

  • main_class (Optional[str]) -- Name of the main class of the job.

  • file_uris (Optional[Iterable[str]]) -- URIs of files used in the job. Can be placed in HDFS or S3.

  • archive_uris (Optional[Iterable[str]]) -- URIs of archive files used in the job. Can be placed in HDFS or S3.

  • jar_file_uris (Optional[Iterable[str]]) -- URIs of JAR files used in the job. Can be placed in HDFS or S3.

  • properties (Optional[Dict[str, str]]) -- Properties for the job.

  • args (Optional[Iterable[str]]) -- Arguments to be passed to the job.

  • name (str) -- Name of the job. Used for labeling.

  • cluster_id (Optional[str]) -- ID of the cluster to run the job in. If not specified, the operator will try to take the ID from the Dataproc Hook object. (templated)

  • connection_id (Optional[str]) -- ID of the Yandex.Cloud Airflow connection.

template_fields = ['cluster_id'][source]
execute(self, context)[source]
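
A minimal sketch of submitting a Spark application; the JAR URI, main class and arguments are placeholders for your own job.

    from airflow.providers.yandex.operators.yandexcloud_dataproc import (
        DataprocCreateSparkJobOperator,
    )

    # Inside the same DAG as the create_cluster task above.
    spark_job = DataprocCreateSparkJobOperator(
        task_id="spark_job",
        main_jar_file_uri="s3a://your-bucket/jobs/spark-app.jar",  # placeholder
        main_class="com.example.SparkApp",                         # placeholder
        args=["s3a://your-bucket/input/", "s3a://your-bucket/output/{{ ds }}"],
        properties={"spark.submit.deployMode": "cluster"},
    )
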
class airflow.providers.yandex.operators.yandexcloud_dataproc.DataprocCreatePysparkJobOperator(*, main_python_file_uri: Optional[str] = None, python_file_uris: Optional[Iterable[str]] = None, jar_file_uris: Optional[Iterable[str]] = None, archive_uris: Optional[Iterable[str]] = None, file_uris: Optional[Iterable[str]] = None, args: Optional[Iterable[str]] = None, properties: Optional[Dict[str, str]] = None, name: str = 'Pyspark job', cluster_id: Optional[str] = None, connection_id: Optional[str] = None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Runs a PySpark job in a Data Proc cluster. A usage sketch follows the parameter list.

Parameters
  • main_python_file_uri (Optional[str]) -- URI of the Python file with the job. Can be placed in HDFS or S3.

  • python_file_uris (Optional[Iterable[str]]) -- URIs of python files used in the job. Can be placed in HDFS or S3.

  • file_uris (Optional[Iterable[str]]) -- URIs of files used in the job. Can be placed in HDFS or S3.

  • archive_uris (Optional[Iterable[str]]) -- URIs of archive files used in the job. Can be placed in HDFS or S3.

  • jar_file_uris (Optional[Iterable[str]]) -- URIs of JAR files used in the job. Can be placed in HDFS or S3.

  • properties (Optional[Dict[str, str]]) -- Properties for the job.

  • args (Optional[Iterable[str]]) -- Arguments to be passed to the job.

  • name (str) -- Name of the job. Used for labeling.

  • cluster_id (Optional[str]) -- ID of the cluster to run the job in. If not specified, the operator will try to take the ID from the Dataproc Hook object. (templated)

  • connection_id (Optional[str]) -- ID of the Yandex.Cloud Airflow connection.

template_fields = ['cluster_id'][source]
execute(self, context)[source]
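
A minimal sketch of submitting a PySpark job; the Python file URIs and arguments are placeholders for your own code.

    from airflow.providers.yandex.operators.yandexcloud_dataproc import (
        DataprocCreatePysparkJobOperator,
    )

    # Inside the same DAG as the create_cluster task above.
    pyspark_job = DataprocCreatePysparkJobOperator(
        task_id="pyspark_job",
        main_python_file_uri="s3a://your-bucket/jobs/main.py",   # placeholder
        python_file_uris=["s3a://your-bucket/jobs/helpers.py"],  # placeholder
        args=["--date", "{{ ds }}"],
        properties={"spark.pyspark.python": "python3"},
    )
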
