airflow.providers.google.cloud.operators.dataflow

This module contains Google Dataflow operators.

Module Contents

Classes

CheckJobRunning

Helper enum for choosing what to do if job is already running

DataflowConfiguration

Dataflow configuration that can be passed to BeamRunJavaPipelineOperator and BeamRunPythonPipelineOperator.

DataflowCreateJavaJobOperator

Start a Java Cloud Dataflow batch job. The parameters of the operation will be passed to the job.

DataflowTemplatedJobStartOperator

Start a Templated Cloud Dataflow job. The parameters of the operation will be passed to the job.

DataflowStartFlexTemplateOperator

Starts flex templates with the Dataflow pipeline.

DataflowStartSqlJobOperator

Starts Dataflow SQL query.

DataflowCreatePythonJobOperator

Launching Cloud Dataflow jobs written in Python.

class airflow.providers.google.cloud.operators.dataflow.CheckJobRunning[source]

Bases: enum.Enum

Helper enum for choosing what to do if a job is already running: IgnoreJob - do not check if running; FinishIfRunning - finish the current dag run with no action; WaitForRun - wait for the job to finish and then continue with the new job.

IgnoreJob = 1[source]
FinishIfRunning = 2[source]
WaitForRun = 3[source]
class airflow.providers.google.cloud.operators.dataflow.DataflowConfiguration(*, job_name='{{task.task_id}}', append_job_name=True, project_id=None, location=DEFAULT_DATAFLOW_LOCATION, gcp_conn_id='google_cloud_default', delegate_to=None, poll_sleep=10, impersonation_chain=None, drain_pipeline=False, cancel_timeout=5 * 60, wait_until_finished=None, multiple_jobs=None, check_if_running=CheckJobRunning.WaitForRun, service_account=None)[source]

Dataflow configuration that can be passed to BeamRunJavaPipelineOperator and BeamRunPythonPipelineOperator.
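
For example, a configuration object can be created and handed to a Beam operator through its dataflow_config argument. A minimal sketch, assuming the apache.beam provider is installed; the bucket, project and pipeline file names are illustrative:

from airflow.providers.apache.beam.operators.beam import BeamRunPythonPipelineOperator
from airflow.providers.google.cloud.operators.dataflow import DataflowConfiguration

start_python_job = BeamRunPythonPipelineOperator(
    task_id="start_python_job",
    runner="DataflowRunner",
    py_file="gs://my-bucket/pipelines/my_pipeline.py",  # illustrative path
    pipeline_options={"tempLocation": "gs://my-bucket/tmp/"},
    dataflow_config=DataflowConfiguration(
        job_name="{{task.task_id}}",
        project_id="my-gcp-project",  # illustrative project
        location="us-central1",
        wait_until_finished=True,
    ),
    dag=my_dag,
)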

Parameters
  • job_name (str) -- The 'jobName' to use when executing the Dataflow job (templated). This ends up being set in the pipeline options, so any entry with key 'jobName' or 'job_name' in options will be overwritten.

  • append_job_name (bool) -- True if a unique suffix has to be appended to the job name.

  • project_id (Optional[str]) -- Optional, the Google Cloud project ID in which to start a job. If set to None or missing, the default project_id from the Google Cloud connection is used.

  • location (Optional[str]) -- Job location.

  • gcp_conn_id (str) -- The connection ID to use connecting to Google Cloud.

  • delegate_to (Optional[str]) -- The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

  • poll_sleep (int) -- The time in seconds to sleep between polling Google Cloud Platform for the dataflow job status while the job is in the JOB_STATE_RUNNING state.

  • impersonation_chain (Optional[Union[str, Sequence[str]]]) -- Optional service account to impersonate using short-term credentials, or chained list of accounts required to get the access_token of the last account in the list, which will be impersonated in the request. If set as a string, the account must grant the originating account the Service Account Token Creator IAM role. If set as a sequence, the identities from the list must grant Service Account Token Creator IAM role to the directly preceding identity, with first account from the list granting this role to the originating account (templated).

  • drain_pipeline (bool) -- Optional, set to True if you want to stop a streaming job by draining it instead of canceling it when the task instance is killed. See: https://cloud.google.com/dataflow/docs/guides/stopping-a-pipeline

  • cancel_timeout (Optional[int]) -- How long (in seconds) the operator should wait for the pipeline to be successfully cancelled when the task is being killed. (optional) defaults to 300s

  • wait_until_finished (Optional[bool]) --

    (Optional) If True, wait for the end of pipeline execution before exiting. If False, only submits job. If None, default behavior.

    The default behavior depends on the type of pipeline:

    • for the streaming pipeline, wait for jobs to start,

    • for the batch pipeline, wait for the jobs to complete.

    Warning

    You cannot call the PipelineResult.wait_until_finish method in your pipeline code for the operator to work properly, i.e. you must use asynchronous execution. Otherwise, your pipeline will always wait until finished. For more information, look at: Asynchronous execution. A minimal pipeline sketch is shown after this parameter list.

    The process of starting the Dataflow job in Airflow consists of two steps:

    • running a subprocess and reading the stdout/stderr log for the job id.

    • waiting in a loop for the job with that ID to end. This loop checks the status of the job.

    Step two starts just after step one has finished, so if you have wait_until_finish in your pipeline code, step two will not start until the process stops. When this process stops, step two will run, but it will only execute one iteration as the job will already be in a terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=True to the operator, the second loop will wait for the job's terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=False to the operator, the second loop will check once whether the job is in a terminal state and exit the loop.

  • multiple_jobs (Optional[bool]) -- If the pipeline creates multiple jobs, then monitor all of them. Supported only by BeamRunJavaPipelineOperator

  • check_if_running (CheckJobRunning) -- Before running the job, validate that a previous run is not in progress. IgnoreJob = do not check if running. FinishIfRunning = if the job is running, finish with no action. WaitForRun = wait until the job finishes and then run the new job. Supported only by: BeamRunJavaPipelineOperator

  • service_account (Optional[str]) -- Run the job as a specific service account, instead of the default GCE robot.
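
The asynchronous execution mentioned in the wait_until_finished warning above means the pipeline code submits the job and returns without blocking. A minimal sketch of such a Beam pipeline (the transforms are purely illustrative):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def run(argv=None):
    pipeline_options = PipelineOptions(argv)
    pipeline = beam.Pipeline(options=pipeline_options)
    (
        pipeline
        | "Create" >> beam.Create(["a", "b", "c"])
        | "Print" >> beam.Map(print)
    )
    # Submit the job and return immediately; do NOT call
    # result.wait_until_finish() here - the operator polls the job itself.
    pipeline.run()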

template_fields :Sequence[str] = ['job_name', 'location'][source]
class airflow.providers.google.cloud.operators.dataflow.DataflowCreateJavaJobOperator(*, jar, job_name='{{task.task_id}}', dataflow_default_options=None, options=None, project_id=None, location=DEFAULT_DATAFLOW_LOCATION, gcp_conn_id='google_cloud_default', delegate_to=None, poll_sleep=10, job_class=None, check_if_running=CheckJobRunning.WaitForRun, multiple_jobs=False, cancel_timeout=10 * 60, wait_until_finished=None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Start a Java Cloud Dataflow batch job. The parameters of the operation will be passed to the job.

This class is deprecated. Please use providers.apache.beam.operators.beam.BeamRunJavaPipelineOperator.

Example:

default_args = {
    "owner": "airflow",
    "depends_on_past": False,
    "start_date": (2016, 8, 1),
    "email": ["alex@vanboxel.be"],
    "email_on_failure": False,
    "email_on_retry": False,
    "retries": 1,
    "retry_delay": timedelta(minutes=30),
    "dataflow_default_options": {
        "project": "my-gcp-project",
        "zone": "us-central1-f",
        "stagingLocation": "gs://bucket/tmp/dataflow/staging/",
    },
}

dag = DAG("test-dag", default_args=default_args)

task = DataflowCreateJavaJobOperator(
    gcp_conn_id="gcp_default",
    task_id="normalize-cal",
    jar="{{var.value.gcp_dataflow_base}}pipeline-ingress-cal-normalize-1.0.jar",
    options={
        "autoscalingAlgorithm": "BASIC",
        "maxNumWorkers": "50",
        "start": "{{ds}}",
        "partitionType": "DAY",
    },
    dag=dag,
)

See also

For more detail on job submission have a look at the reference: https://cloud.google.com/dataflow/pipelines/specifying-exec-params

See also

For more information on how to use this operator, take a look at the guide: Java SDK pipelines

Parameters
  • jar (str) -- The reference to a self executing Dataflow jar (templated).

  • job_name (str) -- The 'jobName' to use when executing the Dataflow job (templated). This ends up being set in the pipeline options, so any entry with key 'jobName' in options will be overwritten.

  • dataflow_default_options (Optional[dict]) -- Map of default job options.

  • options (Optional[dict]) --

    Map of job specific options. It must be a dictionary; the values of the entries can be of different types:

    • If the value is None, the single option --key (without a value) will be added.

    • If the value is False, this option will be skipped.

    • If the value is True, the single option --key (without a value) will be added.

    • If the value is a list, the option will be repeated for each element. For example, if the value is ['A', 'B'] and the key is key, the options --key=A --key=B will be added.

    • Other value types will be replaced with their Python textual representation.

    When defining labels (labels option), you can also provide a dictionary.

  • project_id (Optional[str]) -- Optional, the Google Cloud project ID in which to start a job. If set to None or missing, the default project_id from the Google Cloud connection is used.

  • location (str) -- Job location.

  • gcp_conn_id (str) -- The connection ID to use connecting to Google Cloud.

  • delegate_to (Optional[str]) -- The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

  • poll_sleep (int) -- The time in seconds to sleep between polling Google Cloud Platform for the dataflow job status while the job is in the JOB_STATE_RUNNING state.

  • job_class (Optional[str]) -- The name of the dataflow job class to be executed; it is often not the main class configured in the dataflow jar file.

  • multiple_jobs (bool) -- If the pipeline creates multiple jobs, then monitor all of them.

  • check_if_running (CheckJobRunning) -- Before running the job, validate that a previous run is not in progress. IgnoreJob = do not check if running. FinishIfRunning = if the job is running, finish with no action. WaitForRun = wait until the job finishes and then run the new job. Note that jar, options, and job_name are templated, so you can use variables in them.

  • cancel_timeout (Optional[int]) -- How long (in seconds) operator should wait for the pipeline to be successfully cancelled when task is being killed.

  • wait_until_finished (Optional[bool]) --

    (Optional) If True, wait for the end of pipeline execution before exiting. If False, only submits job. If None, default behavior.

    The default behavior depends on the type of pipeline:

    • for the streaming pipeline, wait for jobs to start,

    • for the batch pipeline, wait for the jobs to complete.

    Warning

    You cannot call the PipelineResult.wait_until_finish method in your pipeline code for the operator to work properly, i.e. you must use asynchronous execution. Otherwise, your pipeline will always wait until finished. For more information, look at: Asynchronous execution

    The process of starting the Dataflow job in Airflow consists of two steps:

    • running a subprocess and reading the stdout/stderr log for the job id.

    • waiting in a loop for the job with that ID to end. This loop checks the status of the job.

    Step two starts just after step one has finished, so if you have wait_until_finish in your pipeline code, step two will not start until the process stops. When this process stops, step two will run, but it will only execute one iteration as the job will already be in a terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=True to the operator, the second loop will wait for the job's terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=False to the operator, the second loop will check once whether the job is in a terminal state and exit the loop.

Note that both dataflow_default_options and options will be merged to specify pipeline execution parameters, and dataflow_default_options is expected to hold high-level options, for instance project and zone information, which apply to all dataflow operators in the DAG.

It's good practice to define dataflow_* parameters, such as the project, zone and staging location, in the default_args of the DAG.

default_args = {
    "dataflow_default_options": {
        "zone": "europe-west1-d",
        "stagingLocation": "gs://my-staging-bucket/staging/",
    }
}

You need to pass the path to your Dataflow jar as a file reference with the jar parameter; the jar needs to be a self-executing jar (see the documentation here: https://beam.apache.org/documentation/runners/dataflow/#self-executing-jar). Use options to pass on options to your job.

t1 = DataflowCreateJavaJobOperator(
    task_id="dataflow_example",
    jar="{{var.value.gcp_dataflow_base}}pipeline/build/libs/pipeline-example-1.0.jar",
    options={
        "autoscalingAlgorithm": "BASIC",
        "maxNumWorkers": "50",
        "start": "{{ds}}",
        "partitionType": "DAY",
        "labels": {"foo": "bar"},
    },
    gcp_conn_id="airflow-conn-id",
    dag=my_dag,
)
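
Since this operator is deprecated, the same job can also be expressed with BeamRunJavaPipelineOperator from the apache.beam provider. A rough, non-authoritative sketch of the equivalent task (parameter values carried over from the example above):

from airflow.providers.apache.beam.operators.beam import BeamRunJavaPipelineOperator
from airflow.providers.google.cloud.operators.dataflow import DataflowConfiguration

t1 = BeamRunJavaPipelineOperator(
    task_id="dataflow_example",
    runner="DataflowRunner",
    jar="{{var.value.gcp_dataflow_base}}pipeline/build/libs/pipeline-example-1.0.jar",
    pipeline_options={
        "autoscalingAlgorithm": "BASIC",
        "maxNumWorkers": "50",
        "start": "{{ds}}",
        "partitionType": "DAY",
        "labels": {"foo": "bar"},
    },
    dataflow_config=DataflowConfiguration(job_name="{{task.task_id}}", location="us-central1"),
    gcp_conn_id="airflow-conn-id",
    dag=my_dag,
)
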
template_fields :Sequence[str] = ['options', 'jar', 'job_name'][source]
ui_color = #0273d4[source]
execute(self, context)[source]

Execute the Apache Beam Pipeline.

on_kill(self)[source]

Override this method to cleanup subprocesses when a task instance gets killed. Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up or it will leave ghost processes behind.

class airflow.providers.google.cloud.operators.dataflow.DataflowTemplatedJobStartOperator(*, template, job_name='{{task.task_id}}', options=None, dataflow_default_options=None, parameters=None, project_id=None, location=DEFAULT_DATAFLOW_LOCATION, gcp_conn_id='google_cloud_default', delegate_to=None, poll_sleep=10, impersonation_chain=None, environment=None, cancel_timeout=10 * 60, wait_until_finished=None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Start a Templated Cloud Dataflow job. The parameters of the operation will be passed to the job.

See also

For more information on how to use this operator, take a look at the guide: Templated jobs

Parameters
  • template (str) -- The reference to the Dataflow template.

  • job_name (str) -- The 'jobName' to use when executing the Dataflow template (templated).

  • options (Optional[Dict[str, Any]]) --

    Map of job runtime environment options. It will update the environment argument if passed.

    See also

    For more information on possible configurations, look at the API documentation https://cloud.google.com/dataflow/pipelines/specifying-exec-params

  • dataflow_default_options (Optional[Dict[str, Any]]) -- Map of default job environment options.

  • parameters (Optional[Dict[str, str]]) -- Map of job specific parameters for the template.

  • project_id (Optional[str]) -- Optional, the Google Cloud project ID in which to start a job. If set to None or missing, the default project_id from the Google Cloud connection is used.

  • location (str) -- Job location.

  • gcp_conn_id (str) -- The connection ID to use connecting to Google Cloud.

  • delegate_to (Optional[str]) -- The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

  • poll_sleep (int) -- The time in seconds to sleep between polling Google Cloud Platform for the dataflow job status while the job is in the JOB_STATE_RUNNING state.

  • impersonation_chain (Optional[Union[str, Sequence[str]]]) -- Optional service account to impersonate using short-term credentials, or chained list of accounts required to get the access_token of the last account in the list, which will be impersonated in the request. If set as a string, the account must grant the originating account the Service Account Token Creator IAM role. If set as a sequence, the identities from the list must grant Service Account Token Creator IAM role to the directly preceding identity, with first account from the list granting this role to the originating account (templated).

  • environment (Optional[Dict]) --

    Optional, Map of job runtime environment options.

    See also

    For more information on possible configurations, look at the API documentation https://cloud.google.com/dataflow/pipelines/specifying-exec-params

  • cancel_timeout (Optional[int]) -- How long (in seconds) operator should wait for the pipeline to be successfully cancelled when task is being killed.

  • wait_until_finished (Optional[bool]) --

    (Optional) If True, wait for the end of pipeline execution before exiting. If False, only submits job. If None, default behavior.

    The default behavior depends on the type of pipeline:

    • for the streaming pipeline, wait for jobs to start,

    • for the batch pipeline, wait for the jobs to complete.

    Warning

    You cannot call the PipelineResult.wait_until_finish method in your pipeline code for the operator to work properly, i.e. you must use asynchronous execution. Otherwise, your pipeline will always wait until finished. For more information, look at: Asynchronous execution

    The process of starting the Dataflow job in Airflow consists of two steps:

    • running a subprocess and reading the stdout/stderr log for the job id.

    • waiting in a loop for the job with that ID to end. This loop checks the status of the job.

    Step two starts just after step one has finished, so if you have wait_until_finish in your pipeline code, step two will not start until the process stops. When this process stops, step two will run, but it will only execute one iteration as the job will already be in a terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=True to the operator, the second loop will wait for the job's terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=False to the operator, the second loop will check once whether the job is in a terminal state and exit the loop.

It's good practice to define dataflow_* parameters, such as the project, zone and staging location, in the default_args of the DAG.

default_args = {
    "dataflow_default_options": {
        "zone": "europe-west1-d",
        "tempLocation": "gs://my-staging-bucket/staging/",
    }
}

You need to pass the path to your dataflow template as a file reference with the template parameter. Use parameters to pass on parameters to your job. Use environment to pass on runtime environment variables to your job.

t1 = DataflowTemplatedJobStartOperator(
    task_id="dataflow_example",
    template="{{var.value.gcp_dataflow_base}}",
    parameters={
        "inputFile": "gs://bucket/input/my_input.txt",
        "outputFile": "gs://bucket/output/my_output.txt",
    },
    gcp_conn_id="airflow-conn-id",
    dag=my_dag,
)

template, dataflow_default_options, parameters, and job_name are templated so you can use variables in them.

Note that dataflow_default_options is expected to hold high-level options for project information, which apply to all dataflow operators in the DAG.
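
The environment argument can additionally pass RuntimeEnvironment fields of the Dataflow API to the job. A minimal sketch; the template path and field values are illustrative:

t2 = DataflowTemplatedJobStartOperator(
    task_id="dataflow_with_environment",
    template="gs://dataflow-templates/latest/Word_Count",
    parameters={
        "inputFile": "gs://bucket/input/my_input.txt",
        "output": "gs://bucket/output/my_output",
    },
    environment={
        "maxWorkers": 10,
        "tempLocation": "gs://bucket/tmp/",
        "machineType": "n1-standard-2",
    },
    location="europe-west1",
    dag=my_dag,
)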

template_fields :Sequence[str] = ['template', 'job_name', 'options', 'parameters', 'project_id', 'location', 'gcp_conn_id',...[source]
ui_color = #0273d4[source]
execute(self, context)[source]

This is the main method to derive when creating an operator. Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

on_kill(self)[source]

Override this method to cleanup subprocesses when a task instance gets killed. Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up or it will leave ghost processes behind.

class airflow.providers.google.cloud.operators.dataflow.DataflowStartFlexTemplateOperator(body, location, project_id=None, gcp_conn_id='google_cloud_default', delegate_to=None, drain_pipeline=False, cancel_timeout=10 * 60, wait_until_finished=None, *args, **kwargs)[source]

Bases: airflow.models.BaseOperator

Starts flex templates with the Dataflow pipeline.

See also

For more information on how to use this operator, take a look at the guide: Templated jobs
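
The request body wraps a launchParameter object as described in the API reference linked below. A minimal sketch; the container spec path, job name and parameters are illustrative:

start_flex_template = DataflowStartFlexTemplateOperator(
    task_id="start_flex_template",
    location="europe-west1",
    body={
        "launchParameter": {
            "containerSpecGcsPath": "gs://my-bucket/templates/my-template.json",
            "jobName": "my-flex-template-job",
            "parameters": {
                "inputSubscription": "projects/my-project/subscriptions/my-sub",
                "outputTable": "my-project:my_dataset.my_table",
            },
        }
    },
    dag=my_dag,
)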

Parameters
  • body (Dict) -- The request body. See: https://cloud.google.com/dataflow/docs/reference/rest/v1b3/projects.locations.flexTemplates/launch#request-body

  • location (str) -- The location of the Dataflow job (for example europe-west1)

  • project_id (Optional[str]) -- The ID of the GCP project that owns the job. If set to None or missing, the default project_id from the GCP connection is used.

  • gcp_conn_id (str) -- The connection ID to use connecting to Google Cloud Platform.

  • delegate_to (Optional[str]) -- The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

  • drain_pipeline (bool) -- Optional, set to True if you want to stop a streaming job by draining it instead of canceling it when the task instance is killed. See: https://cloud.google.com/dataflow/docs/guides/stopping-a-pipeline

  • cancel_timeout (Optional[int]) -- How long (in seconds) operator should wait for the pipeline to be successfully cancelled when task is being killed.

  • wait_until_finished (Optional[bool]) --

    (Optional) If True, wait for the end of pipeline execution before exiting. If False, only submits job. If None, default behavior.

    The default behavior depends on the type of pipeline:

    • for the streaming pipeline, wait for jobs to start,

    • for the batch pipeline, wait for the jobs to complete.

    Warning

    You cannot call the PipelineResult.wait_until_finish method in your pipeline code for the operator to work properly, i.e. you must use asynchronous execution. Otherwise, your pipeline will always wait until finished. For more information, look at: Asynchronous execution

    The process of starting the Dataflow job in Airflow consists of two steps:

    • running a subprocess and reading the stdout/stderr log for the job id.

    • waiting in a loop for the job with that ID to end. This loop checks the status of the job.

    Step two starts just after step one has finished, so if you have wait_until_finish in your pipeline code, step two will not start until the process stops. When this process stops, step two will run, but it will only execute one iteration as the job will already be in a terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=True to the operator, the second loop will wait for the job's terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=False to the operator, the second loop will check once whether the job is in a terminal state and exit the loop.

template_fields :Sequence[str] = ['body', 'location', 'project_id', 'gcp_conn_id'][source]
execute(self, context)[source]

This is the main method to derive when creating an operator. Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

on_kill(self)[source]

Override this method to cleanup subprocesses when a task instance gets killed. Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up or it will leave ghost processes behind.

class airflow.providers.google.cloud.operators.dataflow.DataflowStartSqlJobOperator(job_name, query, options, location=DEFAULT_DATAFLOW_LOCATION, project_id=None, gcp_conn_id='google_cloud_default', delegate_to=None, drain_pipeline=False, *args, **kwargs)[source]

Bases: airflow.models.BaseOperator

Starts Dataflow SQL query.

See also

For more information on how to use this operator, take a look at the guide: Dataflow SQL

Warning

This operator requires the gcloud command (Google Cloud SDK) to be installed on the Airflow worker: https://cloud.google.com/sdk/docs/install
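
A minimal usage sketch; the query, table names and option keys are illustrative, with the options mirroring the flags of the gcloud dataflow sql query command:

start_sql_job = DataflowStartSqlJobOperator(
    task_id="start_sql_job",
    job_name="sample-sql-job",
    query="""
        SELECT sales_region, COUNT(*) AS num_orders
        FROM bigquery.table.`my-project`.`my_dataset`.`orders`
        GROUP BY sales_region
    """,
    options={
        "bigquery-project": "my-project",
        "bigquery-dataset": "my_dataset",
        "bigquery-table": "order_counts",
    },
    location="europe-west1",
    dag=my_dag,
)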

Parameters
  • job_name (str) -- The unique name to assign to the Cloud Dataflow job.

  • query (str) -- The SQL query to execute.

  • options (Dict[str, Any]) --

    Job parameters to be executed. It can be a dictionary with keys matching the flags of the gcloud dataflow sql query command.

    For more information, look at the command reference: https://cloud.google.com/sdk/gcloud/reference/beta/dataflow/sql/query

  • location (str) -- The location of the Dataflow job (for example europe-west1)

  • project_id (Optional[str]) -- The ID of the GCP project that owns the job. If set to None or missing, the default project_id from the GCP connection is used.

  • gcp_conn_id (str) -- The connection ID to use connecting to Google Cloud Platform.

  • delegate_to (Optional[str]) -- The account to impersonate, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

  • drain_pipeline (bool) -- Optional, set to True if you want to stop a streaming job by draining it instead of canceling it when the task instance is killed. See: https://cloud.google.com/dataflow/docs/guides/stopping-a-pipeline

template_fields :Sequence[str] = ['job_name', 'query', 'options', 'location', 'project_id', 'gcp_conn_id'][source]
template_fields_renderers[source]
execute(self, context)[source]

This is the main method to derive when creating an operator. Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

on_kill(self)[source]

Override this method to cleanup subprocesses when a task instance gets killed. Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up or it will leave ghost processes behind.

class airflow.providers.google.cloud.operators.dataflow.DataflowCreatePythonJobOperator(*, py_file, job_name='{{task.task_id}}', dataflow_default_options=None, options=None, py_interpreter='python3', py_options=None, py_requirements=None, py_system_site_packages=False, project_id=None, location=DEFAULT_DATAFLOW_LOCATION, gcp_conn_id='google_cloud_default', delegate_to=None, poll_sleep=10, drain_pipeline=False, cancel_timeout=10 * 60, wait_until_finished=None, **kwargs)[source]

Bases: airflow.models.BaseOperator

Launching Cloud Dataflow jobs written in Python. Note that both dataflow_default_options and options will be merged to specify pipeline execution parameters, and dataflow_default_options is expected to hold high-level options, for instance project and zone information, which apply to all dataflow operators in the DAG.

This class is deprecated. Please use providers.apache.beam.operators.beam.BeamRunPythonPipelineOperator.

See also

For more detail on job submission have a look at the reference: https://cloud.google.com/dataflow/pipelines/specifying-exec-params

See also

For more information on how to use this operator, take a look at the guide: Python SDK pipelines
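
A minimal usage sketch of this (deprecated) operator; the file path, bucket and requirement pin are illustrative:

t1 = DataflowCreatePythonJobOperator(
    task_id="dataflow_python_example",
    py_file="gs://my-bucket/pipelines/wordcount.py",
    py_requirements=["apache-beam[gcp]==2.25.0"],
    py_interpreter="python3",
    options={
        "output": "gs://my-bucket/output/",
        "labels": {"foo": "bar"},
    },
    dataflow_default_options={
        "project": "my-gcp-project",
        "tempLocation": "gs://my-bucket/tmp/",
    },
    gcp_conn_id="google_cloud_default",
    dag=my_dag,
)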

Parameters
  • py_file (str) -- Reference to the python dataflow pipeline file.py, e.g., /some/local/file/path/to/your/python/pipeline/file. (templated)

  • job_name (str) -- The 'job_name' to use when executing the Dataflow job (templated). This ends up being set in the pipeline options, so any entry with key 'jobName' or 'job_name' in options will be overwritten.

  • py_options (Optional[List[str]]) -- Additional python options, e.g., ["-m", "-v"].

  • dataflow_default_options (Optional[dict]) -- Map of default job options.

  • options (Optional[dict]) --

    Map of job specific options. It must be a dictionary; the values of the entries can be of different types:

    • If the value is None, the single option --key (without a value) will be added.

    • If the value is False, this option will be skipped.

    • If the value is True, the single option --key (without a value) will be added.

    • If the value is a list, the option will be repeated for each element. For example, if the value is ['A', 'B'] and the key is key, the options --key=A --key=B will be added.

    • Other value types will be replaced with their Python textual representation.

    When defining labels (labels option), you can also provide a dictionary.

  • py_interpreter (str) -- Python version of the Beam pipeline. If None, this defaults to python3. To track Python versions supported by Beam and related issues, check: https://issues.apache.org/jira/browse/BEAM-1251

  • py_requirements (Optional[List[str]]) --

    Additional python package(s) to install. If a value is passed to this parameter, a new virtual environment will be created with the additional packages installed.

    You could also install the apache_beam package if it is not installed on your system or you want to use a different version.

  • py_system_site_packages (bool) --

    Whether to include system_site_packages in your virtualenv. See virtualenv documentation for more information.

    This option is only relevant if the py_requirements parameter is not None.

  • gcp_conn_id (str) -- The connection ID to use connecting to Google Cloud.

  • project_id (Optional[str]) -- Optional, the Google Cloud project ID in which to start a job. If set to None or missing, the default project_id from the Google Cloud connection is used.

  • location (str) -- Job location.

  • delegate_to (Optional[str]) -- The account to impersonate using domain-wide delegation of authority, if any. For this to work, the service account making the request must have domain-wide delegation enabled.

  • poll_sleep (int) -- The time in seconds to sleep between polling Google Cloud Platform for the dataflow job status while the job is in the JOB_STATE_RUNNING state.

  • drain_pipeline (bool) -- Optional, set to True if you want to stop a streaming job by draining it instead of canceling it when the task instance is killed. See: https://cloud.google.com/dataflow/docs/guides/stopping-a-pipeline

  • cancel_timeout (Optional[int]) -- How long (in seconds) operator should wait for the pipeline to be successfully cancelled when task is being killed.

  • wait_until_finished (Optional[bool]) --

    (Optional) If True, wait for the end of pipeline execution before exiting. If False, only submits job. If None, default behavior.

    The default behavior depends on the type of pipeline:

    • for the streaming pipeline, wait for jobs to start,

    • for the batch pipeline, wait for the jobs to complete.

    Warning

    You cannot call the PipelineResult.wait_until_finish method in your pipeline code for the operator to work properly, i.e. you must use asynchronous execution. Otherwise, your pipeline will always wait until finished. For more information, look at: Asynchronous execution

    The process of starting the Dataflow job in Airflow consists of two steps:

    • running a subprocess and reading the stdout/stderr log for the job id.

    • waiting in a loop for the job with that ID to end. This loop checks the status of the job.

    Step two starts just after step one has finished, so if you have wait_until_finish in your pipeline code, step two will not start until the process stops. When this process stops, step two will run, but it will only execute one iteration as the job will already be in a terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=True to the operator, the second loop will wait for the job's terminal state.

    If your pipeline does not call the wait_until_finish method and you pass wait_until_finished=False to the operator, the second loop will check once whether the job is in a terminal state and exit the loop.

template_fields :Sequence[str] = ['options', 'dataflow_default_options', 'job_name', 'py_file'][source]
execute(self, context)[source]

Execute the python dataflow job.

on_kill(self)[source]

Override this method to cleanup subprocesses when a task instance gets killed. Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up or it will leave ghost processes behind.
