airflow.models

Airflow models

Package Contents

airflow.models.Base :Any[source]
airflow.models.ID_LEN = 250[source]
class airflow.models.BaseOperator(task_id, owner=conf.get('operators', 'DEFAULT_OWNER'), email=None, email_on_retry=True, email_on_failure=True, retries=conf.getint('core', 'default_task_retries', fallback=0), retry_delay=timedelta(seconds=300), retry_exponential_backoff=False, max_retry_delay=None, start_date=None, end_date=None, schedule_interval=None, depends_on_past=False, wait_for_downstream=False, dag=None, params=None, default_args=None, priority_weight=1, weight_rule=WeightRule.DOWNSTREAM, queue=conf.get('celery', 'default_queue'), pool=Pool.DEFAULT_POOL_NAME, sla=None, execution_timeout=None, on_failure_callback=None, on_success_callback=None, on_retry_callback=None, trigger_rule=TriggerRule.ALL_SUCCESS, resources=None, run_as_user=None, task_concurrency=None, executor_config=None, do_xcom_push=True, inlets=None, outlets=None, *args, **kwargs)[source]

Bases: airflow.utils.log.logging_mixin.LoggingMixin

Abstract base class for all operators. Since operators create objects that become nodes in the dag, BaseOperator contains many recursive methods for dag crawling behavior. To derive from this class, you are expected to override the constructor as well as the ‘execute’ method.

Operators derived from this class should perform or trigger certain tasks synchronously (wait for completion). Examples of operators include an operator that runs a Pig job (PigOperator), a sensor operator that waits for a partition to land in Hive (HiveSensorOperator), or one that moves data from Hive to MySQL (Hive2MySqlOperator). Instances of these operators (tasks) target specific operations, running specific scripts, functions or data transfers.

This class is abstract and shouldn’t be instantiated. Instantiating a class derived from this one results in the creation of a task object, which ultimately becomes a node in DAG objects. Task dependencies should be set by using the set_upstream and/or set_downstream methods.
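
For example, a minimal derived operator might look like the following sketch (the operator name and logic are illustrative):

    from airflow.models import BaseOperator
    from airflow.utils.decorators import apply_defaults


    class HelloOperator(BaseOperator):

        @apply_defaults
        def __init__(self, name, *args, **kwargs):
            super(HelloOperator, self).__init__(*args, **kwargs)
            self.name = name

        def execute(self, context):
            # context is the same dictionary used when rendering jinja templates
            self.log.info('Hello %s on %s', self.name, context['ds'])
            return self.name  # pushed to XCom since do_xcom_push defaults to True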

Parameters
  • task_id (str) – a unique, meaningful id for the task

  • owner (str) – the owner of the task, using the unix username is recommended

  • retries (int) – the number of retries that should be performed before failing the task

  • retry_delay (datetime.timedelta) – delay between retries

  • retry_exponential_backoff (bool) – allow progressive longer waits between retries by using exponential backoff algorithm on retry delay (delay will be converted into seconds)

  • max_retry_delay (datetime.timedelta) – maximum delay interval between retries

  • start_date (datetime.datetime) – The start_date for the task, determines the execution_date for the first task instance. The best practice is to have the start_date rounded to your DAG’s schedule_interval. Daily jobs have their start_date some day at 00:00:00, hourly jobs have their start_date at 00:00 of a specific hour. Note that Airflow simply looks at the latest execution_date and adds the schedule_interval to determine the next execution_date. It is also very important to note that different tasks’ dependencies need to line up in time. If task A depends on task B and their start_date are offset in a way that their execution_date don’t line up, A’s dependencies will never be met. If you are looking to delay a task, for example running a daily task at 2AM, look into the TimeSensor and TimeDeltaSensor. We advise against using dynamic start_date and recommend using fixed ones. Read the FAQ entry about start_date for more information.

  • end_date (datetime.datetime) – if specified, the scheduler won’t go beyond this date

  • depends_on_past (bool) – when set to true, task instances will run sequentially while relying on the previous task’s schedule to succeed. The task instance for the start_date is allowed to run.

  • wait_for_downstream (bool) – when set to true, an instance of task X will wait for tasks immediately downstream of the previous instance of task X to finish successfully before it runs. This is useful if the different instances of a task X alter the same asset, and this asset is used by tasks downstream of task X. Note that depends_on_past is forced to True wherever wait_for_downstream is used.

  • queue (str) – which queue to target when running this job. Not all executors implement queue management, the CeleryExecutor does support targeting specific queues.

  • dag (airflow.models.DAG) – a reference to the dag the task is attached to (if any)

  • priority_weight (int) – priority weight of this task against other tasks. This allows the executor to trigger higher priority tasks before others when things get backed up. Set priority_weight as a higher number for more important tasks.

  • weight_rule (str) – weighting method used for the effective total priority weight of the task. Options are: { downstream | upstream | absolute } default is downstream. When set to downstream the effective weight of the task is the aggregate sum of all downstream descendants. As a result, upstream tasks will have higher weight and will be scheduled more aggressively when using positive weight values. This is useful when you have multiple dag run instances and want all upstream tasks to complete for all runs before each dag can continue processing downstream tasks. When set to upstream the effective weight is the aggregate sum of all upstream ancestors. This is the opposite, where downstream tasks have higher weight and will be scheduled more aggressively when using positive weight values. This is useful when you have multiple dag run instances and prefer to have each dag complete before starting upstream tasks of other dags. When set to absolute, the effective weight is the exact priority_weight specified without additional weighting. You may want to do this when you know exactly what priority weight each task should have. Additionally, when set to absolute, there is a bonus effect of significantly speeding up the task creation process for very large DAGs. Options can be set as string or using the constants defined in the static class airflow.utils.WeightRule

  • pool (str) – the slot pool this task should run in, slot pools are a way to limit concurrency for certain tasks

  • sla (datetime.timedelta) – time by which the job is expected to succeed. Note that this represents the timedelta after the period is closed. For example if you set an SLA of 1 hour, the scheduler would send an email soon after 1:00AM on 2016-01-02 if the 2016-01-01 instance has not succeeded yet. The scheduler pays special attention to jobs with an SLA and sends alert emails for SLA misses. SLA misses are also recorded in the database for future reference. All tasks that share the same SLA time get bundled in a single email, sent soon after that time. SLA notifications are sent once and only once for each task instance.

  • execution_timeout (datetime.timedelta) – max time allowed for the execution of this task instance; if it runs longer, the task will raise an exception and fail.

  • on_failure_callback (callable) – a function to be called when a task instance of this task fails. A context dictionary is passed as a single parameter to this function. The context contains references to objects related to the task instance and is documented under the macros section of the API.

  • on_retry_callback (callable) – much like the on_failure_callback except that it is executed when retries occur.

  • on_success_callback (callable) – much like the on_failure_callback except that it is executed when the task succeeds.

  • trigger_rule (str) – defines the rule by which dependencies are applied for the task to get triggered. Options are: { all_success | all_failed | all_done | one_success | one_failed | none_failed | none_skipped | dummy} default is all_success. Options can be set as string or using the constants defined in the static class airflow.utils.TriggerRule

  • resources (dict) – A map of resource parameter names (the argument names of the Resources constructor) to their values.

  • run_as_user (str) – unix username to impersonate while running the task

  • task_concurrency (int) – When set, a task will be able to limit the concurrent runs across execution_dates

  • executor_config (dict) –

    Additional task-level configuration parameters that are interpreted by a specific executor. Parameters are namespaced by the name of the executor.

    Example: to run this task in a specific docker container through the KubernetesExecutor

    MyOperator(
        ...,
        executor_config={
            "KubernetesExecutor": {"image": "myCustomDockerImage"}
        }
    )
    

  • do_xcom_push (bool) – if True, an XCom is pushed containing the Operator’s result

template_fields :Iterable[str] = []
template_ext :Iterable[str] = []
ui_color = #fff
ui_fgcolor = #000
_base_operator_shallow_copy_attrs :Iterable[str] = ['user_defined_macros', 'user_defined_filters', 'params', '_log']
shallow_copy_attrs :Iterable[str] = []
_comps
dag

Returns the Operator’s DAG if set, otherwise raises an error

dag_id

Returns the dag id if the task has one, otherwise an adhoc id composed of ‘adhoc_’ + owner

deps

Returns the list of dependencies for the operator. These differ from execution context dependencies in that they are specific to tasks and can be extended/overridden by subclasses.

schedule_interval

The schedule interval of the DAG always wins over individual tasks so that tasks within a DAG always line up. The task still needs a schedule_interval as it may not be attached to a DAG.

priority_weight_total

Total priority weight for the task. It might include all upstream or downstream tasks, depending on the weight rule:

  • WeightRule.ABSOLUTE - only own weight

  • WeightRule.DOWNSTREAM - adds priority weight of all downstream tasks

  • WeightRule.UPSTREAM - adds priority weight of all upstream tasks

upstream_list

@property: list of tasks directly upstream

upstream_task_ids

@property: list of ids of tasks directly upstream

downstream_list

@property: list of tasks directly downstream

downstream_task_ids

@property: list of ids of tasks directly downstream

task_type

@property: type of the task

__eq__(self, other)
__ne__(self, other)
__lt__(self, other)
__hash__(self)
__rshift__(self, other)

Implements Self >> Other == self.set_downstream(other)

If “Other” is a DAG, the DAG is assigned to the Operator.

__lshift__(self, other)

Implements Self << Other == self.set_upstream(other)

If “Other” is a DAG, the DAG is assigned to the Operator.

__rrshift__(self, other)

Called for [DAG] >> [Operator] because DAGs don’t have __rshift__ operators.

__rlshift__(self, other)

Called for [DAG] << [Operator] because DAGs don’t have __lshift__ operators.
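
Taken together, the bitshift operators provide several equivalent ways to declare the same dependency; a short sketch (op1, op2, op3 and dag are illustrative names):

    op1.set_downstream(op2)   # explicit method call
    op2.set_upstream(op1)     # same edge, declared from the other side
    op1 >> op2                # same edge via __rshift__
    op2 << op1                # same edge via __lshift__

    op1 >> op2 >> op3         # chains compose left to right
    dag >> op1                # assigns op1 to dag (uses __rrshift__)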

has_dag(self)

Returns True if the Operator has been assigned to a DAG.

operator_extra_link_dict

Returns dictionary of all extra links for the operator

global_operator_extra_link_dict

Returns dictionary of all global extra links

pre_execute(self, context)

This hook is triggered right before self.execute() is called.

execute(self, context)

This is the main method to derive when creating an operator. Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

post_execute(self, context, result=None)

This hook is triggered right after self.execute() is called. It is passed the execution context and any results returned by the operator.

on_kill(self)

Override this method to cleanup subprocesses when a task instance gets killed. Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up or it will leave ghost processes behind.

__deepcopy__(self, memo)

Hack sorting double chained task lists by task_id to avoid hitting max_depth on deepcopy operations.

__getstate__(self)
__setstate__(self, state)
render_template_fields(self, context, jinja_env=None)

Template all attributes listed in template_fields. Note this operation is irreversible.

Parameters
  • context (dict) – Dict with values to apply on content

  • jinja_env (jinja2.Environment) – Jinja environment

_do_render_template_fields(self, parent, template_fields, context, jinja_env, seen_oids)
render_template(self, content, context, jinja_env=None, seen_oids=None)

Render a templated string. The content can be a collection holding multiple templated strings and will be templated recursively.

Parameters
  • content (Any) – Content to template. Only strings can be templated (may be inside collection).

  • context (dict) – Dict with values to apply on templated content

  • jinja_env (jinja2.Environment) – Jinja environment. Can be provided to avoid re-creating Jinja environments during recursion.

  • seen_oids (set) – template fields already rendered (to avoid RecursionError on circular dependencies)

Returns

Templated content

_render_nested_template_fields(self, content, context, jinja_env, seen_oids)
get_template_env(self)

Fetch a Jinja template environment from the DAG, or instantiate an empty environment if there is no DAG.

prepare_template(self)

Hook that is triggered after the templated fields get replaced by their content. If you need your operator to alter the content of the file before the template is rendered, it should override this method to do so.

resolve_template_files(self)
clear(self, start_date=None, end_date=None, upstream=False, downstream=False, session=None)

Clears the state of task instances associated with the task, following the parameters specified.

get_task_instances(self, start_date=None, end_date=None, session=None)

Get the set of task instances related to this task for a specific date range.

get_flat_relative_ids(self, upstream=False, found_descendants=None)

Get a flat list of relatives’ ids, either upstream or downstream.

get_flat_relatives(self, upstream=False)

Get a flat list of relatives, either upstream or downstream.

run(self, start_date=None, end_date=None, ignore_first_depends_on_past=False, ignore_ti_state=False, mark_success=False)

Run a set of task instances for a date range.

dry_run(self)

Performs a dry run for the operator: just renders the template fields.

get_direct_relative_ids(self, upstream=False)

Get the direct relative ids to the current task, upstream or downstream.

get_direct_relatives(self, upstream=False)

Get the direct relatives to the current task, upstream or downstream.

__repr__(self)
add_only_new(self, item_set, item)

Adds only new items to item set

_set_relatives(self, task_or_task_list, upstream=False)

Sets relatives for the task.

set_downstream(self, task_or_task_list)

Set a task or a task list to be directly downstream from the current task.

set_upstream(self, task_or_task_list)

Set a task or a task list to be directly upstream from the current task.

xcom_push(self, context, key, value, execution_date=None)

See TaskInstance.xcom_push()

xcom_pull(self, context, task_ids=None, dag_id=None, key=XCOM_RETURN_KEY, include_prior_dates=None)

See TaskInstance.xcom_pull()

extra_links

@property: extra links for the task.

get_extra_links(self, dttm, link_name)

For an operator, gets the URL that the external links specified in extra_links should point to.

Raises

ValueError – The error message of a ValueError will be passed on through to the frontend to show up as a tooltip on the disabled link

Parameters
  • dttm – The datetime parsed execution date for the URL being searched for

  • link_name – The name of the link we’re looking for the URL for. Should be one of the options specified in extra_links

Returns

A URL

class airflow.models.Connection(conn_id=None, conn_type=None, host=None, login=None, password=None, schema=None, port=None, extra=None, uri=None)[source]

Bases: airflow.models.base.Base, airflow.LoggingMixin

Placeholder to store connection information about different database instances. The idea here is that scripts use references to database instances (conn_id) instead of hard coding hostnames, logins and passwords when using operators or hooks.

__tablename__ = connection
id
conn_id
conn_type
host
schema
login
_password
port
is_encrypted
is_extra_encrypted
_extra
_types = [['docker', 'Docker Registry'], ['fs', 'File (path)'], ['ftp', 'FTP'], ['google_cloud_platform', 'Google Cloud Platform'], ['hdfs', 'HDFS'], ['http', 'HTTP'], ['pig_cli', 'Pig Client Wrapper'], ['hive_cli', 'Hive Client Wrapper'], ['hive_metastore', 'Hive Metastore Thrift'], ['hiveserver2', 'Hive Server 2 Thrift'], ['jdbc', 'Jdbc Connection'], ['jenkins', 'Jenkins'], ['mysql', 'MySQL'], ['postgres', 'Postgres'], ['oracle', 'Oracle'], ['vertica', 'Vertica'], ['presto', 'Presto'], ['s3', 'S3'], ['samba', 'Samba'], ['sqlite', 'Sqlite'], ['ssh', 'SSH'], ['cloudant', 'IBM Cloudant'], ['mssql', 'Microsoft SQL Server'], ['mesos_framework-id', 'Mesos Framework ID'], ['jira', 'JIRA'], ['redis', 'Redis'], ['wasb', 'Azure Blob Storage'], ['databricks', 'Databricks'], ['aws', 'Amazon Web Services'], ['emr', 'Elastic MapReduce'], ['snowflake', 'Snowflake'], ['segment', 'Segment'], ['azure_data_lake', 'Azure Data Lake'], ['azure_container_instances', 'Azure Container Instances'], ['azure_cosmos', 'Azure CosmosDB'], ['cassandra', 'Cassandra'], ['qubole', 'Qubole'], ['mongo', 'MongoDB'], ['gcpcloudsql', 'Google Cloud SQL'], ['grpc', 'GRPC Connection']]
password
extra
extra_dejson

Returns the extra property by deserializing json.

parse_from_uri(self, uri)
get_password(self)
set_password(self, value)
get_extra(self)
set_extra(self, value)
rotate_fernet_key(self)
get_hook(self)
__repr__(self)
debug_info(self)
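
A hedged sketch of building a Connection from a URI (the conn_id and credentials are illustrative); passing uri to the constructor delegates to parse_from_uri:

    from airflow.models import Connection

    conn = Connection(conn_id='my_postgres',
                      uri='postgres://user:secret@localhost:5432/mydb')
    # the URI components are split into the individual fields
    print(conn.conn_type, conn.host, conn.port, conn.schema)
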
class airflow.models.DAG(dag_id, description='', schedule_interval=timedelta(days=1), start_date=None, end_date=None, full_filepath=None, template_searchpath=None, template_undefined=jinja2.Undefined, user_defined_macros=None, user_defined_filters=None, default_args=None, concurrency=conf.getint('core', 'dag_concurrency'), max_active_runs=conf.getint('core', 'max_active_runs_per_dag'), dagrun_timeout=None, sla_miss_callback=None, default_view=None, orientation=conf.get('webserver', 'dag_orientation'), catchup=conf.getboolean('scheduler', 'catchup_by_default'), on_success_callback=None, on_failure_callback=None, doc_md=None, params=None, access_control=None, is_paused_upon_creation=None, jinja_environment_kwargs=None)[source]

Bases: airflow.dag.base_dag.BaseDag, airflow.utils.log.logging_mixin.LoggingMixin

A dag (directed acyclic graph) is a collection of tasks with directional dependencies. A dag also has a schedule, a start date and an end date (optional). For each schedule, (say daily or hourly), the DAG needs to run each individual task as its dependencies are met. Certain tasks have the property of depending on their own past, meaning that they can’t run until their previous schedule (and upstream tasks) are completed.

DAGs essentially act as namespaces for tasks. A task_id can only be added once to a DAG.

Parameters
  • dag_id (str) – The id of the DAG

  • description (str) – The description for the DAG to e.g. be shown on the webserver

  • schedule_interval (datetime.timedelta or dateutil.relativedelta.relativedelta or str that acts as a cron expression) – Defines how often that DAG runs, this timedelta object gets added to your latest task instance’s execution_date to figure out the next schedule

  • start_date (datetime.datetime) – The timestamp from which the scheduler will attempt to backfill

  • end_date (datetime.datetime) – A date beyond which your DAG won’t run, leave to None for open ended scheduling

  • template_searchpath (str or list[str]) – This list of folders (non relative) defines where jinja will look for your templates. Order matters. Note that jinja/airflow includes the path of your DAG file by default

  • template_undefined (jinja2.Undefined) – Template undefined type.

  • user_defined_macros (dict) – a dictionary of macros that will be exposed in your jinja templates. For example, passing dict(foo='bar') to this argument allows you to use {{ foo }} in all jinja templates related to this DAG. Note that you can pass any type of object here.

  • user_defined_filters (dict) – a dictionary of filters that will be exposed in your jinja templates. For example, passing dict(hello=lambda name: 'Hello %s' % name) to this argument allows you to use {{ 'world' | hello }} in all jinja templates related to this DAG.

  • default_args (dict) – A dictionary of default parameters to be used as constructor keyword parameters when initialising operators. Note that operators have the same hook, and precede those defined here, meaning that if your dict contains ‘depends_on_past’: True here and ‘depends_on_past’: False in the operator’s call default_args, the actual value will be False.

  • params (dict) – a dictionary of DAG level parameters that are made accessible in templates, namespaced under params. These params can be overridden at the task level.

  • concurrency (int) – the number of task instances allowed to run concurrently

  • max_active_runs (int) – maximum number of active DAG runs, beyond this number of DAG runs in a running state, the scheduler won’t create new active DAG runs

  • dagrun_timeout (datetime.timedelta) – specify how long a DagRun should be up before timing out / failing, so that new DagRuns can be created. The timeout is only enforced for scheduled DagRuns, and only once the # of active DagRuns == max_active_runs.

  • sla_miss_callback (types.FunctionType) – specify a function to call when reporting SLA timeouts.

  • default_view (str) – Specify DAG default view (tree, graph, duration, gantt, landing_times)

  • orientation (str) – Specify DAG orientation in graph view (LR, TB, RL, BT)

  • catchup (bool) – Perform scheduler catchup (or only run latest)? Defaults to True

  • on_failure_callback (callable) – A function to be called when a DagRun of this dag fails. A context dictionary is passed as a single parameter to this function.

  • on_success_callback (callable) – Much like the on_failure_callback except that it is executed when the dag succeeds.

  • access_control (dict) – Specify optional DAG-level permissions, e.g., “{‘role1’: {‘can_dag_read’}, ‘role2’: {‘can_dag_read’, ‘can_dag_edit’}}”

  • is_paused_upon_creation (bool or None) – Specifies if the dag is paused when created for the first time. If the dag exists already, this flag will be ignored. If this optional parameter is not specified, the global config setting will be used.

  • jinja_environment_kwargs (dict) –

    additional configuration options to be passed to Jinja Environment for template rendering

    Example: to avoid Jinja from removing a trailing newline from template strings

    DAG(dag_id='my-dag',
        jinja_environment_kwargs={
            'keep_trailing_newline': True,
            # some other jinja2 Environment options here
        }
    )
    

    See: Jinja Environment documentation
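
A minimal sketch of declaring a DAG and attaching tasks with the context manager (ids, dates and operators are illustrative):

    from datetime import datetime, timedelta

    from airflow.models import DAG
    from airflow.operators.dummy_operator import DummyOperator

    with DAG(dag_id='example_dag',
             schedule_interval=timedelta(days=1),
             start_date=datetime(2019, 1, 1),
             catchup=False) as dag:
        start = DummyOperator(task_id='start')
        end = DummyOperator(task_id='end')
        start >> end  # operators created inside the block are added to dag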

_comps
dag_id
full_filepath
concurrency
access_control
description
pickle_id
tasks
task_ids
filepath

File location of where the dag object is instantiated

folder

Folder location of where the DAG object is instantiated.

owner

Return list of all owners found in DAG tasks.

Returns

Comma separated list of owners in DAG tasks

Return type

str

concurrency_reached

Returns a boolean indicating whether the concurrency limit for this DAG has been reached

is_paused

Returns a boolean indicating whether this DAG is paused

latest_execution_date

Returns the latest date for which at least one dag run exists

subdags

Returns a list of the subdag objects associated to this DAG

roots

Return nodes with no parents. These are first to execute and are called roots or root nodes.

leaves

Return nodes with no children. These are last to execute and are called leaves or leaf nodes.

__repr__(self)
__eq__(self, other)
__ne__(self, other)
__lt__(self, other)
__hash__(self)
__enter__(self)
__exit__(self, _type, _value, _tb)
get_default_view(self)

This is only there for backward compatible jinja2 templates

date_range(self, start_date, num=None, end_date=timezone.utcnow())
is_fixed_time_schedule(self)

Figures out if the DAG schedule has a fixed time (e.g. 3 AM).

Returns

True if the schedule has a fixed time, False if not.

following_schedule(self, dttm)

Calculates the following schedule for this dag in UTC.

Parameters

dttm – utc datetime

Returns

utc datetime

previous_schedule(self, dttm)

Calculates the previous schedule for this dag in UTC

Parameters

dttm – utc datetime

Returns

utc datetime
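
For a daily schedule, these two methods step one interval in either direction; a hedged sketch using Airflow’s timezone helpers:

    from airflow.utils import timezone

    dttm = timezone.datetime(2019, 1, 2)
    dag.following_schedule(dttm)  # 2019-01-03T00:00:00+00:00 for a daily DAG
    dag.previous_schedule(dttm)   # 2019-01-01T00:00:00+00:00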

get_run_dates(self, start_date, end_date=None)

Returns a list of dates in the interval received as parameters, following this dag’s schedule interval. Returned dates can be used for execution dates.

Parameters
  • start_date (datetime) – the start date of the interval

  • end_date (datetime) – the end date of the interval, defaults to timezone.utcnow()

Returns

a list of dates within the interval following the dag’s schedule

Return type

list

normalize_schedule(self, dttm)

Returns dttm + interval unless dttm is the first interval, in which case it returns dttm.

get_last_dagrun(self, session=None, include_externally_triggered=False)
_get_concurrency_reached(self, session=None)
_get_is_paused(self, session=None)
handle_callback(self, dagrun, success=True, reason=None, session=None)

Triggers the appropriate callback depending on the value of success, namely the on_failure_callback or on_success_callback. This method gets the context of a single TaskInstance part of this DagRun and passes that to the callable along with a ‘reason’, primarily to differentiate DagRun failures.

Parameters
  • dagrun – DagRun object

  • success – Flag to specify if failure or success callback should be called

  • reason – Completion reason

  • session – Database session

get_active_runs(self)

Returns a list of dag run execution dates currently running

Returns

List of execution dates

get_num_active_runs(self, external_trigger=None, session=None)

Returns the number of active “running” dag runs

Parameters
  • external_trigger (bool) – True for externally triggered active dag runs

  • session

Returns

number greater than 0 for active dag runs

get_dagrun(self, execution_date, session=None)

Returns the dag run for a given execution date if it exists, otherwise none.

Parameters
  • execution_date – The execution date of the DagRun to find.

  • session

Returns

The DagRun if found, otherwise None.

get_dagruns_between(self, start_date, end_date, session=None)

Returns the list of dag runs between start_date (inclusive) and end_date (inclusive).

Parameters
  • start_date – The starting execution date of the DagRun to find.

  • end_date – The ending execution date of the DagRun to find.

  • session

Returns

The list of DagRuns found.

_get_latest_execution_date(self, session=None)
resolve_template_files(self)
get_template_env(self)

Build a Jinja2 environment.

set_dependency(self, upstream_task_id, downstream_task_id)

Simple utility method to set dependency between two tasks that already have been added to the DAG using add_task()

get_task_instances(self, start_date=None, end_date=None, state=None, session=None)
topological_sort(self)

Sorts tasks in topological order, such that a task comes after any of its upstream dependencies.

Heavily inspired by: http://blog.jupo.org/2012/04/06/topological-sorting-acyclic-directed-graphs/

Returns

list of tasks in topological order
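
A small usage sketch; every task appears after all of its upstream dependencies:

    for task in dag.topological_sort():
        print(task.task_id)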

set_dag_runs_state(self, state=State.RUNNING, session=None, start_date=None, end_date=None)
clear(self, start_date=None, end_date=None, only_failed=False, only_running=False, confirm_prompt=False, include_subdags=True, include_parentdag=True, reset_dag_runs=True, dry_run=False, session=None, get_tis=False)

Clears a set of task instances associated with the current dag for a specified date range.

classmethod clear_dags(cls, dags, start_date=None, end_date=None, only_failed=False, only_running=False, confirm_prompt=False, include_subdags=True, include_parentdag=False, reset_dag_runs=True, dry_run=False)
__deepcopy__(self, memo)
sub_dag(self, task_regex, include_downstream=False, include_upstream=True)

Returns a subset of the current dag as a deep copy of the current dag based on a regex that should match one or many tasks, and includes upstream and downstream neighbours based on the flag passed.

has_task(self, task_id)
get_task(self, task_id)
pickle_info(self)
pickle(self, session=None)
tree_view(self)

Print an ASCII tree representation of the DAG.

add_task(self, task)

Add a task to the DAG

Parameters

task (task) – the task you want to add

add_tasks(self, tasks)

Add a list of tasks to the DAG

Parameters

tasks (list of tasks) – a list of tasks you want to add

run(self, start_date=None, end_date=None, mark_success=False, local=False, executor=None, donot_pickle=conf.getboolean('core', 'donot_pickle'), ignore_task_deps=False, ignore_first_depends_on_past=False, pool=None, delay_on_limit_secs=1.0, verbose=False, conf=None, rerun_failed_tasks=False, run_backwards=False)

Runs the DAG.

Parameters
  • start_date (datetime.datetime) – the start date of the range to run

  • end_date (datetime.datetime) – the end date of the range to run

  • mark_success (bool) – True to mark jobs as succeeded without running them

  • local (bool) – True to run the tasks using the LocalExecutor

  • executor (airflow.executor.BaseExecutor) – The executor instance to run the tasks

  • donot_pickle (bool) – True to avoid pickling DAG object and send to workers

  • ignore_task_deps (bool) – True to skip upstream tasks

  • ignore_first_depends_on_past (bool) – True to ignore depends_on_past dependencies for the first set of tasks only

  • pool (str) – Resource pool to use

  • delay_on_limit_secs (float) – Time in seconds to wait before next attempt to run dag run when max_active_runs limit has been reached

  • verbose (bool) – Make logging output more verbose

  • conf (dict) – user defined dictionary passed from CLI

  • rerun_failed_tasks (bool)

  • run_backwards (bool)

cli(self)

Exposes a CLI specific to this DAG

create_dagrun(self, run_id, state, execution_date=None, start_date=None, external_trigger=False, conf=None, session=None)

Creates a dag run from this dag including the tasks associated with this dag. Returns the dag run.

Parameters
  • run_id (str) – defines the run id for this dag run

  • execution_date (datetime.datetime) – the execution date of this dag run

  • state (airflow.utils.state.State) – the state of the dag run

  • start_date (datetime) – the date this dag run should be evaluated

  • external_trigger (bool) – whether this dag run is externally triggered

  • session (sqlalchemy.orm.session.Session) – database session

sync_to_db(self, owner=None, sync_time=None, session=None)

Save attributes about this DAG to the DB. Note that this method can be called for both DAGs and SubDAGs. A SubDag is actually a SubDagOperator.

Parameters
  • dag (airflow.models.DAG) – the DAG object to save to the DB

  • sync_time (datetime) – The time that the DAG should be marked as sync’ed

Returns

None

static deactivate_unknown_dags(active_dag_ids, session=None)

Given a list of known DAGs, deactivate any other DAGs that are marked as active in the ORM

Parameters

active_dag_ids (list[unicode]) – list of DAG IDs that are active

Returns

None

static deactivate_stale_dags(expiration_date, session=None)

Deactivate any DAGs that were last touched by the scheduler before the expiration date. These DAGs were likely deleted.

Parameters

expiration_date (datetime) – set inactive DAGs that were touched before this time

Returns

None

static get_num_task_instances(dag_id, task_ids=None, states=None, session=None)

Returns the number of task instances in the given DAG.

Parameters
  • session – ORM session

  • dag_id (unicode) – ID of the DAG to get the task concurrency of

  • task_ids (list[unicode]) – A list of valid task IDs for the given DAG

  • states (list[state]) – A list of states to filter by if supplied

Returns

The number of running tasks

Return type

int

test_cycle(self)

Check to see if there are any cycles in the DAG. Returns False if no cycle found, otherwise raises exception.

_test_cycle_helper(self, visit_map, task_id)

Checks if a cycle exists from the input task using DFS traversal

class airflow.models.DagModel[source]

Bases: airflow.models.base.Base

__tablename__ = dag

These items are stored in the database for state related information

dag_id
is_paused_at_creation
is_paused
is_subdag
is_active
last_scheduler_run
last_pickled
last_expired
scheduler_lock
pickle_id
fileloc
owners
description
default_view
schedule_interval
timezone
safe_dag_id
__repr__(self)
static get_dagmodel(dag_id, session=None)
classmethod get_current(cls, dag_id, session=None)
get_default_view(self)
get_last_dagrun(self, session=None, include_externally_triggered=False)
get_dag(self)
create_dagrun(self, run_id, state, execution_date, start_date=None, external_trigger=False, conf=None, session=None)

Creates a dag run from this dag including the tasks associated with this dag. Returns the dag run.

Parameters
  • run_id (str) – defines the run id for this dag run

  • execution_date (datetime.datetime) – the execution date of this dag run

  • state (airflow.utils.state.State) – the state of the dag run

  • start_date (datetime.datetime) – the date this dag run should be evaluated

  • external_trigger (bool) – whether this dag run is externally triggered

  • session (sqlalchemy.orm.session.Session) – database session

set_is_paused(self, is_paused, including_subdags=True, session=None)

Pause/Un-pause a DAG.

Parameters
  • is_paused – Is the DAG paused

  • including_subdags – whether to include the DAG’s subdags

  • session – session

class airflow.models.DagBag(dag_folder=None, executor=None, include_examples=conf.getboolean('core', 'LOAD_EXAMPLES'), safe_mode=conf.getboolean('core', 'DAG_DISCOVERY_SAFE_MODE'))[source]

Bases: airflow.dag.base_dag.BaseDagBag, airflow.utils.log.logging_mixin.LoggingMixin

A dagbag is a collection of dags, parsed out of a folder tree, that has high level configuration settings, like what database to use as a backend and what executor to use to fire off tasks. This makes it easier to run distinct environments for say production and development, tests, or for different teams or security profiles. What would have been system level settings are now dagbag level so that one system can run multiple, independent settings sets.

Parameters
  • dag_folder (unicode) – the folder to scan to find DAGs

  • executor – the executor to use when executing task instances in this DagBag

  • include_examples (bool) – whether to include the examples that ship with airflow or not

  • has_logged – an instance boolean that gets flipped from False to True after a file has been skipped. This is to prevent overloading the user with logging messages about skipped files. Therefore a file is only logged as skipped once per DagBag.
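
A hedged sketch of loading and inspecting a DagBag (the folder path and dag id are illustrative):

    from airflow.models import DagBag

    dagbag = DagBag(dag_folder='/path/to/dags', include_examples=False)
    if dagbag.import_errors:
        print(dagbag.import_errors)  # maps file path -> import stacktrace
    dag = dagbag.get_dag('example_dag')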

CYCLE_NEW = 0
CYCLE_IN_PROGRESS = 1
CYCLE_DONE = 2
DAGBAG_IMPORT_TIMEOUT
UNIT_TEST_MODE
SCHEDULER_ZOMBIE_TASK_THRESHOLD
dag_ids
size(self)

Returns

the number of dags contained in this dagbag

get_dag(self, dag_id)

Gets the DAG out of the dictionary, and refreshes it if expired

process_file(self, filepath, only_if_updated=True, safe_mode=True)

Given a path to a python module or zip file, this method imports the module and looks for dag objects within it.

kill_zombies(self, zombies, session=None)

Fail given zombie tasks, which are tasks that haven’t had a heartbeat for too long, in the current DagBag.

Parameters
  • zombies (airflow.utils.dag_processing.SimpleTaskInstance) – zombie task instances to kill.

  • session (sqlalchemy.orm.session.Session) – DB session.

bag_dag(self, dag, parent_dag, root_dag)

Adds the DAG into the bag, recurses into sub dags. Throws AirflowDagCycleException if a cycle is detected in this dag or its subdags

collect_dags(self, dag_folder=None, only_if_updated=True, include_examples=conf.getboolean('core', 'LOAD_EXAMPLES'), safe_mode=conf.getboolean('core', 'DAG_DISCOVERY_SAFE_MODE'))

Given a file path or a folder, this method looks for python modules, imports them and adds them to the dagbag collection.

Note that if a .airflowignore file is found while processing the directory, it will behave much like a .gitignore, ignoring files that match any of the regex patterns specified in the file.

Note: The patterns in .airflowignore are treated as un-anchored regexes, not shell-like glob patterns.

dagbag_report(self)

Prints a report around DagBag loading stats

class airflow.models.DagPickle(dag)[source]

Bases: airflow.models.base.Base

Dags can originate from different places (user repos, master repo, …) and also get executed in different places (different executors). This object represents a version of a DAG and becomes a source of truth for a BackfillJob execution. A pickle is a native python serialized object, and in this case gets stored in the database for the duration of the job.

The executors pick up the DagPickle id and read the dag definition from the database.

id
pickle
created_dttm
pickle_hash
__tablename__ = dag_pickle
class airflow.models.DagRun[source]

Bases: airflow.models.base.Base, airflow.utils.log.logging_mixin.LoggingMixin

DagRun describes an instance of a Dag. It can be created by the scheduler (for regular runs) or by an external trigger

__tablename__ = dag_run
ID_PREFIX = scheduled__
ID_FORMAT_PREFIX
id
dag_id
execution_date
start_date
end_date
_state
run_id
external_trigger
conf
dag
__table_args__
state
is_backfill
__repr__(self)
get_state(self)
set_state(self, state)
classmethod id_for_date(cls, date, prefix=ID_FORMAT_PREFIX)
refresh_from_db(self, session=None)

Reloads the current dagrun from the database

Parameters

session – database session

static find(dag_id=None, run_id=None, execution_date=None, state=None, external_trigger=None, no_backfills=False, session=None)

Returns a set of dag runs for the given search criteria.

Parameters
  • dag_id (int, list) – the dag_id to find dag runs for

  • run_id (str) – defines the run id for this dag run

  • execution_date (datetime.datetime) – the execution date

  • state (str) – the state of the dag run

  • external_trigger (bool) – whether this dag run is externally triggered

  • no_backfills (bool) – return no backfills (True), return all (False). Defaults to False

  • session (sqlalchemy.orm.session.Session) – database session

get_task_instances(self, state=None, session=None)

Returns the task instances for this dag run

get_task_instance(self, task_id, session=None)

Returns the task instance specified by task_id for this dag run

Parameters

task_id – the task id

get_dag(self)

Returns the Dag associated with this DagRun.

Returns

DAG

get_previous_dagrun(self, state=None, session=None)

The previous DagRun, if there is one

get_previous_scheduled_dagrun(self, session=None)

The previous, SCHEDULED DagRun, if there is one

update_state(self, session=None)

Determines the overall state of the DagRun based on the state of its TaskInstances.

Returns

State

_emit_duration_stats_for_finished_state(self)
verify_integrity(self, session=None)

Verifies the DagRun by checking for removed tasks or tasks that are not in the database yet. It will set state to removed or add the task if required.

static get_run(session, dag_id, execution_date)
Parameters
  • dag_id (unicode) – DAG ID

  • execution_date (datetime) – execution date

Returns

DagRun corresponding to the given dag_id and execution date if one exists. None otherwise.

Return type

airflow.models.DagRun

classmethod get_latest_runs(cls, session)

Returns the latest DagRun for each DAG.

class airflow.models.ImportError[source]

Bases: airflow.models.base.Base

__tablename__ = import_error
id
timestamp
filename
stacktrace
class airflow.models.KubeWorkerIdentifier[source]

Bases: airflow.models.base.Base

__tablename__ = kube_worker_uuid
one_row_id
worker_uuid
static get_or_create_current_kube_worker_uuid(session=None)
static checkpoint_kube_worker_uuid(worker_uuid, session=None)
class airflow.models.KubeResourceVersion[source]

Bases: airflow.models.base.Base

__tablename__ = kube_resource_version
one_row_id
resource_version
static get_current_resource_version(session=None)
static checkpoint_resource_version(resource_version, session=None)
static reset_resource_version(session=None)
class airflow.models.Log(event, task_instance, owner=None, extra=None, **kwargs)[source]

Bases: airflow.models.base.Base

Used to actively log events to the database

__tablename__ = log
id
dttm
dag_id
task_id
event
execution_date
owner
extra
__table_args__
class airflow.models.Pool[source]

Bases: airflow.models.base.Base

__tablename__ = slot_pool
id
pool
slots
description
DEFAULT_POOL_NAME = default_pool
__repr__(self)
static get_pool(pool_name, session=None)
static get_default_pool(session=None)
to_json(self)
occupied_slots(self, session)

Returns the number of slots used by running/queued tasks at the moment.

used_slots(self, session)

Returns the number of slots used by running tasks at the moment.

queued_slots(self, session)

Returns the number of slots used by queued tasks at the moment.

open_slots(self, session)

Returns the number of slots open at the moment
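
A hedged sketch of inspecting pool capacity inside a provided session (the pool name is illustrative):

    from airflow.models import Pool
    from airflow.utils.db import provide_session

    @provide_session
    def report_open_slots(pool_name, session=None):
        pool = Pool.get_pool(pool_name, session=session)
        return pool.open_slots(session=session)

    print(report_open_slots('default_pool'))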

class airflow.models.TaskFail(task, execution_date, start_date, end_date)[source]

Bases: airflow.models.base.Base

TaskFail tracks the failed run durations of each task instance.

__tablename__ = task_fail
id
task_id
dag_id
execution_date
start_date
end_date
duration
__table_args__
class airflow.models.SkipMixin[source]

Bases: airflow.utils.log.logging_mixin.LoggingMixin

skip(self, dag_run, execution_date, tasks, session=None)

Sets task instances from the same dag run to skipped.

Parameters
  • dag_run – the DagRun for which to set the tasks to skipped

  • execution_date – execution_date

  • tasks – tasks to skip (not task_ids)

  • session – db session to use

skip_all_except(self, ti, branch_task_ids)

This method implements the logic for a branching operator; given a single task ID or list of task IDs to follow, this skips all other tasks immediately downstream of this operator.
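
This is the mechanism behind branching operators such as BranchPythonOperator; a hedged sketch of how a branching operator’s execute() might use it (choose_branch is a hypothetical helper):

    def execute(self, context):
        # a task_id, or list of task_ids, to follow (hypothetical helper)
        branch = self.choose_branch(context)
        self.skip_all_except(context['ti'], branch)
        return branch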

class airflow.models.SlaMiss[source]

Bases: airflow.models.base.Base

Model that stores a history of the SLA that have been missed. It is used to keep track of SLA failures over time and to avoid double triggering alert emails.

__tablename__ = sla_miss
task_id
dag_id
execution_date
email_sent
timestamp
description
notification_sent
__table_args__
__repr__(self)
airflow.models.clear_task_instances(tis, session, activate_dag_runs=True, dag=None)[source]
Clears a set of task instances, but makes sure the running ones get killed.

Parameters
  • tis – a list of task instances

  • session – current session

  • activate_dag_runs – flag to check for active dag run

  • dag – DAG object

class airflow.models.TaskInstance(task, execution_date, state=None)[source]

Bases: airflow.models.base.Base, airflow.utils.log.logging_mixin.LoggingMixin

Task instances store the state of a task instance. This table is the authority and single source of truth around what tasks have run and the state they are in.

The SqlAlchemy model doesn’t have a SqlAlchemy foreign key to the task or dag model deliberately to have more control over transactions.

Database transactions on this table should guard against double triggers and any confusion around what task instances are or aren’t ready to run, even while multiple schedulers may be firing task instances.

__tablename__ = task_instance
task_id
dag_id
execution_date
start_date
end_date
duration
state
_try_number
max_tries
hostname
unixname
job_id
pool
queue
priority_weight
operator
queued_dttm
pid
executor_config
__table_args__
try_number

Return the try number that this task instance will be on when it is actually run.

If the TI is currently running, this will match the column in the database; in all other cases this will be incremented.

next_try_number
log_filepath
log_url
mark_success_url
key

Returns a tuple that identifies the task instance uniquely

is_premature

Returns whether a task is in UP_FOR_RETRY state and its retry interval has elapsed.

previous_ti

The task instance for the task that ran before this task instance.

previous_ti_success

The ti from the prior successful dag run for this task, by execution date.

previous_execution_date_success

The execution date from property previous_ti_success.

previous_start_date_success

The start date from property previous_ti_success.

init_on_load(self)

Initialize the attributes that aren’t stored in the DB.

command(self, mark_success=False, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, local=False, pickle_id=None, raw=False, job_id=None, pool=None, cfg_path=None)

Returns a command that can be executed anywhere where airflow is installed. This command is part of the message sent to executors by the orchestrator.

command_as_list(self, mark_success=False, ignore_all_deps=False, ignore_task_deps=False, ignore_depends_on_past=False, ignore_ti_state=False, local=False, pickle_id=None, raw=False, job_id=None, pool=None, cfg_path=None)

Returns a command that can be executed anywhere where airflow is installed. This command is part of the message sent to executors by the orchestrator.

static generate_command(dag_id, task_id, execution_date, mark_success=False, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, local=False, pickle_id=None, file_path=None, raw=False, job_id=None, pool=None, cfg_path=None)

Generates the shell command required to execute this task instance.

Parameters
  • dag_id (unicode) – DAG ID

  • task_id (unicode) – Task ID

  • execution_date (datetime.datetime) – Execution date for the task

  • mark_success (bool) – Whether to mark the task as successful

  • ignore_all_deps (bool) – Ignore all ignorable dependencies. Overrides the other ignore_* parameters.

  • ignore_depends_on_past (bool) – Ignore depends_on_past parameter of DAGs (e.g. for Backfills)

  • ignore_task_deps (bool) – Ignore task-specific dependencies such as depends_on_past and trigger rule

  • ignore_ti_state (bool) – Ignore the task instance’s previous failure/success

  • local (bool) – Whether to run the task locally

  • pickle_id (unicode) – If the DAG was serialized to the DB, the ID associated with the pickled DAG

  • file_path – path to the file containing the DAG definition

  • raw – raw mode (needs more details)

  • job_id – job ID (needs more details)

  • pool (unicode) – the Airflow pool that the task should run in

  • cfg_path (basestring) – the Path to the configuration file

Returns

shell command that can be used to run the task instance

current_state(self, session=None)

Get the very latest state from the database. If a session is passed, we use it and looking up the state becomes part of the session; otherwise a new session is used.

error(self, session=None)

Forces the task instance’s state to FAILED in the database.

refresh_from_db(self, session=None, lock_for_update=False, refresh_executor_config=False)

Refreshes the task instance from the database based on the primary key

Parameters
  • refresh_executor_config – if True, revert executor config to result from DB. Often, however, we will want to keep the newest version

  • lock_for_update – if True, indicates that the database should lock the TaskInstance (issuing a FOR UPDATE clause) until the session is committed.

clear_xcom_data(self, session=None)

Clears all XCom data from the database for the task instance

set_state(self, state, session=None, commit=True)
are_dependents_done(self, session=None)

Checks whether the dependents of this task instance have all succeeded. This is meant to be used by wait_for_downstream.

This is useful when you do not want to start processing the next schedule of a task until the dependents are done. For instance, if the task DROPs and recreates a table.

_get_previous_ti(self, state=None, session=None)
are_dependencies_met(self, dep_context=None, session=None, verbose=False)

Returns whether or not all the conditions are met for this task instance to be run given the context for the dependencies (e.g. a task instance being force run from the UI will ignore some dependencies).

Parameters
  • dep_context (DepContext) – The execution context that determines the dependencies that should be evaluated.

  • session (sqlalchemy.orm.session.Session) – database session

  • verbose (bool) – whether log details on failed dependencies on info or debug log level

get_failed_dep_statuses(self, dep_context=None, session=None)
__repr__(self)
next_retry_datetime(self)

Get datetime of the next retry if the task instance fails. For exponential backoff, retry_delay is used as base and will be converted to seconds.

ready_for_retry(self)

Checks on whether the task instance is in the right state and timeframe to be retried.

pool_full(self, session)

Returns a boolean as to whether the slot pool has room for this task to run

get_dagrun(self, session)

Returns the DagRun for this TaskInstance

Parameters

session

Returns

DagRun

_check_and_change_state_before_execution(self, verbose=True, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, mark_success=False, test_mode=False, job_id=None, pool=None, session=None)

Checks dependencies and then sets state to RUNNING if they are met. Returns True if and only if state is set to RUNNING, which implies that task should be executed, in preparation for _run_raw_task

Parameters
  • verbose (bool) – whether to turn on more verbose logging

  • ignore_all_deps (bool) – Ignore all of the non-critical dependencies, just runs

  • ignore_depends_on_past (bool) – Ignore depends_on_past DAG attribute

  • ignore_task_deps (bool) – Don’t check the dependencies of this TI’s task

  • ignore_ti_state (bool) – Disregards previous task instance state

  • mark_success (bool) – Don’t run the task, mark its state as success

  • test_mode (bool) – Doesn’t record success or failure in the DB

  • pool (str) – specifies the pool to use to run the task instance

Returns

whether the state was changed to running or not

Return type

bool

_run_raw_task(self, mark_success=False, test_mode=False, job_id=None, pool=None, session=None)

Immediately runs the task (without checking or changing db state before execution) and then sets the appropriate final state after completion and runs any post-execute callbacks. Meant to be called only after another function changes the state to running.

Parameters
  • mark_success (bool) – Don’t run the task, mark its state as success

  • test_mode (bool) – Doesn’t record success or failure in the DB

  • pool (str) – specifies the pool to use to run the task instance

run(self, verbose=True, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, mark_success=False, test_mode=False, job_id=None, pool=None, session=None)
dry_run(self)
_handle_reschedule(self, actual_start_date, reschedule_exception, test_mode=False, context=None, session=None)
handle_failure(self, error, test_mode=False, context=None, session=None)
is_eligible_to_retry(self)

Whether the task instance is eligible for retry.

get_template_context(self, session=None)
overwrite_params_with_dag_run_conf(self, params, dag_run)
render_templates(self, context=None)

Render templates in the operator fields.

email_alert(self, exception)
set_duration(self)
xcom_push(self, key, value, execution_date=None)

Make an XCom available for tasks to pull.

Parameters
  • key (str) – A key for the XCom

  • value (any pickleable object) – A value for the XCom. The value is pickled and stored in the database.

  • execution_date (datetime) – if provided, the XCom will not be visible until this date. This can be used, for example, to send a message to a task on a future date without it being immediately visible.

xcom_pull(self, task_ids=None, dag_id=None, key=XCOM_RETURN_KEY, include_prior_dates=False)

Pull XComs that optionally meet certain criteria.

The default value for key limits the search to XComs that were returned by other tasks (as opposed to those that were pushed manually). To remove this filter, pass key=None (or any desired value).

If a single task_id string is provided, the result is the value of the most recent matching XCom from that task_id. If multiple task_ids are provided, a tuple of matching values is returned. None is returned whenever no matches are found.

Parameters
  • key (str) – A key for the XCom. If provided, only XComs with matching keys will be returned. The default key is ‘return_value’, also available as a constant XCOM_RETURN_KEY. This key is automatically given to XComs returned by tasks (as opposed to being pushed manually). To remove the filter, pass key=None.

  • task_ids (str or iterable of strings (representing task_ids)) – Only XComs from tasks with matching ids will be pulled. Can pass None to remove the filter.

  • dag_id (str) – If provided, only pulls XComs from this DAG. If None (default), the DAG of the calling task is used.

  • include_prior_dates (bool) – If False, only XComs from the current execution_date are returned. If True, XComs from previous dates are returned as well.
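
A hedged sketch of pushing and pulling XComs from inside an operator’s execute() (task ids and keys are illustrative):

    def execute(self, context):
        ti = context['ti']
        ti.xcom_push(key='row_count', value=42)
        # the 'return_value' of the upstream 'extract' task:
        extracted = ti.xcom_pull(task_ids='extract')
        # a tuple of values, one per task id:
        counts = ti.xcom_pull(task_ids=['extract', 'transform'],
                              key='row_count')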

get_num_running_task_instances(self, session)
init_run_context(self, raw=False)

Sets the log context.

class airflow.models.TaskReschedule(task, execution_date, try_number, start_date, end_date, reschedule_date)[source]

Bases: airflow.models.base.Base

TaskReschedule tracks rescheduled task instances.

__tablename__ = task_reschedule
id
task_id
dag_id
execution_date
try_number
start_date
end_date
duration
reschedule_date
__table_args__
static find_for_task_instance(task_instance, session)

Returns all task reschedules for the task instance and try number, in ascending order.

Parameters

task_instance (airflow.models.TaskInstance) – the task instance to find task reschedules for

class airflow.models.Variable[source]

Bases: airflow.models.base.Base, airflow.utils.log.logging_mixin.LoggingMixin

__tablename__ = variable
__NO_DEFAULT_SENTINEL
id
key
_val
is_encrypted
val
__repr__(self)
get_val(self)
set_val(self, value)
classmethod setdefault(cls, key, default, deserialize_json=False)

Like a Python builtin dict object, setdefault returns the current value for a key, and if it isn’t there, stores the default value and returns it.

Parameters
  • key (str) – Dict key for this Variable

  • default (Mixed) – Default value to set and return if the variable isn’t already in the DB

  • deserialize_json – Store this as a JSON encoded value in the DB and un-encode it when retrieving a value

Returns

Mixed

classmethod get(cls, key, default_var=__NO_DEFAULT_SENTINEL, deserialize_json=False, session=None)
classmethod set(cls, key, value, serialize_json=False, session=None)
classmethod delete(cls, key, session=None)
rotate_fernet_key(self)
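
A hedged sketch of typical Variable usage (keys and values are illustrative):

    from airflow.models import Variable

    Variable.set('foo', 'bar')
    foo = Variable.get('foo', default_var='fallback')
    # stored as JSON; deserialized on retrieval:
    cfg = Variable.get('job_config', default_var={}, deserialize_json=True)
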
class airflow.models.XCom[source]

Bases: airflow.models.base.Base, airflow.utils.log.logging_mixin.LoggingMixin

Base class for XCom objects.

__tablename__ = xcom
id
key
value
timestamp
execution_date
task_id
dag_id
__table_args__

TODO: “pickling” has been deprecated and JSON is preferred. “pickling” will be removed in Airflow 2.0.

init_on_load(self)
__repr__(self)
classmethod set(cls, key, value, execution_date, task_id, dag_id, session=None)

Store an XCom value.

Returns

None

classmethod get_one(cls, execution_date, key=None, task_id=None, dag_id=None, include_prior_dates=False, session=None)

Retrieve an XCom value, optionally meeting certain criteria. TODO: “pickling” has been deprecated and JSON is preferred. “pickling” will be removed in Airflow 2.0.

Returns

XCom value

classmethod get_many(cls, execution_date, key=None, task_ids=None, dag_ids=None, include_prior_dates=False, limit=100, session=None)

Retrieve an XCom value, optionally meeting certain criteria TODO: “pickling” has been deprecated and JSON is preferred. “pickling” will be removed in Airflow 2.0.

classmethod delete(cls, xcoms, session=None)
static serialize_value(value)
airflow.models.XCOM_RETURN_KEY = return_value[source]
class airflow.models.KnownEvent[source]

Bases: airflow.models.base.Base

__tablename__ = known_event
id
label
start_date
end_date
user_id
known_event_type_id
reported_by
event_type
description
__repr__(self)
class airflow.models.KnownEventType[source]

Bases: airflow.models.base.Base

__tablename__ = known_event_type
id
know_event_type
__repr__(self)
class airflow.models.User[source]

Bases: airflow.models.base.Base

__tablename__ = users
id
username
email
superuser
__repr__(self)
get_id(self)
is_superuser(self)
class airflow.models.Chart[source]

Bases: airflow.models.base.Base

__tablename__ = chart
id
label
conn_id
user_id
chart_type
sql_layout
sql
y_log_scale
show_datatable
show_sql
height
default_params
owner
x_is_date
iteration_no
last_modified
__repr__(self)
