airflow.models

Submodules

Package Contents

airflow.models.GetDefaultExecutor()
    Creates a new instance of the configured executor if none exists, and returns it.
class airflow.models.LocalExecutor
    Bases: airflow.executors.base_executor.BaseExecutor

    LocalExecutor executes tasks locally in parallel. It uses the multiprocessing Python library and queues to parallelize the execution of tasks.

    class _UnlimitedParallelism(executor)
        Bases: object

        Implements LocalExecutor with unlimited parallelism, starting one process per command to execute.

        start(self)
        execute_async(self, key, command)
        sync(self)
        end(self)
    class _LimitedParallelism(executor)
        Bases: object

        Implements LocalExecutor with limited parallelism, using a task queue to coordinate work distribution.

        start(self)
        execute_async(self, key, command)
        sync(self)
        end(self)

    start(self)
    execute_async(self, key, command, queue=None, executor_config=None)
    sync(self)
    end(self)
exception airflow.models.AirflowDagCycleException
    Bases: airflow.exceptions.AirflowException

exception airflow.models.AirflowException
    Bases: Exception

    Base class for all of Airflow's errors. Each custom exception should be derived from this class.

    status_code = 500
exception airflow.models.AirflowRescheduleException(reschedule_date)
    Bases: airflow.exceptions.AirflowException

    Raise when the task should be re-scheduled at a later time.

    Parameters:
        reschedule_date – the date when the task should be rescheduled
class airflow.models.BaseDag
    Bases: object

    Base DAG object that both the SimpleDag and DAG inherit from.

    __metaclass__

    dag_id
        Returns: the DAG ID
        Return type: unicode

    task_ids
        Returns: a list of task IDs that are in this DAG
        Return type: list[unicode]

    full_filepath
        Returns: the absolute path to the file that contains this DAG's definition
        Return type: unicode

    concurrency(self)
        Returns: the maximum number of tasks that can run simultaneously from this DAG
        Return type: int

    pickle_id(self)
        Returns: the pickle ID for this DAG, if it has one; otherwise None
        Return type: unicode
class airflow.models.BaseDagBag
    Bases: object

    Base object that both the SimpleDagBag and DagBag inherit from.

    dag_ids
        Returns: a list of DAG IDs in this bag
        Return type: list[unicode]

    get_dag(self, dag_id)
        Returns: the DAG with the given dag_id if it exists in this bag
        Return type: airflow.dag.base_dag.BaseDag
airflow.models.apply_lineage(func)
    Saves the lineage to XCom and, if configured to do so, sends it to the backend.

airflow.models.prepare_lineage(func)
    Prepares the lineage inlets and outlets. Inlets can be:

    - "auto" – picks up any outlets from direct upstream tasks that have outlets defined, so that if A -> B -> C and B has no outlets but A does, A's outlets are provided as inlets
    - "list of task_ids" – picks up outlets from the upstream task_ids
    - "list of datasets" – a manually defined list of DataSet
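    A minimal sketch of declaring inlets and outlets on tasks (assuming the Airflow 1.10 lineage API; the file path, task ids, and the dag variable are illustrative):

        from airflow.lineage.datasets import File
        from airflow.operators.dummy_operator import DummyOperator

        outfile = File('/data/out.csv')  # hypothetical dataset
        extract = DummyOperator(task_id='extract', dag=dag,
                                outlets={'datasets': [outfile]})
        # 'auto' picks up the outlets of direct upstream tasks as inlets
        load = DummyOperator(task_id='load', dag=dag, inlets={'auto': True})
        extract >> load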
class airflow.models.DagPickle(dag)
    Bases: airflow.models.base.Base

    Dags can originate from different places (user repos, master repo, ...) and also get executed in different places (different executors). This object represents a version of a DAG and becomes a source of truth for a BackfillJob execution. A pickle is a native python serialized object, and in this case gets stored in the database for the duration of the job.

    The executors pick up the DagPickle id and read the dag definition from the database.

    id
    pickle
    created_dttm
    pickle_hash

    __tablename__ = dag_pickle
class airflow.models.ImportError
    Bases: airflow.models.base.Base

    __tablename__ = import_error

    id
    timestamp
    filename
    stacktrace
class airflow.models.SlaMiss
    Bases: airflow.models.base.Base

    Model that stores a history of the SLAs that have been missed. It is used to keep track of SLA failures over time and to avoid double triggering alert emails.

    __tablename__ = sla_miss

    task_id
    dag_id
    execution_date
    email_sent
    timestamp
    description
    notification_sent
    __table_args__

    __repr__(self)
class airflow.models.KubeWorkerIdentifier
    Bases: airflow.models.base.Base

    __tablename__ = kube_worker_uuid

    one_row_id
    worker_uuid

    static get_or_create_current_kube_worker_uuid(session=None)
    static checkpoint_kube_worker_uuid(worker_uuid, session=None)
class airflow.models.KubeResourceVersion
    Bases: airflow.models.base.Base

    __tablename__ = kube_resource_version

    one_row_id
    resource_version

    static get_current_resource_version(session=None)
    static checkpoint_resource_version(resource_version, session=None)
    static reset_resource_version(session=None)
class airflow.models.Log(event, task_instance, owner=None, extra=None, **kwargs)
    Bases: airflow.models.base.Base

    Used to actively log events to the database.

    __tablename__ = log

    id
    dttm
    dag_id
    task_id
    event
    execution_date
    owner
    extra
    __table_args__
class airflow.models.TaskFail(task, execution_date, start_date, end_date)
    Bases: airflow.models.base.Base

    TaskFail tracks the failed run durations of each task instance.

    __tablename__ = task_fail

    id
    task_id
    dag_id
    execution_date
    start_date
    end_date
    duration
    __table_args__
class airflow.models.TaskReschedule(task, execution_date, try_number, start_date, end_date, reschedule_date)
    Bases: airflow.models.base.Base

    TaskReschedule tracks rescheduled task instances.

    __tablename__ = task_reschedule

    id
    task_id
    dag_id
    execution_date
    try_number
    start_date
    end_date
    duration
    reschedule_date
    __table_args__

    static find_for_task_instance(task_instance, session)
        Returns all task reschedules for the task instance and try number, in ascending order.

        Parameters:
            task_instance (airflow.models.TaskInstance) – the task instance to find task reschedules for
class airflow.models.NotInRetryPeriodDep
    Bases: airflow.ti_deps.deps.base_ti_dep.BaseTIDep

    NAME = Not In Retry Period
    IGNOREABLE = True
    IS_TASK_DEP = True

    _get_dep_statuses(self, ti, session, dep_context)
class airflow.models.PrevDagrunDep
    Bases: airflow.ti_deps.deps.base_ti_dep.BaseTIDep

    Checks whether the past dagrun is in a state that allows this task instance to run; e.g. whether this task instance's task in the previous dagrun completed, if we are depending on past.

    NAME = Previous Dagrun State
    IGNOREABLE = True
    IS_TASK_DEP = True

    _get_dep_statuses(self, ti, session, dep_context)
class airflow.models.TriggerRuleDep
    Bases: airflow.ti_deps.deps.base_ti_dep.BaseTIDep

    Determines if a task's upstream tasks are in a state that allows a given task instance to run.

    NAME = Trigger Rule
    IGNOREABLE = True
    IS_TASK_DEP = True

    _get_dep_statuses(self, ti, session, dep_context)

    _evaluate_trigger_rule(self, ti, successes, skipped, failed, upstream_failed, done, flag_upstream_failed, session)
        Yields a dependency status that indicates whether the given task instance's trigger rule was met.

        Parameters:
            ti (airflow.models.TaskInstance) – the task instance to evaluate the trigger rule of
            successes (int) – number of successful upstream tasks
            skipped (int) – number of skipped upstream tasks
            failed (int) – number of failed upstream tasks
            upstream_failed (int) – number of upstream_failed upstream tasks
            done (int) – number of completed upstream tasks
            flag_upstream_failed (bool) – this is a hack to generate the upstream_failed state creation while checking to see whether the task instance is runnable. It was the shortest path to add the feature.
            session (sqlalchemy.orm.session.Session) – database session
class airflow.models.DepContext(deps=None, flag_upstream_failed=False, ignore_all_deps=False, ignore_depends_on_past=False, ignore_in_retry_period=False, ignore_in_reschedule_period=False, ignore_task_deps=False, ignore_ti_state=False)
    Bases: object

    A base class for contexts that specifies which dependencies should be evaluated in the context for a task instance to satisfy the requirements of the context. Also stores state related to the context that can be used by dependency classes.

    For example, there could be a SomeRunContext that subclasses this class and has dependencies for:

    - making sure there are slots available on the infrastructure to run the task instance
    - a task-instance's task-specific dependencies are met (e.g. the previous task instance completed successfully)
    - ...

    Parameters:
        deps (set(airflow.ti_deps.deps.base_ti_dep.BaseTIDep)) – the context-specific dependencies that need to be evaluated for a task instance to run in this execution context
        flag_upstream_failed (bool) – this is a hack to generate the upstream_failed state creation while checking to see whether the task instance is runnable. It was the shortest path to add the feature. This is bad since this class should be pure (no side effects).
        ignore_all_deps (bool) – whether or not the context should ignore all ignoreable dependencies; overrides the other ignore_* parameters
        ignore_depends_on_past (bool) – ignore the depends_on_past parameter of DAGs (e.g. for Backfills)
        ignore_in_retry_period (bool) – ignore the retry period for task instances
        ignore_in_reschedule_period (bool) – ignore the reschedule period for task instances
        ignore_task_deps (bool) – ignore task-specific dependencies such as depends_on_past and trigger rule
        ignore_ti_state (bool) – ignore the task instance's previous failure/success
airflow.models.list_py_file_paths(directory, safe_mode=True, include_examples=None)
    Traverse a directory and look for Python files.

    Parameters:
        directory (unicode) – the directory to traverse
        safe_mode – whether to use a heuristic to determine whether a file contains Airflow DAG definitions

    Returns: a list of paths to Python files in the specified directory
    Return type: list[unicode]
airflow.models.utils_date_range(start_date, end_date=None, num=None, delta=None)
    Get a set of dates as a list based on a start date, end date and delta. The delta can be anything that can be added to datetime.datetime, or a cron expression as a str.

    Example:

        date_range(datetime(2016, 1, 1), datetime(2016, 1, 3), delta=timedelta(1))
        [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 1, 2, 0, 0), datetime.datetime(2016, 1, 3, 0, 0)]
        date_range(datetime(2016, 1, 1), datetime(2016, 1, 3), delta='0 0 * * *')
        [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 1, 2, 0, 0), datetime.datetime(2016, 1, 3, 0, 0)]
        date_range(datetime(2016, 1, 1), datetime(2016, 3, 3), delta='0 0 0 * *')
        [datetime.datetime(2016, 1, 1, 0, 0), datetime.datetime(2016, 2, 1, 0, 0), datetime.datetime(2016, 3, 1, 0, 0)]

    Parameters:
        start_date (datetime.datetime) – anchor date to start the series from
        end_date (datetime.datetime) – right boundary for the date range
        num (int) – as an alternative to end_date, you can specify the number of entries you want in the range. This number can be negative; the output will always be sorted regardless.
airflow.models.provide_session(func)
    Function decorator that provides a session if one isn't provided. If you want to reuse a session or run the function as part of a database transaction, pass the session to the function; if not, this wrapper will create one and close it for you.
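    A short sketch of the decorator in use (the helper function below is hypothetical, not part of airflow.models):

        from airflow.models import TaskInstance
        from airflow.utils.db import provide_session
        from airflow.utils.state import State

        @provide_session
        def count_running(dag_id, session=None):
            # If no session is passed in, the wrapper opens one and
            # closes it when the function returns.
            return (session.query(TaskInstance)
                    .filter(TaskInstance.dag_id == dag_id,
                            TaskInstance.state == State.RUNNING)
                    .count())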
airflow.models.send_email(to, subject, html_content, files=None, dryrun=False, cc=None, bcc=None, mime_subtype='mixed', mime_charset='us-ascii', **kwargs)
    Send email using the backend specified in EMAIL_BACKEND.

airflow.models.as_tuple(obj)
    If obj is a container, returns obj as a tuple. Otherwise, returns a tuple containing obj.
airflow.models.is_container(obj)
    Test if an object is a container (iterable) but not a string.

airflow.models.pprinttable(rows)
    Returns a pretty ASCII table from tuples.

    If namedtuples are used, the table will have headers.
class airflow.models.Resources(cpus=configuration.conf.getint('operators', 'default_cpus'), ram=configuration.conf.getint('operators', 'default_ram'), disk=configuration.conf.getint('operators', 'default_disk'), gpus=configuration.conf.getint('operators', 'default_gpus'))
    Bases: object

    The resources required by an operator. Resources that are not specified will use the default values from the airflow config.

    Parameters:
        cpus (long) – the number of cpu cores that are required
        ram (long) – the amount of RAM required
        disk (long) – the amount of disk space required
        gpus (long) – the number of gpu units that are required

    __eq__(self, other)
    __repr__(self)
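    A hedged sketch of requesting explicit resources on a task via the resources dict accepted by BaseOperator (keys mirror the Resources constructor arguments; the task and dag variable are illustrative):

        from airflow.operators.dummy_operator import DummyOperator

        heavy = DummyOperator(
            task_id='resource_heavy',
            resources={'cpus': 4, 'ram': 2048},  # disk and gpus keep config defaults
            dag=dag,  # assumes a DAG object defined elsewhere
        )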
class airflow.models.State
    Bases: object

    Static class with task instance state constants and a color method to avoid hardcoding.

    NONE
    REMOVED = removed
    SCHEDULED = scheduled
    QUEUED = queued
    RUNNING = running
    SUCCESS = success
    SHUTDOWN = shutdown
    FAILED = failed
    UP_FOR_RETRY = up_for_retry
    UP_FOR_RESCHEDULE = up_for_reschedule
    UPSTREAM_FAILED = upstream_failed
    SKIPPED = skipped

    task_states
    dag_states
    state_color

    classmethod color(cls, state)
    classmethod color_fg(cls, state)

    classmethod finished(cls)
        A list of states indicating that a task started and completed a run attempt. Note that the attempt could have resulted in failure or have been interrupted; in any case, it is no longer running.

    classmethod unfinished(cls)
        A list of states indicating that a task either has not completed a run or has not even started.
class airflow.models.UtcDateTime
    Bases: sqlalchemy.types.TypeDecorator

    Almost equivalent to DateTime with the timezone=True option, but it differs from that by:

    - It never silently accepts a naive datetime; it always raises ValueError unless the value is time zone aware.
    - Unlike SQLAlchemy's built-in DateTime, it never returns a naive datetime, but always a time zone aware value, even with SQLite or MySQL.
    - It always returns DateTime in UTC.

    impl

    process_bind_param(self, value, dialect)

    process_result_value(self, value, dialect)
        Processes DateTimes from the DB, making sure the result is always in UTC. Not using timezone.convert_to_utc as that converts to the configured TIMEZONE while the DB might be running with some other setting. We assume UTC datetimes in the database.
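    A minimal sketch of declaring a timezone-aware column with this type, following the pattern Airflow's own models use (the AuditEvent model is hypothetical, not part of Airflow):

        from sqlalchemy import Column, Integer
        from airflow.models.base import Base
        from airflow.utils.sqlalchemy import UtcDateTime

        class AuditEvent(Base):
            __tablename__ = 'audit_event'
            id = Column(Integer, primary_key=True)
            # binding a naive datetime to this column raises ValueError
            occurred_at = Column(UtcDateTime)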
class airflow.models.Interval
    Bases: sqlalchemy.types.TypeDecorator

    impl
    attr_keys

    process_bind_param(self, value, dialect)
    process_result_value(self, value, dialect)
class airflow.models.timeout(seconds=1, error_message='Timeout')
    Bases: airflow.utils.log.logging_mixin.LoggingMixin

    To be used in a with block to time out its content; see the sketch after the method list.

    handle_timeout(self, signum, frame)
    __enter__(self)
    __exit__(self, type, value, traceback)
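    The sketch promised above (it assumes a Unix main thread, since the implementation relies on SIGALRM; slow_call is a hypothetical long-running function):

        from airflow.exceptions import AirflowTaskTimeout
        from airflow.utils.timeout import timeout

        try:
            with timeout(seconds=5, error_message='call took too long'):
                slow_call()  # interrupted if it exceeds 5 seconds
        except AirflowTaskTimeout:
            pass  # handle or log the timeout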
class airflow.models.TriggerRule
    Bases: object

    ALL_SUCCESS = all_success
    ALL_FAILED = all_failed
    ALL_DONE = all_done
    ONE_SUCCESS = one_success
    ONE_FAILED = one_failed
    NONE_FAILED = none_failed
    NONE_SKIPPED = none_skipped
    DUMMY = dummy
    _ALL_TRIGGER_RULES

    classmethod is_valid(cls, trigger_rule)
    classmethod all_triggers(cls)
class airflow.models.WeightRule
    Bases: object

    DOWNSTREAM = downstream
    UPSTREAM = upstream
    ABSOLUTE = absolute
    _ALL_WEIGHT_RULES

    classmethod is_valid(cls, weight_rule)
    classmethod all_weight_rules(cls)
airflow.models.get_hostname()
    Fetch the hostname using the callable from the config, or using socket.getfqdn as a fallback.
class airflow.models.LoggingMixin(context=None)
    Bases: object

    Convenience super-class to have a logger configured with the class name.

    logger
    log

    _set_context(self, context)
class airflow.models.NullFernet
    Bases: object

    A "Null" encryptor class that doesn't encrypt or decrypt, but presents a similar interface to Fernet.

    The purpose of this is to make the rest of the code not have to know the difference, and to only display the message once, not 20 times, when airflow initdb is run.
airflow.models.get_fernet()
    Deferred load of the Fernet key.

    This function could fail either because Cryptography is not installed or because the Fernet key is invalid.

    Returns: Fernet object
    Raises: airflow.exceptions.AirflowException if there's a problem trying to load Fernet
airflow.models.clear_task_instances(tis, session, activate_dag_runs=True, dag=None)
    Clears a set of task instances, but makes sure the running ones get killed.

    Parameters:
        tis – a list of task instances
        session – current session
        activate_dag_runs – flag to check for active dag run
        dag – DAG object
airflow.models.get_last_dagrun(dag_id, session, include_externally_triggered=False)
    Returns the last dag run for a dag, or None if there was none. The last dag run can be any type of run, e.g. scheduled or backfilled. Overridden DagRuns are ignored.
class airflow.models.DagBag(dag_folder=None, executor=None, include_examples=configuration.conf.getboolean('core', 'LOAD_EXAMPLES'), safe_mode=configuration.conf.getboolean('core', 'DAG_DISCOVERY_SAFE_MODE'))
    Bases: airflow.dag.base_dag.BaseDagBag, airflow.utils.log.logging_mixin.LoggingMixin

    A dagbag is a collection of dags, parsed out of a folder tree, with high-level configuration settings such as which database to use as a backend and which executor to use to fire off tasks. This makes it easier to run distinct environments, say for production and development, for tests, or for different teams or security profiles. What would have been system-level settings are now dagbag-level, so that one system can run multiple, independent sets of settings.

    Parameters:
        dag_folder (unicode) – the folder to scan to find DAGs
        executor – the executor to use when executing task instances in this DagBag
        include_examples (bool) – whether to include the examples that ship with airflow or not
        has_logged – an instance boolean that gets flipped from False to True after a file has been skipped. This is to prevent overloading the user with logging messages about skipped files, so each skipped file is only logged once per DagBag.
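    A short sketch of building a DagBag and looking up a DAG (the folder path and dag id are illustrative):

        from airflow.models import DagBag

        bag = DagBag(dag_folder='/path/to/dags', include_examples=False)
        print(bag.dag_ids)                  # ids of every DAG that parsed
        dag = bag.get_dag('example_daily')  # fetch one DAG by id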
    process_file(self, filepath, only_if_updated=True, safe_mode=True)
        Given a path to a python module or zip file, this method imports the module and looks for dag objects within it.

    kill_zombies(self, zombies, session=None)
        Fail the given zombie tasks, which are tasks that haven't had a heartbeat for too long, in the current DagBag.

        Parameters:
            zombies (airflow.utils.dag_processing.SimpleTaskInstance) – zombie task instances to kill
            session (sqlalchemy.orm.session.Session) – DB session

    bag_dag(self, dag, parent_dag, root_dag)
        Adds the DAG into the bag and recurses into sub dags. Throws AirflowDagCycleException if a cycle is detected in this dag or its subdags.

    collect_dags(self, dag_folder=None, only_if_updated=True, include_examples=configuration.conf.getboolean('core', 'LOAD_EXAMPLES'), safe_mode=configuration.conf.getboolean('core', 'DAG_DISCOVERY_SAFE_MODE'))
        Given a file path or a folder, this method looks for python modules, imports them, and adds them to the dagbag collection.

        Note that if a .airflowignore file is found while processing the directory, it will behave much like a .gitignore, ignoring files that match any of the regex patterns specified in the file.

        Note: the patterns in .airflowignore are treated as un-anchored regexes, not shell-like glob patterns.
class airflow.models.User
    Bases: airflow.models.base.Base
class airflow.models.TaskInstance(task, execution_date, state=None)
    Bases: airflow.models.base.Base, airflow.utils.log.logging_mixin.LoggingMixin

    Task instances store the state of a task instance. This table is the authority and single source of truth around what tasks have run and the state they are in.

    The SqlAlchemy model deliberately doesn't have a SqlAlchemy foreign key to the task or dag model, to give more control over transactions.

    Database transactions on this table should prevent double triggers and any confusion around what task instances are or aren't ready to run, even while multiple schedulers may be firing task instances.

    try_number
        Return the try number that this task number will be when it is actually run.

        If the TI is currently running, this will match the column in the database; in all other cases this will be incremented.

    is_premature
        Returns whether a task is in UP_FOR_RETRY state and its retry interval has elapsed.
    command(self, mark_success=False, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, local=False, pickle_id=None, raw=False, job_id=None, pool=None, cfg_path=None)
        Returns a command that can be executed anywhere airflow is installed. This command is part of the message sent to executors by the orchestrator.

    command_as_list(self, mark_success=False, ignore_all_deps=False, ignore_task_deps=False, ignore_depends_on_past=False, ignore_ti_state=False, local=False, pickle_id=None, raw=False, job_id=None, pool=None, cfg_path=None)
        Returns a command that can be executed anywhere airflow is installed. This command is part of the message sent to executors by the orchestrator.

    static generate_command(dag_id, task_id, execution_date, mark_success=False, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, local=False, pickle_id=None, file_path=None, raw=False, job_id=None, pool=None, cfg_path=None)
        Generates the shell command required to execute this task instance.

        Parameters:
            dag_id (unicode) – DAG ID
            task_id (unicode) – Task ID
            execution_date (datetime) – execution date for the task
            mark_success (bool) – whether to mark the task as successful
            ignore_all_deps (bool) – ignore all ignorable dependencies; overrides the other ignore_* parameters
            ignore_depends_on_past (bool) – ignore the depends_on_past parameter of DAGs (e.g. for Backfills)
            ignore_task_deps (bool) – ignore task-specific dependencies such as depends_on_past and trigger rule
            ignore_ti_state (bool) – ignore the task instance's previous failure/success
            local (bool) – whether to run the task locally
            pickle_id (unicode) – if the DAG was serialized to the DB, the ID associated with the pickled DAG
            file_path – path to the file containing the DAG definition
            raw – raw mode (needs more details)
            job_id – job ID (needs more details)
            pool (unicode) – the Airflow pool that the task should run in
            cfg_path (basestring) – the path to the configuration file

        Returns: shell command that can be used to run the task instance
    current_state(self, session=None)
        Get the very latest state from the database. If a session is passed, we use it and looking up the state becomes part of the session; otherwise a new session is used.

    refresh_from_db(self, session=None, lock_for_update=False)
        Refreshes the task instance from the database based on the primary key.

        Parameters:
            lock_for_update – if True, indicates that the database should lock the TaskInstance (issuing a FOR UPDATE clause) until the session is committed.

    clear_xcom_data(self, session=None)
        Clears all XCom data from the database for the task instance.

    are_dependents_done(self, session=None)
        Checks whether the dependents of this task instance have all succeeded. This is meant to be used by wait_for_downstream.

        This is useful when you do not want to start processing the next schedule of a task until the dependents are done. For instance, if the task DROPs and recreates a table.

    are_dependencies_met(self, dep_context=None, session=None, verbose=False)
        Returns whether or not all the conditions are met for this task instance to be run, given the context for the dependencies (e.g. a task instance being force run from the UI will ignore some dependencies).

        Parameters:
            dep_context (DepContext) – the execution context that determines the dependencies that should be evaluated
            session (sqlalchemy.orm.session.Session) – database session
            verbose (bool) – whether to log details on failed dependencies at info or debug log level

    next_retry_datetime(self)
        Get the datetime of the next retry if the task instance fails. For exponential backoff, retry_delay is used as the base and will be converted to seconds.

    ready_for_retry(self)
        Checks whether the task instance is in the right state and timeframe to be retried.

    pool_full(self, session)
        Returns a boolean as to whether the slot pool has room for this task to run.

    get_dagrun(self, session)
        Returns the DagRun for this TaskInstance.

        Parameters:
            session – database session

        Returns: DagRun
    _check_and_change_state_before_execution(self, verbose=True, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, mark_success=False, test_mode=False, job_id=None, pool=None, session=None)
        Checks dependencies and then sets state to RUNNING if they are met. Returns True if and only if state is set to RUNNING, which implies that the task should be executed, in preparation for _run_raw_task.

        Parameters:
            verbose (bool) – whether to turn on more verbose logging
            ignore_all_deps (bool) – ignore all of the non-critical dependencies, just run
            ignore_depends_on_past (bool) – ignore the depends_on_past DAG attribute
            ignore_task_deps (bool) – don't check the dependencies of this TI's task
            ignore_ti_state (bool) – disregard the previous task instance state
            mark_success (bool) – don't run the task, mark its state as success
            test_mode (bool) – don't record success or failure in the DB
            pool (str) – specifies the pool to use to run the task instance

        Returns: whether the state was changed to running or not
        Return type: bool

    _run_raw_task(self, mark_success=False, test_mode=False, job_id=None, pool=None, session=None)
        Immediately runs the task (without checking or changing db state before execution) and then sets the appropriate final state after completion and runs any post-execute callbacks. Meant to be called only after another function changes the state to running.

    run(self, verbose=True, ignore_all_deps=False, ignore_depends_on_past=False, ignore_task_deps=False, ignore_ti_state=False, mark_success=False, test_mode=False, job_id=None, pool=None, session=None)

    _handle_reschedule(self, actual_start_date, reschedule_exception, test_mode=False, context=None, session=None)
    xcom_push(self, key, value, execution_date=None)
        Make an XCom available for tasks to pull.

        Parameters:
            key (str) – a key for the XCom
            value (any picklable object) – a value for the XCom. The value is pickled and stored in the database.
            execution_date (datetime) – if provided, the XCom will not be visible until this date. This can be used, for example, to send a message to a task on a future date without it being immediately visible.

    xcom_pull(self, task_ids=None, dag_id=None, key=XCOM_RETURN_KEY, include_prior_dates=False)
        Pull XComs that optionally meet certain criteria.

        The default value for key limits the search to XComs that were returned by other tasks (as opposed to those that were pushed manually). To remove this filter, pass key=None (or any desired value).

        If a single task_id string is provided, the result is the value of the most recent matching XCom from that task_id. If multiple task_ids are provided, a tuple of matching values is returned. None is returned whenever no matches are found.

        Parameters:
            key (str) – a key for the XCom. If provided, only XComs with matching keys will be returned. The default key is 'return_value', also available as the constant XCOM_RETURN_KEY. This key is automatically given to XComs returned by tasks (as opposed to being pushed manually). To remove the filter, pass key=None.
            task_ids (str or iterable of strings (representing task_ids)) – only XComs from tasks with matching ids will be pulled. Pass None to remove the filter.
            dag_id (str) – if provided, only pulls XComs from this DAG. If None (default), the DAG of the calling task is used.
            include_prior_dates (bool) – if False, only XComs from the current execution_date are returned. If True, XComs from previous dates are returned as well.
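    A hedged sketch of pushing and pulling XComs from PythonOperator callables (the task ids, key, and dag variable are illustrative; provide_context=True is the Airflow 1.10 way to receive the context):

        from airflow.operators.python_operator import PythonOperator

        def producer(**context):
            context['ti'].xcom_push(key='row_count', value=42)

        def consumer(**context):
            count = context['ti'].xcom_pull(task_ids='produce', key='row_count')
            print('rows: %s' % count)

        produce = PythonOperator(task_id='produce', python_callable=producer,
                                 provide_context=True, dag=dag)
        consume = PythonOperator(task_id='consume', python_callable=consumer,
                                 provide_context=True, dag=dag)
        produce >> consume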
class airflow.models.BaseOperator(task_id, owner=configuration.conf.get('operators', 'DEFAULT_OWNER'), email=None, email_on_retry=True, email_on_failure=True, retries=0, retry_delay=timedelta(seconds=300), retry_exponential_backoff=False, max_retry_delay=None, start_date=None, end_date=None, schedule_interval=None, depends_on_past=False, wait_for_downstream=False, dag=None, params=None, default_args=None, priority_weight=1, weight_rule=WeightRule.DOWNSTREAM, queue=configuration.conf.get('celery', 'default_queue'), pool=None, sla=None, execution_timeout=None, on_failure_callback=None, on_success_callback=None, on_retry_callback=None, trigger_rule=TriggerRule.ALL_SUCCESS, resources=None, run_as_user=None, task_concurrency=None, executor_config=None, inlets=None, outlets=None, *args, **kwargs)
    Bases: airflow.utils.log.logging_mixin.LoggingMixin

    Abstract base class for all operators. Since operators create objects that become nodes in the dag, BaseOperator contains many recursive methods for dag crawling behavior. To derive from this class, you are expected to override the constructor as well as the 'execute' method.

    Operators derived from this class should perform or trigger certain tasks synchronously (wait for completion). Examples of operators are an operator that runs a Pig job (PigOperator), a sensor operator that waits for a partition to land in Hive (HiveSensorOperator), or one that moves data from Hive to MySQL (Hive2MySqlOperator). Instances of these operators (tasks) target specific operations, running specific scripts, functions or data transfers.

    This class is abstract and shouldn't be instantiated. Instantiating a class derived from this one results in the creation of a task object, which ultimately becomes a node in DAG objects. Task dependencies should be set by using the set_upstream and/or set_downstream methods.

    Parameters:
        task_id (str) – a unique, meaningful id for the task
        owner (str) – the owner of the task; using the unix username is recommended
        retries (int) – the number of retries that should be performed before failing the task
        retry_delay (datetime.timedelta) – delay between retries
        retry_exponential_backoff (bool) – allow progressively longer waits between retries by using an exponential backoff algorithm on retry delay (delay will be converted into seconds)
        max_retry_delay (datetime.timedelta) – maximum delay interval between retries
        start_date (datetime.datetime) – the start_date for the task, which determines the execution_date for the first task instance. The best practice is to have the start_date rounded to your DAG's schedule_interval. Daily jobs have their start_date some day at 00:00:00; hourly jobs have their start_date at 00:00 of a specific hour. Note that Airflow simply looks at the latest execution_date and adds the schedule_interval to determine the next execution_date. It is also very important to note that different tasks' dependencies need to line up in time. If task A depends on task B and their start_dates are offset in a way that their execution_dates don't line up, A's dependencies will never be met. If you are looking to delay a task, for example running a daily task at 2AM, look into the TimeSensor and TimeDeltaSensor. We advise against using dynamic start_date and recommend using fixed ones. Read the FAQ entry about start_date for more information.
        end_date (datetime.datetime) – if specified, the scheduler won't go beyond this date
        depends_on_past (bool) – when set to true, task instances will run sequentially while relying on the previous task's schedule to succeed. The task instance for the start_date is allowed to run.
        wait_for_downstream (bool) – when set to true, an instance of task X will wait for tasks immediately downstream of the previous instance of task X to finish successfully before it runs. This is useful if the different instances of a task X alter the same asset, and this asset is used by tasks downstream of task X. Note that depends_on_past is forced to True wherever wait_for_downstream is used.
        queue (str) – which queue to target when running this job. Not all executors implement queue management; the CeleryExecutor does support targeting specific queues.
        dag (airflow.models.DAG) – a reference to the dag the task is attached to (if any)
        priority_weight (int) – priority weight of this task against other tasks. This allows the executor to trigger higher priority tasks before others when things get backed up.
        weight_rule (str) – weighting method used for the effective total priority weight of the task. Options are: { downstream | upstream | absolute }; the default is downstream. When set to downstream, the effective weight of the task is the aggregate sum of all downstream descendants. As a result, upstream tasks will have higher weight and will be scheduled more aggressively when using positive weight values. This is useful when you have multiple dag run instances and want all upstream tasks to complete for all runs before each dag can continue processing downstream tasks. When set to upstream, the effective weight is the aggregate sum of all upstream ancestors. This is the opposite: downstream tasks have higher weight and will be scheduled more aggressively when using positive weight values. This is useful when you have multiple dag run instances and prefer to have each dag complete before starting upstream tasks of other dags. When set to absolute, the effective weight is the exact priority_weight specified without additional weighting. You may want to do this when you know exactly what priority weight each task should have. Additionally, when set to absolute, there is the bonus effect of significantly speeding up the task creation process for very large DAGs. Options can be set as a string or using the constants defined in the static class airflow.utils.WeightRule.
        pool (str) – the slot pool this task should run in; slot pools are a way to limit concurrency for certain tasks
        sla (datetime.timedelta) – time by which the job is expected to succeed. Note that this represents the timedelta after the period is closed. For example, if you set an SLA of 1 hour, the scheduler would send an email soon after 1:00AM on 2016-01-02 if the 2016-01-01 instance has not succeeded yet. The scheduler pays special attention to jobs with an SLA and sends alert emails for SLA misses. SLA misses are also recorded in the database for future reference. All tasks that share the same SLA time get bundled in a single email, sent soon after that time. SLA notifications are sent once and only once for each task instance.
        execution_timeout (datetime.timedelta) – max time allowed for the execution of this task instance; if it goes beyond, it will raise and fail.
        on_failure_callback (callable) – a function to be called when a task instance of this task fails. A context dictionary is passed as a single parameter to this function. The context contains references to related objects of the task instance and is documented under the macros section of the API.
        on_retry_callback (callable) – much like the on_failure_callback except that it is executed when retries occur.
        on_success_callback (callable) – much like the on_failure_callback except that it is executed when the task succeeds.
        trigger_rule (str) – defines the rule by which dependencies are applied for the task to get triggered. Options are: { all_success | all_failed | all_done | one_success | one_failed | none_failed | none_skipped | dummy }; the default is all_success. Options can be set as a string or using the constants defined in the static class airflow.utils.TriggerRule.
        resources (dict) – a map of resource parameter names (the argument names of the Resources constructor) to their values.
        run_as_user (str) – unix username to impersonate while running the task
        task_concurrency (int) – when set, a task will be able to limit the concurrent runs across execution_dates
        executor_config (dict) – additional task-level configuration parameters that are interpreted by a specific executor. Parameters are namespaced by the name of the executor. For example, to run this task in a specific docker container through the KubernetesExecutor:

            MyOperator(...,
                executor_config={
                    "KubernetesExecutor":
                        {"image": "myCustomDockerImage"}
                }
            )
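    As a further illustration of the callback parameters above, a minimal sketch of a failure callback (the callable, task, and dag variable are hypothetical):

        from airflow.operators.python_operator import PythonOperator

        def notify_failure(context):
            # context is the same dictionary documented under the macros section
            ti = context['task_instance']
            print('Task %s.%s failed on %s' %
                  (ti.dag_id, ti.task_id, context['execution_date']))

        fragile = PythonOperator(task_id='fragile',
                                 python_callable=my_work,  # my_work is hypothetical
                                 on_failure_callback=notify_failure, dag=dag)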
    _base_operator_shallow_copy_attrs = ['user_defined_macros', 'user_defined_filters', 'params', '_log']

    deps
        Returns the list of dependencies for the operator. These differ from execution context dependencies in that they are specific to tasks and can be extended/overridden by subclasses.

    schedule_interval
        The schedule interval of the DAG always wins over individual tasks, so that tasks within a DAG always line up. The task still needs a schedule_interval, as it may not be attached to a DAG.

    __rshift__(self, other)
        Implements self >> other == self.set_downstream(other).

        If "other" is a DAG, the DAG is assigned to the operator.

    __lshift__(self, other)
        Implements self << other == self.set_upstream(other).

        If "other" is a DAG, the DAG is assigned to the operator.
    __rrshift__(self, other)
        Called for [DAG] >> [Operator] because DAGs don't have __rshift__ operators.

    __rlshift__(self, other)
        Called for [DAG] << [Operator] because DAGs don't have __lshift__ operators.
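    A short sketch of the bitshift composition these methods enable (extract, transform and load are hypothetical operators attached to the same DAG):

        extract >> transform >> load   # same as chained set_downstream calls
        load << transform << extract   # identical dependencies via set_upstream
        dag >> extract                 # assigns the DAG to the operator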
    execute(self, context)
        This is the main method to derive when creating an operator. Context is the same dictionary used when rendering jinja templates.

        Refer to get_template_context for more context.

    post_execute(self, context, result=None)
        This hook is triggered right after self.execute() is called. It is passed the execution context and any results returned by the operator.

    on_kill(self)
        Override this method to clean up subprocesses when a task instance gets killed. Any use of the threading, subprocess or multiprocessing module within an operator needs to be cleaned up, or it will leave ghost processes behind.

    __deepcopy__(self, memo)
        Hack sorting double chained task lists by task_id to avoid hitting max_depth on deepcopy operations.

    render_template_from_field(self, attr, content, context, jinja_env)
        Renders a template from a field. If the field is a string, it will simply render the string and return the result. If it is a collection or nested set of collections, it will traverse the structure and render all elements in it. If the field has another type, it will be returned as is.

    render_template(self, attr, content, context)
        Renders a template either from a file or directly in a field, and returns the rendered result.

    prepare_template(self)
        Hook that is triggered after the templated fields get replaced by their content. If you need your operator to alter the content of the file before the template is rendered, you should override this method to do so.

    clear(self, start_date=None, end_date=None, upstream=False, downstream=False, session=None)
        Clears the state of task instances associated with the task, following the parameters specified.

    get_task_instances(self, session, start_date=None, end_date=None)
        Get a set of task instances related to this task for a specific date range.

    get_flat_relative_ids(self, upstream=False, found_descendants=None)
        Get a flat list of relatives' ids, either upstream or downstream.

    get_flat_relatives(self, upstream=False)
        Get a flat list of relatives, either upstream or downstream.

    run(self, start_date=None, end_date=None, ignore_first_depends_on_past=False, ignore_ti_state=False, mark_success=False)
        Run a set of task instances for a date range.

    get_direct_relative_ids(self, upstream=False)
        Get the direct relative ids of the current task, upstream or downstream.

    get_direct_relatives(self, upstream=False)
        Get the direct relatives of the current task, upstream or downstream.

    set_downstream(self, task_or_task_list)
        Set a task or a task list to be directly downstream from the current task.
class airflow.models.DagModel
    Bases: airflow.models.base.Base

    create_dagrun(self, run_id, state, execution_date, start_date=None, external_trigger=False, conf=None, session=None)
        Creates a dag run from this dag, including the tasks associated with this dag. Returns the dag run.

        Parameters:
            run_id (str) – defines the run id for this dag run
            execution_date (datetime.datetime) – the execution date of this dag run
            state (airflow.utils.state.State) – the state of the dag run
            start_date (datetime.datetime) – the date this dag run should be evaluated
            external_trigger (bool) – whether this dag run is externally triggered
            session (sqlalchemy.orm.session.Session) – database session
class airflow.models.DAG(dag_id, description='', schedule_interval=timedelta(days=1), start_date=None, end_date=None, full_filepath=None, template_searchpath=None, user_defined_macros=None, user_defined_filters=None, default_args=None, concurrency=configuration.conf.getint('core', 'dag_concurrency'), max_active_runs=configuration.conf.getint('core', 'max_active_runs_per_dag'), dagrun_timeout=None, sla_miss_callback=None, default_view=None, orientation=configuration.conf.get('webserver', 'dag_orientation'), catchup=configuration.conf.getboolean('scheduler', 'catchup_by_default'), on_success_callback=None, on_failure_callback=None, doc_md=None, params=None)
    Bases: airflow.dag.base_dag.BaseDag, airflow.utils.log.logging_mixin.LoggingMixin

    A dag (directed acyclic graph) is a collection of tasks with directional dependencies. A dag also has a schedule, a start date and an end date (optional). For each schedule (say daily or hourly), the DAG needs to run each individual task as its dependencies are met. Certain tasks have the property of depending on their own past, meaning that they can't run until their previous schedule (and upstream tasks) are completed.

    DAGs essentially act as namespaces for tasks. A task_id can only be added once to a DAG.

    Parameters:
        dag_id (str) – the id of the DAG
        description (str) – the description for the DAG, e.g. to be shown on the webserver
        schedule_interval (datetime.timedelta or dateutil.relativedelta.relativedelta or str that acts as a cron expression) – defines how often the DAG runs; this timedelta object gets added to your latest task instance's execution_date to figure out the next schedule
        start_date (datetime.datetime) – the timestamp from which the scheduler will attempt to backfill
        end_date (datetime.datetime) – a date beyond which your DAG won't run; leave as None for open-ended scheduling
        template_searchpath (str or list[str]) – this list of folders (non-relative) defines where jinja will look for your templates. Order matters. Note that jinja/airflow includes the path of your DAG file by default.
        user_defined_macros (dict) – a dictionary of macros that will be exposed in your jinja templates. For example, passing dict(foo='bar') to this argument allows you to use {{ foo }} in all jinja templates related to this DAG. Note that you can pass any type of object here.
        user_defined_filters (dict) – a dictionary of filters that will be exposed in your jinja templates. For example, passing dict(hello=lambda name: 'Hello %s' % name) to this argument allows you to use {{ 'world' | hello }} in all jinja templates related to this DAG.
        default_args (dict) – a dictionary of default parameters to be used as constructor keyword parameters when initialising operators. Note that operators have the same hook, and precede those defined here, meaning that if your dict contains 'depends_on_past': True here and 'depends_on_past': False in the operator's call default_args, the actual value will be False.
        params (dict) – a dictionary of DAG-level parameters that are made accessible in templates, namespaced under params. These params can be overridden at the task level.
        concurrency (int) – the number of task instances allowed to run concurrently
        max_active_runs (int) – maximum number of active DAG runs; beyond this number of DAG runs in a running state, the scheduler won't create new active DAG runs
        dagrun_timeout (datetime.timedelta) – specify how long a DagRun should be up before timing out / failing, so that new DagRuns can be created
        sla_miss_callback (types.FunctionType) – specify a function to call when reporting SLA timeouts
        default_view (str) – specify the DAG default view (tree, graph, duration, gantt, landing_times)
        orientation (str) – specify the DAG orientation in graph view (LR, TB, RL, BT)
        catchup (bool) – perform scheduler catchup (or only run latest)? Defaults to True.
        on_failure_callback (callable) – a function to be called when a DagRun of this dag fails. A context dictionary is passed as a single parameter to this function.
        on_success_callback (callable) – much like the on_failure_callback except that it is executed when the dag succeeds.
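    A minimal sketch of defining a DAG and attaching a task to it (the names, dates and schedule are illustrative):

        from datetime import datetime, timedelta
        from airflow.models import DAG
        from airflow.operators.dummy_operator import DummyOperator

        default_args = {
            'owner': 'airflow',
            'retries': 1,
            'retry_delay': timedelta(minutes=5),  # passed to every operator
        }

        dag = DAG(dag_id='example_daily',
                  default_args=default_args,
                  start_date=datetime(2019, 1, 1),
                  schedule_interval='@daily',
                  catchup=False)

        start = DummyOperator(task_id='start', dag=dag)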
    concurrency_reached
        Returns a boolean indicating whether the concurrency limit for this DAG has been reached.

    is_fixed_time_schedule(self)
        Figures out if the DAG schedule has a fixed time (e.g. 3 AM).

        Returns: True if the schedule has a fixed time, False if not.

    following_schedule(self, dttm)
        Calculates the following schedule for this dag in UTC.

        Parameters:
            dttm – utc datetime

        Returns: utc datetime

    previous_schedule(self, dttm)
        Calculates the previous schedule for this dag in UTC.

        Parameters:
            dttm – utc datetime

        Returns: utc datetime

    get_run_dates(self, start_date, end_date=None)
        Returns a list of dates between the interval received as parameters, using this dag's schedule interval. The returned dates can be used as execution dates.

        Parameters:
            start_date (datetime) – the start date of the interval
            end_date (datetime) – the end date of the interval, defaults to timezone.utcnow()

        Returns: a list of dates within the interval following the dag's schedule
        Return type: list

    normalize_schedule(self, dttm)
        Returns dttm + interval, unless dttm is the first interval, in which case it returns dttm.
    handle_callback(self, dagrun, success=True, reason=None, session=None)
        Triggers the appropriate callback, namely the on_failure_callback or on_success_callback, depending on the value of success. This method gets the context of a single TaskInstance that is part of this DagRun and passes that to the callable along with a 'reason', primarily to differentiate DagRun failures.

        Parameters:
            dagrun – DagRun object
            success – flag to specify whether the failure or success callback should be called
            reason – completion reason
            session – database session

    get_active_runs(self, session=None)
        Returns a list of dag run execution dates currently running.

        Parameters:
            session – database session

        Returns: list of execution dates

    get_num_active_runs(self, external_trigger=None, session=None)
        Returns the number of active "running" dag runs.

        Parameters:
            external_trigger (bool) – True for externally triggered active dag runs
            session – database session

        Returns: number greater than 0 for active dag runs

    get_dagrun(self, execution_date, session=None)
        Returns the dag run for a given execution date if it exists, otherwise None.

        Parameters:
            execution_date – the execution date of the DagRun to find
            session – database session

        Returns: the DagRun if found, otherwise None

    get_template_env(self)
        Returns a jinja2 Environment while taking into account the DAG's template_searchpath, user_defined_macros and user_defined_filters.

    set_dependency(self, upstream_task_id, downstream_task_id)
        Simple utility method to set a dependency between two tasks that have already been added to the DAG using add_task().

    topological_sort(self)
        Sorts tasks in topological order, such that a task comes after any of its upstream dependencies.

        Heavily inspired by: http://blog.jupo.org/2012/04/06/topological-sorting-acyclic-directed-graphs/

        Returns: list of tasks in topological order

    set_dag_runs_state(self, state=State.RUNNING, session=None, start_date=None, end_date=None)
    clear(self, start_date=None, end_date=None, only_failed=False, only_running=False, confirm_prompt=False, include_subdags=True, include_parentdag=True, reset_dag_runs=True, dry_run=False, session=None, get_tis=False)
        Clears a set of task instances associated with the current dag for a specified date range.

    classmethod clear_dags(cls, dags, start_date=None, end_date=None, only_failed=False, only_running=False, confirm_prompt=False, include_subdags=True, include_parentdag=False, reset_dag_runs=True, dry_run=False)

    sub_dag(self, task_regex, include_downstream=False, include_upstream=True)
        Returns a subset of the current dag as a deep copy of the current dag, based on a regex that should match one or many tasks, and includes upstream and downstream neighbours based on the flags passed.

    add_task(self, task)
        Add a task to the DAG.

        Parameters:
            task (task) – the task you want to add

    add_tasks(self, tasks)
        Add a list of tasks to the DAG.

        Parameters:
            tasks (list of tasks) – a list of tasks you want to add
    run(self, start_date=None, end_date=None, mark_success=False, local=False, executor=None, donot_pickle=configuration.conf.getboolean('core', 'donot_pickle'), ignore_task_deps=False, ignore_first_depends_on_past=False, pool=None, delay_on_limit_secs=1.0, verbose=False, conf=None, rerun_failed_tasks=False, run_backwards=False)
        Runs the DAG.

        Parameters:
            start_date (datetime.datetime) – the start date of the range to run
            end_date (datetime.datetime) – the end date of the range to run
            mark_success (bool) – True to mark jobs as succeeded without running them
            local (bool) – True to run the tasks using the LocalExecutor
            executor (airflow.executor.BaseExecutor) – the executor instance to run the tasks
            donot_pickle (bool) – True to avoid pickling the DAG object and sending it to workers
            ignore_task_deps (bool) – True to skip upstream tasks
            ignore_first_depends_on_past (bool) – True to ignore depends_on_past dependencies for the first set of tasks only
            pool (str) – resource pool to use
            delay_on_limit_secs (float) – time in seconds to wait before the next attempt to run a dag run when the max_active_runs limit has been reached
            verbose (bool) – make logging output more verbose
            conf (dict) – user-defined dictionary passed from CLI
            rerun_failed_tasks (bool) – whether to rerun failed tasks
            run_backwards (bool) – whether to run the date range in reverse order
    create_dagrun(self, run_id, state, execution_date=None, start_date=None, external_trigger=False, conf=None, session=None)
        Creates a dag run from this dag, including the tasks associated with this dag. Returns the dag run.

        Parameters:
            run_id (str) – defines the run id for this dag run
            execution_date (datetime.datetime) – the execution date of this dag run
            state (airflow.utils.state.State) – the state of the dag run
            start_date (datetime) – the date this dag run should be evaluated
            external_trigger (bool) – whether this dag run is externally triggered
            session (sqlalchemy.orm.session.Session) – database session

    sync_to_db(self, owner=None, sync_time=None, session=None)
        Save attributes about this DAG to the DB. Note that this method can be called for both DAGs and SubDAGs. A SubDag is actually a SubDagOperator.

        Parameters:
            dag (airflow.models.DAG) – the DAG object to save to the DB
            sync_time (datetime) – the time that the DAG should be marked as synced

        Returns: None
    static deactivate_unknown_dags(active_dag_ids, session=None)
        Given a list of known DAGs, deactivate any other DAGs that are marked as active in the ORM.

        Parameters:
            active_dag_ids (list[unicode]) – list of DAG IDs that are active

        Returns: None

    static deactivate_stale_dags(expiration_date, session=None)
        Deactivate any DAGs that were last touched by the scheduler before the expiration date. These DAGs were likely deleted.

        Parameters:
            expiration_date (datetime) – set inactive DAGs that were touched before this time

        Returns: None

    static get_num_task_instances(dag_id, task_ids, states=None, session=None)
        Returns the number of task instances in the given DAG.
class airflow.models.Chart
    Bases: airflow.models.base.Base

class airflow.models.KnownEventType
    Bases: airflow.models.base.Base

class airflow.models.KnownEvent
    Bases: airflow.models.base.Base
class airflow.models.Variable
    Bases: airflow.models.base.Base, airflow.utils.log.logging_mixin.LoggingMixin

    classmethod setdefault(cls, key, default, deserialize_json=False)
        Like the Python builtin dict object, setdefault returns the current value for a key, and if it isn't there, stores the default value and returns it.

        Parameters:
            key (str) – dict key for this Variable
            default (Mixed) – default value to set and return if the variable isn't already in the DB
            deserialize_json – store this as a JSON-encoded value in the DB and un-encode it when retrieving a value

        Returns: Mixed
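    A short sketch of setdefault in use (the key and default are illustrative):

        from airflow.models import Variable

        # The first call stores the default; later calls return the stored value.
        config = Variable.setdefault('etl_config',
                                     {'batch_size': 500},
                                     deserialize_json=True)
        print(config['batch_size'])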
class airflow.models.XCom
    Bases: airflow.models.base.Base, airflow.utils.log.logging_mixin.LoggingMixin

    Base class for XCom objects.

    __table_args__
        TODO: "pickling" has been deprecated and JSON is preferred. "pickling" will be removed in Airflow 2.0.

    classmethod set(cls, key, value, execution_date, task_id, dag_id, session=None)
        Store an XCom value. TODO: "pickling" has been deprecated and JSON is preferred. "pickling" will be removed in Airflow 2.0.

        Returns: None

    classmethod get_one(cls, execution_date, key=None, task_id=None, dag_id=None, include_prior_dates=False, session=None)
        Retrieve an XCom value, optionally meeting certain criteria. TODO: "pickling" has been deprecated and JSON is preferred. "pickling" will be removed in Airflow 2.0.

        Returns: XCom value

    classmethod get_many(cls, execution_date, key=None, task_ids=None, dag_ids=None, include_prior_dates=False, limit=100, session=None)
        Retrieve XCom values, optionally meeting certain criteria. TODO: "pickling" has been deprecated and JSON is preferred. "pickling" will be removed in Airflow 2.0.
class airflow.models.DagRun
    Bases: airflow.models.base.Base, airflow.utils.log.logging_mixin.LoggingMixin

    DagRun describes an instance of a Dag. It can be created by the scheduler (for regular runs) or by an external trigger.

    refresh_from_db(self, session=None)
        Reloads the current dagrun from the database.

        Parameters:
            session – database session

    static find(dag_id=None, run_id=None, execution_date=None, state=None, external_trigger=None, no_backfills=False, session=None)
        Returns a set of dag runs for the given search criteria.

        Parameters:
            run_id (str) – defines the run id for this dag run
            execution_date (datetime.datetime) – the execution date
            state (airflow.utils.state.State) – the state of the dag run
            external_trigger (bool) – whether this dag run is externally triggered
            no_backfills (bool) – return no backfills (True), return all (False); defaults to False
            session (sqlalchemy.orm.session.Session) – database session

    get_task_instances(self, state=None, session=None)
        Returns the task instances for this dag run.

    get_task_instance(self, task_id, session=None)
        Returns the task instance specified by task_id for this dag run.

        Parameters:
            task_id – the task id

    get_previous_scheduled_dagrun(self, session=None)
        The previous, SCHEDULED DagRun, if there is one.

    update_state(self, session=None)
        Determines the overall state of the DagRun based on the state of its TaskInstances.

        Returns: State

    verify_integrity(self, session=None)
        Verifies the DagRun by checking for removed tasks or tasks that are not in the database yet. It will set the state to removed or add the task if required.
class airflow.models.Pool
    Bases: airflow.models.base.Base
class airflow.models.Connection(conn_id=None, conn_type=None, host=None, login=None, password=None, schema=None, port=None, extra=None, uri=None)
    Bases: airflow.models.base.Base, airflow.LoggingMixin

    Placeholder to store information about different database instances' connections. The idea here is that scripts use references to database instances (conn_id) instead of hard-coding hostnames, logins and passwords when using operators or hooks.
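    A minimal sketch of building a Connection from a URI (the credentials and host are illustrative; the constructor hands the uri to parse_from_uri):

        from airflow.models import Connection

        conn = Connection(conn_id='my_postgres',
                          uri='postgres://user:secret@localhost:5432/analytics')
        print(conn.conn_type, conn.host, conn.port, conn.schema)
        # postgres localhost 5432 analytics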
    __tablename__ = connection

    id
    conn_id
    conn_type
    host
    schema
    login
    _password
    port
    is_encrypted
    is_extra_encrypted
    _extra

    _types = [['docker', 'Docker Registry'], ['fs', 'File (path)'], ['ftp', 'FTP'], ['google_cloud_platform', 'Google Cloud Platform'], ['hdfs', 'HDFS'], ['http', 'HTTP'], ['hive_cli', 'Hive Client Wrapper'], ['hive_metastore', 'Hive Metastore Thrift'], ['hiveserver2', 'Hive Server 2 Thrift'], ['jdbc', 'Jdbc Connection'], ['jenkins', 'Jenkins'], ['mysql', 'MySQL'], ['postgres', 'Postgres'], ['oracle', 'Oracle'], ['vertica', 'Vertica'], ['presto', 'Presto'], ['s3', 'S3'], ['samba', 'Samba'], ['sqlite', 'Sqlite'], ['ssh', 'SSH'], ['cloudant', 'IBM Cloudant'], ['mssql', 'Microsoft SQL Server'], ['mesos_framework-id', 'Mesos Framework ID'], ['jira', 'JIRA'], ['redis', 'Redis'], ['wasb', 'Azure Blob Storage'], ['databricks', 'Databricks'], ['aws', 'Amazon Web Services'], ['emr', 'Elastic MapReduce'], ['snowflake', 'Snowflake'], ['segment', 'Segment'], ['azure_data_lake', 'Azure Data Lake'], ['azure_container_instances', 'Azure Container Instances'], ['azure_cosmos', 'Azure CosmosDB'], ['cassandra', 'Cassandra'], ['qubole', 'Qubole'], ['mongo', 'MongoDB'], ['gcpcloudsql', 'Google Cloud SQL']]

    password
    extra

    extra_dejson
        Returns the extra property by deserializing json.

    parse_from_uri(self, uri)
    get_password(self)
    set_password(self, value)
    get_extra(self)
    set_extra(self, value)
    rotate_fernet_key(self)
    get_hook(self)
    __repr__(self)
    debug_info(self)
class airflow.models.SkipMixin
    Bases: airflow.utils.log.logging_mixin.LoggingMixin

    skip(self, dag_run, execution_date, tasks, session=None)
        Sets task instances from the same dag run to skipped.

        Parameters:
            dag_run – the DagRun for which to set the tasks to skipped
            execution_date – execution_date
            tasks – tasks to skip (not task_ids)
            session – db session to use