Metrics¶
Airflow can be set up to send metrics to StatsD.
Setup¶
First you must install the StatsD requirement:
pip install 'apache-airflow[statsd]'
Add the following lines to your configuration file, e.g. airflow.cfg:
[metrics]
statsd_on = True
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
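With this configuration Airflow emits its metrics as StatsD UDP packets to localhost:8125, each stat name prefixed with airflow. If you want to generate a test metric that your StatsD server should pick up (for example before restarting Airflow), a quick, hypothetical smoke test using the statsd Python package installed by the extra above could look like this:

from statsd import StatsClient

# Hypothetical smoke test: send one counter to the host/port/prefix configured
# above and look for an "airflow.smoke_test" packet on your StatsD server.
client = StatsClient(host="localhost", port=8125, prefix="airflow")
client.incr("smoke_test")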
If you want to avoid sending all the available metrics to StatsD, you can configure an allow list of prefixes to send only the metrics that start with the elements of the list:
[metrics]
statsd_allow_list = scheduler,executor,dagrun
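The allow list is matched by prefix, so a metric is emitted only when its name starts with one of the configured entries. A rough, hypothetical illustration of that check (not Airflow's actual implementation):

ALLOW_LIST = ("scheduler", "executor", "dagrun")

def is_allowed(stat_name: str) -> bool:
    # A stat passes the filter if its name starts with any allow-listed prefix.
    return stat_name.startswith(ALLOW_LIST)

print(is_allowed("scheduler_heartbeat"))  # True: starts with "scheduler"
print(is_allowed("ti_successes"))         # False: filtered out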
If you want to redirect metrics to a different name, you can configure the stat_name_handler option in the [metrics] section. It should point to a function that validates the StatsD stat name, applies changes to the stat name if necessary, and returns the transformed stat name. The function may look as follows:
def my_custom_stat_name_handler(stat_name: str) -> str:
    return stat_name.lower()[:32]
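To wire the handler in, point the stat_name_handler option at an importable path to that function; the module path below is hypothetical and should be replaced with wherever the function actually lives:

[metrics]
stat_name_handler = my_company.metrics.my_custom_stat_name_handler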
If you want to use a custom StatsD client instead of the default one provided by Airflow, the following key must be added to the configuration file alongside the module path of your custom StatsD client. This module must be available on your PYTHONPATH.
[metrics]
statsd_custom_client_path = x.y.customclient
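The expected contents of that module are not spelled out here. As a minimal, hypothetical sketch, assuming the custom client should expose the same interface as statsd.StatsClient (incr, decr, gauge, timing), which is what the default client provides, it could simply subclass the stock client and adjust its behaviour; the file name and class below are only illustrative:

# x/y/customclient.py -- hypothetical location matching the config key above.
from statsd import StatsClient

class CustomStatsClient(StatsClient):
    """Example tweak: drop sub-millisecond timings before forwarding them."""

    def timing(self, stat, delta, rate=1):
        if delta < 1:
            return
        super().timing(stat, delta, rate)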
See Modules Management for details on how Python and Airflow manage modules.
Note
For a detailed listing of configuration options regarding metrics, see the configuration reference documentation - [metrics].
Counters¶
Name | Description
---|---
<job_name>_start | Number of started <job_name> job, ex. SchedulerJob, LocalTaskJob
<job_name>_end | Number of ended <job_name> job, ex. SchedulerJob, LocalTaskJob
<job_name>_heartbeat_failure | Number of failed Heartbeats for a <job_name> job, ex. SchedulerJob, LocalTaskJob
local_task_job.task_exit.<job_id>.<dag_id>.<task_id>.<return_code> | Number of LocalTaskJob terminations with a given <return_code>
operator_failures_<operator_name> | Operator <operator_name> failures
operator_successes_<operator_name> | Operator <operator_name> successes
ti_failures | Overall task instances failures
ti_successes | Overall task instances successes
previously_succeeded | Number of previously succeeded task instances
zombies_killed | Zombie tasks killed
scheduler_heartbeat | Scheduler heartbeats
dag_processing.processes | Relative number of currently running DAG parsing processes (i.e. this delta is negative when, since the last metric was sent, processes have completed)
dag_processing.processor_timeouts | Number of file processors that have been killed due to taking too long
dag_processing.sla_callback_count | Number of SLA callbacks received
dag_processing.other_callback_count | Number of non-SLA callbacks received
dag_processing.file_path_queue_update_count | Number of times we've scanned the filesystem and queued all existing dags
dag_file_processor_timeouts | (DEPRECATED) same behavior as dag_processing.processor_timeouts
dag_processing.manager_stalls | Number of stalled DagFileProcessorManager
dag_file_refresh_error | Number of failures loading any DAG files
scheduler.tasks.killed_externally | Number of tasks killed externally
scheduler.orphaned_tasks.cleared | Number of Orphaned tasks cleared by the Scheduler
scheduler.orphaned_tasks.adopted | Number of Orphaned tasks adopted by the Scheduler
scheduler.critical_section_busy | Count of times a scheduler process tried to get a lock on the critical section (needed to send tasks to the executor) and found it locked by another process
sla_missed | Number of SLA misses
sla_callback_notification_failure | Number of failed SLA miss callback notification attempts
sla_email_notification_failure | Number of failed SLA miss email notification attempts
ti.start.<dag_id>.<task_id> | Number of started tasks in a given dag. Similar to <job_name>_start but for a task
ti.finish.<dag_id>.<task_id>.<state> | Number of completed tasks in a given dag. Similar to <job_name>_end but for a task
dag.callback_exceptions | Number of exceptions raised from DAG callbacks. When this happens, it means the DAG callback is not working
celery.task_timeout_error | Number of AirflowTaskTimeout errors raised when publishing a Task to the Celery Broker
celery.execute_command.failure | Number of non-zero exit codes from Celery tasks
task_removed_from_dag.<dag_id> | Number of tasks removed for a given dag (i.e. task no longer exists in DAG)
task_restored_to_dag.<dag_id> | Number of tasks restored for a given dag (i.e. task instance which was previously in REMOVED state in the DB is added to DAG file)
task_instance_created-<operator_name> | Number of task instances created for a given Operator
triggers.blocked_main_thread | Number of triggers that blocked the main thread (likely due to not being fully asynchronous)
triggers.failed | Number of triggers that errored before they could fire an event
triggers.succeeded | Number of triggers that have fired at least one event
dataset.updates | Number of updated datasets
dataset.orphaned | Number of datasets marked as orphans because they are no longer referenced in DAG schedule parameters or task outlets
dataset.triggered_dagruns | Number of DAG runs triggered by a dataset update
Gauges¶
Name | Description
---|---
dagbag_size | Number of DAGs found when the scheduler ran a scan based on its configuration
dag_processing.import_errors | Number of errors from trying to parse DAG files
dag_processing.total_parse_time | Seconds taken to scan and import dag_processing.file_path_queue_size DAG files
dag_processing.file_path_queue_size | Number of DAG files to be considered for the next scan
dag_processing.last_run.seconds_ago.<dag_file> | Seconds since <dag_file> was last processed
 | Size of the dag file queue
scheduler.tasks.starving | Number of tasks that cannot be scheduled because of no open slot in pool
scheduler.tasks.executable | Number of tasks that are ready for execution (set to queued) with respect to pool limits, DAG concurrency, executor state, and priority
executor.open_slots | Number of open slots on executor
executor.queued_tasks | Number of queued tasks on executor
executor.running_tasks | Number of running tasks on executor
pool.open_slots.<pool_name> | Number of open slots in the pool
pool.queued_slots.<pool_name> | Number of queued slots in the pool
pool.running_slots.<pool_name> | Number of running slots in the pool
pool.starving_tasks.<pool_name> | Number of starving tasks in the pool
triggers.running | Number of triggers currently running (per triggerer)
Timers¶
Name | Description
---|---
dagrun.dependency-check.<dag_id> | Milliseconds taken to check DAG dependencies
dag.<dag_id>.<task_id>.duration | Seconds taken to finish a task
dag_processing.last_duration.<dag_file> | Seconds taken to load the given DAG file
dagrun.duration.success.<dag_id> | Seconds taken for a DagRun to reach success state
dagrun.duration.failed.<dag_id> | Milliseconds taken for a DagRun to reach failed state
dagrun.schedule_delay.<dag_id> | Seconds of delay between the scheduled DagRun start date and the actual DagRun start date
scheduler.critical_section_duration | Milliseconds spent in the critical section of the scheduler loop – only a single scheduler can enter this loop at a time
scheduler.critical_section_query_duration | Milliseconds spent running the critical section task instance query
scheduler.scheduler_loop_duration | Milliseconds spent running one scheduler loop
dagrun.<dag_id>.first_task_scheduling_delay | Seconds elapsed between first task start_date and dagrun expected start
collect_db_dags | Milliseconds taken for fetching all Serialized Dags from DB
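The three tables correspond to the three basic StatsD metric types. As a rough, hypothetical illustration with the statsd Python package (the values are made up; only the stat names come from the tables above), this is what a counter, a gauge, and a timer look like when sent to the configured endpoint:

from statsd import StatsClient

statsd = StatsClient(host="localhost", port=8125, prefix="airflow")

# Counter: accumulates increments (sent on the wire as airflow.ti_successes).
statsd.incr("ti_successes")

# Gauge: reports a current value that can go up or down.
statsd.gauge("executor.open_slots", 32)

# Timer: reports a duration, here in milliseconds.
statsd.timing("collect_db_dags", 12.5)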