airflow.providers.amazon.aws.operators.neptune

Module Contents

Classes

NeptuneStartDbClusterOperator

Starts an Amazon Neptune DB cluster.

NeptuneStopDbClusterOperator

Stops an Amazon Neptune DB cluster.

Functions

handle_waitable_exception(operator, err)

Handle temporary client exceptions caused by an invalid cluster or instance status.

airflow.providers.amazon.aws.operators.neptune.handle_waitable_exception(operator, err)[source]

Handle temporary client exceptions caused by an invalid cluster or instance status.

Once the status changes, the call can be retried; the waiter handles terminal statuses.
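
The exact handling lives in the provider source; the sketch below only illustrates the pattern this helper supports, written against a plain boto3 Neptune client. The fault codes InvalidDBClusterStateFault and InvalidDBInstanceStateFault and the helper name start_cluster_with_retry are assumptions for illustration, not the provider's actual implementation.

import time

from botocore.exceptions import ClientError

# Assumption for illustration: treat these fault codes as temporary status conflicts.
RETRYABLE_FAULTS = {"InvalidDBClusterStateFault", "InvalidDBInstanceStateFault"}


def start_cluster_with_retry(client, cluster_id, delay=30, max_attempts=60):
    """Start a Neptune DB cluster, retrying once after a temporary status conflict."""
    try:
        client.start_db_cluster(DBClusterIdentifier=cluster_id)
    except ClientError as err:
        code = err.response.get("Error", {}).get("Code", "")
        if code not in RETRYABLE_FAULTS:
            raise  # terminal error: surface it to the caller
        # Poll until the cluster leaves its transitional status, then retry the call.
        for _ in range(max_attempts):
            status = client.describe_db_clusters(DBClusterIdentifier=cluster_id)["DBClusters"][0]["Status"]
            if status == "available":
                return  # the cluster came up on its own; nothing left to do
            if status == "stopped":
                client.start_db_cluster(DBClusterIdentifier=cluster_id)
                return
            time.sleep(delay)
        raise TimeoutError(f"Cluster {cluster_id} did not settle after {max_attempts} checks")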

class airflow.providers.amazon.aws.operators.neptune.NeptuneStartDbClusterOperator(db_cluster_id, wait_for_completion=True, waiter_delay=30, waiter_max_attempts=60, deferrable=conf.getboolean('operators', 'default_deferrable', fallback=False), **kwargs)[source]

Bases: airflow.providers.amazon.aws.operators.base_aws.AwsBaseOperator[airflow.providers.amazon.aws.hooks.neptune.NeptuneHook]

Starts an Amazon Neptune DB cluster.

Amazon Neptune Database is a serverless graph database designed for superior scalability and availability. Neptune Database provides built-in security, continuous backups, and integrations with other AWS services.

See also

For more information on how to use this operator, take a look at the guide: Start a Neptune database cluster

Parameters
  • db_cluster_id (str) – The DB cluster identifier of the Neptune DB cluster to be started.

  • wait_for_completion (bool) – Whether to wait for the cluster to start. (default: True)

  • deferrable (bool) – If True, the operator will wait asynchronously for the cluster to start. This implies waiting for completion. This mode requires aiobotocore module to be installed. (default: False)

  • waiter_delay (int) – Time in seconds to wait between status checks.

  • waiter_max_attempts (int) – Maximum number of attempts to check for job completion.

  • aws_conn_id – The Airflow connection used for AWS credentials. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner with aws_conn_id set to None or empty, the default boto3 configuration is used and must be maintained on each worker node.

  • region_name – AWS region_name. If not specified then the default boto3 behaviour is used.

  • botocore_config – Configuration dictionary (key-value pairs) for the botocore client. See: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html

Returns

A dictionary containing the Neptune cluster ID.
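
A minimal usage sketch (the DAG id, cluster identifier, and connection id below are placeholders, not values taken from this page):

import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.neptune import NeptuneStartDbClusterOperator

with DAG(
    dag_id="example_neptune_start",  # placeholder DAG id
    start_date=datetime.datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
):
    start_cluster = NeptuneStartDbClusterOperator(
        task_id="start_neptune_cluster",
        db_cluster_id="my-neptune-cluster",  # placeholder cluster identifier
        wait_for_completion=True,  # block until the cluster reports available
        waiter_delay=30,  # seconds between status checks
        waiter_max_attempts=60,
        aws_conn_id="aws_default",  # Airflow connection holding AWS credentials
    )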

aws_hook_class[source]
template_fields: Sequence[str][source]
execute(context, event=None, **kwargs)[source]

This is the main method to derive when creating an operator.

Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

execute_complete(context, event=None)[source]
class airflow.providers.amazon.aws.operators.neptune.NeptuneStopDbClusterOperator(db_cluster_id, wait_for_completion=True, waiter_delay=30, waiter_max_attempts=60, deferrable=conf.getboolean('operators', 'default_deferrable', fallback=False), **kwargs)[source]

Bases: airflow.providers.amazon.aws.operators.base_aws.AwsBaseOperator[airflow.providers.amazon.aws.hooks.neptune.NeptuneHook]

Stops an Amazon Neptune DB cluster.

Amazon Neptune Database is a serverless graph database designed for superior scalability and availability. Neptune Database provides built-in security, continuous backups, and integrations with other AWS services.

See also

For more information on how to use this operator, take a look at the guide: Stop a Neptune database cluster

Parameters
  • db_cluster_id (str) – The DB cluster identifier of the Neptune DB cluster to be stopped.

  • wait_for_completion (bool) – Whether to wait for the cluster to stop. (default: True)

  • deferrable (bool) – If True, the operator will wait asynchronously for the cluster to stop. This implies waiting for completion. This mode requires aiobotocore module to be installed. (default: False)

  • waiter_delay (int) – Time in seconds to wait between status checks.

  • waiter_max_attempts (int) – Maximum number of attempts to check for job completion.

  • aws_conn_id – The Airflow connection used for AWS credentials. If this is None or empty, the default boto3 behaviour is used. If running Airflow in a distributed manner with aws_conn_id set to None or empty, the default boto3 configuration is used and must be maintained on each worker node.

  • region_name – AWS region_name. If not specified then the default boto3 behaviour is used.

  • botocore_config – Configuration dictionary (key-value pairs) for the botocore client. See: https://botocore.amazonaws.com/v1/documentation/api/latest/reference/config.html

Returns

A dictionary containing the Neptune cluster ID.
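
A minimal usage sketch in deferrable mode (placeholder identifiers; deferrable=True requires the aiobotocore module, as noted above):

import datetime

from airflow import DAG
from airflow.providers.amazon.aws.operators.neptune import NeptuneStopDbClusterOperator

with DAG(
    dag_id="example_neptune_stop",  # placeholder DAG id
    start_date=datetime.datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
):
    stop_cluster = NeptuneStopDbClusterOperator(
        task_id="stop_neptune_cluster",
        db_cluster_id="my-neptune-cluster",  # placeholder cluster identifier
        deferrable=True,  # release the worker slot while waiting; requires aiobotocore
        waiter_delay=30,
        waiter_max_attempts=60,
        region_name="us-east-1",  # optional; omit to use the default boto3 region resolution
    )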

aws_hook_class[source]
template_fields: Sequence[str][source]
execute(context, event=None, **kwargs)[source]

This is the main method to derive when creating an operator.

Context is the same dictionary used as when rendering jinja templates.

Refer to get_template_context for more context.

execute_complete(context, event=None)[source]
