airflow.providers.amazon.aws.transfers.local_to_s3

Module Contents

class airflow.providers.amazon.aws.transfers.local_to_s3.LocalFilesystemToS3Operator(*, filename: str, dest_key: str, dest_bucket: Optional[str] = None, aws_conn_id: str = 'aws_default', verify: Optional[Union[str, bool]] = None, replace: bool = False, encrypt: bool = False, gzip: bool = False, acl_policy: Optional[str] = None, **kwargs)

Bases: airflow.models.BaseOperator

Uploads a file from a local filesystem to Amazon S3.

Parameters
  • filename (str) -- Path to the local file. Path can be either absolute (e.g. /path/to/file.ext) or relative (e.g. ../../foo/*/*.csv). (templated)

  • dest_key (str) --

    The key of the object to copy to. (templated)

    It can be either a full s3:// style URL or a relative path from the root level.

    When it is specified as a full s3:// URL, also passing dest_bucket raises a TypeError. See the sketch after this parameter list.

  • dest_bucket (str) --

    Name of the S3 bucket to which the object is copied. (templated)

    Supplying it when dest_key is provided as a full s3:// URL raises a TypeError.

  • aws_conn_id (str) -- Connection ID of the S3 connection to use.

  • verify (bool or str) --

    Whether or not to verify SSL certificates for the S3 connection. By default, SSL certificates are verified.

    You can provide the following values:

    • False: do not validate SSL certificates. SSL will still be used, but SSL certificates will not be verified.

    • path/to/cert/bundle.pem: A filename of the CA cert bundle to use. You can specify this argument if you want to use a different CA cert bundle than the one used by botocore.

  • replace (bool) -- A flag to decide whether or not to overwrite the key if it already exists. If replace is False and the key exists, an error will be raised.

  • encrypt (bool) -- If True, the file will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3.

  • gzip (bool) -- If True, the file will be gzip-compressed locally before upload.

  • acl_policy (str) -- String specifying the canned ACL policy for the file being uploaded to the S3 bucket.
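
A minimal sketch of the two ways to specify the destination. The bucket name, file paths, and task IDs here are illustrative assumptions, not part of the API:

    from airflow.providers.amazon.aws.transfers.local_to_s3 import LocalFilesystemToS3Operator

    # Style 1: full s3:// URL in dest_key; leave dest_bucket unset,
    # otherwise the operator raises a TypeError.
    upload_by_url = LocalFilesystemToS3Operator(
        task_id="upload_by_url",
        filename="/tmp/report.csv",
        dest_key="s3://my-bucket/reports/report.csv",
    )

    # Style 2: relative key plus an explicit dest_bucket.
    upload_by_key = LocalFilesystemToS3Operator(
        task_id="upload_by_key",
        filename="/tmp/report.csv",
        dest_key="reports/report.csv",
        dest_bucket="my-bucket",
    )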

template_fields = ['filename', 'dest_key', 'dest_bucket']

execute(self, context)
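
A hedged end-to-end sketch: a one-task DAG that gzips a local file and uploads it. It assumes a working aws_default connection and an existing bucket named my-bucket; the DAG ID and file path are illustrative only:

    from datetime import datetime

    from airflow import DAG
    from airflow.providers.amazon.aws.transfers.local_to_s3 import LocalFilesystemToS3Operator

    with DAG(
        dag_id="local_to_s3_example",
        start_date=datetime(2021, 1, 1),
        schedule_interval=None,
        catchup=False,
    ) as dag:
        upload = LocalFilesystemToS3Operator(
            task_id="upload_file",
            filename="/tmp/sample.txt",                       # local file to upload (illustrative path)
            dest_key="s3://my-bucket/uploads/sample.txt.gz",  # full s3:// URL, so dest_bucket stays unset
            gzip=True,                                        # compress locally before uploading
            replace=True,                                     # overwrite the key if it already exists
            aws_conn_id="aws_default",
        )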
