airflow.providers.amazon.aws.transfers.http_to_s3
This module contains an operator that moves data from an HTTP endpoint to S3.
Module Contents
Classes
HttpToS3Operator – Calls an endpoint on an HTTP system to execute an action and store the result in S3.
- class airflow.providers.amazon.aws.transfers.http_to_s3.HttpToS3Operator(*, endpoint=None, method='GET', data=None, headers=None, extra_options=None, http_conn_id='http_default', log_response=False, auth_type=None, tcp_keep_alive=True, tcp_keep_alive_idle=120, tcp_keep_alive_count=20, tcp_keep_alive_interval=30, s3_bucket=None, s3_key, replace=False, encrypt=False, acl_policy=None, aws_conn_id='aws_default', verify=None, **kwargs)
Bases: airflow.models.BaseOperator
Calls an endpoint on an HTTP system to execute an action and store the result in S3.
See also
For more information on how to use this operator, take a look at the guide: HTTP to Amazon S3 transfer operator
- Parameters
http_conn_id (str) – The HTTP connection to run the operator against.
endpoint (str | None) – The relative part of the full url. (templated)
method (str) – The HTTP method to use, default = “GET”
data (Any) – The data to pass: POST data for POST/PUT requests, or URL query parameters for a GET request. (templated)
headers (dict[str, str] | None) – The HTTP headers to be added to the request
response_check – A check against the ‘requests’ response object. The callable takes the response object as the first positional argument and optionally any number of keyword arguments available in the context dictionary. It should return True for ‘pass’ and False otherwise.
response_filter – A function allowing you to manipulate the response text, e.g. response_filter=lambda response: json.loads(response.text). The callable takes the response object as the first positional argument and optionally any number of keyword arguments available in the context dictionary.
extra_options (dict[str, Any] | None) – Extra options for the ‘requests’ library, see the ‘requests’ documentation (options to modify timeout, ssl, etc.)
log_response (bool) – Log the response (default: False)
auth_type (type[requests.auth.AuthBase] | None) – The auth type for the service
tcp_keep_alive (bool) – Enable TCP Keep Alive for the connection.
tcp_keep_alive_idle (int) – The TCP Keep Alive Idle parameter (corresponds to socket.TCP_KEEPIDLE).
tcp_keep_alive_count (int) – The TCP Keep Alive count parameter (corresponds to socket.TCP_KEEPCNT).
tcp_keep_alive_interval (int) – The TCP Keep Alive interval parameter (corresponds to socket.TCP_KEEPINTVL).
s3_bucket (str | None) – Name of the S3 bucket where the object is saved. (templated) It should be omitted when s3_key is provided as a full s3:// url.
s3_key (str) – The key of the object to be created. (templated) It can be either a full s3:// style url or a relative path from the root level. When it is specified as a full s3:// url, please omit s3_bucket.
replace (bool) – If True, it will overwrite the key if it already exists
encrypt (bool) – If True, the file will be encrypted on the server-side by S3 and will be stored in an encrypted form while at rest in S3.
acl_policy (str | None) – String specifying the canned ACL policy for the file being uploaded to the S3 bucket.
aws_conn_id (str | None) – Connection id of the S3 connection to use
verify (str | bool | None) – Whether or not to verify SSL certificates for the S3 connection. By default, SSL certificates are verified.
You can provide the following values:
- False: do not validate SSL certificates. SSL will still be used, but SSL certificates will not be verified.
- path/to/cert/bundle.pem: A filename of the CA cert bundle to use. You can specify this argument if you want to use a different CA cert bundle than the one used by botocore.
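The guide linked above is the canonical reference; the following is a minimal usage sketch, assuming the stock http_default and aws_default connections exist and using placeholder DAG id, endpoint, bucket, and key values:

from __future__ import annotations

import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.http_to_s3 import HttpToS3Operator

with DAG(
    dag_id="example_http_to_s3",           # placeholder DAG id
    start_date=datetime.datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
):
    # GET the endpoint via the "http_default" connection and write the raw
    # response body to the given bucket/key via the "aws_default" connection.
    fetch_report = HttpToS3Operator(
        task_id="fetch_report",
        http_conn_id="http_default",
        endpoint="/api/v1/report",         # placeholder relative URL (templated)
        method="GET",
        s3_bucket="my-example-bucket",     # placeholder bucket
        s3_key="reports/report.json",      # relative key, so s3_bucket is required
        replace=True,                      # overwrite the key if it already exists
        aws_conn_id="aws_default",
    )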
- template_fields: collections.abc.Sequence[str] = ('http_conn_id', 'endpoint', 'data', 'headers', 's3_bucket', 's3_key')
- template_ext: collections.abc.Sequence[str] = ()
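Because endpoint, data, headers, s3_bucket, and s3_key are template fields, they accept Jinja expressions that are rendered at task run time. A short sketch (placeholder values, placed inside a DAG context like the one above) that writes each logical date to its own object; it also passes s3_key as a full s3:// url, in which case s3_bucket is omitted as documented above:

# Templated-field sketch: {{ ds }} renders to the run's logical date,
# so each run writes to a date-partitioned key (values are placeholders).
fetch_daily = HttpToS3Operator(
    task_id="fetch_daily",
    endpoint="/api/v1/report?date={{ ds }}",  # rendered at runtime
    s3_key="s3://my-example-bucket/reports/{{ ds }}/report.json",
    replace=True,
)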