airflow.providers.google.cloud.hooks.gcs
This module contains a Google Cloud Storage hook.
Module Contents
- airflow.providers.google.cloud.hooks.gcs._fallback_object_url_to_object_name_and_bucket_name(object_url_keyword_arg_name='object_url', bucket_name_keyword_arg_name='bucket_name', object_name_keyword_arg_name='object_name') → Callable[[T], T]
Decorator factory that converts an object URL parameter into object name and bucket name parameters.
- class airflow.providers.google.cloud.hooks.gcs.GCSHook(gcp_conn_id: str = 'google_cloud_default', delegate_to: Optional[str] = None, google_cloud_storage_conn_id: Optional[str] = None, impersonation_chain: Optional[Union[str, Sequence[str]]] = None)
Bases: airflow.providers.google.common.hooks.base_google.GoogleBaseHook
Interact with Google Cloud Storage. This hook uses the Google Cloud connection.
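A minimal instantiation sketch (not part of the original reference), assuming an Airflow connection named google_cloud_default has been configured:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

# Uses the default Google Cloud connection configured in Airflow.
hook = GCSHook(gcp_conn_id="google_cloud_default")
```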
- copy(self, source_bucket: str, source_object: str, destination_bucket: Optional[str] = None, destination_object: Optional[str] = None)
Copies an object from one bucket to another, with renaming if requested.
Either destination_bucket or destination_object can be omitted, in which case the source bucket or object name is reused, but not both at once. See the sketch after the parameter list.
- Parameters
source_bucket (str) – The bucket of the object to copy from.
source_object (str) – The object to copy.
destination_bucket (str) – The destination bucket the object is copied to. Can be omitted; then the same bucket is used.
destination_object (str) – The (renamed) path of the object, if given. Can be omitted; then the same name is used.
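A hedged usage sketch; the bucket and object names are placeholders:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# Copy an object into a hypothetical archive bucket under a new name.
hook.copy(
    source_bucket="my-source-bucket",        # placeholder bucket name
    source_object="data/report.csv",
    destination_bucket="my-archive-bucket",  # omit to copy within the same bucket
    destination_object="archive/report-2021.csv",
)
```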
- rewrite(self, source_bucket: str, source_object: str, destination_bucket: str, destination_object: Optional[str] = None)
Has the same functionality as copy, except that it works on files over 5 TB, as well as when copying between locations and/or storage classes.
destination_object can be omitted, in which case source_object is used.
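The equivalent sketch using rewrite, again with placeholder names:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# destination_object is omitted, so the source object name is reused.
hook.rewrite(
    source_bucket="my-source-bucket",
    source_object="data/huge-export.avro",
    destination_bucket="my-coldline-bucket",
)
```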
- download(self, object_name: str, bucket_name: Optional[str], filename: Optional[str] = None, chunk_size: Optional[int] = None, timeout: Optional[int] = DEFAULT_TIMEOUT)
Downloads a file from Google Cloud Storage.
When no filename is supplied, the method loads the file into memory and returns its content. When a filename is supplied, it writes the file to the specified location and returns the location. For file sizes that exceed the available memory, it is recommended to write to a file.
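A sketch of both modes, with placeholder names:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# Without filename: the object's content is returned (kept in memory).
content = hook.download(object_name="data/report.csv", bucket_name="my-source-bucket")

# With filename: the object is written to disk and the path is returned.
path = hook.download(
    object_name="data/large-report.csv",
    bucket_name="my-source-bucket",
    filename="/tmp/large-report.csv",
)
```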
- provide_file(self, bucket_name: Optional[str] = None, object_name: Optional[str] = None, object_url: Optional[str] = None)
Downloads the file to a temporary directory and returns a file handle.
You can use this method by passing the bucket_name and object_name parameters, or just the object_url parameter.
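A minimal sketch using the context-manager form with the object_url parameter (the URL is a placeholder):

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# The object is downloaded to a temporary file that is cleaned up on exit.
with hook.provide_file(object_url="gs://my-source-bucket/data/report.csv") as f:
    print(f.name)  # path of the temporary file holding the downloaded content
```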
- provide_file_and_upload(self, bucket_name: Optional[str] = None, object_name: Optional[str] = None, object_url: Optional[str] = None)
Creates a temporary file, returns a file handle, and uploads the file's content on close.
You can use this method by passing the bucket_name and object_name parameters, or just the object_url parameter.
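A minimal sketch; bucket and object names are placeholders:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# Whatever is written to the handle is uploaded when the context closes.
with hook.provide_file_and_upload(
    bucket_name="my-destination-bucket",
    object_name="exports/generated.txt",
) as f:
    f.write(b"generated content")
```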
- upload(self, bucket_name: str, object_name: str, filename: Optional[str] = None, data: Optional[Union[str, bytes]] = None, mime_type: Optional[str] = None, gzip: bool = False, encoding: str = 'utf-8', chunk_size: Optional[int] = None, timeout: Optional[int] = DEFAULT_TIMEOUT)
Uploads a local file, or file data as a string or bytes, to Google Cloud Storage.
- Parameters
bucket_name (str) – The bucket to upload to.
object_name (str) – The object name to set when uploading the file.
filename (str) – The local file path of the file to be uploaded.
data (str) – The file's data, as a string or bytes, to be uploaded.
mime_type (str) – The file's MIME type, set when uploading the file.
gzip (bool) – Option to compress the local file or file data for upload.
encoding (str) – Bytes encoding for file data if provided as a string.
chunk_size (int) – Blob chunk size.
timeout (int) – Request timeout in seconds.
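A hedged sketch of both upload modes (pass one of filename or data); all names are placeholders:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# Upload a local file...
hook.upload(
    bucket_name="my-destination-bucket",
    object_name="uploads/report.csv",
    filename="/tmp/report.csv",
    mime_type="text/csv",
)

# ...or upload in-memory data directly, compressed with gzip.
hook.upload(
    bucket_name="my-destination-bucket",
    object_name="uploads/notes.txt",
    data="some text content",
    gzip=True,
)
```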
- exists(self, bucket_name: str, object_name: str)
Checks for the existence of a file in Google Cloud Storage.
- get_blob_update_time(self, bucket_name: str, object_name: str)
Gets the update time of a file in Google Cloud Storage.
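A short sketch combining the two checks above, with placeholder names:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

if hook.exists(bucket_name="my-source-bucket", object_name="data/report.csv"):
    # Returns the object's last-update timestamp.
    updated_at = hook.get_blob_update_time(
        bucket_name="my-source-bucket", object_name="data/report.csv"
    )
    print(updated_at)
```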
- is_updated_after(self, bucket_name: str, object_name: str, ts: datetime)
Checks if an object is updated after the given time in Google Cloud Storage.
- Parameters
bucket_name (str) – The Google Cloud Storage bucket where the object is.
object_name (str) – The name of the object to check in the Google Cloud Storage bucket.
ts (datetime.datetime) – The timestamp to check against.
- is_updated_between(self, bucket_name: str, object_name: str, min_ts: datetime, max_ts: datetime)
Checks if an object is updated between the given timestamps in Google Cloud Storage.
- Parameters
bucket_name (str) – The Google Cloud Storage bucket where the object is.
object_name (str) – The name of the object to check in the Google Cloud Storage bucket.
min_ts (datetime.datetime) – The minimum timestamp to check against.
max_ts (datetime.datetime) – The maximum timestamp to check against.
- is_updated_before(self, bucket_name: str, object_name: str, ts: datetime)
Checks if an object is updated before the given time in Google Cloud Storage.
- Parameters
bucket_name (str) – The Google Cloud Storage bucket where the object is.
object_name (str) – The name of the object to check in the Google Cloud Storage bucket.
ts (datetime.datetime) – The timestamp to check against.
- is_older_than(self, bucket_name: str, object_name: str, seconds: int)
Checks if an object is older than the given number of seconds.
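A sketch of the freshness checks above; names are placeholders, and the timestamps are assumed to be timezone-aware:

```python
from datetime import datetime, timedelta, timezone

from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")
bucket, obj = "my-source-bucket", "data/report.csv"  # placeholders

yesterday = datetime.now(tz=timezone.utc) - timedelta(days=1)

print(hook.is_updated_after(bucket_name=bucket, object_name=obj, ts=yesterday))
print(hook.is_updated_before(bucket_name=bucket, object_name=obj, ts=yesterday))
print(hook.is_older_than(bucket_name=bucket, object_name=obj, seconds=3600))
```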
- delete_bucket(self, bucket_name: str, force: bool = False)
Deletes a bucket from Google Cloud Storage.
- list(self, bucket_name, versions=None, max_results=None, prefix=None, delimiter=None)
Lists all objects from the bucket with the given string prefix in the name.
- Parameters
bucket_name (str) – Bucket name.
versions (bool) – If true, list all versions of the objects.
max_results (int) – Max count of items to return in a single page of responses.
prefix (str) – Prefix string which filters objects whose names begin with this prefix.
delimiter (str) – Filters objects based on the delimiter (e.g. '.csv').
- Returns
A stream of object names matching the filtering criteria.
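A minimal sketch listing CSV objects under a placeholder prefix:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# List objects ending in .csv under the data/ prefix (names are placeholders).
names = hook.list(
    bucket_name="my-source-bucket",
    prefix="data/",
    delimiter=".csv",
)
for name in names:
    print(name)
```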
- get_size(self, bucket_name: str, object_name: str)
Gets the size of a file in Google Cloud Storage.
- get_crc32c(self, bucket_name: str, object_name: str)
Gets the CRC32c checksum of an object in Google Cloud Storage.
- get_md5hash(self, bucket_name: str, object_name: str)
Gets the MD5 hash of an object in Google Cloud Storage.
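A short sketch of the three metadata getters, with placeholder names:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

size = hook.get_size(bucket_name="my-source-bucket", object_name="data/report.csv")
crc = hook.get_crc32c(bucket_name="my-source-bucket", object_name="data/report.csv")
md5 = hook.get_md5hash(bucket_name="my-source-bucket", object_name="data/report.csv")
print(size, crc, md5)
```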
- create_bucket(self, bucket_name: str, resource: Optional[dict] = None, storage_class: str = 'MULTI_REGIONAL', location: str = 'US', project_id: Optional[str] = None, labels: Optional[dict] = None)
Creates a new bucket. Google Cloud Storage uses a flat namespace, so you can't create a bucket with a name that is already in use.
See also
For more information, see the Bucket Naming Guidelines: https://cloud.google.com/storage/docs/bucketnaming.html#requirements
- Parameters
bucket_name (str) – The name of the bucket.
resource (dict) – An optional dict with parameters for creating the bucket. For information on available parameters, see the Cloud Storage API doc: https://cloud.google.com/storage/docs/json_api/v1/buckets/insert
storage_class (str) – This defines how objects in the bucket are stored and determines the SLA and the cost of storage. Values include MULTI_REGIONAL, REGIONAL, STANDARD, NEARLINE, and COLDLINE. If this value is not specified when the bucket is created, it defaults to STANDARD.
location (str) – The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US.
project_id (str) – The ID of the Google Cloud Project.
labels (dict) – User-provided labels, in key/value pairs.
- Returns
If successful, it returns the id of the bucket.
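A hedged sketch; the bucket name and project ID below are placeholders (bucket names must be globally unique):

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

bucket_id = hook.create_bucket(
    bucket_name="my-new-unique-bucket",  # placeholder, must be globally unique
    storage_class="STANDARD",
    location="EU",
    project_id="my-gcp-project",         # placeholder project ID
    labels={"env": "dev"},
)
```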
- insert_bucket_acl(self, bucket_name: str, entity: str, role: str, user_project: Optional[str] = None)
Creates a new ACL entry on the specified bucket. See: https://cloud.google.com/storage/docs/json_api/v1/bucketAccessControls/insert
- Parameters
bucket_name (str) – Name of the bucket.
entity (str) – The entity holding the permission, in one of the following forms: user-userId, user-email, group-groupId, group-email, domain-domain, project-team-projectId, allUsers, allAuthenticatedUsers. See: https://cloud.google.com/storage/docs/access-control/lists#scopes
role (str) – The access permission for the entity. Acceptable values are: "OWNER", "READER", "WRITER".
user_project (str) – (Optional) The project to be billed for this request. Required for Requester Pays buckets.
- insert_object_acl(self, bucket_name: str, object_name: str, entity: str, role: str, generation: Optional[int] = None, user_project: Optional[str] = None)
Creates a new ACL entry on the specified object. See: https://cloud.google.com/storage/docs/json_api/v1/objectAccessControls/insert
- Parameters
bucket_name (str) – Name of the bucket.
object_name (str) – Name of the object. For information about how to URL encode object names to be path safe, see: https://cloud.google.com/storage/docs/json_api/#encoding
entity (str) – The entity holding the permission, in one of the following forms: user-userId, user-email, group-groupId, group-email, domain-domain, project-team-projectId, allUsers, allAuthenticatedUsers. See: https://cloud.google.com/storage/docs/access-control/lists#scopes
role (str) – The access permission for the entity. Acceptable values are: "OWNER", "READER".
generation (long) – Optional. If present, selects a specific revision of this object.
user_project (str) – (Optional) The project to be billed for this request. Required for Requester Pays buckets.
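A sketch of both ACL calls, using the user-email entity form; the email and names are placeholders:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# Grant a hypothetical user read access on the bucket...
hook.insert_bucket_acl(
    bucket_name="my-source-bucket",
    entity="user-someone@example.com",
    role="READER",
)

# ...and read access on a single object.
hook.insert_object_acl(
    bucket_name="my-source-bucket",
    object_name="data/report.csv",
    entity="user-someone@example.com",
    role="READER",
)
```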
- compose(self, bucket_name: str, source_objects: List, destination_object: str)
Composes a list of existing objects into a new object in the same storage bucket.
Currently it only supports up to 32 objects that can be concatenated in a single operation:
https://cloud.google.com/storage/docs/json_api/v1/objects/compose
- Parameters
bucket_name (str) – The name of the bucket in which the composed object will be created.
source_objects (List) – The list of source objects that will be composed into a single object.
destination_object (str) – The path of the composed object.
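A minimal sketch concatenating placeholder shards into one object:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# Concatenate shards into one object in the same bucket (max 32 sources).
hook.compose(
    bucket_name="my-source-bucket",
    source_objects=["logs/part-001.csv", "logs/part-002.csv"],
    destination_object="logs/full.csv",
)
```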
- sync(self, source_bucket: str, destination_bucket: str, source_object: Optional[str] = None, destination_object: Optional[str] = None, recursive: bool = True, allow_overwrite: bool = False, delete_extra_files: bool = False)
Synchronizes the contents of the buckets.
The source_object and destination_object parameters describe the root sync directories. If they are not passed, the entire bucket will be synchronized. If they are passed, they should point to directories. See the sketch after the parameter list.
Note
The synchronization of individual files is not supported. Only entire directories can be synchronized.
- Parameters
source_bucket (str) – The name of the bucket containing the source objects.
destination_bucket (str) – The name of the bucket containing the destination objects.
source_object (Optional[str]) – The root sync directory in the source bucket.
destination_object (Optional[str]) – The root sync directory in the destination bucket.
recursive (bool) – If True, subdirectories will be considered.
allow_overwrite (bool) – If True, files will be overwritten when a mismatched file is found. By default, overwriting files is not allowed.
delete_extra_files (bool) – If True, deletes extra files from the destination that are not found in the source. By default, extra files are not deleted.
Note
This option can delete data quickly if you specify the wrong source/destination combination.
- Returns
None
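A hedged sketch mirroring a directory between two placeholder buckets:

```python
from airflow.providers.google.cloud.hooks.gcs import GCSHook

hook = GCSHook(gcp_conn_id="google_cloud_default")

# Mirror the reports/ directory between two placeholder buckets.
hook.sync(
    source_bucket="my-source-bucket",
    destination_bucket="my-backup-bucket",
    source_object="reports/",
    destination_object="reports/",
    recursive=True,
    allow_overwrite=True,
    delete_extra_files=False,  # keep files that exist only in the destination
)
```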