AWS S3 Plugin
The plugin provides functionality to interact with Amazon Simple Storage Service (Amazon S3).
Installation
- Copy the below line to the dependencies section of the project build.gradle file:

Example 1. build.gradle
implementation(group: 'org.vividus', name: 'vividus-plugin-aws-s3', version: '0.5.11')

- If the project was imported to the IDE before adding the new dependency, re-generate the configuration files for the used IDE and then refresh the project in the IDE.
Configuration
Authentication
The plugin attempts to find AWS credentials by using the default credential provider chain. The provider chain looks for credentials using the options below, one by one, starting from the top. If credentials are found at some point, the search stops and further options are not evaluated.
- The AWS credentials scoped to either the current scenario or story (configured via the corresponding step).
- The environment variables: AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (the optional variable for the session token is AWS_SESSION_TOKEN).
- The properties: system.aws.accessKeyId and system.aws.secretKey (the optional property for the session token is system.aws.sessionToken).
- Web Identity Token credentials from the environment or container.
- The default credentials file (the location of this file varies by platform).
- Credentials delivered through the Amazon EC2 container service, if the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable is set and the security manager has permission to access the variable.
- The instance profile credentials, which exist within the instance metadata associated with the IAM role for the EC2 instance. This option is available only when running the application on an Amazon EC2 instance, but it provides the greatest ease of use and best security when working with Amazon EC2 instances.
- If the plugin still hasn’t found credentials by this point, client creation fails with an exception.
See the official "Working with AWS Credentials" guide for more details.
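For illustration, static credentials can be supplied through the properties listed above. A minimal sketch, assuming the properties are placed in one of the project properties files (the values are placeholders taken from the AWS documentation examples):

# placeholder values from the AWS documentation examples
system.aws.accessKeyId=AKIAIOSFODNN7EXAMPLE
system.aws.secretKey=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# optional, only needed for temporary credentials
system.aws.sessionToken=<session-token>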
Region Selection
The plugin attempts to find the AWS region by using the default region provider chain. The provider chain looks for a region using the options below, one by one, starting from the top. If a region is found at some point, the search stops and further options are not evaluated.
- The environment variable: AWS_REGION.
- The property: system.aws.region.
- The AWS shared configuration file (usually located at ~/.aws/config).
- The Amazon EC2 instance metadata service, used to determine the region of the currently running Amazon EC2 instance.
- If the plugin still hasn’t found a region by this point, client creation fails with an exception.
See the official "AWS Region Selection" guide for more details.
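For illustration, the region can be set explicitly via the property from the list above; a minimal sketch with an arbitrary region value:

# the region value is illustrative
system.aws.region=us-east-1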
Steps
Upload data
Upload the specified data to Amazon S3 under the specified bucket and key name.
When I upload data `$data` with key `$objectKey` and content type `$contentType` to S3 bucket `$bucketName`
- $data - the data to be uploaded
- $objectKey - the key under which to store the specified data
- $contentType - the MIME type of data
- $bucketName - the name of an existing bucket
When I upload data `{"my":"json"}` with key `folder/name.json` and content type `application/json` to S3 bucket `testBucket`
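The upload can be verified by fetching the object back with the download step described below. A sketch of such a round trip, assuming the generic variable-comparison step (Then `$variable1` is equal to `$variable2`) is available in the project:

!-- 'uploaded-json' is an illustrative variable name
When I upload data `{"my":"json"}` with key `folder/name.json` and content type `application/json` to S3 bucket `testBucket`
When I fetch object with key `folder/name.json` from S3 bucket `testBucket` and save result to scenario variable `uploaded-json`
Then `${uploaded-json}` is equal to `{"my":"json"}`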
Download S3 object
Retrieve the object by key from the provided S3 bucket and save its content to a variable. The specified bucket and object key must exist, or an error will result.
When I fetch object with key `$objectKey` from S3 bucket `$bucketName` and save result to $scopes variable `$variableName`
- $objectKey - the key under which the desired object is stored
- $bucketName - the name of the bucket containing the desired object
- $variableName - the variable name
When I fetch object with key `/path/file.json` from S3 bucket `some-bucket-name` and save result to scenario variable `my-json-var`
Set S3 object ACL
Set the canned access control list (ACL) for the specified object in Amazon S3. Each bucket and object in Amazon S3 has an ACL that defines its access control policy. When a request is made, Amazon S3 authenticates the request using its standard authentication procedure and then checks the ACL to verify the sender was granted access to the bucket or object. If the sender is approved, the request proceeds. Otherwise, Amazon S3 returns an error.
When I set ACL `$cannedAcl` for object with key `$objectKey` from S3 bucket `$bucketName`
- $cannedAcl - the new pre-configured canned ACL for the specified object (see the official documentation for a complete list of the available ACLs)
- $objectKey - the key of the object within the specified bucket whose ACL is being set
- $bucketName - the name of the bucket containing the object whose ACL is being set
When I set ACL `public-read` for object with key `/path/file.json` from S3 bucket `some-bucket-name`
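For instance, the ACL step can be combined with the upload step described above to make a freshly uploaded object publicly readable. A sketch with illustrative bucket and key names:

!-- the bucket and key names below are illustrative
When I upload data `<html></html>` with key `site/index.html` and content type `text/html` to S3 bucket `public-site-bucket`
When I set ACL `public-read` for object with key `site/index.html` from S3 bucket `public-site-bucket`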
Collect S3 object keys
Collects a list of the S3 object keys in the specified bucket. Because buckets can contain a virtually unlimited number of keys, the complete result set can be extremely large, so it is recommended to use filters to retrieve a filtered dataset.
When I collect objects keys filtered by:$filters in S3 bucket `$bucketName` and save result to $scopes variable `$variableName`
- $filters - the ExamplesTable with filters to be applied to the objects to limit the resulting set.

Table 1. The supported filter types
|Type                            |Alias                           |Description                                                                                 |
|KEY_PREFIX                      |key prefix                      |The prefix parameter, restricting to keys that begin with the specified value               |
|KEY_SUFFIX                      |key suffix                      |The suffix parameter, restricting to keys that end with the specified value                 |
|OBJECT_MODIFIED_NOT_EARLIER_THAN|object modified not earlier than|The ISO-8601 date, restricting to objects with last modified date after the specified value |

The filters can be combined in any order and in any composition.

Example 5. The combination of filters
|filterType                      |filterValue               |
|key suffix                      |.txt                      |
|object modified not earlier than|2021-01-15T19:00:00+00:00 |
- $bucketName - the name of the S3 bucket whose object keys are to be collected
- $variableName - the variable name to store the S3 object keys. The keys are accessible via a zero-based index, e.g. ${my-keys[0]} will return the first found key.
When I collect objects keys filtered by:
|filterType |filterValue |
|key prefix |folder/ |
in S3 bucket `some-bucket-name` and save result to scenario variable `s3-keys`
When I fetch object with key `${s3-keys[0]}` from S3 bucket `some-bucket-name` and save result to scenario variable `s3-object`
Delete S3 object
Delete the specified object in the specified bucket. Once deleted, the object can only be restored if versioning was enabled when the object was deleted. If attempting to delete an object that does not exist, Amazon S3 returns a success message instead of an error message.
When I delete object with key `$objectKey` from S3 bucket `$bucketName`
- $objectKey - the key of the object to delete
- $bucketName - the name of the Amazon S3 bucket containing the object to delete
When I delete object with key `/path/file.json` from S3 bucket `some-bucket-name`
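For example, the delete step can be applied to keys collected earlier to clean up test data. A sketch reusing the `s3-keys` variable from the key-collecting example above:

!-- 's3-keys' is assumed to hold the keys saved by the collecting step
When I delete object with key `${s3-keys[0]}` from S3 bucket `some-bucket-name`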