Amazon S3 - Put object(s)

Declaration

<AMAWSS3 ACTIVITY="put_object" SUBFOLDERS="yes/no" 
ARCHIVETURNOFF="yes/no" MATCHCASE="yes/no" 
CHECKSUM="yes/no" EXCLUDE="text" RE="yes/no" 
ACCESSKEY="text" SECRETKEY="text (encrypted)" 
PROTOCOL="text (options)" USERAGENT="text" 
MAXERRORRETRY="number" SERVICEURL="text" PROXYHOST="text" 
PROXYPORT="number" PROXYUSER="text" PROXYPWD="text 
(encrypted)" FILE="text" ACL="text (options)" 
STORAGECLASS="text (options)" ENCRYPTIONTYPE="text 
(options)" BUCKETNAME="text" KEYNAME="text" 
TIMEOUT="number" MD5="text" RESULTDATASET="text" 
CONTENTTYPE="text" GENERATEMD5="yes/no" />

Description: Adds (uploads) one or more objects to a bucket.

IMPORTANT: The AWS S3 activities are performed using Amazon's Simple Storage Service engine; therefore, launching and operating Amazon S3 activities requires a valid Access Key ID and Secret Access Key.

Practical Usage

Puts one or more objects into an S3 bucket. You must have write permissions on a bucket to add an object to it.

Connection Parameters

Property

Type

Required

Default

Markup

Description

Connection

 

 

 

 

Indicates where AWS user credentials and preferences should originate from. This is a design mode parameter used only during task construction and configuration; therefore, it comprises no markup. The available options are:

  • Host (default) - Specifies that user credentials and/or advanced preferences are configured individually for this activity. This option is normally chosen if only a single activity is required to complete an operation.

  • Session - Specifies that user credentials and/or advanced preferences are obtained from a pre-configured session created in an earlier step with the use of the S3 - Create session activity. This option is normally chosen if a combination of related activities is required to complete an operation. Linking several activities to a single session eliminates redundancy. Additionally, a single task supports construction and simultaneous execution of multiple sessions, improving efficiency. (A session-based sketch follows this list.)
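
A minimal sketch of the session-based approach is shown below. The session name, credentials, bucket, key, and file path are placeholders, and the ACTIVITY="create_session" step is an assumption based on the S3 - Create session activity referenced above:

<AMAWSS3 ACTIVITY="create_session" SESSION="S3Session1" 
ACCESSKEY="myAccessKey" SECRETKEY="mySecretKey" />
<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="report.txt" 
FILE="c:\folder1\report.txt" />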

Session

Text

Yes if connection is session-based

S3Session1

SESSION="S3Session1"

The name of an existing session to attach this activity to. This parameter is active only if the Connection parameter is set to Session. The default session name is 'S3Session1'.

Access key

Text

Yes if connection is host-based

(Empty)

ACCESSKEY=

"022QF06E7MXBSH9DHM02"

A 20-character alphanumeric string that uniquely identifies the owner of the AWS service account, similar to a username. This key along with a corresponding secret access key forms a secure information set that AWS uses to confirm a valid user's identity. This parameter is active only if the Connection parameter is set to Host.

Secret Access key

Text

Yes if connection is host-based

(Empty)

SECRETKEY=

"kWcrlUX5JEDGM/LtmEENI/

aVmYvHNif5zB+d9+ct"

A 40-character string that serves as the password to access the AWS service account. This, along with an associated access key, forms a secure information set that AWS uses to confirm a valid user's identity. This parameter is active only if the Connection parameter is set to Host.

Protocol

Text (options)

No

HTTP

PROTOCOL="HTTPS"

The protocol required. The available options are:

  • HTTP (default)

  • HTTPS

User agent

Text

No

AutoMate

USERAGENT="AutoMate"

The name of the client or application initiating requests to AWS, which in this case is AutoMate. This parameter's default value is 'AutoMate'.

Service URL

Text

No

(Empty)

SERVICEURL=

"https://s3.eu-west-1.amazonaws.com"

The URL that provides the service endpoint. To make the service call to a different region, you can pass the region-specific endpoint URL. For example, entering https://s3.us-west-1.amazonaws.com points to the US West (Northern California) region. A complete list of S3 regions, along with associated endpoints and valid protocols, can be found below under S3 Endpoints and Regions.
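
For instance, a host-based upload directed at the EU (Ireland) endpoint over HTTPS might look like the sketch below (the credentials, bucket, key, and file path are placeholders):

<AMAWSS3 ACTIVITY="put_object" ACCESSKEY="myAccessKey" 
SECRETKEY="mySecretKey" PROTOCOL="HTTPS" 
SERVICEURL="https://s3.eu-west-1.amazonaws.com" 
BUCKETNAME="myEUBucket" KEYNAME="file.txt" 
FILE="c:\folder1\file.txt" />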

Maximum retry on error

Number

No

(Empty)

MAXERRORRETRY="4"

The total number of times this activity should retry its request to the server before returning an error. Network components can generate errors at any point in the life of a request, so implementing retries can increase reliability.

Proxy host

Text

No

(Empty)

PROXYHOST="proxy.host.com"

The host name (e.g., server.domain.com) or IP address (e.g., xxx.xxx.xxx.xxx) of the proxy server to use when connecting to AWS.  

Proxy port

Number

No

(Empty)

PROXYPORT="1028"

The port that should be used to connect to the proxy server.

Proxy username

Text

No

(Empty)

PROXYUSER="username"

The username that should be used to authenticate connection with the proxy server (if required).

Proxy password

Text

No

(Empty)

PROXYPWD="encrypted"

The password that should be used to authenticate connection with the proxy server (if required).

Object Parameters

Property

Type

Required

Default

Markup

Description

Put file(s)

Text

No

 (Empty)

  1. FILE="c:\folder1\file.txt"

  2. FILE="c:\folder1\*.txt"

  3. FILE="c:\folder1\*.*"

If enabled, specifies the local file(s) to upload to the S3 bucket. To specify more than one file, use wildcard characters (* or ?). To specify multiple objects or wildcard masks, separate them with a pipe symbol. Example: *.txt|*.bak. If this parameter is enabled, the Put data parameter is ignored (enabled by default).
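
As a sketch of combining the wildcard form with the required Bucket name and Key name parameters (the session, bucket, folder, and masks are placeholders):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="*.txt" 
FILE="c:\folder1\*.txt" />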

Put data

Text

No

(Empty)

DATA="dataString"

If enabled, specifies the text string that should be populated into the object being put into the S3 bucket. If this parameter is enabled, the Put file(s) parameter is ignored (disabled by default).
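
Conversely, when uploading a text string rather than a file, a sketch might look like the following (the session, bucket, key name, and text are placeholders; the optional CONTENTTYPE attribute described further below marks the data as plain text):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="notes.txt" 
DATA="This text becomes the contents of notes.txt." 
CONTENTTYPE="text/plain" />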

Bucket name

Text

Yes

(Empty)

BUCKETNAME="MyBucket"

The name of the bucket to put the object(s) into.

Key name

Text

Yes

(Empty)

  1. KEYNAME="filename.txt"

  2. KEYNAME="*.txt"

  3. KEYNAME="*.txt|*.doc"

  4. KEYNAME="*.*"

The key name of the object(s) to be placed in the bucket. A key is the unique identifier for an object within a bucket. To specify more than one object, use wildcard characters (* or ?).

Canned ACL

Text

Yes

Private

  1. ACL="noACL"

  2. ACL="Private"

  3. ACL="PublicRead"

  4. ACL="PublicReadWrite"

  5. ACL="AuthenticatedRead"

  6. ACL="BucketOwnerRead"

  7. ACL="BucketOwnerFullControl"

Sets the S3 canned access policy associated with the object being placed into the bucket (a usage sketch follows this list). The available Canned ACL options are:

  • No ACL - No access policies.

  • Private (Default) - Owner gets full control. No one else has access rights.

  • Public read - Owner gets full control and the anonymous principal is granted read access.

  • Public Read Write - Owner gets full control, and the anonymous principal is granted read/write access. This is a useful policy to apply to a bucket, but it is generally not recommended.

  • Authenticated read - Owner gets full control, and any principal authenticated as a registered Amazon S3 user is granted read access.

  • Bucket owner read - Object owner gets full control. Bucket owner gets read access. This ACL applies only to objects and is equivalent to Private when used with Create Bucket activity. Use this ACL to let someone other than the bucket owner write content (get full control) in the bucket but still grant the bucket owner read access to the objects.

  • Bucket owner full control - Object owner gets full control. Bucket owner gets full control. Applies only to objects and is equivalent to Private when used with Create Bucket activity. Use this ACL to let someone other than the bucket owner write content (get full control) in the bucket but still grant the bucket owner full rights over the objects.
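
For example, an upload that grants anonymous read access might look like the sketch below (the session, bucket, key, and file path are placeholders):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="public.html" 
FILE="c:\site\public.html" ACL="PublicRead" />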

Timeout (in minutes)

Number

No

20

TIMEOUT="25"

The timeout value (in minutes) that should be set for this activity. The value is assigned to the Timeout properties of the requested object used for S3 Put requests. The default value is 20 minutes.

Storage class

Text (options)

No

Standard

  1. STORAGECLASS="standard"

  2. STORAGECLASS="reduced_redundancy"

  3. STORAGECLASS="glacier"

The storage class to use. This provides the option to reduce costs by storing non-critical, reproducible data at lower levels of redundancy than Amazon S3's standard storage (a sketch follows this list). The available options are:

  • Standard (default) - Use Amazon's standard storage configuration.

  • Reduced redundancy - Use reduced redundancy storage (RRS), which stores objects on multiple devices across multiple facilities, providing 400 times the durability of a typical disk drive, but does not replicate objects as many times as standard Amazon S3 storage, making it more cost effective.

  • Glacier - To transition objects to the Glacier storage class you can use lifecycle configuration.
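
For instance, storing a reproducible backup at reduced redundancy might look like the following sketch (the session, bucket, key, and file path are placeholders):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="backup.zip" 
FILE="c:\backups\backup.zip" 
STORAGECLASS="reduced_redundancy" />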

Server side encryption

Text (options)

No

None

  1. ENCRYPTIONTYPE="none"

  2. ENCRYPTIONTYPE="AES256"

Specifies a server-side encryption algorithm to use when Amazon S3 creates an object (a sketch follows this list). The available options are:

  • None (default) - No server side encryption algorithm will be used.

  • AES-256 - Server side encryption will be set to AES-256.
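
As an illustration of requesting server-side encryption (the session, bucket, key, and file path are placeholders):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="payroll.xlsx" 
FILE="c:\finance\payroll.xlsx" ENCRYPTIONTYPE="AES256" />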

MD5 digest (optional)

Text

No

(Empty)

MD5="MD5Value"

The base64-encoded 128-bit MD5 digest of the message, used to ensure that data is not corrupted while traversing the network. When you use the Content-MD5 option, Amazon S3 checks the object against the provided MD5 value. If they do not match, Amazon S3 returns an error. Additionally, you can calculate the MD5 while putting an object to Amazon S3 and compare the returned ETag to the calculated MD5 value.

Generate MD5 value

Yes/No

No

No

GENERATEMD5="YES"

If set to YES, generates an MD5 value used as a message integrity check to verify that the data is the same data that was originally sent. Set to NO by default.
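
For example, rather than supplying a pre-computed digest in the MD5 parameter, the activity can be asked to generate one itself, as in the sketch below (the session, bucket, key, and file path are placeholders):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="data.csv" 
FILE="c:\exports\data.csv" GENERATEMD5="YES" />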

Content type (optional)

Text

No

(Empty)

CONTENTTYPE="text/plain"

A standard MIME type describing the format of the contents (e.g., text/plain).

Create and populate dataset with S3 object information

Text

No

(Empty)

RESULTDATASET="S3ObjectInfo"

The name of the dataset to create and populate with information about the object(s) that were added.
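
To capture details about the uploaded object(s) for use in later steps, a dataset can be named as in the sketch below (the session, bucket, masks, and dataset name are placeholders; the columns the dataset exposes are not listed here):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="*.txt" 
FILE="c:\folder1\*.txt" RESULTDATASET="S3ObjectInfo" />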

File Parameters

Property

Type

Required

Default

Markup

Description

Include subfolders

Yes/No

No

No

SUBFOLDERS="YES"

If set to YES, denotes that, if present, subfolders should be searched for files matching the mask specified in the Put file(s) parameter. If set to NO (default), subfolders are ignored, and only files that exist in the root of the source folder will be searched.

Turn archive attribute off

Yes/No

No

No

ARCHIVETURNOFF="YES"

If set to YES, specifies that the archive attribute of the source files should be switched OFF. The Windows archive attribute is generally used to track whether a file has been backed up. Turning the source file's archive attribute off indicates to many backup programs that the file has already been backed up. This parameter is set to NO by default.

Match case

Yes/No

No

No

MATCHCASE="yes"

If set to YES, causes this activity to become case sensitive. Set to NO by default.

Validate checksum

Yes/No

No

No

CHECKSUM="yes"

If set to YES, causes this activity to validate file checksum before uploading. Set to NO by default.

Exclude mask

Text

No

(Empty)

  1. EXCLUDE="*.txt"

  2. EXCLUDE="*.txt|*.bak"

  3. EXCLUDE="c:\foldername"

Causes this activity to omit files matching the mask(s) specified. File names or wildcard masks may be used. Multiple entries may be specified by separating them with a pipe symbol (|), for example: *.txt|*.bak

Regular expression

Yes/No

No

No

RE="yes"

If set to YES, indicates that the value entered in the Exclude mask parameter will be interpreted as a regular expression. If set to NO (default) the value will be interpreted as normal readable text.
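
Combining the Exclude mask parameter with a file mask, a sketch might look like the following (the session, bucket, folder, and masks are placeholders; setting RE="YES" instead would cause the exclude value to be read as a regular expression):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="*.*" 
FILE="c:\folder1\*.*" EXCLUDE="*.tmp|*.bak" RE="NO" />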

Only if newer than

Date

No

(Empty)

ISNEWERTHAN=

"%DateSerial(2001,10,12) + TimeSerial(00,00,00)%"

If enabled, causes this activity to only act on files that are newer than the date/time specified. If this parameter is left blank or disabled (default), file dates are ignored. Click the Custom button to select from a list of pre-defined date parameters. Enable the Expression option to allow entry of a date/time expression.

Only if older than

Date

No

(Empty)

ISOLDERTHAN=

"%DateSerial(2001,10,12) + TimeSerial(00,00,00)%"

If enabled, causes this activity to only act on files that are older than the date/time specified. If this parameter is left blank or disabled (default), file dates are ignored. Click the Custom button to select from a list of pre-defined date parameters. Enable the Expression option to allow entry of a date/time expression.
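
As a sketch of restricting an upload by file date (the session, bucket, folder, and mask are placeholders; the date expression is the one shown in the Markup column above):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="*.log" 
FILE="c:\logs\*.log" 
ISNEWERTHAN="%DateSerial(2001,10,12) + TimeSerial(00,00,00)%" />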

File Filter Parameters

Property

Type

Required

Default

Markup

Description

Attributes

Text (Options)

No

(Empty)

ATTRFILTER="+R+A-H" (include read-only and archive files but exclude hidden files)

This group of settings causes this activity to filter which files are affected based on the original attribute settings of the source files (an example follows this list). In visual mode, a group of controls is provided to assist in the selection of this parameter. In AML mode, a single text item must be specified that contains the original attribute mask of the files you wish to affect. Available options are:

  • R—Read-only: Specifying "+R" causes files with this attribute turned on to be included, "-R" causes files with this attribute turned off to be included, not specifying the letter (default) causes this attribute to be ignored.

  • A—Archive: Specifying "+A" causes files with this attribute turned on to be included, "-A" causes files with this attribute turned off to be included, not specifying the letter (default) causes this attribute to be ignored.

  • S—System: Specifying "+S" causes files with this attribute turned on to be included, "-S" causes files with this attribute turned off to be included, not specifying the letter (default) causes this attribute to be ignored.

  • H—Hidden: Specifying "+H" causes files with this attribute turned on to be included, "-H" causes files with this attribute turned off to be included, not specifying the letter (default) causes this attribute to be ignored.

  • C—Compression: Specifying "+C" causes files with this attribute turned on to be included, "-C" causes files with this attribute turned off to be included, not specifying the letter (default) causes this attribute to be ignored.
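
For instance, uploading only files that have the archive attribute set while skipping hidden files might look like the sketch below (the session, bucket, folder, and mask are placeholders):

<AMAWSS3 ACTIVITY="put_object" SESSION="S3Session1" 
BUCKETNAME="myBucket" KEYNAME="*.*" 
FILE="c:\folder1\*.*" ATTRFILTER="+A-H" />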

Advanced Parameters

Each Amazon S3 object has a set of associated key-value pairs called headers or metadata. Metadata can provide important details about an object, such as file name, type, date of creation/modification, and so on. There are two kinds of metadata in S3: system metadata and user metadata. System metadata is used and processed by Amazon S3. User metadata (also known as a custom header) is specified by you, the user. Amazon S3 simply stores it and passes it back to you upon request. S3 lets you store your own information as custom headers or user metadata, such as First Name, Last Name, Company Name, Phone Numbers, and so on, so that you can distinguish specific files. Using this parameter, you can add new custom headers/user metadata to existing S3 objects, edit default S3 metadata on a bucket, or store/upload new objects with custom headers or metadata.

Property

Type

Required

Default

Markup

Description

Name

Text

No

(Empty)

HEADER NAME="myHeader"

Specifies the "key" in a key-value pair. This is the handle that you assign to an object. In Amazon S3, details about each file and folder are stored in key value pairs called metadata or headers. System metadata is used and processed by Amazon S3, however, user metadata or custom headers can be specified by you. This adds more flexibility and enables you to better distinguish specific files by adding or editing custom headers on existing S3 objects or assigning custom headers to new objects. Press Click here to add new row... to add a key-value pair. Press the red X to remove an existing key-value pair.

Value

Text

No

(Empty)

VALUE="theValue"

Specifies the "value" in a key-value pair. This is the content that you are storing for an object. In Amazon S3, details about each file and folder are stored in key value pairs called metadata or headers. System metadata is used and processed by Amazon S3, however, user metadata or custom headers can be specified by you. This adds more flexibility and enables you to better distinguish specific files by adding or editing custom headers on existing S3 objects or assigning custom headers to new objects. Press Click here to add new row... to add a key-value pair. Press the red X to remove an existing key-value pair.

Description tab - A custom description can be provided on the Description tab to convey additional information or share special notes about a task step.

Error Causes tab - Specify how this step should behave upon the occurrence of an error. (Refer to Task Builder > Error Causes Tab for details.)

On Error tab - Specify what AWE should do if this step encounters an error as defined on the Error Causes tab. (Refer to Task Builder > On Error Tab for details.)

S3 Endpoints and Regions

This table contains a complete list of Amazon endpoints, along with their corresponding regions, supported protocols and location constraints.

Endpoint

Region

Protocol

Location Constraints

s3.amazonaws.com

US Standard *

HTTP and HTTPS

(none required)

s3.us-west-2.amazonaws.com

US West (Oregon) Region

HTTP and HTTPS

us-west-2

s3.us-west-1.amazonaws.com

US West (Northern California) Region

HTTP and HTTPS

us-west-1

s3.eu-west-1.amazonaws.com

EU (Ireland) Region

HTTP and HTTPS

EU

s3.ap-southeast-1.amazonaws.com

Asia Pacific (Singapore) Region

HTTP and HTTPS

ap-southeast-1

s3.ap-southeast-2.amazonaws.com

Asia Pacific (Sydney) Region

HTTP and HTTPS

ap-southeast-2

s3.ap-northeast-1.amazonaws.com

Asia Pacific (Tokyo) Region

HTTP and HTTPS

ap-northeast-1

s3.sa-east-1.amazonaws.com

South America (Sao Paulo) Region

HTTP and HTTPS

sa-east-1

* The US Standard region automatically routes requests to facilities in Northern Virginia or the Pacific Northwest using network maps.

Example

The sample AML code below can be copied and pasted directly into the Steps panel of the Task Builder.

Description: Put file "C:\Temp\Book1.xlsx" in bucket "myBucket". Key name is "Book1.xlsx". Use "my_session" S3 session.

<AMAWSS3 ACTIVITY="put_object" 
BUCKETNAME="myBucket" KEYNAME="Book1.xlsx" 
FILE="C:\Temp\Book1.xlsx" SESSION="my_session" />