Defining S3 Monitor File Jobs
This job allows you to monitor an S3 bucket and check if files have been created or updated in the bucket.
S3 Monitor File Job Parameters
In the Monitor File Job section, you define the parameters required to run the job on the S3 system from Automic Automation.
Some fields allow you to open a picker dialog from which you can select the file and the bucket. By default, only 200 entries are displayed. If the relevant file or bucket is not among those 200 entries, type its name in the Search field to narrow down the list.
- Connection
Select the S3 Connection object containing the relevant information to connect to the Simple Storage Service system.
To search for a Connection object, start typing its name to limit the list to the objects that match your input.
- Monitor Job Type
Select the type of monitoring that the job performs on the specified S3 bucket:
  - CREATE
The job monitors whether the file that you specify in File Name has been created in the bucket. The job keeps monitoring the file until the time specified in Steady State has elapsed (see below). Only then does the job send a positive response.
  - UPDATE
The job monitors whether the file that you specify in File Name has been updated in the bucket. The job keeps monitoring the file until the time specified in Steady State has elapsed. Only then does the job send a positive response.
  - GENERATE
The job combines the monitoring capabilities of the CREATE and UPDATE types. Use this type if you are not sure whether the file is already available in the bucket.
    - If the file is not available yet, it monitors whether it is created and sends a response as described for CREATE job types.
    - If the file is already available, it monitors whether it has been modified and sends a response as described for UPDATE job types.
In either case, the job keeps track of the file once it has been either created or updated until the time specified in Steady State has elapsed. Only then does the job send a positive response.
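The following Python sketch illustrates how the three monitor types could be interpreted. It is only an illustration of the behavior described above, not the agent's actual implementation, and all names in it are hypothetical.

```python
# Illustration only: how the CREATE, UPDATE, and GENERATE monitor types could
# be evaluated against the state of an S3 object. This is not the Automic
# agent's implementation; all names are hypothetical.
from datetime import datetime
from typing import Optional

def condition_met(monitor_type: str,
                  existed_at_start: bool,
                  exists_now: bool,
                  last_modified: Optional[datetime],
                  monitor_start: datetime) -> bool:
    """Return True once the watched event (creation or update) has occurred."""
    created = exists_now and not existed_at_start
    updated = (exists_now and last_modified is not None
               and last_modified > monitor_start)
    if monitor_type == "CREATE":
        return created
    if monitor_type == "UPDATE":
        return updated
    if monitor_type == "GENERATE":
        return created or updated
    raise ValueError(f"Unknown monitor type: {monitor_type}")
```

Even once the condition is met, the positive response is only sent after the Steady State period (see below) has elapsed.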
- Region
Defining the region in the job is optional and only relevant for AWS. If you do not define it, the job uses the URL defined in the Connection object.
However, if you do define the region in the job, make sure that it matches the region defined in the Connection object that you have selected for the job. If the two region definitions do not match, the job execution fails and an error message is logged in both the Agent (PLOG) and the Job (REP) reports (see Monitoring S3 Jobs).
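As an illustration, the following minimal sketch (assuming the boto3 Python SDK; the region, endpoint URL, and bucket name are placeholders) shows that both the region and the endpoint URL determine where S3 requests are sent, which is why the two definitions must match:

```python
# Minimal sketch assuming boto3; region, endpoint URL, and bucket are placeholders.
import boto3

# Equivalent of the URL defined in the Connection object.
endpoint_url = "https://s3.eu-west-1.amazonaws.com"

# Job-level region definition: it must refer to the same region as the endpoint,
# otherwise requests are signed for the wrong region and the call fails.
client = boto3.client("s3", region_name="eu-west-1", endpoint_url=endpoint_url)

client.list_objects_v2(Bucket="my-bucket", MaxKeys=1)
```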
- Bucket Name
Define the name of the bucket to be monitored. You can click the browse button to the right of the field to open a picker dialog where you can select the relevant name.
- File Name
Define the name of the file to be watched in the bucket. The file name is case-sensitive. You can specify the exact name or write a regular expression.
- Use Regex
Select this checkbox if you want to use a regular expression rather than the exact name of the file in the File Name field. If you do so, the job can return multiple file names.
Examples:
Aut.*.txt: The combination .* matches any number of characters; therefore, the regex finds text files that start with Aut and end with .txt, with any number of characters in between, such as Automic.txt.
tes.?.txt: The combination .? matches at most one character; therefore, the regex finds text files whose names start with tes and end with .txt, with at most one additional character in the position of the .?, such as test.txt or tes.txt.
Note: If the Use Regex checkbox is not selected, any special characters (?, *) used are considered standard characters and part of the name string.
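As a quick illustration of the difference, the following Python sketch compares exact-name matching with regex matching against a hypothetical list of file names:

```python
# Illustration only: exact-name matching vs. the Use Regex option.
import re

keys = ["Automic.txt", "Autoforecast.txt", "notes.txt", "test.txt"]  # made up

# Use Regex not selected: the File Name value is compared literally, so
# special characters such as * and ? have no wildcard meaning.
exact_match = [k for k in keys if k == "Aut.*.txt"]       # -> []

# Use Regex selected: the File Name value is treated as a regular expression.
pattern = re.compile(r"Aut.*.txt")
regex_match = [k for k in keys if pattern.fullmatch(k)]   # -> ['Automic.txt', 'Autoforecast.txt']
```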
- Sorting
This field is displayed only when the Use Regex checkbox is selected.
When you use a regular expression, the job returns multiple file names. Choosing a sorting strategy allows you to define how you want to sort and save the files monitored by the job. The options are the following:
  - Native Order: The files are not sorted in any particular order.
  - Alpha Ascending: Sorted alphabetically by file name in ascending order.
  - Alpha Descending: Sorted alphabetically by file name in descending order.
  - Time Ascending: Sorted by last modified time in ascending order.
  - Time Descending: Sorted by last modified time in descending order.
  - Size Ascending: Sorted by file size in ascending order.
  - Size Descending: Sorted by file size in descending order.
Important! Even though the job returns multiple file names, only the first one (as per the sorting order) is displayed in the Job Details in the Process Monitoring.
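The following Python sketch (illustration only, not Automic's code) shows how these strategies could be applied to listing entries that carry Key, LastModified, and Size fields, and how the first entry of the sorted result would be picked:

```python
# Illustration only: the sorting strategies applied to listed S3 objects.
def sort_matches(objects, strategy):
    """Return the matched objects ordered according to the chosen strategy."""
    if strategy == "Native Order":
        return list(objects)                  # keep the order the listing returned
    key_func, reverse = {
        "Alpha Ascending":  (lambda o: o["Key"], False),
        "Alpha Descending": (lambda o: o["Key"], True),
        "Time Ascending":   (lambda o: o["LastModified"], False),
        "Time Descending":  (lambda o: o["LastModified"], True),
        "Size Ascending":   (lambda o: o["Size"], False),
        "Size Descending":  (lambda o: o["Size"], True),
    }[strategy]
    return sorted(objects, key=key_func, reverse=reverse)

# Only the first entry of the sorted result is shown in the Job Details:
# first = sort_matches(matched_objects, "Time Descending")[0]
```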
- Steady State
Define the amount of time in seconds (s), minutes (m), or hours (h) the file should remain in a steady state before being ready for further use. For example, when a file is created, you see that it was created but the content of the file might not be uploaded completely yet. This value allows you to define the period of time that the file should remain as is, without changes.
Default value: 60s
Values allowed: 1 to 500
Format: number + time unit (s, m, h), for example, 60s / 1m / 1h
- Sleep interval
Define the amount of time in seconds before resubmitting the monitor status request.
Default value: 60
Values allowed: 0 to 1800
Format: no time unit required, as this parameter can be defined only in seconds, for example, 90
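To make the interplay of Sleep interval and Steady State concrete, here is a hedged sketch of a polling loop, assuming boto3; the bucket, the key, and the timing values are placeholders, and the CREATE/UPDATE trigger logic sketched earlier is left out for brevity:

```python
# Sketch only: repeat the status request every Sleep interval seconds and
# report a positive result once the object has stayed unchanged for the
# Steady State period. Bucket, key, and values are placeholders.
import time
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
BUCKET, KEY = "my-bucket", "opt/files/example_04_08.pdf"
SLEEP_INTERVAL = 60          # seconds between status requests
STEADY_STATE = 60            # seconds the object must remain unchanged

last_seen = None             # last observed LastModified timestamp
steady_since = None          # moment the object stopped changing

while True:
    try:
        head = s3.head_object(Bucket=BUCKET, Key=KEY)
        if head["LastModified"] != last_seen:
            last_seen, steady_since = head["LastModified"], time.monotonic()
        elif time.monotonic() - steady_since >= STEADY_STATE:
            print("File is steady; reporting a positive result.")
            break
    except ClientError as err:
        if err.response["Error"]["Code"] != "404":
            raise                      # unexpected error
        # Object not there yet: keep polling (CREATE/GENERATE semantics).
    time.sleep(SLEEP_INTERVAL)
```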
- Query Param
Allows you to filter the query and therefore the query response.
Examples:
When using a regex, the prefix query parameter allows you to narrow down the search and make it more efficient. For example, if your bucket contains the following files:
/opt/files/example_04_08.pdf
/opt/files/example_05_08.pdf
/opt/files/example_06_08.pdf
/opt/files/demo_07_08.pdf
/opt/files/demo_08_08.pdf
If you want to check for files starting with example and with a .pdf extension, you can specify .*.pdf in the File Name field and enable the Use Regex option.
You can further narrow down the query using the following query parameter:
prefix=/opt/files/example
You can also send multiple query parameters using the format <param1>=<value1>&<param2>=<value2>.
For example, you can add the list-type=2 parameter to use version 2 of the AWS API operation:
prefix=/opt/files/example&list-type=2
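The following boto3 sketch (the bucket name is a placeholder) shows the equivalent request: list_objects_v2 corresponds to the list-type=2 (ListObjectsV2) operation, and its Prefix argument has the same effect as the prefix query parameter:

```python
# Sketch assuming boto3; the bucket name is a placeholder.
import re
import boto3

s3 = boto3.client("s3")
response = s3.list_objects_v2(
    Bucket="my-bucket",
    Prefix="/opt/files/example",   # same effect as prefix=/opt/files/example
)

# Apply the File Name regex to the keys returned for that prefix.
pattern = re.compile(r".*.pdf")
matches = [obj["Key"] for obj in response.get("Contents", [])
           if pattern.fullmatch(obj["Key"])]
print(matches)   # e.g. ['/opt/files/example_04_08.pdf', ...]
```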
- Return File URI
Select this checkbox if you want the execution details to include the full URI of the file name. Otherwise, the details return only the file name.
The Pre-Process page allows you to define settings for all S3 Jobs using script statements. These statements are processed before the Monitor File Job is executed, see Setting S3 Job Properties Through Scripts.
AWS S3 Server-Side Encryption Parameters
Amazon S3 encrypts your objects at their destination as it writes them in the respective AWS S3 data center and decrypts them when you access them. You can set a default encryption configuration for your buckets. However, you can also override the default bucket encryption and define a different one per object to be stored in an AWS S3 bucket.
You can only apply one type of server-side encryption to an object at a time.
If the file that you want to monitor was originally uploaded to the bucket using a custom encryption type (SSE-C), you need to provide an algorithm and a key to be able to monitor it in the bucket.
If the file that you want to monitor was uploaded to the bucket using either SSE-S3, SSE-KMS, or DSSE-KMS encryption keys, you do not need to define any parameters.
- Specify Encryption Key
Allows you to specify that you want to use custom server-side encryption when the file that you want to monitor was uploaded using custom encryption (SSE-C).
Select this checkbox to define the following parameters:
  - Customer Algorithm: AES256 is the only supported algorithm.
  - Customer Key: Enter the encryption key that you want to use to execute the job.
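For illustration, reading the metadata of an SSE-C encrypted object with boto3 could look as follows; the bucket, the key, and the encryption key value are placeholders:

```python
# Sketch assuming boto3: accessing an object that was uploaded with a
# customer-provided key (SSE-C). All values below are placeholders.
import boto3

s3 = boto3.client("s3")
response = s3.head_object(
    Bucket="my-bucket",
    Key="opt/files/confidential.pdf",
    SSECustomerAlgorithm="AES256",                        # only supported algorithm
    SSECustomerKey="0123456789abcdef0123456789abcdef",    # 256-bit dummy key
)
print(response["LastModified"], response["ContentLength"])
```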
See also: