Defining Run Pipeline Job Properties
Automic Automation Azure Data Factory Run Pipeline Jobs allow you to run and rerun a pipeline in your Data Factory, and therefore all the activities grouped in that pipeline, from Automic Automation.
To run a pipeline successfully, you have to define all the relevant parameters and pass them to the application. These parameters control the pipeline's behavior and activities, such as passing dataset connection details or the path to a file to be processed. They are the pipeline name, the data factory in which the pipeline is located, the resource group that holds the data factory, and the subscription that comprises the resource group. You can also define the pipeline parameters that you want to pass to the application in JSON format.
To start the pipeline for the first time, you need to define the connection object, the subscription ID, the resource group, the data factory, and the pipeline name. Optionally, you can also define other parameters in JSON format to be passed on to the pipeline. You can decide how the JSON is created; for example, it could be created using a custom job or scripts in Automic Automation. Regardless of how the file is created, make sure that it is available on the Agent machine (host).
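For example, a preceding step could generate the parameters file with a short script. The following is a minimal Python sketch; the parameter names, values, and file name are illustrative and must match the parameters declared in your pipeline and a location that the Agent machine (host) can read:

```python
import json

# Illustrative pipeline parameters; the names must match the
# parameters declared in your Data Factory pipeline.
params = {
    "inputFolder": "raw/2024-05",
    "fileName": "sales.csv",
}

# Write the file to a path that is accessible on the Agent machine (host).
with open("pipeline_params.json", "w") as f:
    json.dump(params, f, indent=2)
```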
Run Pipeline Job
On the Run Pipeline Job page you define the parameters relevant to run a pipeline in Azure Data Factory.
To start the pipeline for the first time, you need to define the connection object, the subscription ID, the resource group, the data factory, and the pipeline name. Optionally, you can also define other parameters in JSON format to be passed on to the pipeline.
-
Connection
Select the Azure Data Factory Connection object containing the relevant information to connect to the application.
To search for a Connection object, start typing its name to limit the list of the objects that match your input.
-
Subscription ID
Enter the ID of the Azure subscription in which your resource group is located.
-
Resource Group Name
Define the name of the resource group that holds the relevant data factory.
-
Data Factory Name
Define the name of the data factory in which the pipeline is located.
-
Pipeline Name
Define the name of the relevant pipeline.
Value: text
-
Pipeline Parameters (Optional)
Select one of the options available:
-
JSON
Use the JSON field to enter the JSON payload definition.
Important! There are many options available to define the JSON payload. For more information and examples of the JSON definition, see Defining the JSON.
-
File Path
Use the JSON File Path field to define the path to the JSON file containing the attributes that you want to pass to the application. Make sure that the file is available on the Agent machine (host).
-
The Pre-Process page allows you to define the settings of the Job using script statements. These statements are processed before the Run Pipeline Job is executed, see Setting Job Properties Through Scripts.
Defining the JSON
This section gives you examples of how you could define the JSON field when defining a Run Pipeline Job. You have different options available.
Simple JSON Definition
The first option is a simple payload definition in the JSON field. Make sure that you define the parameters required to run the pipeline, such as the subscription ID, the resource group, the data factory, and the pipeline name.
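A simple payload definition could look as follows. This is an illustrative sketch only; the exact key names depend on your Job template and pipeline, and the values shown are placeholders:

```json
{
  "subscriptionID": "<your-subscription-id>",
  "resourceGroupName": "my-resource-group",
  "dataFactoryName": "my-data-factory",
  "pipelineName": "my-pipeline"
}
```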
Using Variables
You can also use variables in the payload definition.
Example
In the Pipeline Parameters field, enter the following:
&PIPELINEPARA#
If the variable is not defined yet, you must define it now. You do so on the Variables page of the Run Pipeline Job definition.
When you execute the Job, the variables are replaced with the values you have defined. This is visible in the Agent log (PLOG), see Monitoring Azure Data Factory Jobs.
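For instance, the &PIPELINEPARA# variable could hold a complete JSON object as its value. The parameter names below are illustrative and must match the parameters declared in your pipeline:

```json
{"inputFolder": "raw/2024-05", "fileName": "sales.csv"}
```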
Run Pipeline Job in a Workflow
You can also use the JSON field if you want to include a Run Pipeline Job in a Workflow and you want to use Automation Engine variables in it.
Example
In the Workflow, a Script object (SCRI) that defines the variables relevant for the Data Factory Name, the Pipeline Name, and the Pipeline Parameters precedes your Run Pipeline Job.
In the Run Pipeline Job definition, you include those variables.
When the Job is executed, the variables are replaced with the values you have defined. This is visible in the Agent log (PLOG), see Monitoring Azure Data Factory Jobs.
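The preceding Script object could, for example, publish the values with :PSET so that they are available to the subsequent tasks in the Workflow. This is a minimal sketch using standard Automation Engine script statements; the variable names and values are illustrative:

```
! Publish the values so that the following Run Pipeline Job can use them
:PSET &DATAFACTORY# = "my-data-factory"
:PSET &PIPELINE# = "my-pipeline"
:PSET &PIPELINEPARA# = '{"inputFolder": "raw/2024-05"}'
```

In the Run Pipeline Job, you would then enter &DATAFACTORY#, &PIPELINE#, and &PIPELINEPARA# in the corresponding fields.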