Monitoring Azure Data Factory Jobs

When you execute an object in Automic Automation, the corresponding task is visible in the list of Tasks in the Process Monitoring perspective. Process Monitoring gives you the full range of possibilities to monitor tasks and to analyze, identify, and remediate problems. It displays comprehensive data on active and inactive tasks and provides tools to filter and group them. In this perspective, you can modify active tasks, open their reports and execution lists, and troubleshoot problems.

If you want to learn more about how to monitor your jobs, refer to the Automic Automation product documentation at Monitoring Tasks.

For more information on how to work with tasks in the Process Monitoring perspective, refer to the Automic Automation documentation at Working with Tasks.


Statuses

The Process Monitoring perspective contains two status-related columns:

  • Status

    This column shows the status of the Automic Automation Run Pipeline Job.

  • Remote Status

    This column shows the status of the Run Pipeline Job on Azure Data Factory.

Possible Statuses

The tasks can have the following statuses:

Status Column

  • While executing, the status in Automic Automation is Active.

  • On completion, the status is ENDED_OK or ENDED_NOT_OK, depending on the result.

Remote Status

  • While executing, the status in Azure Data Factory is Running.

  • On completion, the status is Success or Failed, depending on the result.

This information is written into the Report (REP) log file, see Reports.
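If you want to evaluate the remote status in a script, you can read it from the Report (REP) on the Post Process page. The following is a minimal sketch, assuming the remote status appears on a report line that matches the *status* filter (the filter string is illustrative, not prescriptive):

! Read the job's own Report (REP) and print each line matching the filter
:SET &HND# = PREP_PROCESS_REPORT(,, "REP", "*status*")
:PROCESS &HND#
:  SET &LINE# = GET_PROCESS_LINE(&HND#)
:  PRINT "Remote status line: &LINE#"
:ENDPROCESS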

Monitoring Job Details

  1. Go to the list of tasks in the Process Monitoring perspective.

  2. Find the Azure Data Factory task.

  3. Select the task and click the Details button.

    The Details pane opens on the right-hand side and shows a summary of the execution of the selected task. The General and Job sections of the Details pane display information about the object configuration and its execution in Automic Automation.

    The Object Variables section displays the information that the Data Factory system reports back to Automic Automation for the Run Pipeline Jobs:

    • When you run a task, Data Factory sends back the value of the &PIPELINERUNID# variable, which you can use to restart a task from a failed activity (see Canceling and Restarting Jobs).
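    For example, a minimal Post Process sketch that writes the returned run ID to the activation log, assuming Data Factory reported the variable back:

    ! Log the pipeline run ID reported back by Data Factory
    :PRINT "Data Factory pipeline run ID: &PIPELINERUNID#"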

Canceling and Restarting Jobs

You can execute Jobs and cancel the corresponding tasks in the Process Monitoring perspective.

Canceling means that both the Automic Automation task and the pipeline on the Azure Data Factory are canceled. If canceling the execution fails for some reason (for example, because you do not have the rights to cancel the execution), the execution remains active and that information is visible in the job report.

You can also restart a task that was canceled or that failed, which allows you to restart the pipeline in your Azure Data Factory. You can restart it either from scratch or from a failed activity. In the latter case, the activities that were executed successfully are skipped and the pipeline execution resumes from the failed activities.

Example

You trigger a Job in Automic Automation which, in turn, triggers the pipeline in the Data Factory. The job execution returns the &PIPELINERUNID# value. If, for some reason, the pipeline fails on the Data Factory, the Job also fails in Automic Automation.

Restarting the Job in Automic Automation triggers a new pipeline run in Data Factory. However, you can also restart the pipeline in the Data Factory from the failed activity.

To do so, you need to pass the &RESTART_PIPELINERUNID# variable, which must contain the value of the &PIPELINERUNID# variable that the Data Factory returned in the Object Variables section.

You can pass that value on the Post Process page of the object definition, for example:

:PSET &RESTART_PIPELINERUNID# = &PIPELINERUNID#

For more information, refer to the Automic Automation documentation at Conditions, Preconditions, Postconditions.

If the object definition includes a definition for the &RESTART_PIPELINERUNID# variable, the task is restarted from the failed activity. The variable can be defined on the same or on a subsequent job.

If there is no &RESTART_PIPELINERUNID# variable definition, a completely new run is started and the Object Variables section returns a new &PIPELINERUNID# value.
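For example, a minimal Post Process sketch that only requests a restart from the failed activity when Data Factory actually returned a run ID (the emptiness check is illustrative):

! Propagate the run ID only if Data Factory returned one
:IF &PIPELINERUNID# <> ""
:  PSET &RESTART_PIPELINERUNID# = &PIPELINERUNID#
:ENDIF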

Also, if you restart a job after its definition has been changed, the pipeline runs with the values currently available in the job definition.

Example

You run a pipeline called PipelineTest. The job is canceled or fails and you want to restart it but, before doing so, you change the pipeline name from PipelineTest to PipelineWeb. In this case, you cannot restart PipelineTest because the changed, current job definition states that the pipeline is called PipelineWeb.

Note:

Restarting a task is not the same as starting one. For example, when restarting, the task does not re-run the preconditions because they have already been fulfilled. For more information, see Restarting Tasks in the Automic Automation documentation.

For more information on how to work with tasks on the Process Monitoring perspective, refer to the Automic Automation documentation at Working with Tasks.

Agent Connection

If the Agent stops working or loses its connection, the Job execution stops but resumes as soon as the Agent is connected again.

Reports

When jobs are executed, Automic Automation generates output files and reports that you can open from the user interface. Combined with the Execution Data, these reports provide comprehensive information about the status and the history of your executions. This information tracks all processes and allows you to control and monitor tasks, ensuring full auditing capability.

Reports are kept in the system until they are explicitly removed by a reorganization run. A reorganization run can keep the Execution Data while removing the reports. This is an advantage because reports can consume a large amount of hard drive space, while removing them from your database does not mean losing important historical data.

The following reports, combined with execution data, offer comprehensive information about the status and history of executions, ensuring full auditing capability:

  • Agent log (PLOG)

    This report captures everything that has to do with the Job execution from the Automic Automation perspective:

    It provides a comprehensive log of a Job’s execution, detailing its start time, configuration parameters, validation results, and the response from the target system, though the response is in a raw format that is not suitable for further processing.

    It serves as the primary resource for troubleshooting issues, including those related to Job configuration. For example, it shows the actual values of variables used in configuration, rather than their names, as these are resolved during runtime. Additionally, the agent log includes all parameters used during execution, execution results, the final integration URL, and the various job states throughout the process, which aids in identifying and addressing potential problems.

  • Report (REP)

    This report captures information that the target system sends to Automic Automation during and after Job execution. It provides relevant information about the Job execution and its results in a format that can be easily parsed for further processing, such as JSON or XML. You can use the report's content to trigger further actions.

    Note:

    The Report (REP) cannot be parsed if the job throws an exception. In this case, only the exception message is written into both the Report (REP) and the Agent log (PLOG).

    You will most probably include the Jobs in other Automic Automation objects such as Workflows or Schedules to orchestrate operations. When you do so, you can use the content of this report on the Post Process pages of the Job definition to trigger further actions based on that content, as shown in the sketch after this list. For more information, see Process Pages in the Automic Automation product documentation.

  • Directory

    This report provides a list of job output files created during a particular execution and available for downloading. It may also contain files downloaded from the target system.
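For example, the following Post Process sketch scans the Report (REP) for an error marker and, if one is found, starts a follow-up object. This is a minimal sketch; the *error* filter string and the ALERT.NOTIFY object name are assumptions for illustration:

! Scan the Report (REP) for lines matching the filter; the filter is an assumption
:SET &FOUND# = "N"
:SET &HND# = PREP_PROCESS_REPORT(,, "REP", "*error*")
:PROCESS &HND#
:  SET &LINE# = GET_PROCESS_LINE(&HND#)
:  SET &FOUND# = "Y"
:ENDPROCESS
! Start a hypothetical alerting object if a match was found
:IF &FOUND# = "Y"
:  SET &RET# = ACTIVATE_UC_OBJECT(ALERT.NOTIFY)
:ENDIF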

To Access the Reports

Do one of the following:

  • From the Process Assembly perspective

    • After you execute a Job, a notification with a link to the report is displayed. Click it to open the Last Report page.

    • Right-click a job and select Monitoring > Last Report.

    In either case, the most recent report created for the object opens.

  • From the Process Monitoring perspective

    • Right-click a task and select Open Report.

      The report for that particular execution opens. Its runID is displayed in the title bar to help you identify it.

    • In the list of Executions for a particular task, right-click and select Open Report.
