Condition Commands
This page includes the following:
- add_custom_dataset_creator()
- add_custom_dataset_hard_start_time()
- add_custom_hard_start_time()
- add_custom_trigger_dependency()
- add_custom_job_dependency()
- get_all_custom_conditions()
- get_datasets_with_no_start_conditions()
- get_jobs_with_no_start_conditions()
- remove_custom_hard_start_time()
- remove_custom_trigger_dependency()
- remove_custom_dataset_creator()
- remove_custom_job_dependency()
add_custom_dataset_creator()
Add a custom dataset-creator dependency from the predecessor from_job to the to_dataset, with an optional offset from the start time of the predecessor run. A zero offset indicates that the to_dataset run is predicted to begin using the average close-to-start latency of the to_dataset.
Parameters:
- scheduler - name of the scheduler containing to_dataset, can be None if only one scheduler is used
- from_job_name - name of the job from which to create the dependency
- to_dataset_name - name of the dataset on which to create the dependency
- offset (optional) - finish to start delay between from_job and to_dataset in milliseconds or "hh:mm:ss" format. Default: '00:00:00'
- rebuild_jobstreams (optional) - Automic Automation Intelligence will rebuild all the jobstreams as part of this command unless this parameter is False. Set this to False if you intend to add multiple custom dependencies, and set it to True on the last invocation of this command to rebuild all the jobstreams (see the batching sketch after the script example below). Default: True
Dependencies:
- Must be logged in
Result:
- New dataset creator created from from_job to to_dataset
Example add_custom_dataset_creator() command line usage:
>>> add_custom_dataset_creator(scheduler='CA7', from_job_name='RDAYWE', to_dataset_name='SDONRDAY')
1
Example add_custom_dataset_creator() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    add_custom_dataset_creator(scheduler='CA7', from_job_name='EWNPTF2E', to_dataset_name='MHDOCE')
    print 'end'
finally:
    logout()
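Because each call rebuilds all jobstreams by default, a batch of custom creator dependencies is usually added with rebuild_jobstreams=False and only the final call set to True. A minimal sketch of that pattern; the job/dataset pairs are illustrative only:

import sys
from jaws import *

# Hypothetical predecessor job / dataset pairs; replace with real names from your scheduler.
pairs = [('EWNPTF2E', 'MHDOCE'),
         ('RDAYWE', 'SDONRDAY'),
         ('US2913T', 'ADCDMST.TRIGGERS.ENG')]

login()
try:
    for i, (from_job, to_dataset) in enumerate(pairs):
        last = (i == len(pairs) - 1)
        # Defer the expensive jobstream rebuild until the last dependency is added.
        add_custom_dataset_creator(scheduler='CA7',
                                   from_job_name=from_job,
                                   to_dataset_name=to_dataset,
                                   rebuild_jobstreams=last)
finally:
    logout()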
add_custom_dataset_hard_start_time()
CA7 only. Add a custom hard start time for dataset_name. No return value. A custom hard start time is created on the dataset for the given days (calendar) and start time. The hard start time exists only in the Automic Automation Intelligence database and no changes are made in the scheduler.
Parameters:
- scheduler (optional) - name of the scheduler containing the dataset. None if only one dataset with the given name exists in any scheduler defined to AAI. Default: None
- dataset_name - name of the dataset on which to create the hard start time
- days - a comma separated list of name(s) of the days of the week to run this hard start time
- start_time - the start time in "HH:MM:SS" format
Dependencies:
- Must be logged in
Result:
- New hard start time created for a specified dataset
Example add_custom_dataset_hard_start_time() command line usage:
>>> add_custom_dataset_hard_start_time(scheduler='CA7', dataset_name='ADCDMST.TRIGGERS.ENG', days='mon, tue, wed, thu, fri, sat, sun', start_time='01:30:00')
Example add_custom_dataset_hard_start_time() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    add_custom_dataset_hard_start_time(scheduler='CA7', dataset_name='ADCDMST.TRIGGERS.ENG', days='mon, tue, wed, thu, fri, sat, sun', start_time='01:30:00')
    print 'end'
finally:
    logout()
add_custom_hard_start_time()
Add a custom hard start time for a job. No return value.
A custom hard start time is created on the job on the given schedId. The hard start time exists only in the Automic Automation Intelligence database and no changes are made in the scheduler.
Parameters:
- scheduler - name of the scheduler containing the job. None if only one job with the given name exists in any scheduler defined to AAI
- job_name - name of the job on which to create the hard start time
- days - a comma separated string of name(s) of the days of the week to run this hard start time; can be None for AutoSys schedulers, which means use the existing calendar or, if there is not one, use all days. If a calendar already exists in AutoSys this field is ignored (see the AutoSys sketch after the script example below)
- start_time - time of the hard start in "HH:MM:SS" format
- schedId [CA7 only] - string schedId on which job_name starts. May not be '000' or None. For non-CA7 schedulers, this value is ignored
Dependencies:
- Must be logged in
Result:
- New hard start time created for a specified job
Example add_custom_hard_start_time() command line usage:
>>> add_custom_hard_start_time(scheduler='CA7', job_name='US2913T', days='mon, tue, wed, thu, fri, sat, sun', start_time='01:00:00', schedId='001')
1
Example add_custom_hard_start_time() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    add_custom_hard_start_time(scheduler='CA7', job_name='US2913T', days='mon, tue, wed, thu, fri, sat, sun', start_time='01:00:00', schedId='001')
    print 'end'
finally:
    logout()
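For AutoSys schedulers, days can be None so that the job's existing calendar (or all days, if it has none) is used. A minimal sketch of that case; the scheduler and job names are illustrative only, and schedId is passed but ignored for non-CA7 schedulers:

import sys
from jaws import *

login()
try:
    # days=None on an AutoSys scheduler: use the job's existing calendar,
    # or all days if the job has none. Scheduler and job names are hypothetical.
    add_custom_hard_start_time(scheduler='autosys45',
                               job_name='FS45_jobB',
                               days=None,
                               start_time='06:00:00',
                               schedId='001')  # ignored for non-CA7 schedulers
finally:
    logout()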
add_custom_trigger_dependency()
Add a custom trigger-type dependency on the to_job to the predecessor from_job, with the specified sched_ids and an optional offset from the start time of the predecessor run. A zero offset indicates that the to_job run is predicted to begin when the from_job run ends.
Parameters:
- scheduler - name of the scheduler containing to_job, can be None if only one scheduler is used
- from_job_name - name of the job from which to create the dependency
- to_job_name - name of the job on which to create the dependency
- from_job_schedId [CA7 only] - schedId of from_job. Default: '000'
- to_job_schedId [CA7 only] - schedId of to_job. Default: '000'
- offset (optional) - finish to start delay between from_job and to_job in milliseconds or "hh:mm:ss" format (see the offset sketch after the script example below). Default: '00:00:00'
- rebuild_jobstreams (optional) - Automic Automation Intelligence will rebuild all the jobstreams as part of this command unless this parameter is False. Set this to False if you intend to add multiple custom dependencies. Set it to True on the last invocation of this command to rebuild all the jobstreams. Default: True
Dependencies:
- Must be logged in
Result:
- New trigger-type dependency created from from_job to to_job
Example add_custom_trigger_dependency() command line usage:
>>> add_custom_trigger_dependency(scheduler='CA7', from_job_name='RDAYWE', to_job_name='SDONRDAY', from_job_schedId='001', to_job_schedId='001')
1
Example add_custom_trigger_dependency() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    add_custom_trigger_dependency(scheduler='CA7', from_job_name='EWNPTF2E', to_job_name='MHDOCE', from_job_schedId='001', to_job_schedId='001')
    print 'end'
finally:
    logout()
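None of the examples above set offset. A minimal sketch that adds the same trigger dependency as the script above but with a 15-minute finish-to-start delay:

import sys
from jaws import *

login()
try:
    # offset accepts milliseconds or an "hh:mm:ss" string; '00:15:00' is a 15-minute delay.
    add_custom_trigger_dependency(scheduler='CA7',
                                  from_job_name='EWNPTF2E',
                                  to_job_name='MHDOCE',
                                  from_job_schedId='001',
                                  to_job_schedId='001',
                                  offset='00:15:00')
finally:
    logout()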
add_custom_job_dependency()
Add a custom job dependency between two jobs. A zero offset indicates that the job run is predicted to begin when the dependent job run ends.
Parameters:
- scheduler - name of the scheduler containing the job
- jobName - name of the job on which to create the dependency
- dependentScheduler - name of the scheduler containing the dependent job
- dependentJobName - name of the job from which to create the dependency
- type - can be one of "Success", "Failure", "Terminated", "NotRunning", "ExitCode"
- operator (optional) - only for ExitCode, can be "=", "!=", "<=", ">=", "<", ">"
- value (optional) - integer value for ExitCode
- offset (optional) - finish to start delay between the dependent job and the job in milliseconds or "hh:mm:ss" format. Default: '00:00:00'
- lookbackSecs (optional) - default: -1 (no look back)
- parentJobName (optional) - name or container path of the parent job
- dependentParentJobName (optional) - name or container path of the parent job of the dependent
- override_type [optional, AutoSys and IWS only] - can be one of the following:
  - 0: do not override existing conditions (default)
  - 1: only override existing job dependency conditions
- rebuild_jobstreams (optional) - Automic Automation Intelligence will rebuild all the jobstreams as part of this command unless this parameter is False. Set this to False if you intend to add multiple custom dependencies. Set it to True on the last invocation of this command to rebuild all the jobstreams (see the batching sketch after the script example below). Default: True
Dependencies:
- Must be logged in
Result:
- New job dependency condition created from the dependent job to the job
Example add_custom_job_dependency() command line usage:
>>> add_custom_job_dependency(scheduler='prod1', jobName='jobB', dependentScheduler='prod2', dependentJobName='jobA', type="Success")
1
>>> add_custom_job_dependency(scheduler='prod1', jobName='jobB', dependentScheduler='prod2', dependentJobName='jobA', type="ExitCode", operator="=", value=0)
1
Example add_custom_job_dependency() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    add_custom_job_dependency(scheduler='prod1', jobName='jobB', dependentScheduler='prod2', dependentJobName='jobA', type="Success")
    print 'end'
finally:
    logout()
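When several conditions are added to the same job, the jobstream rebuild can be deferred to the last call. A minimal sketch, assuming the schedulers and job names from the example above; the second predecessor jobC is hypothetical:

import sys
from jaws import *

# Each entry: (dependent scheduler, dependent job, condition type). 'jobC' is hypothetical.
conditions = [('prod2', 'jobA', 'Success'),
              ('prod2', 'jobC', 'Failure')]

login()
try:
    for i, (dep_sched, dep_job, cond_type) in enumerate(conditions):
        last = (i == len(conditions) - 1)
        # Only the final call triggers the jobstream rebuild.
        add_custom_job_dependency(scheduler='prod1',
                                  jobName='jobB',
                                  dependentScheduler=dep_sched,
                                  dependentJobName=dep_job,
                                  type=cond_type,
                                  rebuild_jobstreams=last)
finally:
    logout()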
get_all_custom_conditions()
Get all custom conditions.
Parameters:
- scheduler (optional) - name of the scheduler containing the job (no scheduler means search all schedulers). Default: None
Dependencies:
- Must be logged in
Result:
- A set of CustomConditionKeys representing custom conditions, see CustomConditionKey
- See the script print_custom_dependencies.py in the 'examples' directory for an example of accessing the members of custom conditions.
Example get_all_custom_conditions() command line usage:
>>> get_all_custom_conditions()
done
[CustomCalendarStartKey[ job[e-topsval1-sh; null]; calendar[JAWSCALENDAR_mo_we_fr] ], CustomHardStartTimeKey[ job[e-topsval1-sh; null]; startTime[08:00:00] ], CustomHardStartTimeKey[ job[ADCDMST.TRIGGERS.ENG; 001]; calendar[JAWS_D_F_M_S_SA_T_TH_W]; startTime[01:30:00] ], CustomHardStartTimeKey[ job[mkt_price_Paris; null]; startTime[02:00:00] ]]
>>> get_all_custom_conditions(scheduler='autosys45')
Example get_all_custom_conditions() script usage:
import sys
from jaws import *

login()
print 'begin output'
try:
    scheds = schedulers()
    for s in scheds:
        print s + ':'
        custom = get_all_custom_conditions(s)
        for c in custom:
            print ' ' + str(c)
    print 'end output'
finally:
    logout()
get_datasets_with_no_start_conditions()
CA7 only. Get datasets that have recent runs but no apparent start conditions.
Parameters:
- scheduler (optional) - name of the scheduler containing the datasets (no scheduler means search all schedulers). Default: None
- job_stream (optional) - name of the jobstream in which to search for datasets (no jobstream means search all datasets independent of jobstreams). Default: None
- recent_run_window (optional) - duration in days of the recent past during which datasets with runs will be considered. Default: 30
- include_custom_dep (optional) - specifies whether datasets with only custom dependencies should be included. Default: False
Dependencies:
- Must be logged in
Result:
- A list of datasets
Example get_datasets_with_no_start_conditions() command line usage:
>>> get_datasets_with_no_start_conditions(scheduler='CA7', recent_run_window=15, include_custom_dep=True)
done
[JawsJobLite[ADCDMST.TRIGGERS.ENG], JawsJobLite[ADCDMST.TRIGGERS.TRIGGER3], JawsJobLite[ADCDMST.TRIGGERS.MANZANIT], JawsJobLite[ADCDMST.TRIGGERS.TRIGGER1], JawsJobLite[ADCDMST.TRIGGERS.ENGJAWS1], JawsJobLite[ADCDMST.TRIGGERS.TRIGGER2], JawsJobLite[ADCDMST.TRIGGERS.ENGJAWS2], JawsJobLite[ADCDMST.TRIGGERS.BRYAN], JawsJobLite[ADCDMST.TRIGGERS.DEVSJA01]]
>>> get_datasets_with_no_start_conditions(scheduler='CA7', job_stream='ca7jobstream')
Example get_datasets_with_no_start_conditions() script usage:
import sys
from jaws import *

login()
print 'begin output'
try:
    sets = get_datasets_with_no_start_conditions()
    for s in sets:
        print s
    print 'end output'
finally:
    logout()
get_jobs_with_no_start_conditions()
Get jobs that have recent runs but no apparent start conditions.
Parameters:
- scheduler (optional) - name of the scheduler containing the job (no scheduler means search all schedulers). Default: None
- job_stream (optional) - name of the jobstream in which to search for jobs (no jobstream means search all jobs independent of jobstreams). Default: None
- recent_run_window (optional) - duration in days of the recent past during which jobs with runs will be considered. Default: 30
- include_custom_dep (optional) - specifies whether jobs with only custom dependencies should be included. Default: False
Dependencies:
- Must be logged in
Result:
- A set of objects representing jobs. These objects contain data fields including name, parentPath, and schedulerId. For a complete list of fields and methods, execute dir(job) on one of these objects (see the sketch after the script example below).
Example get_jobs_with_no_start_conditions() command line usage:
>>> get_jobs_with_no_start_conditions(scheduler='autosys45', recent_run_window=20, include_custom_dep=True)
done
[JawsJobLite[FS45_jobD], JawsJobLite[FS45_jobC], JawsJobLite[FS45_jobB]]
>>> get_jobs_with_no_start_conditions(scheduler='autosys45', job_stream='autosys45-2')
Example get_jobs_with_no_start_conditions() script usage:
import sys
from jaws import *

login()
print 'begin output'
try:
    scheds = schedulers()
    for s in scheds:
        print s, get_jobs_with_no_start_conditions(s)
    print 'end output'
finally:
    logout()
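A short sketch that prints the documented fields of each returned job. It assumes the fields named in the result description above are exposed as plain attributes (name, parentPath, schedulerId); dir(job) on a returned object will confirm the exact names:

import sys
from jaws import *

login()
try:
    for job in get_jobs_with_no_start_conditions(scheduler='autosys45'):
        # Attribute names follow the result description; use dir(job) to list everything available.
        print job.name, job.parentPath, job.schedulerId
finally:
    logout()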
remove_custom_hard_start_time()
If present, the custom hard start time in place for the given job or dataset and schedId will be removed.
Parameters:
- scheduler - name of the scheduler containing the job or dataset. None if only one job/dataset with the given name exists in any scheduler defined to AAI
- job_name - name of the job or dataset from which to remove the hard start time
- schedId [CA7 only] - schedId on which job_name starts. May not be '000' or None. For non-CA7 schedulers, this value is ignored. Default: '001'
Dependencies:
- Must be logged in
Result:
- Existing custom hard start time for the given job or dataset is removed
Example remove_custom_hard_start_time() command line usage:
>>> remove_custom_hard_start_time(scheduler='CA7', job_name='US2913T', schedId='001')
1
>>> remove_custom_hard_start_time(scheduler='CA7', job_name='ADCDMST.TRIGGERS.ENG')
Example remove_custom_hard_start_time() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    remove_custom_hard_start_time(scheduler='CA7', job_name='US2913T', schedId='001')
    print 'end'
finally:
    logout()
remove_custom_trigger_dependency()
If present, the custom dependency in place with predecessor from_job and successor to_job will be removed.
Parameters:
- scheduler - name of the scheduler containing to_job, can be None if only one scheduler is used
- from_job_name - name of the job from which to remove the dependency
- to_job_name - name of the job on which to remove the dependency
- from_job_schedId [CA7 only] - schedId of from_job. Default: '000'
- to_job_schedId [CA7 only] - schedId of to_job. Default: '000'
- rebuild_jobstreams (optional) - Automic Automation Intelligence will rebuild all the jobstreams as part of this command unless this parameter is False. Set this to False if you intend to remove multiple custom dependencies. Set it to True on the last invocation of this command to rebuild all the jobstreams. Default: True
Dependencies:
- Must be logged in
Result:
- Existing custom dependency from from_job_name to to_job_name is removed
Example remove_custom_trigger_dependency() command line usage:
>>> remove_custom_trigger_dependency(scheduler='CA7', from_job_name='RDAYWE', to_job_name='SDONRDAY', from_job_schedId='001', to_job_schedId='001', rebuild_jobstreams=True)
1
Example remove_custom_trigger_dependency() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    remove_custom_trigger_dependency(scheduler='CA7', from_job_name='RDAYWE', to_job_name='SDONRDAY', from_job_schedId='001', to_job_schedId='001', rebuild_jobstreams=True)
    print 'end'
finally:
    logout()
remove_custom_dataset_creator()
If present, the custom dependency in place with creator from_job and successor to_dataset will be removed.
Parameters:
- scheduler - name of the scheduler containing to_dataset, can be None if only one scheduler is used
- from_job_name - name of the job from which to remove the dependency
- to_dataset_name - name of the dataset on which to remove the dependency
- rebuild_jobstreams (optional) - Automic Automation Intelligence will rebuild all the jobstreams as part of this command unless this parameter is False. Set this to False if you intend to remove multiple custom dependencies. Set it to True on the last invocation of this command to rebuild all the jobstreams. Default: True
Dependencies:
- Must be logged in
Result:
- Existing custom creator dependency from from_job_name to to_dataset_name is removed
Example remove_custom_dataset_creator() command line usage:
>>> remove_custom_dataset_creator(scheduler='CA7', from_job_name='RDAYWE', to_dataset_name='SDONRDAY', rebuild_jobstreams=True)
1
Example remove_custom_dataset_creator() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    remove_custom_dataset_creator(scheduler='CA7', from_job_name='RDAYWE', to_dataset_name='SDONRDAY', rebuild_jobstreams=True)
    print 'end'
finally:
    logout()
remove_custom_job_dependency()
If present, the custom dependency in place between the dependent job and the job will be removed.
Parameters:
- scheduler - name of the scheduler containing the job
- jobName - name of the job on which to remove the dependency
- dependentScheduler - name of the scheduler containing the dependent job
- dependentJobName - name of the job from which to remove the dependency
- parentJobName (optional) - name or container path of the parent job
- dependentParentJobName (optional) - name or container path of the parent job of the dependent
- rebuild_jobstreams (optional) - Automic Automation Intelligence will rebuild all the jobstreams as part of this command unless this parameter is False. Set this to False if you intend to remove multiple custom dependencies. Set it to True on the last invocation of this command to rebuild all the jobstreams (see the cleanup sketch after the script example below). Default: True
Dependencies:
- Must be logged in
Result:
- Existing custom dependency condition for the given job is removed
Example remove_custom_job_dependency() command line usage:
>>> remove_custom_job_dependency(scheduler='prod1', jobName='jobB', dependentScheduler='prod2', dependentJobName='jobA')
1
Example remove_custom_job_dependency() script usage:
import sys
from jaws import *

login()
print 'begin'
try:
    remove_custom_job_dependency(scheduler='prod1', jobName='jobB', dependentScheduler='prod2', dependentJobName='jobA')
    print 'end'
finally:
    logout()
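As with the add commands, several removals can defer the jobstream rebuild to the final call. A minimal sketch, assuming the scheduler and job names from the example above plus a hypothetical second dependent job jobC:

import sys
from jaws import *

# Dependent jobs whose custom conditions on jobB should be removed; 'jobC' is hypothetical.
dependents = [('prod2', 'jobA'),
              ('prod2', 'jobC')]

login()
try:
    for i, (dep_sched, dep_job) in enumerate(dependents):
        last = (i == len(dependents) - 1)
        # Rebuild all jobstreams only on the last removal.
        remove_custom_job_dependency(scheduler='prod1',
                                     jobName='jobB',
                                     dependentScheduler=dep_sched,
                                     dependentJobName=dep_job,
                                     rebuild_jobstreams=last)
finally:
    logout()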