Recommendations

Processes

On Windows systems, when a large number of Windows-based programs are running, “Out of Memory” error messages may appear when the user starts a new program or tries to use programs that are already running, even though plenty of physical and page-file memory is still available. In Dollar Universe, the affected jobs abort with the advanced status “Could not submit jobs”.

Follow the official procedure described at this URL:

http://support.microsoft.com/kb/126962/en

Recommended values are:

Number of concurrent jobs    Third Shared Section value
100                          768 (default value on Windows 2008)
200                          1536
500                          2304
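The Microsoft article above adjusts the third value of the SharedSection parameter (the desktop heap for non-interactive window stations) in the registry. As a sketch only: the path and the first two values below are the usual Windows defaults, and only the third value should be changed to the figure from the table:

```
Key:   HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\SubSystems
Value: Windows (REG_EXPAND_SZ), relevant fragment:
       ...SharedSection=1024,20480,768...
```

A reboot is required for the change to take effect.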

On UNIX/Linux systems, if the expected maximum number of concurrent jobs exceeds the system limit on user processes (check the quota with the command ulimit -u), try to dispatch jobs over several submission accounts. Otherwise, you must raise the maximum user processes value. For example, the default on Red Hat Linux / CentOS 6 is 1024.
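The check above can be sketched as a small shell script; MAX_JOBS is an assumed figure to be replaced by your own expected peak of concurrent jobs:

```shell
# Warn when the per-user process quota is below the expected job peak.
MAX_JOBS=500

LIMIT=$(ulimit -u)    # current "max user processes" quota for this account

if [ "$LIMIT" != "unlimited" ] && [ "$LIMIT" -lt "$MAX_JOBS" ]; then
  echo "WARNING: ulimit -u ($LIMIT) is below the expected peak of $MAX_JOBS jobs"
else
  echo "OK: process limit ($LIMIT) covers $MAX_JOBS concurrent jobs"
fi
```

On Linux, a permanent change is usually made per account in /etc/security/limits.conf.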

Disk Space

If the installation file system is larger than 4.2 TB, you must use a Dollar Universe 64-bit kit for the installation.

It is recommended to monitor the number of job logs in $UNI_DIR_LOG/<area> regardless of their total size: the inode table (command df -i) can fill up while free space still remains on the file system.
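A minimal monitoring sketch, using df -i as found on Linux: on a real system, point LOG_DIR at $UNI_DIR_LOG/<area>; it defaults to the current directory here so the commands can run anywhere:

```shell
# Check inode consumption where the job logs live.
LOG_DIR="${LOG_DIR:-.}"

# Inode usage of the underlying file system (watch the IUse% column).
df -i "$LOG_DIR"

# Number of job log files present, regardless of their size.
FILES=$(find "$LOG_DIR" -type f | wc -l)
echo "files in $LOG_DIR: $FILES"
```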

Small job logs are costly to store on Windows and Linux platforms: even the smallest file occupies at least one file system block, so the space occupied on disk is larger than the total size of all job logs. This effect is amplified if the log directory is located on a ReFS file system (a file system type introduced with Windows Server 2012), where the allocation unit is 64 KB.

Memory

The maximum number of concurrent jobs can be estimated as:

Job limit = (T - R) / (C x Q)

T: total amount of RAM, in MB

R: memory reserved for DUAS engines, system processes, and other applications (estimated minimum: 300 MB on Linux, 700 MB on Solaris, 1000 MB on Windows)

C: memory cost of one job, in MB (see table above)

Q: number of batch queues defined
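A worked example of the formula with illustrative figures only (8 GB of RAM on Linux, an assumed cost of 3 MB per job, 2 queues; take C from the actual memory-cost table):

```shell
T=8192   # total RAM in MB
R=300    # reserved memory, Linux minimum from the list above
C=3      # assumed memory cost of one job in MB (illustrative)
Q=2      # number of batch queues defined

JOB_LIMIT=$(( (T - R) / (C * Q) ))
echo "Job limit: $JOB_LIMIT"   # integer arithmetic: (8192 - 300) / 6 = 1315
```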

Known Limitations

As an example, during benchmark:

Regarding history, the u_fmhs60 file became full at 149,000 records (163,000 on Solaris/SPARC T5 and 160,000 on Solaris/SPARC T4).

Regarding job run data, the u_fmcx60 file became full at 1,017,000 records.

As of version 6.10.41, new documentation updates are posted on the Broadcom Techdocs Portal. Search for Dollar Universe.