Analytics - Sizing Requirements
Before starting the Analytics installation, check the version dependencies for the database, Java, and so on. You can access the page that lists dependencies and product compatibility information directly through this link: Automic Compatibility Matrix.
For instructions on how to navigate through the Automic Compatibility Matrix, see Compatibility Information.
Analytics Backend, Datastore and Streaming Platform
Setup Recommendations
- The UI plug-in is always added to one or more hosts where AWI is installed.
- The Analytics Datastore and Backend should both be installed on a dedicated host.
- The Backend must be accessible using HTTP or HTTPS from the AWI host. The Backend must be able to connect to the Datastore and to all required databases (AE, ARA).
Important! Set up a data retention period of 6 months.
Small Configuration
To keep up with the workload of a small configuration, ensure your system has the following minimum requirements:
CPU | Memory | Disk |
---|---|---|
4 Cores | 16 GB | 512 GB |
A typical small Analytics system is considered to have the following configuration:
- AE concurrent users: <10
- Agents: <20
- Object definitions: <1 000
- Total executions per day: <350 000
- Server processes:
  - WP: 2 x 4
  - DWP: 2 x 3
  - JWP: 2 x 1
  - CP: 2 x 2
  - JCP: 1 x n (n = number of servers)
Important! Database storage must always be fail-safe and redundant.
Medium Configuration
To keep up with the workload of a medium configuration, ensure your system has the following minimum requirements:
CPU | Memory | Disk |
---|---|---|
16 Cores | 64-128 GB | 1 TB |
A typical medium Analytics system is considered to have the following configuration:
- AE concurrent users: <10
- Agents: <20
- Object definitions: <50 000
- Total executions per day: <700 000
- Server processes:
  - WP: 2 x 8
  - DWP: 2 x 15
  - JWP: 2 x 5
  - CP: 2 x 2
  - JCP: 1 x n (n = number of servers)
Important! Database storage must always be fail-safe and redundant.
Big Configuration
To keep up with the workload of a big configuration, ensure your system has the following minimum requirements:
CPU | Memory | Disk |
---|---|---|
32 Cores | 256 GB | 2 TB |
A typical big Analytics system is considered to have the following configuration:
- AE concurrent users: <200
- Agents: <1000
- Object definitions: <100 000
- Total executions per day: <1 500 000
- Server processes:
  - WP: 2 x 16
  - DWP: 2 x 45
  - JWP: 2 x 10
  - CP: 2 x 2
  - JCP: 1 x n (n = number of servers)
Important! Database storage must always be fail-safe and redundant.
High-end Configuration
To keep up with the workload of a high-end configuration, ensure your system has the following minimum requirements:
CPU | Memory | Disk |
---|---|---|
32 Cores | >256 GB | 4 TB |
A typical high-end Analytics system is considered to have the following configuration:
- AE concurrent users: >200
- Agents: >1000
- Object definitions: >100 000
- Total executions per day: >1 500 000
- Server processes:
  - WP: 4 x 16
  - DWP: 4 x 45
  - JWP: 4 x 10
  - CP: 4 x 4
  - JCP: 1 x n (n = number of servers)
Important! Database storage must always be fail-safe and redundant.
Notes:
- The numbers of JWPs and DWPs ensure stable response times up to the maximum number of concurrent users. You can adjust the number of processes according to the expected number of users.
- If you are using a Core X AMD CPU, deactivate low-current functions and dynamic clock adjustments in the BIOS on the Automation Engine computer.
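The thresholds above lend themselves to a quick scripted check. The following is a minimal sketch, not part of the product, that picks a configuration tier from the documented executions-per-day limits; the function name and tier labels are illustrative assumptions, and concurrent users, agents, and object definitions should still be verified against the tables above.

```python
# Illustrative sizing helper based on the executions-per-day thresholds
# documented above. Not part of the product; users, agents, and object
# definitions should be checked against the tables as well.

TIERS = [
    # (upper executions/day limit, tier, CPU, memory, disk)
    (350_000,   "small",  "4 cores",  "16 GB",     "512 GB"),
    (700_000,   "medium", "16 cores", "64-128 GB", "1 TB"),
    (1_500_000, "big",    "32 cores", "256 GB",    "2 TB"),
]

def sizing_tier(executions_per_day: int) -> tuple[str, str, str, str]:
    """Return (tier, cpu, memory, disk) for a given daily execution count."""
    for limit, tier, cpu, memory, disk in TIERS:
        if executions_per_day < limit:
            return tier, cpu, memory, disk
    # Anything at or above 1 500 000 executions/day falls into high-end.
    return "high-end", "32 cores", ">256 GB", "4 TB"

print(sizing_tier(500_000))  # -> ('medium', '16 cores', '64-128 GB', '1 TB')
```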
Sizing and Storage Recommendations
Note: For medium-sized and larger installations, setting up a regular backup-and-truncate process for the Analytics Datastore is recommended. To provide stable chart performance, back up and truncate regularly, keeping only the last 6 months of data in the Datastore.
- Required disk space
  One GB for every hundred thousand executions in the Automation Engine (see the estimate sketch after this list).
- Datastore backup
  The Analytics Datastore was created to store large amounts of data. To save space, remove data older than 1 year from the Analytics Datastore. You can use the backup actions in the ANALYTICS ACTION PACK.
- General database rules
  The following information is valid for all database vendors. The log files must be placed on the fastest available disks (for example: SSDs).
  - Oracle: REDO LOG FILE destination
  - SQL Server: TRANSACTION LOG and TEMPDB files
  - LOG and DATA files must always be on separate disks/LUNs
For further information, see: Analytics Datastore Delete Action
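As a quick illustration of the one-GB-per-hundred-thousand-executions rule, the following sketch estimates the required Datastore disk space from the daily execution count and the recommended 6-month retention. The function name and the 180-day approximation of 6 months are assumptions made for the example.

```python
# Illustrative estimate: 1 GB per 100 000 Automation Engine executions,
# applied over the recommended 6-month (~180 days) retention period.

def datastore_disk_gb(executions_per_day: int, retention_days: int = 180) -> float:
    """Rough Datastore disk-space estimate in GB."""
    total_executions = executions_per_day * retention_days
    return total_executions / 100_000  # 1 GB per 100 000 executions

# Example: a small configuration at its upper bound of 350 000 executions/day.
print(f"{datastore_disk_gb(350_000):.0f} GB")  # -> 630 GB
```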
Analytics Rule Engine
Important! Message queue systems and database storage must always be fail-safe and redundant. This topic is outside the scope of this section.
Sizing and Storage Recommendations
- IA Agent Nodes
  - See the existing recommendations for the Analytics Backend.
  - On a single box: 16 cores for a small-sized configuration and 32 cores for a medium-sized configuration.
  - Add 8-16 GB RAM to the existing memory recommendations.
- Streaming Platform Nodes
  - 1 x 4 cores
  - 16 GB RAM
  - Disk: expected event size * expected events per second * seconds kept in the Streaming Platform (retention period) * replication factor / number of brokers. That is, 80 bytes * 30 000 events per second * 86 400 seconds (= 1 day) of retention * 1 (no replication) / 1 (one broker) ≈ 210 GB. A single 80-byte raw event results in around 3 KB of disk usage in the Streaming Platform (a scripted version of this calculation follows this list).
  - The disk buffer cache lives in memory, so sufficient RAM is required on each broker. The required RAM depends on how often the Streaming Platform flushes: the more flushes, the less throughput.
  - A single broker can host only a single replica per partition, hence # brokers > # replicas.
- Rule Engine Nodes
  - 1 x 8 cores
  - 32 GB RAM
  - Disk: 32 GB
  - Memory is critical: with insufficient memory, the Rule Engine starts spilling to disk, which decreases throughput.
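The disk formula for the Streaming Platform nodes is straightforward to script. The following is a minimal sketch of that calculation; the function and parameter names are assumptions, and it implements only the raw-event formula (the ~3 KB effective per-event disk usage noted above is not included).

```python
# Illustrative Streaming Platform disk sizing, following the formula above:
# event size * events/s * retention seconds * replication factor / brokers.

def streaming_disk_gb(event_bytes: int, events_per_second: int,
                      retention_seconds: int, replication_factor: int = 1,
                      brokers: int = 1) -> float:
    """Rough per-broker disk requirement in GB for the Streaming Platform."""
    total_bytes = (event_bytes * events_per_second * retention_seconds
                   * replication_factor) / brokers
    return total_bytes / 1_000_000_000  # bytes -> GB

# Example from the documentation: 80-byte events at 30 000 events/s,
# 1 day (86 400 s) of retention, no replication, one broker -> ~210 GB.
print(f"{streaming_disk_gb(80, 30_000, 86_400):.0f} GB")  # -> 207 GB
```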
Other Factors
- To increase throughput by a factor of 5-10 (depending on the batch size), run the Rule Engine, the Automation Engine processes, and the Streaming Platform on separate machines.
- Maximum throughput is reached at 1000 concurrent users on a single box; beyond that, backpressure occurs.
- Throughput scales with the batch size.
- A Streaming Platform logs.dir size of 22.9 GB for ~67 m events (~3 KB per event).
Note: Single-event ingestion using a single-box installation is limited to approximately 2 500 events per second. The ingestion rate can be improved by distributing services, selecting a higher batch size, or using more than one IA Agent.
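To put this limit in perspective, the following back-of-the-envelope sketch estimates how many IA Agents a given peak event rate would need. The function name and the assumption that ingestion scales roughly linearly with additional IA Agents are illustrative, not documented guarantees.

```python
# Back-of-the-envelope check against the ~2 500 events/s single-box limit.
# Assumes (illustratively) roughly linear scaling when ingestion is
# distributed across several IA Agents.

SINGLE_BOX_EVENTS_PER_SECOND = 2_500

def agents_needed(peak_events_per_second: int) -> int:
    """Minimum number of IA Agents under the linear-scaling assumption."""
    return -(-peak_events_per_second // SINGLE_BOX_EVENTS_PER_SECOND)  # ceiling

print(agents_needed(30_000))  # -> 12 agents for a 30 000 events/s peak
```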