Configuring Container-Based Systems

As a system administrator, you set up and configure your Automic Automation Kubernetes Edition system.

Once your system is deployed, you can configure it using either the values.yaml file or the relevant sections of the configmap. The values.yaml file provided with AAKE includes examples of the default Helm Chart and external database configuration.


Configuring Kubernetes-Specific Components after Deployment

This section comprises different settings that are specific to Kubernetes.

Service Accounts

A default service account is created and automatically mounted into all pods in the namespace, which could be a security risk. Therefore, it is recommended to disable the automount function in the default service account. To do so, set the automountServiceAccountToken parameter to false.

Example

apiVersion: v1
kind: ServiceAccount
metadata:
  name: automic
automountServiceAccountToken: false

Only the operator requires a service account for the installation of AAKE. Therefore, you have to define the name of the dedicated serviceAccount that is used by the operator in the operator: section of the values.yaml file. Make sure you create the service account with the relevant permissions in the namespace.

Example

operator:
  serviceAccount:
    create: true
    name: automic-operator-sa

Service Annotations

In Kubernetes, effective management and organization of resources are key to maintaining a well-structured system. You can enhance the visibility and control of your AAKE resources by setting annotations and labels for all resources (deployments, pods and services) directly in the values.yaml file. Once configured, these annotations and labels are automatically applied to the Kubernetes objects created, allowing you to add meaningful metadata. This not only improves resource management but also boosts interoperability, enabling more efficient filtering and organization of your system.

Example

awi:
  annotations:
    example.com/environment: "production" 
  labels:
    tier: "frontend" 
  pod:
    annotations:
      example.com/environment: "production" 
    labels:
      # Custom label
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb" 
    labels:
      app.kubernetes.io/instance: "awi-prod-service" 

Examples of Service Annotations

Managed Kubernetes services (for example, those provided by AWS, Azure, or Google Cloud Platform) use different ingress controllers and might require additional annotations in the values.yaml file for the operator, awi, jcp-rest, and jcp-ws services:

Set GKE annotation for AWI service:

awi:
  service:
    annotations:
      cloud.google.com/neg: '{"ingress": true}'

Set AWS annotation for JCP service:

jcp-ws:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https

You can also use the values.yaml file to set an Ingress class name for the automatically generated ingresses.

If you want to use certificates, you have to make sure you define the secret in the values.yaml file before deployment:

ingress:
  applicationHostname: <app host name>
  enabled: true
  secretName: <secret name>
  ingressClassName: <ingress class name>

Example of Ingress Class Names

ingress:
  enabled: true
  applicationHostname: your-domain.example.com
  secretName: certificate-tls-secret
  ingressClassName: traefik-nginx-internal-public-example-name


Configuring your Automic Automation System after Deployment

This section comprises different settings that affect your entire Automic Automation system or a specific component, such as the Automation Engine, the Automic Web Interface, a server process and so on.

Important! When configuring containers, take into account that on-disk files are ephemeral. They are lost when containers or pods stop and restart. To store and refer to resources in the cluster (for example, to files such as logs, traces, images and so on), you must first define the names of the corresponding Persistent Volumes (pv) and Persistent Volume Claims (pvc) in the respective sections of the values.yaml file. For more information about Volumes in Kubernetes please refer to the official Kubernetes product documentation.

Setting Environment Variables after Deployment

In an AAKE environment, you use environment variables to define or change settings for the Automation Engine and the Automic Web Interface. These environment variables not only allow you to substitute values in the AE INI file or the AWI configuration files, they also allow you to define system and other settings.

You can set environment variables using the values.yaml file before or after you install your system. Make sure you carry out a Helm upgrade to apply your changes. The ae and awi pods are restarted automatically.

Note: Make sure you use the correct format when defining environment variables.
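For orientation, a minimal sketch of such definitions in the values.yaml file; the enclosing environment: section is an assumption, so check the structure of the values.yaml file delivered with your release. The keys reuse examples shown later in this section:

environment:
  AUTOMIC_GLOBAL_STARTMODE: "COLD"
  AUTOMIC_TRACE_TRC03: "1"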

After the installation, you can also use the ae-properties, awi-properties, and automation-ai sections of the configmap to change the respective environment variables. The Install Operator watches the configmap and, once you save your changes, restarts the corresponding ae, awi, and/or automation.ai pods (wp, cp, jcp-rest, jcp-ws, jwp, awi).


Setting the System Name

You cannot set a system name after deploying AAKE. Make sure you set it while preparing for the deployment, using the AUTOMIC_GLOBAL_SYSTEM and AUTOMIC_SYSTEM_NAME environment variables. For more information, see Preparing for the Container-Based Installation.

Environment Variables for Automation Engine

Each environment variable for the AE uses the syntax pattern ${PREFIX_SECTION_KEY:DEFINITION}:

  • PREFIX: AUTOMIC

    Constant prefix for these environment variables.

  • SECTION:

    Defines the section of the INI file in which the key you want to define is located.

  • KEY:

    Specifies the key you want to substitute.

  • DEFINITION:

    The value that you want to define for the key.

The configuration of the Automation Engine server processes is defined in the AE INI file (ucsrv.ini), which comprises all relevant parameters organized in different sections. In an AAKE environment, you can use the values.yaml file to modify these settings before and after the deployment.

Notes:

  • Only the system administrator should change parameter values in configuration (INI) files, since these kinds of changes have a great impact on the Automation Engine system.

  • You can substitute individual keys of the INI file but not a complete section.

Example

${AUTOMIC_GLOBAL_SYSTEM:AUTOMIC}

This environment variable sets AUTOMIC as the value of the system= key in the [GLOBAL] section of the INI file of the Automation Engine.
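For reference, this substitution corresponds to the following fragment of ucsrv.ini:

[GLOBAL]
system=AUTOMIC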

After your system is successfully provisioned, you can also use the ae-properties section of the configmap to change these settings, for example, using the following kubectl command to edit it:

kubectl edit configmap ae-properties

Example of the ae-properties configmap

apiVersion: v1
data:
  AUTOMIC_TRACE_TRC03: '1'
  AUTOMIC_REST_CORSSUPPORTENABLED: '1'
  AUTOMIC_REST_CORSACCESSCONTROLALLOWORIGIN: '*'
  AUTOMIC_GLOBAL_STARTMODE: 'COLD'
kind: ConfigMap
metadata:
  ...

You can also use environment variables to set system and other settings that are not part of the AE INI file.

Before the deployment, these variables can be defined only in the values.yaml file. Once your system has been provisioned, you can also modify them using the Automic Web Interface.

For example, you can use them to populate the JCP_ENDPOINT and REST_ENDPOINT parameters of the UC_SYSTEM_SETTINGS variable. To do so, define the respective URLs in the JCP_WS_EXTERNAL_ENDPOINT and JCP_REST_EXTERNAL_ENDPOINT environment variables in the values.yaml file.

Example

JCP_WS_EXTERNAL_ENDPOINT: "https://jcp-ws.<your-domain.example.com>:443"
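The REST counterpart can be defined analogously; the host name and port below are placeholders mirroring the JCP_WS example above and must be adapted to your ingress setup:

JCP_REST_EXTERNAL_ENDPOINT: "https://jcp-rest.<your-domain.example.com>:443"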


Environment Variables for AWI

All AWI settings use the syntax pattern ${PREFIX_KEYNAME:DEFINITION}:

  • PREFIX: AUTOMIC

    Constant prefix for all AWI settings.

  • KEYNAME:

    Matches the property name as defined in the uc4config.xml, configuration.properties and colors.properties files. The KEYNAME is written in uppercase and dots are replaced by underscores. For example, the session.colors property becomes AUTOMIC_SESSION_COLORS.

  • DEFINITION:

    The value that you want to define for the key.

AWI has both mandatory and optional configuration files. In an AAKE environment, you can use the values.yaml file to modify these settings before and after the deployment.

Example

  • Environment variable key: AUTOMIC_SSO_SAML_ENABLED

    Value: Specifies whether single sign-on (SSO) can be used to log in to AWI

    Example: true

After your system is successfully provisioned, you can also use the awi-properties section of the configmap to change these settings, for example, using the following kubectl command to edit it:

kubectl edit configmap awi-properties

Example of the awi-properties configmap

apiVersion: v1
data:
  AUTOMIC_SYSTEM_NAME: 'AUTOMIC'
  AUTOMIC_CONNECTION_NAME: 'AE_TEST'
  AUTOMIC_MAINCOLOR: '#E61063'
  AUTOMIC_SESSION_COLORS: '#21DE4A,#DE6621,#133AD4'
  AUTOMIC_SSO_SAML_ENABLED: 'true'
kind: ConfigMap
metadata:
    ...

You can also use environment variables to define settings that are not part of any of the AWI configuration files.

Before the deployment, these variables can be defined only in the values.yaml file. Once your system has been provisioned, you can also modify them using the Automic Web Interface.

For example, you can use the AUTOMIC_LOGO_BASE64 environment variable to enter the text that corresponds to the AWI logo in BASE64 format.
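For illustration, a hedged sketch of such a definition (the value is a truncated placeholder, not a complete BASE64 string):

AUTOMIC_LOGO_BASE64: "iVBORw0KGgoAAAANSUhEUg..."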


Environment Variables for Automation.AI

After your system is successfully provisioned, you can also use the automation-ai section of the configmap to change the settings relevant to the Automation.AI component, for example, using the following kubectl command to edit it:

kubectl edit configmap automation-ai

You can set all relevant parameters using environment variables.

Example of the automation-ai configmap

apiVersion: v1
data:
  AUTOMATION_AI_MODEL_NAME: vertex.ai.gemini
  AUTOMATION_AI_HTTP_USERAGENT: Automation-AI/1.0.0
  AUTOMATION_AI_CHAT_CONVERSATION-TIMEOUT: "1440"
  SPRING_AI_VERTEX_AI_GEMINI_PROJECTID: <your GCP project id>
  SPRING_AI_VERTEX_AI_GEMINI_LOCATION: <your GCP location>
  AUTOMATION_AI_MCP_OPENAPI_EXTERNAL_PROVIDERS_AE-PROD_DEFINITIONLOCATIONURL: https://ae-prod:8080/ae/api/v1/openapi2/swagger.json
  AUTOMATION_AI_MCP_OPENAPI_EXTERNAL_PROVIDERS_AE-PROD_BASEURL: https://ae-prod:8080/ae/api/v1
  AUTOMATION_AI_MCP_OPENAPI_EXTERNAL_PROVIDERS_AE-PROD_INCLUDEMETHODTYPES: GET
  AUTOMATION_AI_MCP_OPENAPI_EXTERNAL_PROVIDERS_AE-PROD_INCLUDEPARAMETERSINCONTEXT: client_id,Authorization
  AUTOMATION_AI_MCP_OPENAPI_EXTERNAL_PROVIDERS_AE-PROD_EXCLUDEOPERATIONIDS:
kind: ConfigMap
metadata:
  name: "automation-ai"
  namespace: "<your namespace>"


Allowing Underscores in HTTP Headers for AAKE using NGINX Ingresses

If you use Gen AI in an AAKE instance with automatically generated NGINX ingresses, make sure that NGINX allows the use of underscores in HTTP header names, as NGINX drops headers containing underscores by default. To prevent this, set the enable-underscores-in-headers parameter to true in the ConfigMap of the Ingress Controller.

Example

apiVersion: v1
kind: ConfigMap
metadata:
  name: <name of nginx ingress controller config map>
  namespace: <namespace of nginx controller>
data:
  enable-underscores-in-headers: "true"
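Alternatively, the same change can be applied with a hedged one-liner; the configmap and namespace names are the placeholders from the example above:

kubectl patch configmap <name of nginx ingress controller config map> \
  -n <namespace of nginx controller> \
  --type merge -p '{"data":{"enable-underscores-in-headers":"true"}}'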


Setting the Log Target

All AE, Analytics and AWI processes write their activities (logs and traces) to the console by default. However, you can also configure a file to which logs, traces and dumps can be written. If you want to write logs and traces to a file, make sure you define a file for each component (jcp-rest, jcp-ws, jwp, cp, wp, analytics, awi, initialdata).

If you want to use a file instead of the console (default), it is mandatory to define the name of the persistent volume claim (pvc name) to which you want to write log/trace files.

Note: Make sure the access modes of the PV and PVC you use are set to ReadWriteMany if the pods that access them run on different nodes. If all pods run on a single node, you can also set them to ReadWriteOnce.

Example

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-logging
spec:
  storageClassName: manual
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /home/luf/helm/log
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-logging
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

Important! To ensure that AAKE functions correctly, verify that the mount path for the volume has the correct permissions. The AAKE processes run as user ID 1000 and/or with the groups 0 and 2000, so the mount path must be writable by at least one of these. If the processes lack the necessary permissions, they cannot write log information and fail to start. If your system uses user ID remapping (for example, OpenShift), make sure the folder is writable by the group with ID 0, as specified by that platform.

For more information and examples of persistent volumes and persistent volume claims, see the official Kubernetes documentation.

In the values.yaml file, change the definition of the log and trace parameters from console to file to write the log and trace to the corresponding pvc.

Example

...
awi:
  logging:
    pvc: pvc-log
    log: console
  logo:
    pvc: pvc-awi

cp:
  logging:
    pvc: pvc-log
    log: file
  tracing:
    pvc: pvc-trace
    trace: file

wp:
  logging:
    pvc: pvc-log
    log: file
  tracing:
    pvc: pvc-trace
    trace: file

jcp-rest:
  logging:
    pvc: pvc-log
    log: file
  tracing:
    pvc: pvc-trace
    trace: file

jcp-ws:
  logging:
    pvc: pvc-log
    log: file
  tracing:
    pvc: pvc-trace
    trace: file
...

Make sure you carry out a Helm upgrade to apply your changes. The ae and awi pods are restarted automatically.
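For example, using the release and chart names shown later in this section:

helm upgrade aake automic-automation.tgz --install -f values.yaml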

By default, log and trace files are overwritten every time the system is restarted. To keep older files for later use, you can specify the number of history files to be kept by changing the trace level in the respective section of the configmap.

The history files are named 01, 02, and so on, and are moved with every restart. The oldest file (with the highest number) is deleted and all other files are renamed (number is increased by 1).

More information:

Setting the Custom Logo

Changing the AE Certificate within the Cluster

Within the Kubernetes cluster, all communication is encrypted. By default, the jcp-ws, jcp-rest, and awi services use generated, self-signed certificates; therefore, you do not have to prepare a TLS/SSL certificate for them. These certificates are stored as standard Kubernetes TLS secrets, named after the respective service: jcp-ws-cert, jcp-rest-cert, awi-cert.

Optionally, you can replace one or more of the generated, self-signed certificates with custom ones and configure them accordingly.

Note: Make sure that you still use the names jcp-ws-cert, jcp-rest-cert, awi-cert even if you use a custom certificate.

If you do not use custom certificates for all services, the remaining ones are generated automatically.

To use custom certificates, you have to do the following before the installation:

  1. Depending on how you want to connect to the system, define either the IP address or DNS name for the Subject Alternative Name (SAN). You can choose multiple SANs.

  2. Create the Kubernetes TLS secret using the certificate and key files. For example, to use a custom certificate for AWI (awi-cert) you can execute the following command:

    kubectl create secret tls awi-cert --cert=path/to/awi.crt --key=path/to/awi.key

  3. Make sure that you run the AAKE installation in the same namespace in which you created the secret.

  4. Connect to the respective service (jcp-ws, jcp-rest, awi) using one of the Subject Alternative Name (SAN) addresses and the port of the service.

Note: The install operator creates the Java KeyStores automatically.
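For step 1, a minimal sketch of generating a self-signed certificate with SANs using openssl; all names and addresses are placeholders, and the -addext option requires OpenSSL 1.1.1 or later:

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout awi.key -out awi.crt \
  -subj "/CN=awi" \
  -addext "subjectAltName=DNS:awi.<your-domain.example.com>,IP:10.0.0.10"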


Changing the AE DB Password

When you change the password of the AE database, make sure that the ae-db secret that you created while preparing for the AAKE installation is updated with the new DB password. For more information, see Preparing the AE and Analytics Database for the Container Installation.

You also need to update the password in the ae secret containing the DB connection strings.

The AAKE pods must be restarted to use the new password.
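A hedged sketch of the workflow; the secret key name and the deployment list are assumptions, so adapt them to the secrets you created during preparation:

# Re-create the ae-db secret with the new DB password (key name is an assumption)
kubectl create secret generic ae-db \
  --from-literal=password='<new password>' \
  --dry-run=client -o yaml | kubectl apply -f -

# Restart the AE pods so that they pick up the new password
kubectl rollout restart deployment wp jwp jcp-rest jcp-ws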

Scaling Deployments

Changing the number of replicas for a new installation or before/after an upgrade allows you to scale your deployments for AWI, JCP, REST, JWP, CP, and WP.

Example

The following example shows the values.yaml file configuration for a default namespace:

spec:
  version: 26.0.0
  awiReplicas: 1
  cpReplicas: 1
  jcpRestReplicas: 1
  jcpWsReplicas: 3 
  jwpReplicas: 3
  jwpAutReplicas: 1
  jwpUtlReplicas: 1
  wpReplicas: 5
  installCustomization: saas

You can use autoscaling to update your deployments automatically, or you can scale them manually. This section describes both approaches.

For more information, see the documentation about server processes and sizing requirements.

Autoscaling AAKE

AAKE uses Horizontal Pod Autoscaling (HPA) to automatically update your resources according to workload demands. Two scaling modes are supported:

  • AWI, JCP, CP and REST deployments: Use standard CPU and memory-based autoscaling.

  • JWP and WP deployments: Use external metrics derived from message queue counters. This setup requires Prometheus and the Prometheus Adapter to be available and properly configured for custom metric collection. See Enabling JWP and WP Autoscaling via Prometheus.

The values.yaml file not only allows you to define whether to use autoscaling; it also allows you to set thresholds for CPU and memory utilization and the minimum and maximum number of replicas.

Ensure that CPU and memory requests and limits are defined for each autoscaled process so that autoscaling can apply the specified thresholds. For more information, see Sizing of Automic Automation Kubernetes Edition.

Example

The following example shows how to enable autoscaling for AWI:

awi:
  resources:
    requests:
      memory: "700Mi"
      cpu: "250m"
    limits:
      memory: "2Gi"
      cpu: "500m"
  horizontalAutoscaling:
    enabled: true
    cpuUtilization: 60
    memoryUtilization: 60
    minReplicas: 2
    maxReplicas: 8

If you enable autoscaling for WPs, then depending on the minimum and maximum number of replicas, the work processes might be started as DWPs and perform specialized tasks. To have more than five work processes at a time, you need to change the default value defined in the WP_MIN_NUMBER key of the UC_SYSTEM_SETTINGS variable. For more information, see WP_MIN_NUMBER and Types of Server Processes.

You can also do it the other way around and enable autoscaling for dialog work processes (DWPs). Depending on the minimum and maximum number of replicas, the dialog work processes might be started as WPs and perform specialized tasks. To have more than five dialog work processes at a time, you need to change the default value defined in the DWP_MIN_NUMBER key of the UC_SYSTEM_SETTINGS variable. For more information, see DWP_MIN_NUMBER and Types of Server Processes.

Enabling JWP and WP Autoscaling via Prometheus

To enable Prometheus-based autoscaling for JWPs and WPs, several components must be configured in sequence. The following procedure describes how to set up metric collection through Prometheus, expose these metrics to Kubernetes using the Prometheus Adapter, and enable autoscaling via the Horizontal Pod Autoscaler (HPA).

  1. Configure Prometheus scraping

    Prometheus must be able to collect metrics from the HTTPS REST endpoint exposed by the jcp-rest deployment.

    Define a scrape job in the Prometheus configuration that specifies this endpoint. Because the endpoint is TLS-protected, configure basic authentication with a valid username and password so that Prometheus can securely access the metrics.

  2. Ensure the Kubernetes secret is available

    Prometheus requires a Kubernetes secret containing the credentials for authenticating against the jcp-rest endpoint. This secret must be created and referenced in the Prometheus configuration so that metric scraping can be performed securely. The secret also ensures that Prometheus annotations are correctly applied.

    Example

    apiVersion: v1
    kind: Secret
    metadata:
      name: monitor-user
    type: Opaque
    data:
      client_readonly: <base64_encoded_value>
      user: <base64_encoded_value>
      password: <base64_encoded_value>
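    For reference, the same secret could be created directly with kubectl (values are placeholders; kubectl handles the base64 encoding):

    kubectl create secret generic monitor-user \
      --from-literal=client_readonly=<client id> \
      --from-literal=user=<user name> \
      --from-literal=password=<password>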

  3. Set up Prometheus access with DNS configuration

    By default, Prometheus uses Kubernetes service discovery to locate endpoints; however, this method is incompatible with the signed certificate used by the jcp-rest service because the certificate expects the DNS name jcp-rest rather than the pod IP.

    To avoid TLS validation errors, explicitly define the jcp-rest DNS name in the Prometheus scrape job configuration.

    Example

    server:
      persistentVolume:
        enabled: false
    
      extraVolumes:
        - name: jcp-rest-cert
          secret:
            secretName: jcp-rest-cert
    
      extraVolumeMounts:
        - name: jcp-rest-cert
          mountPath: /aa-certificates/jcp-rest.pem
          subPath: jcp-rest.pem
          readOnly: true
    
    extraScrapeConfigs: |
      - job_name: '<enter_job_name>'
        metrics_path: '/ae/api/v1/<enter_client_id>/system/metrics/prometheus'
        scheme: 'https'
        static_configs:
          - targets: ['jcp-rest:8088']
        basic_auth:
          username: '<enter_username>'
          password: '<enter_password>'
        tls_config:
          insecure_skip_verify: true

  4. Install and configure the Prometheus Adapter

    The Prometheus Adapter is required to expose Prometheus metrics to Kubernetes through the External Metrics API, enabling the Horizontal Pod Autoscaler (HPA) to scale WP and JWP workloads based on message-queue metrics.

    The adapter is configured with external metric rules that map Prometheus time-series (for example, ae_mq_size with keys such as mqwp_past or mqjwp_past) to Kubernetes external metric names. Each rule uses a Prometheus query that typically computes an average over a defined time window to smooth short-lived spikes and provide a stable scaling signal for the HPA.

    1. Create the Prometheus Adapter configuration file:

      /prometheus-adapter/values.yaml

      Define the Prometheus connection, external metric rules, and required RBAC settings. For example:

      rbac:
        # Disable RBAC creation since it conflicts with existing rancher-monitoring ClusterRole
        create: false
      
      # Prometheus connection configuration
      prometheus:
        url: http://prometheus-server.prometheus.svc
        port: 9090
        path: ""
      
      rules:
        # Disable default custom metrics rules to avoid creating custom metrics APIService
        default: false
        # No custom metrics rules - only external metrics
        custom: []
        external:
          - seriesQuery: 'ae_mq_size{key="mqwp_past",wpname!=""}'
            resources:
              namespaced: false
            name:
              as: "average_mqwp_past"
            metricsQuery: 'avg_over_time(ae_mq_size{key="mqwp_past",<<.LabelMatchers>>}[2m])'
          - seriesQuery: 'ae_mq_size{key="mqjwp_past",wpname!=""}'
            resources:
              namespaced: false
            name:
              as: "average_mqjwp_past"
            metricsQuery: 'avg_over_time(ae_mq_size{key="mqjwp_past",<<.LabelMatchers>>}[2m])'
          - seriesQuery: 'ae_mq_size{key="mqaut_past",wpname!=""}'
            resources:
              namespaced: false
            name:
              as: "average_mqaut_past"
            metricsQuery: 'avg_over_time(ae_mq_size{key="mqaut_past",<<.LabelMatchers>>}[2m])'
          - seriesQuery: 'ae_mq_size{key="mqutl_past",wpname!=""}'
            resources:
              namespaced: false
            name:
              as: "average_mqutl_past"
            metricsQuery: 'avg_over_time(ae_mq_size{key="mqutl_past",<<.LabelMatchers>>}[2m])'
      
      # Extra manifests to deploy (e.g., RoleBinding for extension-apiserver-authentication)
      # This creates the RoleBinding needed for the adapter to read extension-apiserver-authentication configmap
      # and ClusterRoleBinding for authorization permissions (subjectaccessreviews)
      extraManifests:
        - apiVersion: rbac.authorization.k8s.io/v1
          kind: RoleBinding
          metadata:
            name: prometheus-adapter-helm-auth-reader
            namespace: kube-system
            labels:
              app.kubernetes.io/managed-by: Helm
              app.kubernetes.io/name: prometheus-adapter
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: Role
            name: extension-apiserver-authentication-reader
          subjects:
          - kind: ServiceAccount
            name: prometheus-adapter
            namespace: prometheus
        - apiVersion: rbac.authorization.k8s.io/v1
          kind: ClusterRoleBinding
          metadata:
            name: prometheus-adapter-helm-resource-reader
            labels:
              app.kubernetes.io/managed-by: Helm
              app.kubernetes.io/name: prometheus-adapter
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
            name: prometheus-adapter-resource-reader
          subjects:
          - kind: ServiceAccount
            name: prometheus-adapter
            namespace: prometheus
        - apiVersion: rbac.authorization.k8s.io/v1
          kind: ClusterRole
          metadata:
            name: prometheus-adapter-helm-subjectaccessreviews
            labels:
              app.kubernetes.io/managed-by: Helm
              app.kubernetes.io/name: prometheus-adapter
          rules:
          - apiGroups: ["authorization.k8s.io"]
            resources: ["subjectaccessreviews"]
            verbs: ["create"]
        - apiVersion: rbac.authorization.k8s.io/v1
          kind: ClusterRoleBinding
          metadata:
            name: prometheus-adapter-helm-subjectaccessreviews
            labels:
              app.kubernetes.io/managed-by: Helm
              app.kubernetes.io/name: prometheus-adapter
          roleRef:
            apiGroup: rbac.authorization.k8s.io
            kind: ClusterRole
            name: prometheus-adapter-helm-subjectaccessreviews
          subjects:
          - kind: ServiceAccount
            name: prometheus-adapter
            namespace: prometheus    

      Review and adjust the rules, Prometheus connection parameters, and RBAC settings as needed to match your cluster and monitoring setup.

    2. Deploy (or update) the Prometheus Adapter using Helm:

      helm install prometheus-adapter prometheus-community/prometheus-adapter -f /prometheus-adapter/values.yaml

    After deployment, the Prometheus Adapter publishes the configured message-queue metrics through the External Metrics API. The HPA can then consume these external metrics and automatically scale WP and JWP deployments based on the target values defined in the HPA configuration.

  5. Enable Horizontal Pod Autoscaling (HPA)

    To enable autoscaling for JWP and WP, update the Helm values in values.yaml:

    horizontalAutoscaling:
      enabled: true
    

    Optionally, adjust the scaling behavior using stabilizationWindowSeconds:

    stabilizationWindowSeconds: 300 # Default value: 5 minutes

    Modifying this parameter allows you to control how rapidly the deployment responds to workload changes. Increasing the value results in smoother scale-down transitions, while decreasing it enables faster reaction times.

    Example

    jwp:
      logging:
        pvc: pvc-log
        log: console
      tracing:
        pvc: pvc-trace
        trace: file
      index:
        pvc: pvc-index
      cert-management:
        pvc: pvc-cert-management
      output:
        pvc: pvc-output
      resources:
        requests:
          memory: "700M"
          cpu: "250m"
        limits:
          memory: "2G"
          cpu: "500m"
      horizontalAutoscaling:
        jwp:
          enabled: true
          minReplicas: 1
          maxReplicas: 2
          averageMqCount: 5
          stabilizationWindowSeconds: 500
        jwp-aut:
          enabled: false
          minReplicas: 1
          maxReplicas: 2
          averageMqCount: 5
          stabilizationWindowSeconds: 500
        jwp-utl:
          enabled: false
          minReplicas: 3
          maxReplicas: 4
          averageMqCount: 5
          stabilizationWindowSeconds: 500

  6. Deploy and validate the autoscaling setup

    After all components are configured, redeploy the updated Helm chart or apply the modified manifests, then verify that the HPA objects are created and active by running the following command:

    kubectl get hpa

    Finally, confirm that external metrics are available through the Prometheus Adapter by executing the following command:

    kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1" | jq .

When all configurations are in place, the HPA automatically scales JWP and WP pods based on message queue utilization metrics collected by Prometheus and exposed through the Prometheus Adapter. Other deployments can continue to rely on standard CPU or memory-based autoscaling mechanisms without requiring any additional configuration.

Manually Scaling AAKE

If you do not want to use autoscaling, you can manually set the number of replicas in the values.yaml file in the Helm Chart. Make sure you follow the sizing guidelines relevant for your installation.

Example

The following example shows the values.yaml file configuration for a default namespace:
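A minimal sketch, reusing the replica keys from the earlier example (the replica counts are illustrative):

spec:
  awiReplicas: 2
  cpReplicas: 0
  jcpRestReplicas: 2
  jcpWsReplicas: 2
  jwpReplicas: 2
  wpReplicas: 4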

By default, the CP replicas are set to zero because a new AAKE environment does not require CPs. However, if you want to connect non-TLS/SSL Agents and/or CallAPIs, you do require a CP. For more information, see Connecting to the CP.

Use a Helm upgrade to apply the replica changes:

helm upgrade aake automic-automation.tgz --install -f values.yaml

You can also scale deployments using kubectl commands. However, these values do not persist after an upgrade or after changing the configuration map.

Example

kubectl scale deployment awi --replicas=2
kubectl scale deployment jcp-rest --replicas=2
kubectl scale deployment jwp --replicas=3

The --replicas parameter changes the number of running pods for the corresponding deployment, thus allowing you to scale your Automation Engine.

For more information, see Automic Automation Helm Plugin.
