Preparing for the Container-Based Installation

As a system administrator, you follow a number of steps before installing your container-based system.

Tip! This section contains information relevant for container-based systems. If you are searching for information relevant to manually installed, on-premises systems, see Preparing for the Manual Installation.


TLS/SSL Implementation Overview

The chart below depicts the TLS/SSL implementation and helps you understand both the components involved and the process itself.


Upgrade flow from the TLS perspective

The communication between the Automation Engine and the different components uses TLS/SSL through a secure WebSocket (WSS). These components establish a connection with the Java communication process (JCP) and/or the REST process (REST), which use trusted certificates to prove their identity to other communication partners, such as Agents.

Therefore, you have to decide which kind of certificates you are going to use to secure the communication in your system. This decision must be considered carefully, as it determines not only how secure the connections are but also the time and effort you have to invest in renewing and deploying the certificates.

Important! TLS/SSL Agents (in containers and on-premises) as well as the TLS Gateway, when used for the Automic Automation Kubernetes Edition, establish a connection to an ingress / HTTPS load balancer, which requires a certificate for authentication.

Make sure that the address of the load balancer is defined on both sides, the Automation Engine and the Agent / TLS Gateway, and that your HTTPS load balancer has the required certificates in place. For more information, see Connecting to AWI, the JCP and REST Processes Using an Ingress.


System Overview


Graphic depicting the container installation

Downloading the Automic Automation Kubernetes Edition Offering

The Automic Automation Kubernetes Edition (AAKE) offering comprises all core and container-specific components, the provided installation customizations for Automic Automation and Automic Intelligent Remediation, as well as a number of Action Packs that are loaded by default. You can download the offering from https://downloads.automic.com/, where you can also see the complete content of the offering.

To download the offering, follow these steps:

  1. Log into https://downloads.automic.com/ with your Broadcom credentials and select Automic Automation Kubernetes Edition.

    On the offering download page, download the AAKE offering and request access to the Automic public docker registry.

  2. Once you have access to your company's GCP Service account, download the automic-image-pull-secret.json file and create the secret with the command provided.

    This allows your container platform to pull images from our password protected repository.

The offering zip file includes:

  • All Automic Automation components

  • Automic Automation Helm Chart

    Includes the values.yaml file that you can use to customize the AAKE deployment/installation.

  • Automic Automation Helm Plugin

    Used to monitor the status of the AAKE installation and to run a Helm upgrade for an existing AAKE installation.

Installing the Automic Automation Helm Plugin

You have to install the Automic Automation Helm plugin to manage and monitor the update process. You can also use it during the first installation. The Automic Automation Helm plugin requires the Helm Kubernetes package manager.

Use the following commands to install the Helm plugin:

tar zxvf automic-automation-plugin-<version>.tgz

helm plugin install automic-automation-plugin-<version>

The following commands allow you to carry out and check the status of the update:

  • update: updates the instance in the current namespace
  • status: shows the status of the instance in the current namespace
  • logs: shows the logs of the operator

Example

To monitor the installation use the plugin with the following command:

$ helm automic-automation status

Once the installation is complete, the status is provisioned and the target and current versions should match the desired version. If this is not the case, check the Troubleshooting section of the README.md file found in the Helm chart.

Note: This Helm Plugin works with Linux and can be used from a Linux CLI. If you use a Windows CLI, you might not be able to run all upgrade commands from it and might have to run them directly in Linux.

For more information, see Automic Automation Helm Plugin.

Configuring Kubernetes-Specific Components before Deployment

There are a number of settings specific to Kubernetes that you must or can define before the installation.

Sizing, Requests and Limits

Make sure you have considered all sizing requirements and that you have defined all necessary requests and limits for the resources allocated to each pod. For more information, see Sizing of Automic Automation Kubernetes Edition.

Preparing the Ingress and TLS/SSL Certificates

Make sure you have all required certificates for the TLS/SSL communication in place.

In Automic Automation Kubernetes Edition, the TLS/SSL Agents, the Automic Web Interface and REST clients connect to the Kubernetes cluster through an ingress / HTTPS load balancer. In this case, the required certificates must be in place for the ingress/load balancer.

Ingresses contain the configuration and rules that external (HTTP or HTTPS) components must use to reach services within the Kubernetes cluster. Ingress controllers from cloud-based services such as AWS, Azure, or Google Cloud Platform use the information in the ingress and the services associated with it to create a load balancer (HTTP or HTTPS) and configure it accordingly.

Before the installation you must decide which kind of ingress you require. You can use automatically generated ingresses or other ingresses.


Using Automatically Generated NGINX Ingresses and Corresponding Certificates

You can adapt the values.yaml file to enable the ingress (enabled: true), to provide an application hostname and a TLS/SSL secret, which is used for all the generated ingresses. The application hostname is usually a public address or domain name used to reach the ingress/load balancer from outside the cluster.

You have to create the private key and certificate before deploying AAKE.

For additional information about the different certificate types and examples of how they could be created and used, see What Kind of Certificates Should I Use for Automic Automation v21.

Important! Please note that these are only examples, not a requirement for Automic Automation and they are not meant to replace the product documentation.
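For a quick test setup, a self-signed key and certificate could be generated with openssl. This is only a sketch for illustration; the hostname and file names are placeholders, and self-signed certificates are not suitable for production:

```shell
# Illustration only: create a self-signed key and certificate for testing.
# Replace the CN with your actual application hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout private_key.pem -out certificate.pem \
  -days 365 -subj "/CN=your-domain.example.com"

# Verify the subject of the generated certificate
openssl x509 -in certificate.pem -noout -subject
```

The resulting private_key.pem and certificate.pem files can then be used to create the Kubernetes TLS secret.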

Once you have decided which kind of certificate to use and have a private key and certificate, make sure you also create the corresponding TLS secret, for example:

kubectl create secret tls certificate-tls-secret --key private_key.pem --cert certificate.pem

You must configure the TLS secret in the values.yaml file before the deployment.

The TLS Agents use this application hostname to connect to the ingress/load balancer and perform the TLS hostname verification.

When you enable the ingress (enabled: true), ingresses for the AWI, the JCP, and the REST process are created by default and are configured for an NGINX controller.

Important!

  • If you want to use a different ingress controller, do not enable the ingress (enabled: false) in the values.yaml file and deploy your own ingresses, see Other Ingresses and Corresponding Certificates.

  • If you face any issues when uploading large files/packages to AWI, the cause might be a networking restriction by a third-party software component (such as NGINX) handling your data.

Example

ingress:
  enabled: true
  applicationHostname: <your-domain.example.com>
  secretName: certificate-tls-secret

These parameters create the ingress rules to expose the AWI, the JCP REST, and the JCP WS processes:

  • awi.<your-domain.example.com> for the Automic Web Interface

  • jcp-rest.<your-domain.example.com> for the JCP REST process

  • jcp-ws.<your-domain.example.com> for the Java WS communication process

When using an NGINX Ingress Controller with AAKE, the connection is dropped after 5 minutes by default, causing AWI to time out. To increase the WebSocket timeout for AWI, set the worker-shutdown-timeout parameter in the configmap of the Ingress Controller.

Example

The following setting keeps the AWI Websocket connection open for one hour:

data:
  worker-shutdown-timeout: 3600s
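Assuming the controller's configmap is named ingress-nginx-controller and lives in the ingress-nginx namespace (both names depend on your installation), the setting above could be applied like this:

```shell
kubectl -n ingress-nginx patch configmap ingress-nginx-controller \
  --type merge -p '{"data":{"worker-shutdown-timeout":"3600s"}}'
```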

Important! TLS/SSL Agents (in containers and on-premises) as well as the TLS Gateway, when used for the Automic Automation Kubernetes Edition, establish a connection to an ingress / HTTPS load balancer, which requires a certificate for authentication.

Make sure that the address of the load balancer is defined on both sides, the Automation Engine and the Agent / TLS Gateway, and that your HTTPS load balancer has the required certificates in place. For more information, see Connecting to AWI, the JCP and REST Processes Using an Ingress.

If you want to use the automatically generated NGINX ingresses in your system (enabled: true), you have to install an HTTPS load balancer and make sure that you complete the following tasks:

  • create a certificate for the load balancer and the corresponding Kubernetes TLS/SSL secret so that the Agent can connect to the load balancer

  • configure the ingress to use the IP address of the load balancer using the applicationHostname: parameter

  • configure the ingress to use that certificate using the secretName: parameter

  • define the JCP_ENDPOINT parameter of the UC_SYSTEM_SETTINGS variable, as this is the endpoint that the Agent uses to connect to the load balancer

  • define the address in the connection= parameter in the [TCP/IP] section of the INI file of the respective TLS/SSL Agent and/or TLS Gateway

When an Agent starts using the value defined in the connection= parameter, it receives all entries from the JCP_ENDPOINT variable and stores the information in the JCPLIST section of its configuration (INI) file. In this case, the list contains the addresses of all load balancers available. The Agent can then select an available endpoint from the list the next time it starts or reconnects to the Automation Engine.
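For illustration, the corresponding Agent INI fragment might look as follows; the address is a placeholder, and the [JCPLIST] section is maintained by the Agent itself:

```ini
[TCP/IP]
; initial endpoint used for the first connection
connection=your-domain.example.com:443

[JCPLIST]
; filled automatically with the entries of JCP_ENDPOINT
```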


Other Ingresses and Corresponding Certificates

You can leave the ingress disabled (enabled: false) for example, if you want to have only one ingress or want to use a different ingress controller. In this case you have to deploy your own ingresses.

Managed Kubernetes services, such as those provided by AWS, Azure, or Google Cloud Platform, use different ingress controllers and might require additional annotations in the values.yaml file for the existing services. For more information, see Service Annotations.

If you use cloud-based services such as AWS, Azure, or Google Cloud Platform, make sure you follow their guidelines.

Examples

  • For Google Cloud Platform, you can create a managed certificate and configure it accordingly in the ingress:

    kind: Ingress
    metadata:
      name: aake-ingress
      annotations:
        kubernetes.io/ingress.global-static-ip-name: aake-static-ip
        networking.gke.io/managed-certificates: aake-cert-default
    

  • For AWS, you can use the AWS Load Balancer Controller with the ingress.

    In this case, the certificate is created beforehand and uploaded to the AWS Certificate Manager. Once uploaded, the certificate ARN is available and you can use it to configure the ingress as required.

    kind: Ingress
    metadata:
      name: aake-ingress
      annotations:
        kubernetes.io/ingress.class: alb
        alb.ingress.kubernetes.io/target-type: ip
        alb.ingress.kubernetes.io/scheme: internet-facing
        alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}, {"HTTPS":8443}]'
        alb.ingress.kubernetes.io/backend-protocol: HTTPS
        alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:eu-central-1:<aws account>:certificate/<certificate arn>

When you use a cloud provider and you deploy an ingress, an HTTPS load balancer and the corresponding certificate might be created automatically. You must configure the ingress to use the certificate and the address of the load balancer. This address is the endpoint that the Agent, AWI and/or REST client use to connect to and it is configured in the JCP_ENDPOINT or REST_ENDPOINT parameter of the UC_SYSTEM_SETTINGS variable. The JCP_ENDPOINT is also the value defined in the connection= parameter on the [TCP/IP] section of the INI file of the respective TLS/SSL Agent and/or TLS Gateway.

When you use your own ingress and not automatically generated ones, you can define where to reach the JCP and REST processes before the installation if the following applies: 

  • you know the relevant JCP and REST addresses to be used before the installation

  • you have not enabled the ingress (enabled: false)

In this case, you can use the JCP_WS_EXTERNAL_ENDPOINT and JCP_REST_EXTERNAL_ENDPOINT environment variables in the values.yaml file to add the URLs.

Example

JCP_WS_EXTERNAL_ENDPOINT: "https://jcp-ws.<your-domain.example.com>:443"

If you set these endpoints through the values.yaml file before the installation, they are automatically configured as JCP_ENDPOINT and REST_ENDPOINT in the UC_SYSTEM_SETTINGS variable during deployment. Otherwise you have to set them manually after the installation.
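Both endpoints can be defined together in the values.yaml file. The hostnames match the ingress rules shown earlier; the port is an assumption that depends on your load balancer configuration:

```yaml
JCP_WS_EXTERNAL_ENDPOINT: "https://jcp-ws.<your-domain.example.com>:443"
JCP_REST_EXTERNAL_ENDPOINT: "https://jcp-rest.<your-domain.example.com>:443"
```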


Service Accounts

A default service account is created and automatically mounted into all pods in the namespace, which could be a security risk. Therefore, it is recommended to disable the automount function in the default service account. To do so, set the automountServiceAccountToken parameter to false.

Example

apiVersion: v1
kind: ServiceAccount
metadata:
  name: automic
automountServiceAccountToken: false
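If the default service account already exists in the namespace, the same setting could also be applied with a patch; this is a sketch to be adapted to your namespace:

```shell
kubectl patch serviceaccount default \
  -p '{"automountServiceAccountToken": false}'
```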

Only the operator requires a service account for the installation of AAKE. Therefore, you have to define the name of the dedicated serviceAccount that is used by the operator in the operator: section of the values.yaml file. Make sure you create the service account with the relevant permissions in the namespace.

Example

operator:
  serviceAccount:
    create: true
    name: automic-operator-sa

Service Annotations

Managed Kubernetes services, such as those provided by AWS, Azure, or Google Cloud Platform, use different ingress controllers and might require additional annotations in the values.yaml file for the operator, awi, jcp-rest, and jcp-ws services:

Examples

Set GKE annotation for AWI service:

awi:
  service:
    annotations:
      cloud.google.com/neg: '{"ingress": true}'

Set AWS annotation for JCP service:

jcp-ws:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https

Preparing the AE and Analytics Databases for the Container Installation

When preparing your Automation Engine and Analytics databases for a container installation you have the following options:

  • You can use a backup of an existing database.

  • You can use managed databases provided on platforms such as AWS, Azure, and Google Cloud Platform.

  • You can prepare a new, empty database if you want to install a new AAKE system including an Analytics deployment.

For detailed instructions on how to prepare your AE and Analytics databases for a containerized system, see Preparing the AE and Analytics Databases for the Container Installation.

You can also decide not to deploy Analytics, see Disabling the Analytics Installation.

Stating Alternative Database Tablespaces

When you use cloud-hosted databases, you might not have the permissions required to create or rename tablespaces. In this case, you can state the default tablespaces provided for the AAKE installation as alternative ones in the values.yaml file.

Example

databases:
  automationEngine:
    ...
    dataTablespaceName: <AE data tablespace name>
    indexTablespaceName: <AE index tablespace name>
    ...
  analytics:
    ...
    dataTablespaceName: <Analytics data tablespace name>
    indexTablespaceName: <Analytics index tablespace name>
    ...

Disabling the Analytics Installation

All required components to install and upgrade your system are provided as pre-built container images which are automatically installed by the Install Operator. For more information, see Container-Based Installation - Automic Automation Kubernetes Edition.

Analytics is one of these components. However, you can choose not to install it.

To do so, you have to disable the relevant parameter in the values.yaml file before the installation:

analytics:
  enabled: false

Important! If you already have an Automic Automation Kubernetes Edition instance with Analytics running, do not disable it.

Configuring your Automic Automation System before Deployment

There are a number of settings that you must or can define before the installation.

Important! When configuring containers, take into account that on-disk files are ephemeral. They are lost when containers or pods stop and restart. To store and refer to resources in the cluster (for example, to files such as logs, traces, images and so on), you must first define the names of the corresponding Persistent Volumes (pv) and Persistent Volume Claims (pvc) in the respective sections of the values.yaml file. For more information about Volumes in Kubernetes please refer to the official Kubernetes product documentation.

Setting Environment Variables

In an AAKE environment, you use environment variables to define or change settings for the Automation Engine and the Automic Web Interface. These environment variables not only allow you to substitute values in the AE INI file or the AWI configuration files, they also allow you to define system and other settings.

You can set environment variables using the values.yaml file before or after you install your system. Make sure you carry out a Helm upgrade to apply your changes. The ae and awi pods are restarted automatically.

Note: Make sure you use the correct format when defining environment variables.
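Environment variables are defined as simple key-value pairs. Where exactly they go in the values.yaml file (for example, an environment: section) depends on your chart version, so treat this as a sketch:

```yaml
environment:
  AUTOMIC_GLOBAL_SYSTEM: "AUTOMIC"
  AUTOMIC_SSO_SAML_ENABLED: "true"
```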

Once the system has been provisioned, you can also use the configmap to change environment variable settings related to the INI file and the AWI configuration files.


Environment Variables for Automation Engine

Each environment variable for the AE uses the syntax pattern ${PREFIX_SECTION_KEY:DEFINITION}:

  • PREFIX: AUTOMIC

    Constant prefix for these environment variables.

  • SECTION:

    Defines the section of the INI file in which the key you want to define is located.

  • KEY:

    Specifies the key you want to substitute.

  • DEFINITION:

    The value that you want to define for the key.

The configuration of the Automation Engine server processes is defined in the AE INI file (ucsrv.ini), which comprises all relevant parameters organized in different sections. In an AAKE environment, you can use the values.yaml file to modify these settings before and after the deployment.

Notes:

  • Only the system administrator should change parameter values in configuration (INI) files, since these kinds of changes have a great impact on the Automation Engine system.

  • You can substitute whole keys of the INI file but not a complete section.

Example

${AUTOMIC_GLOBAL_SYSTEM:AUTOMIC}

This environment variable sets AUTOMIC as the value of the system= key in the [GLOBAL] section of the INI file of the Automation Engine.
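In other words, the substitution above corresponds to this entry in ucsrv.ini:

```ini
[GLOBAL]
system=AUTOMIC
```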

You can also use environment variables to set system and other settings that are not part of the AE INI file.

Before the deployment, these variables can be defined only in the values.yaml file. Once your system has been provisioned, you can also modify them using the Automic Web Interface.

For example, you can use them to populate the JCP_ENDPOINT and REST_ENDPOINT parameters of the UC_SYSTEM_SETTINGS variable. To do so, define the respective URLs in the JCP_WS_EXTERNAL_ENDPOINT and JCP_REST_EXTERNAL_ENDPOINT environment variables in the values.yaml file.

Example

JCP_WS_EXTERNAL_ENDPOINT: "https://jcp-ws.<your-domain.example.com>:443"


Environment Variables for AWI

All AWI settings use the syntax pattern ${PREFIX_KEYNAME:DEFINITION}:

  • PREFIX: AUTOMIC

    Constant prefix for all AWI settings.

  • KEYNAME:

    Matches the property name as defined in the configuration.properties.xml and colors.properties.xml files. The KEYNAME is written in uppercase and dots are replaced by underscores. For example, the session.colors property becomes AUTOMIC_SESSION_COLORS.
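The mapping described above can be sketched in shell (illustration only):

```shell
# Derive the environment variable key from an AWI property name:
# replace dots with underscores, convert to uppercase, add the AUTOMIC_ prefix.
prop="session.colors"
key="AUTOMIC_$(printf '%s' "$prop" | tr '.' '_' | tr '[:lower:]' '[:upper:]')"
echo "$key"   # AUTOMIC_SESSION_COLORS
```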

  • DEFINITION:

    The value that you want to define for the key.

AWI has mandatory and optional configuration files that you must/can modify. In an AAKE environment, you can use the values.yaml file to modify these settings before and after the deployment.

Example

  • Environment variable key: AUTOMIC_SSO_SAML_ENABLED

    Value: Specifies whether single sign-on (SSO) can be used for AWI log in

    Example: true

You can also use environment variables to define settings that are not part of any of the AWI configuration files.

Before the deployment, these variables can be defined only in the values.yaml file. Once your system has been provisioned, you can also modify them using the Automic Web Interface.

For example, you can use the AUTOMIC_LOGO_BASE64 environment variable to enter the text that corresponds to the AWI logo in BASE64 format.
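For illustration, the value could be produced with the base64 tool; logo.png and its content are placeholders here:

```shell
# Placeholder content standing in for a real PNG file
printf 'PNG-bytes' > logo.png

# Encode the file and strip newlines so the value fits a single variable
AUTOMIC_LOGO_BASE64=$(base64 < logo.png | tr -d '\n')
echo "$AUTOMIC_LOGO_BASE64"
```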


Environment Variables Relevant for Migration

Important! When you migrate to a container-based system, make sure that you use environment variables to define all the INI parameters and AWI configuration settings that you had defined for your on-premises system and that you would like to keep in your AAKE environment.

Use the corresponding environment variables to define the names of your existing Automation Engine and Automic Web Interface instances in the values.yaml file:

  • Environment variable key: AUTOMIC_GLOBAL_SYSTEM

    Value: Name of the AE system

    Example: AUTOMIC

  • Environment variable key: AUTOMIC_SYSTEM_NAME

    Value: Name of the Automic Web Interface system

    Example: AUTOMIC

Note: For new installations you can choose how to define these variables. For migrations, make sure that you use the correct definition.

Setting Logs and Traces

All AE, Analytics and AWI processes write their activities (logs and traces) to the console by default. However, you can also configure a file to which logs, traces and dumps can be written. If you want to write logs and traces to a file, make sure you define a file for each component (jcp-rest, jcp-ws, jwp, cp, wp, analytics, awi, initialdata).

If you want to use a file instead of the console (default), it is mandatory to define the name of the persistent volume claim (pvc name) to which you want to write log/trace files.

Note: Make sure the access modes of the PV and PVC you use are set to ReadWriteMany if the pods that access them run on different nodes. If all pods run on a single node, you can also set them to ReadWriteOnce.

Example

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-logging
spec:
  storageClassName: manual
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: /home/luf/helm/log
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-logging
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

For more information and examples of persistent volumes and persistent volume claims, see the official Kubernetes documentation.

In the values.yaml file, change the definition of the log and trace parameters from console to file to write the log and trace to the corresponding pvc.

Example

logging:
    pvc:   pvc-logging
    log:   file
    trace: file

Make sure you carry out a Helm upgrade to apply your changes. The ae and awi pods are restarted automatically.

By default, log and trace files are overwritten every time the system is restarted. To keep older files for later use, you can specify the number of history files to be kept by changing the trace level in the respective section of the values.yaml file.

The history files are named 01, 02, and so on, and are moved with every restart. The oldest file (with the highest number) is deleted and all other files are renamed (number is increased by 1).


Setting up Single Sign-On

As a system administrator, you can set up single sign-on (SSO) for the Automation Engine system, which allows users to log in only once, without having to enter their credentials over and over again. The Automic Automation Kubernetes Edition supports the Security Assertion Markup Language 2.0 (SAML 2.0) protocol.

You must enable single sign-on to use the SAML protocol. If you want to do so before the installation, set the AUTOMIC_SSO_SAML_ENABLED environment variable to true in the values.yaml file.


Reaching the JCP and REST Endpoints

When you use your own ingress and not automatically generated ones, you can define where to reach the JCP and REST processes before the installation if the following applies: 

  • you know the relevant JCP and REST addresses to be used before the installation

  • you have not enabled the ingress (enabled: false)

In this case, you can use the JCP_WS_EXTERNAL_ENDPOINT and JCP_REST_EXTERNAL_ENDPOINT environment variables in the values.yaml file to add the URLs.

Example

JCP_WS_EXTERNAL_ENDPOINT: "https://jcp-ws.<your-domain.example.com>:443"

For more information on how to reach the endpoints after the installation and/or if you are using an ingress, see Connecting to AWI, the JCP and REST Processes Using an Ingress.

Reaching the CP Endpoint

A new AAKE environment does not require communication processes (CPs). However, if you want to connect non-TLS/SSL Agents or OS CallAPIs and cannot use the TLS Gateway, you do require a CP.

The CP replicas are set to zero by default. If you do require a CP, make sure you set the CP replicas to scale your deployment as required. For more information, see Scaling Deployments.

If you know the relevant address before the installation, you can also use the CP_EXTERNAL_ENDPOINT environment variable in the values.yaml file to add the URL.
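For example (the hostname is a placeholder and 2217 is the conventional Automic CP port; verify the port against your configuration):

```yaml
CP_EXTERNAL_ENDPOINT: "cp.<your-domain.example.com>:2217"
```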

For more information, see Connecting to the CP.

Optional Configuration Settings

There are a number of additional, optional settings that you can configure for your Automic Automation Kubernetes Edition system:

  • You can set up LDAP for your AAKE system

  • You can configure the connection between your Analytics on-premises environment and AAKE
