Connecting to AWI, the JCP and REST Processes Using an Ingress

Using an ingress allows you to expose the AWI, REST, and JCP endpoints as well as the Install Operator service. By default, ingresses are not enabled (enabled: false) in the values.yaml file.

This page includes the following:

  • Reaching the JCP and REST Endpoints

  • Reaching the Automic Web Interface

  • Using Automatically Generated NGINX Ingresses

  • Using Other Ingresses

Reaching the JCP and REST Endpoints

Since the TLS/SSL Agents, the TLS Gateway, and the CallAPIs for Java and SAP in AAKE establish a connection to an ingress / HTTPS load balancer, you have to make sure that it is reachable and that the required certificates are in place. The value defined in the JCP_ENDPOINT parameter of the UC_SYSTEM_SETTINGS variable is the address that the Agent uses to connect to the ingress / load balancer. The same endpoint is also the value defined in the connection= parameter in the [TCP/IP] section of the INI file of the respective TLS/SSL Agent and/or TLS Gateway.


Reaching the Automic Web Interface

The Automic Web Interface uses TLS/SSL and WebSockets to secure the communication between the AWI browser and the web server within the cluster. Therefore, you have to make sure that the ingress / HTTPS load balancer to which the AWI browser connects is reachable and that the required certificates are in place.

Important! Be aware that the AWI ingress / HTTPS load balancer must support sticky sessions. For more information, see Setting Up the Load Balancer and Configure Proxies.

The specific configuration of your load balancer depends on the cloud provider services you use. For detailed information, please refer to the respective official documentation.
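For example, if the AWI ingress is served by the community NGINX Ingress Controller, cookie-based session affinity (sticky sessions) can be enabled through annotations on the AWI ingress resource. The following snippet is only a sketch: the annotation names apply to the community NGINX Ingress Controller, and the cookie name and lifetime are placeholder values.

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "awi-route"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "3600"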

Using Automatically Generated NGINX Ingresses

When you enable the ingress (enabled: true), ingresses for AWI and for the JCP and REST processes are created by default and are configured for an NGINX Ingress Controller.

Example

ingress:
  enabled: true
  applicationHostname: <your-domain.example.com>
  secretName: certificate-tls-secret

These parameters create the ingress rules that expose the Automic Web Interface and the JCP REST and JCP WS processes:

  • awi.<your-domain.example.com> for the Automic Web Interface

  • jcp-rest.<your-domain.example.com> for the JCP REST process

  • jcp-ws.<your-domain.example.com> for the Java WS communication process

When using an NGINX Ingress Controller with AAKE, the connection is dropped after 5 minutes by default, causing AWI to time out. To increase the WebSocket timeout for AWI, set the worker-shutdown-timeout parameter in the ConfigMap of the Ingress Controller.

Example

The following setting keeps the AWI WebSocket connection open for one hour:

data:
  worker-shutdown-timeout: 3600s
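If the controller was installed with the community ingress-nginx Helm chart, the complete ConfigMap might look as follows. The ConfigMap name and namespace below are assumptions based on the chart defaults; adapt them to your installation.

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  worker-shutdown-timeout: "3600s"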

Important!

  • If you want to use a different ingress controller, do not enable the ingress (enabled: false) in the values.yaml file and deploy your ingresses manually instead, see Using Other Ingresses.

  • If you face issues when uploading large files/packages to AWI, the cause might be a networking restriction imposed by a third-party software component (such as NGINX) that handles your data; see the example after this list for a possible way to raise the relevant limit.
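For example, with the community NGINX Ingress Controller the maximum accepted request body size can be raised per ingress with the proxy-body-size annotation (or globally in the controller ConfigMap). The following snippet is only a sketch; the annotation applies to the community NGINX Ingress Controller and the size value is a placeholder.

metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "512m"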

TLS/SSL Agents (in containers or on-premises) and the TLS Gateway, when used for the Automic Automation Kubernetes Edition, establish a connection to an ingress / HTTPS load balancer and not the JCP directly. The ingress / HTTPS load balancer must be reachable and requires a certificate for authentication. The address of the load balancer must be defined on both sides: the Automation Engine and the Agent / TLS Gateway.

If you want to use the automatically generated NGINX ingresses in your system (enabled: true), you have to install an HTTPS load balancer and make sure that you cover the following points:

  • create a certificate for the load balancer and the corresponding Kubernetes TLS/SSL secret so that the Agent can connect to the load balancer (see the example after this list)

  • configure the ingress to use the address of the load balancer using the applicationHostname: parameter

  • configure the ingress to use that certificate using the secretName: parameter

  • define the JCP_ENDPOINT parameter of the UC_SYSTEM_SETTINGS variable, as this is the endpoint that the Agent uses to connect to the load balancer

  • define the address of the load balancer in the connection= parameter in the [TCP/IP] section of the INI file of the respective TLS/SSL Agent and/or TLS Gateway
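For example, the Kubernetes TLS/SSL secret referenced by the secretName: parameter could look like the following sketch. The secret name must match the value of the secretName: parameter in the values.yaml file, and the certificate and private key must be provided base64-encoded.

apiVersion: v1
kind: Secret
metadata:
  name: certificate-tls-secret
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>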

When an Agent starts, it connects using the value defined in the connection= parameter, receives all entries of the JCP_ENDPOINT variable, and stores that information in the JCPLIST section of its configuration (INI) file. In this case, the list contains the addresses of all available load balancers. The Agent can then select an available endpoint from the list the next time it starts or reconnects to the Automation Engine.


Using Other Ingresses

If you use ingresses that are not automatically generated by AAKE, you must not enable the ingress (enabled: false) in the values.yaml file.

Managed Kubernetes services (for example, those provided by AWS, Azure, or Google Cloud Platform) use different ingress controllers and might require additional annotations in the values.yaml file for the existing services:

operator: 
  service:
    annotations: {}

awi: 
  service:
    annotations: {}

jcp-rest: 
  service:
    annotations: {}

jcp-ws: 
  service:
    annotations: {}

Examples

Set GKE annotation for AWI service:

awi:
  service:
    annotations:
      cloud.google.com/neg: '{"ingress": true}'

Set AWS annotation for JCP service:

jcp-ws:
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https

When you use a cloud provider and you deploy an ingress, an HTTPS load balancer and the corresponding certificate might be created automatically. You must configure the ingress to use the certificate and the address of the load balancer. This address is the endpoint that the Agent, AWI, and/or REST client use to connect to, and it is configured in the JCP_ENDPOINT or REST_ENDPOINT parameter of the UC_SYSTEM_SETTINGS variable. The JCP_ENDPOINT is also the value defined in the connection= parameter in the [TCP/IP] section of the INI file of the respective TLS/SSL Agent and/or TLS Gateway.
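For example, a manually deployed ingress for the JCP WS process might look like the following sketch. The backend service name and port are placeholders; check the services that the AAKE deployment creates in your namespace for the actual values, and adapt the annotations and TLS handling to your ingress controller.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: jcp-ws
spec:
  tls:
    - hosts:
        - jcp-ws.<your-domain.example.com>
      secretName: certificate-tls-secret
  rules:
    - host: jcp-ws.<your-domain.example.com>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: jcp-ws   # placeholder: actual service name in your namespace
                port:
                  number: 8443 # placeholder: actual JCP WS service port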

When you use your own ingresses instead of the automatically generated ones, you can define where to reach the JCP and REST processes before the installation if the following conditions apply:

  • you know the relevant JCP and REST addresses to be used before the installation

  • you have not enabled an ingress (enabled: false)

In this case, you can use the JCP_WS_EXTERNAL_ENDPOINT and JCP_REST_EXTERNAL_ENDPOINT environment variables in the values.yaml file to add the URLs.

Example

JCP_WS_EXTERNAL_ENDPOINT: "https://jcp-ws.<your-domain.example.com>:443"
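If you also know the address under which the REST processes are reachable, the corresponding REST entry might look similar. The hostname and port below are placeholders; use the address under which your REST ingress / load balancer is actually reachable.

JCP_REST_EXTERNAL_ENDPOINT: "https://jcp-rest.<your-domain.example.com>:443"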

If you set these endpoints through the values.yaml file before the installation, they are automatically configured as JCP_ENDPOINT and REST_ENDPOINT in the UC_SYSTEM_SETTINGS variable during deployment. Otherwise, you have to set them manually after the installation.
