AAKE: Preparing for the Automation.AI Installation
The Automation.AI component lets you communicate with multiple Large Language Models (LLMs), incorporating state-of-the-art AI technology to help you design automated processes, write scripts, analyze and explain automation output, troubleshoot issues, and suggest potential solutions to those issues.
Before installing Automation.AI, make sure that, once installed, the component will be able to connect both to your system and to the LLM of your choice.
This page includes the following:
Enabling/Disabling the Automation.AI Installation
Automation.AI is one of the components that is provided as a pre-built container image and is installed by the Install Operator automatically. However, you can choose not to install the Automation.AI component with the AAKE cluster.
To do so, you have to disable the relevant parameter in the values.yaml file before the installation by setting it to false:
automation-ai:
  enabled: false
If you deploy AAKE initially without the Automation.AI component and you want to enable it later on, you can do so through a helm upgrade by simply changing this value to true:
automation-ai:
  enabled: true
If no other changes were made to the values.yaml file, only the automation-ai pod is restarted while all other pods keep running. If the values.yaml file includes additional changes, the other affected pods are also restarted automatically.
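As a sketch, the upgrade could look like the following. The release name and chart reference are placeholders, not values taken from this page; substitute the ones used for your installation:

```shell
# Placeholder release name, chart reference, and namespace: substitute your own.
helm upgrade <your release> <your chart> \
  --namespace <your namespace> \
  --values values.yaml
```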
For more information, see:
Configuring the Automation.AI Installation Before Deployment
Before deploying AAKE, you must define the relevant configuration in the automation-ai configmap and create the automation-ai secret in your Kubernetes namespace.
You can configure one or more of the supported LLMs using the following parameters:
Gemini

- SPRING_AI_VERTEX_AI_GEMINI_LOCATION=
- SPRING_AI_VERTEX_AI_GEMINI_API-ENDPOINT=

Ollama

- SPRING_AI_OLLAMA_BASE-URL=
- SPRING_AI_OLLAMA_CHAT_OPTIONS_MODEL=

OpenAI

- SPRING_AI_OPENAI_API-KEY=
- SPRING_AI_OPENAI_CHAT_OPTIONS_MODEL=
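For instance, the Ollama parameters above could appear in the data section of the automation-ai configmap as follows. The endpoint and model name are illustrative assumptions, not values documented on this page:

```yaml
# Illustrative values only: adjust the endpoint and model to your setup.
SPRING_AI_OLLAMA_BASE-URL: http://<your ollama host>:11434
SPRING_AI_OLLAMA_CHAT_OPTIONS_MODEL: llama3
```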
Once you have configured the relevant LLM(s), you need to specify which one the Automation.AI component should use. Optionally, you can also define a timeout for the chat history and how often the system should check whether any conversations have timed out, using the following parameters:
- AUTOMATION_AI_MODEL

  Define the default LLM that you want to incorporate into your system. The available options are vertex.ai.gemini, ollama, and openai.

- AUTOMATION_AI_CHAT_CONVERSATION-TIMEOUT

  Define the conversation history timeout in minutes.

- AUTOMATION_AI_CHAT_CHECK-PERIOD

  Define, in minutes, how often the system should check for timed-out conversations.
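Taken together, these settings could be sketched in the configmap data section like this. The values shown are illustrative, not documented defaults:

```yaml
# Illustrative values: check every 5 minutes for conversations idle longer than 60 minutes.
AUTOMATION_AI_MODEL: openai
AUTOMATION_AI_CHAT_CONVERSATION-TIMEOUT: "60"
AUTOMATION_AI_CHAT_CHECK-PERIOD: "5"
```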
Examples
These examples show the configmap and secret configuration for Automation.AI using the Vertex AI Gemini LLM:
automation-ai configmap
apiVersion: v1
data:
  AUTOMATION_AI_MODEL: vertex.ai.gemini
  AUTOMATION_AI_CHAT_CONVERSATION-TIMEOUT: "1440"
  SPRING_AI_VERTEX_AI_GEMINI_PROJECTID: <your GCP project id>
  SPRING_AI_VERTEX_AI_GEMINI_LOCATION: <your GCP location>
  SPRING_AI_VERTEX_AI_GEMINI_TRANSPORT: rest
kind: ConfigMap
metadata:
  name: "automation-ai"
  namespace: "<your namespace>"
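If you save the manifest above to a file, you can apply it with kubectl. The file name here is an assumption; use the path where you saved the manifest:

```shell
# Hypothetical file name; use the path where you saved the manifest.
kubectl --namespace <your namespace> apply -f automation-ai-configmap.yaml
```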
automation-ai secret
Use the following kubectl command to create the automation-ai secret in your Kubernetes namespace:
kubectl --namespace <your namespace> create secret generic automation-ai --from-file=SPRING_AI_VERTEX_AI_GEMINI_CREDENTIALS=<path to your credentials file>.json
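To verify that the secret was created and contains the expected key, you can, for example, inspect it with kubectl:

```shell
# Lists the keys stored in the automation-ai secret without printing their values.
kubectl --namespace <your namespace> describe secret automation-ai
```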
The contents of the secret and the configmap are mapped as environment variables to the automation-ai pod. You can also edit them after deployment. For more information, see Environment Variables for Automation.AI.
See also: