On-Premises: Preparing for the Automation.AI Installation
The Automation.AI component allows you to communicate with multiple Large Language Models (LLMs), letting you incorporate state-of-the-art AI technology to help you design automated processes, write scripts, analyze and explain the automation output, troubleshoot issues, and suggest potential solutions to those issues.
Before installing Automation.AI, make sure that, once installed, the component will be able to connect both to your system and to the LLM of your choice.
For on-premises systems, you must configure the application.properties file that is delivered with the offering to match your system needs.
Defining the application.properties File
The application.properties file allows you to define the connection to one or more of the supported LLMs:
# gemini llm settings
spring.ai.vertex.ai.gemini.location=
spring.ai.vertex.ai.gemini.api-endpoint=

# ollama llm settings
spring.ai.ollama.base-url=
spring.ai.ollama.chat.options.model=

# openai llm settings
spring.ai.openai.api-key=
spring.ai.openai.chat.options.model=
For more information about the settings for Google Vertex AI, Ollama, and OpenAI, refer to the Chat Model API documentation on the Spring AI website, see Chat Model API.
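As an illustration, a minimal OpenAI section could look like the following. The key and model values below are placeholders, not defaults shipped with the product; substitute your own credentials and preferred model:

```
# openai llm settings (placeholder values)
spring.ai.openai.api-key=<your-openai-api-key>
spring.ai.openai.chat.options.model=<your-preferred-model>
```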
Once you have configured the relevant LLM(s), specify which of the services the Automation.AI component should use, along with the port that the system should use:
-
server.port
Define the relevant port. Port 8080 is used by default.
-
automation.ai.model.name
Define the default LLM that you want to incorporate into your system. The options available are vertex.ai.gemini, ollama, and openai.
Note: When using the ONE Installer, you only need to define the LLM, as the port is defined automatically.
Optionally, you can also define a timeout for the chat history and how often the system should check whether any conversations have timed out, using the following parameters:
-
automation.ai.chat.conversation-timeout
Define the conversation history timeout in minutes.
-
automation.ai.chat.check-period
Define, in minutes, how often the system should check for timed-out conversations.
Example
The following example shows a connection using the default port and the Vertex AI Gemini model, where the chat history times out after 24 hours (1440 minutes) and the system checks every 60 minutes whether any conversations have timed out.
# server settings
server.port=8080

# default llm model
automation.ai.model.name=vertex.ai.gemini

# chat conversation timeout in minutes
automation.ai.chat.conversation-timeout=1440
automation.ai.chat.check-period=60
You can also use environment variables to define these parameters using the following keys:
-
SERVER_PORT
-
AUTOMATION_AI_MODEL_NAME
-
AUTOMATION_AI_CHAT_CONVERSATION_TIMEOUT
-
AUTOMATION_AI_CHAT_CHECK_PERIOD
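As a sketch, in a POSIX shell you could export the port and the default model before starting the service. The values below are examples only and should be replaced to match your environment:

```shell
# Illustrative sketch: supply the Automation.AI settings as environment
# variables instead of application.properties entries.
export SERVER_PORT=8080
export AUTOMATION_AI_MODEL_NAME=vertex.ai.gemini

# The service reads these variables on startup; confirm they are set:
echo "port=$SERVER_PORT model=$AUTOMATION_AI_MODEL_NAME"
```

Environment variables take effect only for the process (and its children) in which they are exported, so set them in the same session or service definition that launches the Automation.AI component.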