Setting Up the Generative AI Capabilities
Automic Automation’s Generative AI features are enabled through the Automation.AI component, which connects Automic Automation systems to Large Language Models (LLMs). Depending on your environment, the Automation.AI component is installed manually (on-premises), deployed automatically (AAKE), or preconfigured (Automic SaaS). Proper setup ensures secure, isolated AI communication, flexible provider choice, and seamless integration across your Automic Automation environment.
Note: The AI Augmented Workflow Creation feature is only supported by the Google Gemini LLM. If you plan to use this capability, ensure you select Gemini as your provider during configuration.
For more information, see AI-Augmented Workflow Creation with the Automation Assistant.
This page includes the following:
- Enabling and Setting up Gen AI across Automic Automation Environments
- Overview of Configuration Properties
- Enabling the Gen AI Capabilities to Use the AE REST API
Enabling and Setting up Gen AI across Automic Automation Environments
How you enable Gen AI depends on your Automic Automation environment setup.
- Gen AI in On-Premises Environments
You are responsible for installing and configuring the Automation.AI component and for ensuring network connectivity between the Automic Automation components and the AI backend. For testing environments, you can use the ONE Installer, which installs the Automation.AI component by default.
For more information, see:
- Gen AI in AAKE Environments
AAKE environments deploy the Automation.AI component by default during the installation or upgrade of an AAKE system. However, you can choose not to deploy the Automation.AI component if your company decides not to take advantage of its benefits.
For more information, see:
- Gen AI in Automic SaaS Environments
All Automic SaaS environments provide Gen AI by default. Each Automic SaaS instance has its own Automation.AI component, ensuring that data exchanges, conversations, and so on are always kept within that instance.
Overview of Configuration Properties
Setting up Gen AI involves configuring several application properties organized into the following four categories.
- Required: These properties are essential for application startup and define the core configuration of the LLM, database, and AI model provider. They ensure that system components can communicate and the application can start, but no additional functionality is enabled at this stage. These include:
  - the AI model selection
  - the LLM provider configuration
  - the database configuration
- Recommended: These properties enable full application functionality by allowing communication with external systems, ensuring secure operations, and supporting effective monitoring. The MCP OpenAPI Provider Configuration allows integration with external OpenAPI-based services, such as the AE REST API, by defining one or more providers for external communication. The TLS/SSL Configuration secures the embedded server for production through appropriate certificate and protocol settings. The Logging Configuration manages log levels and output to ensure effective diagnostics and traceability. These include:
  - the TLS/SSL configuration
  - the MCP OpenAPI provider configuration
  - the MCP server authentication configuration
  - the logging configuration
- Optional: These properties have sensible defaults but can be customized to suit specific environments or preferences. They cover general server settings, MCP server and client behavior, chat memory management, and HTTP client configuration. Adjusting these values allows for fine-tuning performance, connection handling, and conversation retention while keeping default functionality intact. These include:
  - the server settings
  - the MCP server settings
  - the MCP client settings
  - the HTTP client settings
  - the chat memory settings
- Restricted Properties (Do Not Change): These properties are preconfigured to keep the system stable, manage the database correctly, and ensure secure operation. They define critical internal functions and must not be modified; changing them can break core functionality or compromise system security. Only adjust them if absolutely necessary and with a full understanding of the consequences. These are split into three sub-categories:
Internal configuration:
- spring.jpa.hibernate.ddl-auto/SPRING_JPA_HIBERNATE_DDLAUTO, set to none by default. Changing it can corrupt the database.
- spring.liquibase.enabled/SPRING_LIQUIBASE_ENABLED, set to true by default. Disabling it prevents schema updates.
- spring.liquibase.change-log/SPRING_LIQUIBASE_CHANGELOG, set to classpath:db/changelog/db.changelog-master.yaml by default. Changing the path breaks migrations.
- springdoc.api-docs.version/SPRINGDOC_APIDOCS_VERSION, set to the relevant OpenAPI version by default. Changing it might break API clients.
Spring AI internal settings:
- spring.ai.chat.memory.repository.jdbc.initialize-schema/SPRING_AI_CHAT_MEMORY_REPOSITORY_JDBC_INITIALIZESCHEMA, set to never by default. Changing the default will cause conflicts.
- spring.ai.vertex.ai.gemini.transport/SPRING_AI_VERTEX_AI_GEMINI_TRANSPORT, set to GRPC by default. Changing the transport may cause compatibility issues.
- spring.ai.mcp.client.type/SPRING_AI_MCP_CLIENT_TYPE, set to async by default and must not be changed.
TLS/SSL security defaults:
- automation.ai.server.ssl.enabled-protocols/AUTOMATION_AI_SERVER_SSL_ENABLEDPROTOCOLS, set to the supported TLS versions by default. Do not enable protocols that are not listed by default.
- automation.ai.server.ssl.exclude-ciphers/AUTOMATION_AI_SERVER_SSL_EXCLUDECIPHERS, lists all excluded weak cipher suites. Do not remove any entries from the list.
Important! The documentation covers all configuration parameters relevant to using Automation.AI with the Automic MCP server. The Do Not Change guidelines apply to this setup as well, but you can use additional Spring parameters to extend or customize the configuration beyond the default Automation.AI integration.
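To make the categories above concrete, the following application.properties sketch shows what a minimal required configuration might look like. This is an illustrative sketch only: all values are placeholders, the spring.datasource.* and spring.ai.vertex.ai.gemini.* keys shown are standard Spring Boot and Spring AI property names, and the authoritative property list for your installation is in the configuration pages referenced below.

```properties
# Sketch under assumptions: all values are placeholders; consult the
# product configuration reference for the complete, authoritative list.

# Required: AI model selection and LLM provider (Google Gemini via Vertex AI shown)
spring.ai.vertex.ai.gemini.project-id=my-gcp-project
spring.ai.vertex.ai.gemini.location=us-central1
spring.ai.vertex.ai.gemini.chat.options.model=gemini-2.0-flash

# Required: database connection for the Automation.AI component
spring.datasource.url=jdbc:postgresql://db.example.com:5432/automation_ai
spring.datasource.username=automation_ai
spring.datasource.password=<secret>
```

Each key can equally be supplied as an environment variable using Spring's relaxed binding (dots become underscores, dashes are removed), for example SPRING_DATASOURCE_URL.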
For more information about each property category, see:
- On-Premises: Defining the application.properties File before the Installation
- AAKE: Configuring the Automation.AI Installation before Deployment
Enabling the Gen AI Capabilities to Use the AE REST API
Gen AI seamlessly interacts with the AE REST API to dynamically query the Automation Engine for data, enhancing the accuracy of large language model (LLM) responses about your Automic Automation system. This capability is enabled via an MCP (Model Context Protocol) server that acts as an intermediary, allowing the LLM to leverage data from the MCP server to call specific AE REST API endpoints (GET, POST, PATCH, DELETE, PUT).
For setup, you need to configure the MCP server to communicate with both the Automation.AI component and the AE REST API. For detailed instructions on how to do so in on-premises systems as well as in Automic Automation Kubernetes Edition, see:
Note: You can also configure your own MCP server to communicate with the Automation.AI component.
To enable Gen AI requests to access the AE REST API, a temporary bearer token is generated and used for authorization. This token is valid for 24 hours but is automatically deleted as soon as your AI request is completed, regardless of its remaining validity. If you do not want to create authentication tokens for external systems, you can disable this feature by setting the AUTOMATION_AI_USE_USER_CREDENTIALS parameter of the UC_SYSTEM_SETTINGS variable to N. However, disabling this option prevents access to the AE REST API and results in an authentication failure if access is attempted. For more information, see AUTOMATION_AI_USE_USER_CREDENTIALS.
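Conceptually, the Automation.AI component uses the temporary token like any standard bearer-token client. The sketch below is purely illustrative and not part of the product: the host name, Client number, endpoint path, and token value are hypothetical placeholders, and in practice the token handling happens inside Automation.AI, not in user code.

```python
# Illustrative only: host, client, endpoint path, and token are hypothetical
# placeholders; Automation.AI generates and discards the real token itself.
import urllib.request

AE_HOST = "https://ae.example.com:8088"  # hypothetical AE REST API base URL
CLIENT = "0100"                          # the Client the user is logged in to
TOKEN = "temporary-bearer-token"         # placeholder for the 24-hour token

# The temporary token is passed in the standard Authorization header.
request = urllib.request.Request(
    url=f"{AE_HOST}/ae/api/v1/{CLIENT}/executions",
    method="GET",
    headers={"Authorization": f"Bearer {TOKEN}"},
)

# Inspect the prepared request without sending it anywhere.
print(request.full_url)
print(request.get_header("Authorization"))
```

Because the request is only constructed, not sent, the sketch shows the shape of the authorized call without requiring a reachable Automation Engine.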
By leveraging AE REST API calls, Gen AI retrieves precise, real-time data for user queries, enabling you to ask context-aware, system-specific questions and receive accurate answers that would otherwise be unattainable. Phrase your queries as completely and specifically as you expect the answer to be; they always refer to the Client that the user is currently logged in to.
This function cannot be used in the initial request when you are analyzing reports, executions, or scripts, but it can be used in any subsequent follow-up questions. When using the Ask AI script function, the functionality is determined by the nature of the question.
Examples
:SET &PROMPT# = "Ping the server. If it is reachable, get a detailed health check and a list of all executions."
:PRINT &PROMPT#
:SET &ANSWER# = ASK_AI(&PROMPT#)
:PRINT &ANSWER#
See also: