GenAI threads

This section highlights the various features provided by the GenAI product.

The GenAI capabilities are divided into three main threads:

  • GenAI Embedded Experience: Exposes Generative AI capabilities such as text generation, summarization, translation, and text analysis by embedding them into application screens or exposing them through widgets. These capabilities are available in these ways:
    • Use the Prompt Catalog within the GenAI Platform, part of Infor OS, to test and compare the responses from the supported models.
    • Use the GenAI suite within API Gateway. The suite provides application programming interfaces for tasks such as:
      • Getting the list of supported LLMs
      • Accessing the LLMs

Application

With GenAI, you can test and validate the prompts you intend to send to Large Language Models (LLMs).

GenAI includes these menu options:

  • Prompt Catalog: Create a GenAI prompt, then test and validate it across up to three different LLMs.
  • Conversation Starters: Create prompts that guide users on what to ask when chatting with the GenAI Assistant.

GenAI API Suite

Features and capabilities within the GenAI user interface, and some beyond it, are also available as corresponding RESTful APIs for programmatic development.

These APIs are useful in these scenarios:

  • Getting the list of supported LLMs.
  • Getting a completion back from the LLM.
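As a minimal sketch of these two scenarios, the following builds authenticated requests for the GenAI API Suite through API Gateway. The base URL, tenant segment, token value, and request payload shape are assumptions for illustration, not documented values; the endpoint paths mirror the service endpoints described in this section.

```python
import json
import urllib.request

# Hypothetical API Gateway base URL; substitute your tenant's actual gateway.
BASE_URL = "https://api.example.com/TENANT/GENAI"

def build_request(path, token, payload=None):
    """Build an authenticated request for a GenAI endpoint.

    GET when there is no payload, POST (JSON body) otherwise.
    """
    data = json.dumps(payload).encode("utf-8") if payload is not None else None
    return urllib.request.Request(
        BASE_URL + path,
        data=data,
        method="POST" if data is not None else "GET",
        headers={
            "Authorization": "Bearer " + token,  # token value is a placeholder
            "Content-Type": "application/json",
        },
    )

# Scenario 1: get the list of supported LLMs.
models_req = build_request("/llmsvc/api/v1/models", token="<access-token>")

# Scenario 2: get a completion back for a single prompt (payload shape assumed).
prompt_req = build_request(
    "/llmsvc/api/v1/prompt",
    token="<access-token>",
    payload={"prompt": "Summarize this support ticket."},
)
```

Sending either request would then be a matter of `urllib.request.urlopen(models_req)` and decoding the JSON response.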

These are a few of the primary GenAI APIs available for each service endpoint:

GENAI/chatsvc

  • /api/v1/chat: submit a prompt with Tools to the LLM
  • /info: retrieve more application details
  • /api/v1/sessions/{session_id}: delete a session history

GENAI/llmsvc (previously resided in GENAI/chatsvc)

  • /api/v1/embeddings (POST): transform text into numerical vectors that capture semantic meaning, enabling similarity search and retrieval for RAG applications using the Titan Text Embeddings V2 model
  • /api/v1/prompt: submit a single prompt to the LLM
  • /api/v1/messages: send multiple messages to the LLM
  • /api/v1/prompt/stream: submit a single prompt to the LLM and receive the response as a stream
  • /api/v1/messages/stream: send multiple messages to the LLM and receive the response as a stream
  • /api/v2/messages: send multiple messages to the LLM with additional function calling capabilities
  • /api/v2/messages/stream: send multiple messages to the LLM with additional function calling capabilities and receive the response as a stream
  • /api/v1/models: list details on all supported LLMs in the GenAI Assistant
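To illustrate what the vectors from /api/v1/embeddings are used for, the sketch below runs a similarity search over mock vectors standing in for real embeddings responses. The vector values and the idea of picking the highest cosine similarity are illustrative assumptions; the actual request and response shapes of the endpoint are not shown here.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Mock vectors standing in for embeddings returned by /api/v1/embeddings.
doc_vectors = {
    "invoice approval policy": [0.9, 0.1, 0.0],
    "vacation request policy": [0.1, 0.9, 0.2],
}

# Embedding of the user's question (also mocked).
query_vec = [0.8, 0.2, 0.1]

# RAG retrieval step: pick the document most similar to the query.
best = max(doc_vectors, key=lambda k: cosine_similarity(query_vec, doc_vectors[k]))
# best -> "invoice approval policy"
```

In a real RAG application, the retrieved document would then be passed as context in a subsequent prompt to the LLM.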
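The */stream endpoints return the completion incrementally rather than in one response. The chunk format below (one JSON object per line with a "text" field) is purely an assumption for illustration; consult the API documentation for the actual wire format.

```python
import json

def accumulate_stream(lines):
    """Join streamed chunks into the full completion text.

    Assumes each non-empty line is a JSON object like {"text": "..."}.
    """
    parts = []
    for line in lines:
        if not line.strip():
            continue  # skip keep-alive blank lines
        chunk = json.loads(line)
        parts.append(chunk.get("text", ""))
    return "".join(parts)

# Mock stream standing in for a /api/v1/prompt/stream response body.
mock_stream = ['{"text": "Hel"}', "", '{"text": "lo"}']
# accumulate_stream(mock_stream) -> "Hello"
```

Streaming lets a client render partial output as it arrives instead of waiting for the full completion.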