Sending prompt and message using GenAI LLM Passthrough endpoints
This topic describes how to use the GenAI LLM Passthrough API endpoints to send prompts and messages to an LLM.
- Use these endpoints to send either a single prompt (/prompt or /prompt/stream) or multiple messages (/messages or /messages/stream):
  - api/v1/prompt
  - api/v1/messages
  - api/v1/prompt/stream
  - api/v1/messages/stream
  - api/v2/messages
  - api/v2/messages/stream
- Complete the request body with properties from GET /api/v1/models to specify the model and its version.
Ensure that these requirements are met:
- A valid Infor Registry logical ID header is required to invoke the endpoints. Failure to provide a valid header value results in an invocation error.
- You can select only one model at a time. If a model and version are not specified, the endpoint defaults to Claude 3 Haiku.
Note: When invoking the LLM with the same prompt, expect mild variations in the output from one invocation to the next.
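The requirements above can be sketched in Python. This is a minimal sketch, not a definitive client: the base URL and the exact name of the logical ID header are placeholders, and only the request body fields come from this topic.

```python
import json

# Placeholder -- the base URL is tenant-specific and not documented here.
BASE_URL = "https://example.api.infor.com/GENAI/api"

def build_headers(logical_id: str) -> dict:
    """Headers for the passthrough endpoints; a valid Infor Registry
    logical ID is required, or the invocation fails."""
    return {
        "Content-Type": "application/json",
        "X-Infor-LogicalId": logical_id,  # assumed header name; check your tenant docs
    }

def build_prompt_body(prompt: str,
                      model: str = "CLAUDE",
                      version: str = "claude-3-haiku-20240307-v1:0") -> dict:
    """Body for POST api/v1/prompt; the defaults mirror the
    Claude 3 Haiku fallback used when no model/version is selected."""
    return {"model": model, "version": version, "prompt": prompt}

body = build_prompt_body("Hello world!")
print(json.dumps(body, indent=2))
```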
For reference, review these sample payloads:
- /prompt and /prompt/stream API

  {
    "model": "CLAUDE",
    "version": "claude-3-haiku-20240307-v1:0",
    "prompt": "Hello world!"
  }
- v1/messages and v1/messages/stream API

  {
    "system": "You are a math professor, you help people answer math questions.",
    "model": "CLAUDE",
    "version": "claude-3-7-sonnet-20250219-v1:0",
    "messages": [
      {
        "role": "user",
        "content": [
          { "type": "text", "data": "What is your favorite algorithm?" }
        ]
      },
      {
        "role": "assistant",
        "content": [
          { "type": "text", "data": "Can you tell me yours first?" }
        ]
      },
      {
        "role": "user",
        "content": [
          { "type": "text", "data": "But I asked first" }
        ]
      }
    ]
  }
- v2/messages and v2/messages/stream API
  {
    "modelId": "claude-sonnet-4-20250514-v1:0",
    "system": "You are Claude, an AI assistant made by Anthropic. You are helpful, harmless, and honest. Be concise, warm, and professional in your responses.",
    "messages": [
      {
        "role": "user",
        "content": [
          { "text": "Summarize the key responsibilities of a Procurement Manager in 3 bullet points." }
        ]
      }
    ]
  }
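The /stream variants of these endpoints can be called with the same payloads. The sketch below builds a streaming request with the Python standard library; the base URL, the logical ID header name, and the assumption that the response arrives as decodable text lines are all placeholders, not documented behavior.

```python
import json
import urllib.request

BASE_URL = "https://example.api.infor.com/GENAI/api"  # placeholder base URL

def build_stream_request(prompt: str, logical_id: str) -> urllib.request.Request:
    """Build a POST request for api/v1/prompt/stream."""
    body = json.dumps({
        "model": "CLAUDE",
        "version": "claude-3-haiku-20240307-v1:0",
        "prompt": prompt,
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/v1/prompt/stream",
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-Infor-LogicalId": logical_id,  # assumed header name
        },
        method="POST",
    )

def stream_prompt(prompt: str, logical_id: str):
    """Yield response chunks line by line (the chunk encoding of the
    /stream variants is an assumption, not documented here)."""
    with urllib.request.urlopen(build_stream_request(prompt, logical_id)) as resp:
        for line in resp:
            yield line.decode("utf-8").rstrip("\n")
```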
Where:
- The model parameter inside /prompt, v1/messages, and v1/messages/stream is the Name parameter in GET /api/v1/models.
- The version parameter inside /prompt, v1/messages, and v1/messages/stream is the ID parameter in GET /api/v1/models.
- The modelId parameter inside v2/messages and v2/messages/stream is the ID parameter in GET /api/v1/models.
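The parameter mapping above can be illustrated with a small sketch. The entries below stand in for the GET /api/v1/models response; the field names "name" and "id" are assumptions based on the Name and ID parameters described above, not the actual response schema.

```python
# Hypothetical entries from GET /api/v1/models (field names assumed).
models = [
    {"name": "CLAUDE", "id": "claude-3-haiku-20240307-v1:0"},
    {"name": "CLAUDE", "id": "claude-3-7-sonnet-20250219-v1:0"},
]

def v1_prompt_body(entry: dict, prompt: str) -> dict:
    # v1 endpoints: Name maps to "model", ID maps to "version".
    return {"model": entry["name"], "version": entry["id"], "prompt": prompt}

def v2_messages_body(entry: dict, text: str) -> dict:
    # v2/messages and v2/messages/stream: ID maps directly to "modelId".
    return {
        "modelId": entry["id"],
        "messages": [{"role": "user", "content": [{"text": text}]}],
    }

print(v1_prompt_body(models[0], "Hello world!"))
print(v2_messages_body(models[1], "Hello world!"))
```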