Sending a prompt using the GenAI endpoint
This topic describes how to send a prompt using the GenAI API endpoints.
To send the prompt:
- Use the api/v1/prompt endpoint to send the prompt.
- Fill in the request body with properties from GET /api/v1/models to specify the model and its version.
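The two steps above can be sketched as follows. This is a minimal illustration, not a documented client: the base URL and the header name X-Infor-LogicalId are placeholders, since the topic does not specify them.

```python
import json
import urllib.request

# Assumption: replace with your tenant's actual base URL.
BASE_URL = "https://genai.example.com"

# Request body for api/v1/prompt; model and version values come from
# the GET /api/v1/models response.
payload = {
    "model": "CLAUDE",
    "version": "claude-3-haiku-20240307-v1:0",
    "prompt": "Hello world!",
}

req = urllib.request.Request(
    BASE_URL + "/api/v1/prompt",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Assumption: the Infor Registry logical ID header name and
        # value format may differ in your environment.
        "X-Infor-LogicalId": "lid://example",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send the prompt; omitted here.
```

The request is only constructed, not sent, so the sketch stays self-contained.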
Ensure that these requirements are met to successfully send a prompt:
- A valid Infor Registry logical ID header is required to invoke the /prompt endpoint. Failure to provide a valid header value results in an invocation error.
- You can select only a single model at a time. If a model and version are not selected, the /prompt endpoint selects Claude 3 Haiku by default.
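The default-model behavior described above means the request body can omit the model and version keys entirely; a minimal sketch:

```python
import json

# Minimal request body for api/v1/prompt: with no "model" or "version"
# keys, the /prompt endpoint falls back to Claude 3 Haiku server-side.
minimal_payload = {"prompt": "Hello world!"}
print(json.dumps(minimal_payload))
```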
Note: When invoking the same prompt multiple times, expect mild variations in the output from each invocation of the LLM.
For reference, review this sample payload:
{
  "model": "CLAUDE",
  "version": "claude-3-haiku-20240307-v1:0",
  "prompt": "Hello world!"
}
Where:
- The model parameter in the api/v1/prompt request body is the Name parameter in the GET /api/v1/models response.
- The version parameter in the api/v1/prompt request body is the ID parameter in the GET /api/v1/models response.
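The Name-to-model and ID-to-version mapping above can be expressed as a small helper. The function name build_prompt_payload and the sample model entry are illustrative, assuming a GET /api/v1/models response whose entries carry Name and ID fields as described:

```python
def build_prompt_payload(model_entry, prompt):
    """Map one entry from the GET /api/v1/models response onto an
    api/v1/prompt request body."""
    return {
        "model": model_entry["Name"],   # Name parameter -> model
        "version": model_entry["ID"],   # ID parameter -> version
        "prompt": prompt,
    }

entry = {"Name": "CLAUDE", "ID": "claude-3-haiku-20240307-v1:0"}
body = build_prompt_payload(entry, "Hello world!")
```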