Getting the list of models

This topic explains how to retrieve the list of available LLM models so that you can select the one that returns the completion.

Before you send a prompt to the LLM API, you must select the model that returns the completion. To select a model:

  1. Call GET /api/v1/models to get the list of models and their versions.
  2. Use the selected model in the request to /api/v1/prompt to get the completion.
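The two steps above can be sketched in Python. This is a minimal sketch, not a definitive client: the base URL is a placeholder, and the shape of the /api/v1/prompt body beyond the model and version properties (the prompt field in particular) is an assumption for illustration.

```python
import json
import urllib.request

# Hypothetical base URL; substitute your actual API host.
BASE_URL = "https://api.example.com"

def get_models(base_url: str = BASE_URL) -> list:
    """Step 1: call GET /api/v1/models, which returns the array of
    models and their versions."""
    with urllib.request.urlopen(f"{base_url}/api/v1/models") as resp:
        return json.load(resp)

def build_prompt_request(model: str, version: str, prompt: str) -> dict:
    """Step 2: build the /api/v1/prompt body. The selected model name goes
    in the model property and the version id in the version property;
    the prompt field name is assumed for this example."""
    return {"model": model, "version": version, "prompt": prompt}
```

For example, build_prompt_request("CLAUDE", "claude-v2", "Summarize this document.") produces the body to send to /api/v1/prompt.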

The response is a JSON array of objects with the following parameters:

Parameter Type Description
name string Name of the model. Use this value in the model property of the /api/v1/prompt request.
provider string The provider of the LLM.
versions array The available versions of the model. Each version object has the following fields:
  • id: The version identifier. Use this value in the version property of the /api/v1/prompt request.
  • description: A high-level description of the model version.
  • token_context_size: The maximum size of the model's context window, in tokens.
Example:
[
  {
    "name": "CLAUDE",
    "provider": "Anthropic",
    "versions": [
      {
        "id": "claude-instant-v1",
        "description": "Optimized for rapid performance and competitive affordability.",
        "token_context_size": 100000
      },
      {
        "id": "claude-v2",
        "description": "Extensive context window to support larger prompts, such as document inputs",
        "token_context_size": 100000
      }
    ]
  }
]
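As an illustration of consuming this response, the sketch below parses the example array and selects a version id to pass to /api/v1/prompt. The selection rule (prefer the version with the largest context window) is an assumption chosen for the example, not something the API prescribes.

```python
# Sample data matching the example response above.
models = [
    {
        "name": "CLAUDE",
        "provider": "Anthropic",
        "versions": [
            {"id": "claude-instant-v1",
             "description": "Optimized for rapid performance and competitive affordability.",
             "token_context_size": 100000},
            {"id": "claude-v2",
             "description": "Extensive context window to support larger prompts, such as document inputs",
             "token_context_size": 100000},
        ],
    }
]

def pick_version(models: list, model_name: str) -> tuple:
    """Return (model name, version id) for the given model, choosing the
    version with the largest token_context_size."""
    for m in models:
        if m["name"] == model_name:
            best = max(m["versions"], key=lambda v: v["token_context_size"])
            return m["name"], best["id"]
    raise ValueError(f"unknown model: {model_name}")
```

The returned name and id plug into the model and version properties of the /api/v1/prompt request.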