Request parameters for /messages

This topic explains the messages request parameters.

The following tables list the parameters of the api/v1/messages endpoint, together with the options available for further model configuration:

Table 1. Message request

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | false | Name of the model to use. CLAUDE is the default value when no name is provided. |
| version | string | false | Specific ID or version of the model to use; the model field is required when this field is set. |
| safetyGuardrail | Boolean | false | Enables the base guardrails from AWS. |
| messages | array | true | The array of message objects; see Table 3. |
| config | ModelConfig | false | Configuration that customizes model parameters; see Table 2. |
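A minimal request body can be assembled from the Table 1 fields alone. The sketch below is illustrative: the helper name `build_request_body` is not part of the API, and the field names and values come from the tables and the prompt request example in this topic.

```python
import json

def build_request_body(user_text, model="CLAUDE", version=None):
    """Build a minimal /messages request body from the Table 1 parameters."""
    body = {
        "model": model,
        "messages": [
            {"role": "user", "content": [{"type": "text", "data": user_text}]}
        ],
    }
    if version is not None:
        # Per Table 1, version requires the model field to be set.
        body["version"] = version
    return body

payload = json.dumps(build_request_body("What is your favorite algorithm?"))
```

The optional safetyGuardrail and config fields can be added to the same dictionary before serializing.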
Table 2. Configuration

| Name | Type | Required | Description |
|---|---|---|---|
| max_response | int | false | Maximum number of tokens to generate in the response. Note: the default value is 1024. |
| temperature | float | false | The sampling temperature to use; the value must be between 0 and 2. |
| top_p | float | false | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. |
| stop_sequence | string | false | Up to 4 sequences where the API stops generating further tokens. |
| frequency_penalty | float | false | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
| presence_penalty | float | false | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| reasoning | object | false | Extended thinking settings: enabled is the flag that turns on extended thinking; budget_tokens is the maximum number of tokens used for reasoning. |
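A config object built from the Table 2 parameters might look like the sketch below. The helper name `build_config` and the range checks are illustrative (the ranges come from the table); the chosen values are examples, not server defaults, except for max_response's documented 1024.

```python
def build_config(max_response=1024, temperature=1.0, top_p=None,
                 stop_sequence=None, frequency_penalty=0.0,
                 presence_penalty=0.0, reasoning=None):
    """Build a ModelConfig dictionary, enforcing the documented ranges."""
    if not 0 <= temperature <= 2:
        raise ValueError("temperature must be between 0 and 2")
    for name, value in (("frequency_penalty", frequency_penalty),
                        ("presence_penalty", presence_penalty)):
        if not -2.0 <= value <= 2.0:
            raise ValueError(f"{name} must be between -2.0 and 2.0")
    config = {
        "max_response": max_response,
        "temperature": temperature,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
    }
    # Optional fields are omitted entirely when not provided.
    if top_p is not None:
        config["top_p"] = top_p
    if stop_sequence is not None:
        config["stop_sequence"] = stop_sequence
    if reasoning is not None:
        config["reasoning"] = reasoning  # e.g. {"enabled": True, "budget_tokens": 2048}
    return config
```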

Table 3. Messages

| Name | Type | Required | Description |
|---|---|---|---|
| role | string | true | The role of the actor sending the message. |
| content | array | true | The array of content objects; each object contains a type and a data field. |

Note: When building the message array, structure it to reflect a natural conversation: alternate between user and assistant roles, and do not place two messages from the same role next to each other.
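The alternating-roles rule in the note above can be checked with a small helper before sending a request. This is a sketch; `roles_alternate` is not part of the API.

```python
def roles_alternate(messages):
    """Return True if no two adjacent messages share the same role."""
    roles = [message["role"] for message in messages]
    return all(a != b for a, b in zip(roles, roles[1:]))
```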

Prompt request example

{
    "system": "You are a math professor; you help people answer math questions.",
    "model": "CLAUDE",
    "version": "claude-3-7-sonnet-20250219-v1:0",
    "messages": [
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "data": "What is your favorite algorithm?"
                }
            ]
        },
        {
            "role": "assistant",
            "content": [
                {
                    "type": "text",
                    "data": "Can you tell me yours first?"
                }
            ]
        },
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "data": "But I asked first!"
                }
            ]
        }
    ]
}
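A request body like the example above can be sent as an HTTP POST to api/v1/messages. The sketch below uses only the standard library; the base URL and the Bearer-token Authorization header are placeholders for your deployment, not values defined by this API.

```python
import json
import urllib.request

BASE_URL = "https://example.com"  # placeholder; substitute your deployment's host

def make_request(body: dict, token: str) -> urllib.request.Request:
    """Prepare a POST request to the api/v1/messages endpoint."""
    return urllib.request.Request(
        url=f"{BASE_URL}/api/v1/messages",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # placeholder auth scheme
        },
        method="POST",
    )

# To actually send: urllib.request.urlopen(make_request(body, token))
```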