Use Bedrock Guardrails with the /messages and /prompt endpoints
The /messages and /prompt endpoints include built-in guardrails that help ensure responsible and ethical use of AI. These guardrails identify and block content in the following categories:
- Hate
- Insults
- Sexual
- Violence
- Misconduct
To enable these guardrails, set the safetyGuardrail field to true when calling the endpoints.
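For example, a call that enables the guardrails might look like the following minimal Python sketch. Only the /messages path and the safetyGuardrail field come from this guide; the base URL, the authentication header, and the rest of the request body are illustrative assumptions:

import requests

# Hypothetical base URL and API key; substitute your deployment's values.
BASE_URL = "https://api.example.com"
API_KEY = "your-api-key"

payload = {
    "model": "CLAUDE",  # illustrative; use the model identifier your deployment expects
    "messages": [{"role": "user", "content": "Hello"}],
    "safetyGuardrail": True,  # enables the built-in Bedrock Guardrails
}

response = requests.post(
    f"{BASE_URL}/messages",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
print(response.json())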
Response example
The following example shows the response returned when a guardrail intervenes:
{
  "warnings": [],
  "id": "47add181-9684-41df-9cad-02af925af8ec",
  "reasoning_content": null,
  "content": "Blocked due to policy violation.",
  "model": "CLAUDE",
  "version": "claude-3-7-sonnet-20250219-v1:0",
  "token_usage": {
    "input": 0,
    "output": 0,
    "total": 0,
    "billed": true,
    "cache_create": 0,
    "cache_read": 0
  },
  "guardrail_info": {
    "tenant_guardrails_applied": [],
    "bedrock_guardrail_applied": true,
    "type": "VIOLENCE",
    "message": "guardrail_intervened",
    "input_flagged": true,
    "output_flagged": false
  }
}
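A client can inspect guardrail_info to tell whether the request prompt or the model output was blocked. The following minimal sketch assumes the response shape shown above; the field names come from the example, while the handling logic itself is illustrative:

# Minimal sketch of client-side handling for the response shape above.
def handle_response(data: dict) -> None:
    info = data.get("guardrail_info") or {}
    if info.get("bedrock_guardrail_applied"):
        # input_flagged: the prompt triggered the guardrail;
        # output_flagged: the model's reply was flagged before being returned.
        stage = "input" if info.get("input_flagged") else "output"
        print(f"Blocked by Bedrock guardrail ({info.get('type')}) on {stage}.")
    else:
        print(data["content"])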