Supported models
This table lists the LLMs available in the GenAI application:
Provider | Name | Description | Token Window |
---|---|---|---|
Anthropic | Claude 3 Haiku | Fast and compact model for near-instant responsiveness. Can answer simple queries and requests with speed as a priority. | 200,000 |
Anthropic | Claude 3.5 Haiku | Fast and compact model for near-instant responsiveness. Answers simple queries and requests with considerable speed. | 200,000
Anthropic | Claude 3.5 Sonnet | Twice the speed of Claude 3 Opus; ideal for complex workflows at a cost-effective price. | 200,000
Anthropic | Claude 3.7 Sonnet | Anthropic's most intelligent model to date and the first Claude model to offer extended thinking: the ability to solve complex problems with careful, step-by-step reasoning. | 128,000
Anthropic | Claude Sonnet 4 | Balances coding performance with the right speed and cost for high-volume use cases. | 200,000
Amazon | Titan Lite (deprecating Aug 2025) | Designed for use-case-specific generation tasks with affordability and speed as priorities. | 4,096
Amazon | Titan Express (deprecating Aug 2025) | High-performance model for more complex tasks involving retrieval-augmented generation (RAG). | 8,192
Meta | Llama 3 8B | Ideal for limited computational power and resources, edge devices, and faster training times. | 8,192 |
Meta | Llama 3 70B | Ideal for content creation, conversational AI, language understanding, R&D, and enterprise applications. | 8,192
Amazon | Nova Micro | A text-to-text understanding foundation model that is multilingual and can reason over text. | 128,000
Amazon | Nova Lite | A multimodal understanding foundation model that can reason over text, images, and videos. | 300,000 |
Amazon | Nova Pro | A high-capacity model for enterprise-level computations that performs effectively across a wide variety of tasks. | 300,000
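
The Token Window column above is the maximum number of tokens the model can process in a single request. As a rough, hypothetical illustration that is not part of the GenAI application's API, the Python sketch below filters a subset of the models in the table by whether an estimated prompt fits inside their window; the model subset, the four-characters-per-token heuristic, and the reserved output budget are assumptions for illustration only.

```python
# Minimal sketch: check which models can accommodate a prompt, based on the
# token windows listed in the table above. The model subset, the
# 4-characters-per-token heuristic, and the reserved output budget are
# illustrative assumptions, not part of the GenAI application's API.
TOKEN_WINDOWS = {
    "Claude 3.5 Sonnet": 200_000,
    "Llama 3 70B": 8_192,
    "Nova Lite": 300_000,
}


def estimate_tokens(text: str) -> int:
    """Rough estimate: about 4 characters per token (heuristic, not a real tokenizer)."""
    return len(text) // 4


def models_that_fit(prompt: str, reserved_for_output: int = 1_000) -> list[str]:
    """Return models whose token window can hold the prompt plus a reserved output budget."""
    needed = estimate_tokens(prompt) + reserved_for_output
    return [name for name, window in TOKEN_WINDOWS.items() if window >= needed]


if __name__ == "__main__":
    prompt = "Summarize the attached report. " * 2_000  # ~15,500 estimated tokens
    print(models_that_fit(prompt))  # ['Claude 3.5 Sonnet', 'Nova Lite']
```

In this sketch, a prompt that exceeds Llama 3 70B's 8,192-token window but fits comfortably within the larger windows is routed only to the models that can hold it.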