# Supported LLMs

{/* DO NOT EDIT: This file is auto-generated. Run 'go run ./internal/modelconfig/embedded/cmd/docgen' to regenerate. */}

<Callout type="note">
	Site admins can configure Cody via `modelConfiguration` on a
	Sourcegraph Enterprise instance. See [Model
	Configuration](/cody/enterprise/model-configuration) for more details.
</Callout>
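
As an illustration, a site admin might pin default models in the site configuration. The sketch below is a hedged example: the `defaultModels` field names and the model reference strings are assumptions based on the common `provider::api-version::model` pattern, and should be verified against the [Model Configuration](/cody/enterprise/model-configuration) page for your instance version.

```json
{
  "modelConfiguration": {
    // Use Sourcegraph-provided models (Cody Gateway).
    "sourcegraph": {},
    // Field names and model references below are illustrative;
    // check the Model Configuration docs for the exact schema.
    "defaultModels": {
      "chat": "anthropic::2024-10-22::claude-sonnet-4-5",
      "fastChat": "anthropic::2024-10-22::claude-haiku-4-5",
      "codeCompletion": "fireworks::v1::deepseek-coder-v2-lite-base"
    }
  }
}
```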

## Chat and Prompts

Cody supports a variety of cutting-edge large language models for use in chat and prompts, allowing you to select the best model for your use case.

| **Provider** | **Model** | **Status** | **Vision Support** |
| :----------- | :-------- | :--------- | :----------------- |
| Anthropic | [Claude Opus 4.6](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Sonnet 4.6](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Sonnet 4.6 with Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Sonnet 4.5](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Sonnet 4.5 with Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Opus 4.5](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Opus 4.5 with Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Haiku 4.5](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Anthropic | [Claude Haiku 4.5 with Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ | ✅ |
| Google | [Gemini 2.5 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash) | ✅ | ✅ |
| Google | [Gemini 2.5 Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-pro) | ✅ | ✅ |
| Google | [Gemini 3 Flash](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-flash) | ✅ | ❌ |
| Google | [Gemini 3.1 Flash Lite](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-1-flash-lite) | ✅ | ❌ |
| Google | [Gemini 3.1 Pro](https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/3-1-pro) | ✅ (beta) | ❌ |
| OpenAI | [GPT-5.4](https://developers.openai.com/api/docs/models/gpt-5.4) | ✅ | ✅ |
| OpenAI | [GPT-5.4 mini](https://developers.openai.com/api/docs/models/gpt-5.4-mini) | ✅ | ✅ |
| OpenAI | [GPT-5.4 nano](https://developers.openai.com/api/docs/models/gpt-5.4-nano) | ✅ | ✅ |
| OpenAI | [GPT-5.2](https://platform.openai.com/docs/models/gpt-5.2) | ✅ | ✅ |
| OpenAI | [GPT-5.1](https://platform.openai.com/docs/models/gpt-5.1) | ✅ | ✅ |
| OpenAI | [GPT-5](https://platform.openai.com/docs/models/gpt-5) | ✅ | ✅ |
| OpenAI | [GPT-5 mini](https://platform.openai.com/docs/models/gpt-5-mini) | ✅ | ✅ |
| OpenAI | [GPT-5 nano](https://platform.openai.com/docs/models/gpt-5-nano) | ✅ | ✅ |
| OpenAI | [GPT-4o](https://platform.openai.com/docs/models#gpt-4o) | ✅ | ✅ |
| OpenAI | [GPT-4.1](https://platform.openai.com/docs/models/gpt-4.1) | ✅ | ✅ |
| OpenAI | [GPT-4o-mini](https://platform.openai.com/docs/models#gpt-4o-mini) | ✅ | ✅ |
| OpenAI | [GPT-4.1-mini](https://platform.openai.com/docs/models/gpt-4.1-mini) | ✅ | ✅ |
| OpenAI | [GPT-4.1-nano](https://platform.openai.com/docs/models/gpt-4.1-nano) | ✅ | ✅ |
| OpenAI | [o3](https://platform.openai.com/docs/models#o3) | ✅ | ❌ |
| OpenAI | [o4-mini](https://platform.openai.com/docs/models/o4-mini) | ✅ | ❌ |

<Callout type="note">
	While Gemini models support vision capabilities, Cody clients do not
	currently support image uploads to Gemini models.
</Callout>

## Autocomplete

Cody uses a set of models for autocomplete that are suited to the low-latency use case.

| **Provider** | **Model** | **Status** |
| :----------- | :-------- | :--------- |
| Anthropic | [Claude Haiku 4.5](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
| Anthropic | [Claude Haiku 4.5 with Thinking](https://docs.anthropic.com/en/docs/about-claude/models/overview) | ✅ |
| Fireworks.ai | StarCoder | ✅ |
| Fireworks.ai | DeepSeek V2 Lite Base | ✅ |
| Fireworks.ai | AutoEdits Fireworks Default | ✅ (beta) |
| Fireworks.ai | Autoedits DeepSeek Coder V2 | ✅ (beta) |
| Fireworks.ai | Autoedits Long Suggestion V4 Warm Start SFT | ✅ (beta) |
| Fireworks.ai | NLS Query Translator | ✅ |
| OpenAI | [GPT-4.1-nano](https://platform.openai.com/docs/models/gpt-4.1-nano) | ✅ |

## Smart Apply

| **Provider** | **Model** | **Status** |
| :----------- | :-------- | :--------- |
| Fireworks.ai | Smart Apply Qwen Default | ✅ |
| Fireworks.ai | Smart Apply Qwen 32B V1 | ✅ (beta) |

## Default Models

The following models are used by default for each feature when no specific model is configured:

| **Feature** | **Default Model** |
| :---------- | :---------------- |
| Chat | Claude Sonnet 4.5 |
| Autocomplete | DeepSeek V2 Lite Base |
| Fast Chat | Claude Haiku 4.5 |
