
# Model Not Found in OpenAI-Compatible APIs: Causes, Fixes, and Debug Checklist

You changed your base URL to an OpenAI-compatible provider, sent a request, and got back:
```json
{
  "error": {
    "message": "The model `gpt-4o` does not exist or you do not have access to it.",
    "type": "invalid_request_error",
    "code": "model_not_found"
  }
}
```

This is one of the most common errors when switching between OpenAI-compatible API providers. The request shape is compatible, but the model ID is not.
"OpenAI-compatible" means the request format works. It does not mean every model ID is the same across every provider.
## TL;DR
- "Model not found" usually means the model ID you sent does not match what the provider expects.
- OpenAI-compatible ≠ same model IDs. Each provider has its own model naming.
- Always check: (1) base URL, (2) model ID, (3) API key scope, (4) model availability.
- Use the debug matrix below to isolate the cause in under 5 minutes.
## Why this error happens
Most alternative providers accept the same `POST /v1/chat/completions` request shape that OpenAI uses. But compatibility ends at the request format. The model IDs are provider-specific:
| Provider | Example model ID for GPT-4o | Example model ID for Claude Sonnet |
|---|---|---|
| OpenAI | gpt-4o | N/A (not available) |
| OpenRouter | openai/gpt-4o | anthropic/claude-sonnet-4-20250514 |
| EvoLink | gpt-4o or openai/gpt-4o | claude-sonnet-4-20250514 |
| Together AI | N/A (not all models) | N/A |
| LiteLLM | openai/gpt-4o | anthropic/claude-sonnet-4-20250514 |
If you change the `base_url` but keep the same model ID, the new provider does not recognize it and returns "model not found."

## Debug matrix: isolate the cause in 5 minutes
Work through this checklist in order and stop when you find the mismatch. A scripted version of the first few checks follows the table.
| Check | What to verify | How to verify | Common mistake |
|---|---|---|---|
| 1. Base URL | Is the base URL pointing to the right provider? | Print or log base_url before the request | Forgot to change base URL, or using a stale environment variable |
| 2. Model ID format | Does the model ID match the provider's naming? | Check the provider's model list or docs | Using gpt-4o when the provider expects openai/gpt-4o (or vice versa) |
| 3. Model availability | Is the model actually available on this provider? | Check the provider's model catalog page | Assuming all providers have all models |
| 4. API key scope | Does your API key have access to this model? | Try a known-working model with the same key | Key is valid but restricted to certain models or tiers |
| 5. Model deprecation | Is the model ID still active? | Check provider changelog or announcements | Model was renamed, versioned, or deprecated |
| 6. Typo or casing | Is the model ID spelled exactly right? | Compare character by character | gpt-4o vs gpt4o vs GPT-4o |
| 7. Region or plan | Does your account/region have access? | Check provider docs for regional availability | Model available in US but not in your region |
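
If you prefer to script checks 1-3 and 6, here is a minimal sketch using the official `openai` Python SDK; the base URL, API key, and target model are placeholders:

```python
from openai import OpenAI

BASE_URL = "https://api.your-provider.com/v1"  # placeholder
TARGET_MODEL = "gpt-4o"

client = OpenAI(api_key="your-provider-key", base_url=BASE_URL)

# Check 1: confirm the base URL the client will actually use
print(f"Base URL: {client.base_url}")

# Checks 2-3: list the models this key can see and look for the target
available = {m.id for m in client.models.list().data}
if TARGET_MODEL in available:
    print(f"'{TARGET_MODEL}' is available on this provider.")
else:
    # Check 6: surface near matches to catch namespacing or casing issues
    near = [m for m in available if TARGET_MODEL.lower() in m.lower()]
    print(f"'{TARGET_MODEL}' not found. Near matches: {near or 'none'}")
```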
## The most common mismatches
### Mismatch 1: Forgot to change the model ID after switching base URL
This is the single most common cause. You are migrating from OpenAI to another provider:
```python
from openai import OpenAI

# Before: direct OpenAI
client = OpenAI(api_key="sk-...")

# After: switched to another provider but kept the same model ID
client = OpenAI(
    api_key="your-provider-key",
    base_url="https://api.another-provider.com/v1"  # Changed
)
response = client.chat.completions.create(
    model="gpt-4o",  # ← This model ID may not work on the new provider
    messages=[{"role": "user", "content": "Hello"}]
)
```
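
The fix is to update both settings together. A minimal sketch, assuming the new provider namespaces OpenAI models (the exact ID string is a placeholder; confirm it in the provider's catalog):

```python
from openai import OpenAI

client = OpenAI(
    api_key="your-provider-key",
    base_url="https://api.another-provider.com/v1"
)

# Use the model ID the *new* provider documents, not the one OpenAI uses.
# "openai/gpt-4o" is a placeholder; verify the exact string in the catalog.
response = client.chat.completions.create(
    model="openai/gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
```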
### Mismatch 2: Provider uses namespaced model IDs

Some providers namespace model IDs with the original vendor prefix:
```python
# OpenAI direct
model = "gpt-4o"
# OpenRouter
model = "openai/gpt-4o"
# Some providers use their own aliases
model = "gpt-4o-2024-08-06"Mismatch 3: Model not available on this provider
Not every OpenAI-compatible provider supports every model:
- A provider may support Claude but not GPT models
- A provider may support text models but not image or video models
- A newly released model may not be available everywhere on day one
### Mismatch 4: Model deprecated or renamed
AI models update frequently. A model ID that worked last month may be deprecated:
```
gpt-4-turbo             → may redirect or fail depending on provider
claude-3-opus-20240229  → older version, may be replaced
```

## How to verify the right model ID
### Option A: Call the models endpoint
Most OpenAI-compatible providers expose the `GET /v1/models` endpoint:

```bash
curl https://api.your-provider.com/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```

This returns the list of models available to your account. Search for your target model in the response.
### Option B: Check the provider's documentation
Every serious provider has a model list page. Before switching, verify:
- The exact model ID string
- Whether it requires a namespace prefix
- Whether it is available on your plan/tier
- Whether it is available in your region
### Option C: Send a minimal test request

```bash
curl https://api.your-provider.com/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "your-model-id",
    "messages": [{"role": "user", "content": "test"}],
    "max_tokens": 5
  }'
```

If this returns a valid response, your model ID is correct. If it returns "model not found," try the provider's recommended model ID.
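
The same probe in Python, with error handling. A sketch assuming the official `openai` SDK; depending on the status code a provider returns, the failure may surface as `NotFoundError` (404) or `BadRequestError` (400):

```python
from openai import OpenAI, NotFoundError, BadRequestError

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.your-provider.com/v1"  # placeholder
)

try:
    response = client.chat.completions.create(
        model="your-model-id",
        messages=[{"role": "user", "content": "test"}],
        max_tokens=5,
    )
    print("Model ID is valid:", response.choices[0].message.content)
except (NotFoundError, BadRequestError) as e:
    # OpenAI itself returns 404 with code "model_not_found";
    # some compatible providers use 400 instead.
    print("Model rejected:", e)
```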
## Building resilient model selection
For production systems, model ID mismatches should not cause hard failures. Here are patterns that help:
### Pattern 1: Model ID mapping layer
Maintain a mapping between your internal model names and provider-specific IDs:
```python
MODEL_MAP = {
    "fast-chat": {
        "openai": "gpt-4o-mini",
        "openrouter": "openai/gpt-4o-mini",
        "evolink": "gpt-4o-mini",
    },
    "strong-chat": {
        "openai": "gpt-4o",
        "openrouter": "openai/gpt-4o",
        "evolink": "gpt-4o",
    },
    "reasoning": {
        "openai": "o3",
        "openrouter": "openai/o3",
        "evolink": "o3",
    },
}

def get_model_id(capability: str, provider: str) -> str:
    return MODEL_MAP[capability][provider]
```
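
Call sites then reference capabilities, not raw IDs, so switching providers becomes a one-line config change. For example, assuming a `client` configured for the matching provider:

```python
provider = "openrouter"  # e.g. read from config or an environment variable

response = client.chat.completions.create(
    model=get_model_id("strong-chat", provider),  # -> "openai/gpt-4o"
    messages=[{"role": "user", "content": "Hello"}]
)
```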
### Pattern 2: Use a unified API that normalizes model IDs

Instead of maintaining your own mapping layer, you can use a gateway that accepts standardized model IDs and handles provider routing internally.
With EvoLink, you can use familiar model IDs without worrying about provider-specific naming:
```python
from openai import OpenAI

client = OpenAI(
    api_key="your-evolink-key",
    base_url="https://api.evolink.ai/v1"
)

# Use standard model IDs; EvoLink handles the mapping
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
```

Or let the Smart Router choose the best model for your workload:
```python
response = client.chat.completions.create(
    model="evolink/auto",
    messages=[{"role": "user", "content": "Hello"}]
)
```

### Pattern 3: Validate model IDs at startup
Do not wait for a user request to discover a model ID is wrong:
```python
async def validate_models_on_startup(client, required_models: list[str]):
    """Call /v1/models at startup and verify all required models exist.

    `client` is an openai.AsyncOpenAI instance.
    """
    available = await client.models.list()
    available_ids = {m.id for m in available.data}
    for model in required_models:
        if model not in available_ids:
            raise RuntimeError(
                f"Model '{model}' not found on provider. "
                f"Available: {sorted(available_ids)}"
            )
```
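
One way to wire this up at boot, assuming an async client (the URL and model list are placeholders):

```python
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    api_key="your-provider-key",
    base_url="https://api.your-provider.com/v1"  # placeholder
)

# Fail fast at startup instead of on the first user request
asyncio.run(validate_models_on_startup(client, ["gpt-4o", "gpt-4o-mini"]))
```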
## Quick reference: OpenAI-compatible does not mean identical

| What is compatible | What is NOT guaranteed to be the same |
|---|---|
| Request format (`/v1/chat/completions`) | Model IDs |
| Response structure | Available models |
| Authentication pattern (Bearer token) | Rate limits and quotas |
| Streaming format (SSE) | Error codes and messages |
| Basic parameters (`messages`, `temperature`) | Advanced parameters and extensions |
This distinction is the root cause of most "model not found" errors when switching providers.
## Related articles
- Fix OpenRouter 429 "Provider Returned Error" — when the model is found but the provider returns an error
- Best OpenRouter Alternatives in 2026 — compare providers and their model coverage
- How to Reduce 429 Errors in Agent Workloads — handle rate limits after you fix model IDs
## FAQ
Why does "gpt-4o" work on OpenAI but not on my new provider?
gpt-4o directly, others require a prefix like openai/gpt-4o, and some may not offer that model at all.How do I find the correct model ID for my provider?
GET /v1/models endpoint. The model ID must match exactly — including casing, version suffixes, and any namespace prefixes.Can I use the same model ID across all OpenAI-compatible providers?
Not reliably. While some providers accept the same IDs as OpenAI, many use different naming conventions. Either maintain a mapping layer, or use a gateway like EvoLink that normalizes model IDs for you.
### What if the model ID was working yesterday but stopped today?
The model may have been deprecated, renamed, or removed from your tier. Check the provider's changelog and announcements; model lifecycles move faster in AI than in traditional APIs.
Is "model not found" always a model ID problem?
Usually, but not always. It can also mean: (1) your API key does not have access to that model, (2) the model is not available in your region, or (3) the model requires a higher subscription tier.
### How do I prevent model not found errors in production?
Three strategies: (1) validate model IDs at application startup, (2) maintain a model ID mapping layer or use a normalizing gateway, (3) implement fallback logic that tries an alternative model if the primary returns "not found."
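
A minimal sketch of strategy (3), assuming the official `openai` SDK; the candidate list is hypothetical and should come from your own mapping layer:

```python
from openai import OpenAI, NotFoundError

client = OpenAI(
    api_key="your-provider-key",
    base_url="https://api.your-provider.com/v1"  # placeholder
)

# Hypothetical preference order: primary ID first, then fallbacks
CANDIDATES = ["gpt-4o", "openai/gpt-4o", "gpt-4o-mini"]

def create_with_fallback(messages):
    last_error = None
    for model in CANDIDATES:
        try:
            return client.chat.completions.create(model=model, messages=messages)
        except NotFoundError as e:
            last_error = e  # model missing on this provider; try the next one
    raise RuntimeError(f"No candidate model was available: {last_error}")
```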


