
How to Use evolink Smart router: 5-Minute Setup for Unified AI Model Routing

The endpoint https://api.evolink.ai/v1/chat/completions and a base URL of https://api.evolink.ai/v1 appear consistently across multiple integration guides. This tutorial uses those confirmed integration patterns and avoids undocumented promises about hidden routing logic, exact model pools, or account-specific discounts.

What evolink Smart router is
evolink Smart router is the smart-routing entry point inside the EvoLink unified API workflow. The practical value is not "automatic magic." The value is that your application can keep one integration surface while EvoLink handles model selection decisions inside the gateway layer.

Use it when you want to:
- keep one OpenAI-compatible request format
- reduce application-side switching between providers or model families
- inspect the routed model in the API response instead of hard-coding one model everywhere
- start with a flexible gateway path before pinning a fixed model for production-critical flows
If you already know the exact model, latency profile, and cost target you need, a fixed model ID is usually the cleaner choice.
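Because the gateway is OpenAI-compatible, the only request-level difference between the two choices is the `model` field. A minimal sketch (the pinned model ID below is a hypothetical placeholder; `evolink/auto` is the router ID from the setup table):

```python
# Sketch: smart routing vs. a pinned model is a one-field difference
# in an otherwise identical OpenAI-compatible payload.
# "provider/some-fixed-model" is a hypothetical pinned model ID.

def build_chat_payload(model_id: str, user_text: str) -> dict:
    """Build an OpenAI-compatible chat completions payload."""
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": user_text}],
    }

routed = build_chat_payload("evolink/auto", "Hello")
pinned = build_chat_payload("provider/some-fixed-model", "Hello")
# Same shape either way; only the "model" field differs.
```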
What you need before you start
| Item | What to prepare | Why it matters |
|---|---|---|
| EvoLink account | Sign in at evolink.ai | You need access to dashboard settings and billing |
| API key | Create one in the EvoLink dashboard | The gateway uses Bearer token authentication |
| Base URL | https://api.evolink.ai/v1 | Works with OpenAI-compatible SDK flows used elsewhere in the repo |
| Smart router model ID | evolink/auto | Use this model ID to enable smart routing through the gateway |
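Before sending the first request, it helps to fail fast when the key is missing rather than debugging a 401 from the gateway. A minimal sketch, assuming the key lives in the `EVOLINK_API_KEY` environment variable that the Node.js example in this guide also reads:

```python
import os

# Fail fast if the key is missing instead of sending an
# unauthenticated request to the gateway.
def require_api_key() -> str:
    api_key = os.environ.get("EVOLINK_API_KEY")
    if not api_key:
        raise RuntimeError("Set EVOLINK_API_KEY before calling the gateway.")
    return api_key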
Your first request with curl
```bash
curl https://api.evolink.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "evolink/auto",
    "messages": [
      {
        "role": "user",
        "content": "Explain vector databases in one short paragraph."
      }
    ]
  }'
```

The `model` field points to your smart-routing entry rather than a fixed provider model.

Python example
```python
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.evolink.ai/v1"
)

response = client.chat.completions.create(
    model="evolink/auto",
    messages=[
        {
            "role": "user",
            "content": "Summarize the tradeoff between latency and model quality."
        }
    ]
)

print(response.model)
print(response.choices[0].message.content)
```

Node.js example
```javascript
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.EVOLINK_API_KEY,
  baseURL: "https://api.evolink.ai/v1",
});

const response = await client.chat.completions.create({
  model: "evolink/auto",
  messages: [
    {
      role: "user",
      content: "List three reasons teams use an AI gateway.",
    },
  ],
});

console.log(response.model);
console.log(response.choices[0].message.content);
```

How to read the response
A smart-router response still follows the familiar chat completions structure:
```json
{
  "id": "chatcmpl-example",
  "object": "chat.completion",
  "created": 1773187200,
  "model": "provider/model-selected-at-runtime",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "A vector database stores embeddings so semantic search can retrieve similar content efficiently."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 17,
    "completion_tokens": 18,
    "total_tokens": 35
  }
}
```

The `model` field is the first thing to log in production. It tells you which routed model actually handled the request, which is useful for debugging, spend analysis, and deciding whether you should keep using the smart router or switch a workload to a fixed model.

Smart router vs fixed model
| Scenario | evolink Smart router | Fixed model ID |
|---|---|---|
| Early prototyping | Strong fit | Usually unnecessary |
| Mixed workloads | Strong fit | Can become operationally noisy |
| Stable production path with strict QA | Possible, but verify carefully | Usually better |
| Cost tuning per use case | Good starting point | Better once you know the winning model |
| Provider failover strategy | Easier to centralize | You manage more logic in app code |
The pattern that usually scales best is simple:
- Start with evolink Smart router while the workload is still changing.
- Log the routed `model` value and compare cost, latency, and output quality.
- Pin a fixed model for flows that need tighter operational predictability.
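One low-friction way to implement that last step is to keep the model ID in configuration rather than in code, so moving a workload from the router to a pinned model is a config change instead of a deploy. A sketch, assuming a hypothetical `EVOLINK_MODEL` environment variable:

```python
import os

# "evolink/auto" is the router ID from this guide's setup table.
# EVOLINK_MODEL is a hypothetical variable name for pinning a workload.
DEFAULT_MODEL = "evolink/auto"

def resolve_model() -> str:
    """Return the pinned model if configured, otherwise the smart router."""
    return os.environ.get("EVOLINK_MODEL", DEFAULT_MODEL)
```

Pass `resolve_model()` as the `model` argument to the same `chat.completions.create` call shown earlier; nothing else in the request changes.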
What remains account-specific
The original draft included several product claims that should not be published as hard facts without a verified source tied to your account or official documentation. Treat these as items to confirm before you publish external promises:
- the exact public model identifier for evolink Smart router
- the exact size of the available routing pool
- any promise about "no routing fee"
- any percentage discount claim versus direct vendors
- SLA statements such as 99.9% uptime
- which advanced capabilities are guaranteed through the smart router for every routed model
If you want this article to become more sales-forward later, the clean way is to add those items only after they are documented in a first-party pricing page, product page, or official API doc.
Production checklist before rollout
| Check | Why to verify it |
|---|---|
| Confirm the router model ID | Prevents copy-paste errors from placeholder code |
| Test the response `model` field | Confirms routed model visibility for observability |
| Compare cost on real prompts | Effective pricing depends on selected models and workload shape |
| Measure latency by request type | Smart routing is only useful if it matches your user-facing SLA |
| Decide when to pin a fixed model | Some flows need deterministic output or narrower QA coverage |
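Measuring latency by request type does not require new tooling; wrapping the call is enough. A minimal sketch (the wrapper is generic, so it works with the OpenAI-compatible client shown earlier):

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run one gateway call and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

# Usage with the client from the Python example (assumed to exist):
# response, seconds = timed_call(
#     client.chat.completions.create,
#     model="evolink/auto",
#     messages=[{"role": "user", "content": "ping"}],
# )
```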
Next steps
Once your first request works, the highest-value follow-up is not more sample code. It is instrumentation.
- log `response.model`
- store token usage by feature or route
- compare smart-router traffic against one pinned-model baseline
- review available fixed models in the EvoLink model catalog
That gives you the data needed to decide whether the gateway path is improving cost efficiency and production reliability for your actual workload.
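A minimal instrumentation sketch: flatten the response fields worth storing into one record keyed by feature name. The record shape below is an assumption for illustration, not an EvoLink requirement:

```python
def usage_record(feature: str, response) -> dict:
    """Extract the routed model and token counts from a chat completions response."""
    return {
        "feature": feature,
        "model": response.model,  # the model that actually handled the request
        "prompt_tokens": response.usage.prompt_tokens,
        "completion_tokens": response.usage.completion_tokens,
        "total_tokens": response.usage.total_tokens,
    }
```

Ship these records to whatever store you already query (a log pipeline or a plain table is enough) and group by `feature` and `model` to compare router traffic against a pinned baseline.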
FAQ
Is evolink Smart router the same as choosing a fixed model?
No. With a fixed model ID, your application decides which model handles each request; evolink Smart router keeps that decision in the gateway layer.

Do I need a different SDK to use evolink Smart router?
No. Based on the repository's existing examples, the integration pattern stays OpenAI-compatible. You mainly change the base URL and model identifier.
Where do I find the correct Smart router model ID?
Confirm it in your EvoLink dashboard or official docs before publishing or shipping copy-paste code. The original draft did not include a locally verified identifier.
Should I start production traffic on evolink Smart router?
Yes, if your workload is still evolving and you want one gateway entry point. For tightly controlled flows, compare it against a pinned model before full rollout.
What should I log first after integration?
`response.model`, latency, token usage, and the feature name that triggered the request. Those four fields usually explain most routing and cost questions.

Does smart routing guarantee lower cost?
Not automatically. It can improve effective cost, but the result depends on your prompts, selected downstream models, and account configuration.
When should I switch from Smart router to a fixed model?
Switch when one workload has a clear winner on quality, latency, or cost and you want tighter QA and more predictable production behavior.


