How to Use evolink Smart router: 5-Minute Setup for Unified AI Model Routing
Tutorial

EvoLink Team
Product Team
March 11, 2026
7 min read
As of March 11, 2026, the repository confirms EvoLink exposes an OpenAI-compatible chat completions endpoint at https://api.evolink.ai/v1/chat/completions and a base URL of https://api.evolink.ai/v1 across multiple integration guides. This tutorial uses those confirmed integration patterns and avoids undocumented promises about hidden routing logic, exact model pools, or account-specific discounts.
evolink Smart router is the smart-routing entry point inside the EvoLink unified API workflow. The practical value is not "automatic magic." The value is that your application can keep one integration surface while EvoLink handles model selection decisions inside the gateway layer.

Use it when you want to:

  • keep one OpenAI-compatible request format
  • reduce application-side switching between providers or model families
  • inspect the routed model in the API response instead of hard-coding one model everywhere
  • start with a flexible gateway path before pinning a fixed model for production-critical flows

If you already know the exact model, latency profile, and cost target you need, a fixed model ID is usually the cleaner choice.

What you need before you start

| Item | What to prepare | Why it matters |
| --- | --- | --- |
| EvoLink account | Sign in at evolink.ai | You need access to dashboard settings and billing |
| API key | Create one in the EvoLink dashboard | The gateway uses Bearer token authentication |
| Base URL | https://api.evolink.ai/v1 | Works with OpenAI-compatible SDK flows used elsewhere in the repo |
| Smart router model ID | evolink/auto | Use this model ID to enable smart routing through the gateway |

Your first request with curl

curl https://api.evolink.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "evolink/auto",
    "messages": [
      {
        "role": "user",
        "content": "Explain vector databases in one short paragraph."
      }
    ]
  }'
This request uses the same shape as a normal OpenAI-compatible chat completion call. The main difference is that the model field points to your smart-routing entry rather than a fixed provider model.

Python example

from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url="https://api.evolink.ai/v1"
)

response = client.chat.completions.create(
    model="evolink/auto",
    messages=[
        {
            "role": "user",
            "content": "Summarize the tradeoff between latency and model quality."
        }
    ]
)

print(response.model)
print(response.choices[0].message.content)

Node.js example

import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.EVOLINK_API_KEY,
  baseURL: "https://api.evolink.ai/v1",
});

const response = await client.chat.completions.create({
  model: "evolink/auto",
  messages: [
    {
      role: "user",
      content: "List three reasons teams use an AI gateway.",
    },
  ],
});

console.log(response.model);
console.log(response.choices[0].message.content);

How to read the response

A smart-router response still follows the familiar chat completions structure:

{
  "id": "chatcmpl-example",
  "object": "chat.completion",
  "created": 1773187200,
  "model": "provider/model-selected-at-runtime",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "A vector database stores embeddings so semantic search can retrieve similar content efficiently."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 17,
    "completion_tokens": 18,
    "total_tokens": 35
  }
}
The model field is the first thing to log in production. It tells you which routed model actually handled the request, which is useful for debugging, spend analysis, and deciding whether you should keep using the smart router or switch a workload to a fixed model.
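
As a minimal sketch of that logging step, a small helper can pull the routed model and token usage out of each response before you ship it to your metrics pipeline. The field names follow the standard chat completions shape shown above; `extract_routing_info` is a hypothetical name, not part of any SDK:

```python
import json

def extract_routing_info(response_dict):
    """Pull the routing fields worth logging from a chat completions response."""
    usage = response_dict.get("usage", {})
    return {
        "routed_model": response_dict.get("model"),
        "prompt_tokens": usage.get("prompt_tokens", 0),
        "completion_tokens": usage.get("completion_tokens", 0),
        "total_tokens": usage.get("total_tokens", 0),
    }

# Sample payload mirroring the response shown above.
sample = json.loads("""{
  "id": "chatcmpl-example",
  "model": "provider/model-selected-at-runtime",
  "usage": {"prompt_tokens": 17, "completion_tokens": 18, "total_tokens": 35}
}""")

info = extract_routing_info(sample)
print(info["routed_model"])  # provider/model-selected-at-runtime
```

Keeping this extraction in one place means every feature logs the same fields, which makes later cost comparisons much easier.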

Smart router vs fixed model

| Scenario | evolink Smart router | Fixed model ID |
| --- | --- | --- |
| Early prototyping | Strong fit | Usually unnecessary |
| Mixed workloads | Strong fit | Can become operationally noisy |
| Stable production path with strict QA | Possible, but verify carefully | Usually better |
| Cost tuning per use case | Good starting point | Better once you know the winning model |
| Provider failover strategy | Easier to centralize | You manage more logic in app code |

The pattern that usually scales best is simple:

  1. Start with evolink Smart router while the workload is still changing.
  2. Log the routed model value and compare cost, latency, and output quality.
  3. Pin a fixed model for flows that need tighter operational predictability.

What remains account-specific

The original draft included several product claims that should not be published as hard facts without a verified source tied to your account or official documentation. Treat these as items to confirm before you publish external promises:

  • the exact public model identifier for evolink Smart router
  • the exact size of the available routing pool
  • any promise about "no routing fee"
  • any percentage discount claim versus direct vendors
  • SLA statements such as 99.9% uptime
  • which advanced capabilities are guaranteed through the smart router for every routed model

If you want this article to become more sales-forward later, the clean way is to add those items only after they are documented in a first-party pricing page, product page, or official API doc.

Production checklist before rollout

| Check | Why to verify it |
| --- | --- |
| Confirm the router model ID | Prevents copy-paste errors from placeholder code |
| Test the response model field | Confirms routed model visibility for observability |
| Compare cost on real prompts | Effective pricing depends on selected models and workload shape |
| Measure latency by request type | Smart routing is only useful if it matches your user-facing SLA |
| Decide when to pin a fixed model | Some flows need deterministic output or narrower QA coverage |

Next steps

Once your first request works, the highest-value follow-up is not more sample code. It is instrumentation.

  • log response.model
  • store token usage by feature or route
  • compare smart-router traffic against one pinned-model baseline
  • review available fixed models in the EvoLink model catalog

That gives you the data needed to decide whether the gateway path is improving cost efficiency and production reliability for your actual workload.
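
The bullets above can be combined into a thin wrapper that tags every request with a feature name before recording it. The wrapper, its in-memory store, and the stub client below are all hypothetical sketch code; only the `chat.completions.create` call shape follows the examples earlier in this article:

```python
import time
from types import SimpleNamespace

# Hypothetical in-memory store; in production, send these to your metrics pipeline.
CALL_RECORDS = []

def tracked_completion(client, feature, **kwargs):
    """Call chat.completions.create and record routing data per feature."""
    start = time.monotonic()
    response = client.chat.completions.create(**kwargs)
    CALL_RECORDS.append({
        "feature": feature,
        "requested_model": kwargs.get("model"),
        "routed_model": response.model,
        "total_tokens": response.usage.total_tokens,
        "latency_ms": (time.monotonic() - start) * 1000,
    })
    return response

# Stub client so this sketch runs without network access or an API key.
class _StubCompletions:
    def create(self, **kwargs):
        return SimpleNamespace(
            model="provider/model-selected-at-runtime",
            usage=SimpleNamespace(total_tokens=35),
        )

stub_client = SimpleNamespace(chat=SimpleNamespace(completions=_StubCompletions()))

tracked_completion(stub_client, "summarize", model="evolink/auto",
                   messages=[{"role": "user", "content": "hi"}])
print(CALL_RECORDS[0]["routed_model"])  # provider/model-selected-at-runtime
```

With real traffic, swapping the stub for the `OpenAI` client from the Python example above gives you per-feature records you can feed into a comparison against a pinned-model baseline.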

FAQ

Is the Smart router the same as choosing a fixed model?

No. A fixed model ID keeps the selection decision in your application config. evolink Smart router keeps that decision in the gateway layer.

Do I need to rewrite my existing OpenAI integration?

No. Based on the repository's existing examples, the integration pattern stays OpenAI-compatible. You mainly change the base URL and model identifier.

Where do I find the correct Smart router model ID?

Confirm it in your EvoLink dashboard or official docs before publishing or shipping copy-paste code. The original draft did not include a locally verified identifier.

Is the Smart router a good default for new projects?

Yes, if your workload is still evolving and you want one gateway entry point. For tightly controlled flows, compare it against a pinned model before full rollout.

What should I log first after integration?

Log response.model, latency, token usage, and the feature name that triggered the request. Those four fields usually explain most routing and cost questions.

Does smart routing guarantee lower cost?

Not automatically. It can improve effective cost, but the result depends on your prompts, selected downstream models, and account configuration.

When should I switch from Smart router to a fixed model?

Switch when one workload has a clear winner on quality, latency, or cost and you want tighter QA and more predictable production behavior.

Ready to Reduce Your AI Costs?

Start using EvoLink today and experience the power of intelligent API routing.