Doubao Seed 2.0 API
$0.028-$0.111 (~2-8 credits) per 1M input tokens; $0.278-$1.111 (~20-80 credits) per 1M output tokens
$0.0056 (~0.4 credits) per 1M cache read tokens
Cache storage is charged separately, per hour.
Highest stability with guaranteed 99.9% uptime. Recommended for production environments.
Use the same API endpoint for all versions. Only the model parameter differs.
Doubao Seed 2.0 Mini API pricing and access
Use Doubao Seed 2.0 Mini through a single EvoLink API key. This page focuses on the Mini variant: higher-throughput inference, lower latency, and production systems that need speed more than premium reasoning depth.

What can you build with Doubao Seed 2.0 Mini?
Real-Time Classification and Routing
Use Seed 2.0 Mini for lightweight intent routing, moderation, classification, and decision layers that need quick responses at scale.
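A routing layer like this can be sketched as a small payload builder. The request shape below follows the OpenAI-compatible convention this page describes; only the model ID `doubao-seed-2.0-mini` comes from this page, and in production you would POST the payload to your EvoLink endpoint.

```python
import json

def build_routing_request(user_message: str) -> dict:
    """Build an OpenAI-compatible chat payload that classifies user intent."""
    return {
        "model": "doubao-seed-2.0-mini",
        "messages": [
            {
                "role": "system",
                "content": "Classify the user's intent as one of: billing, "
                           "support, sales. Reply with the label only.",
            },
            {"role": "user", "content": user_message},
        ],
        "temperature": 0.0,  # deterministic labels keep routing stable
        "max_tokens": 5,     # a single label needs only a few tokens
    }

payload = build_routing_request("My invoice shows the wrong amount")
print(json.dumps(payload, indent=2))
```

Keeping `temperature` at 0 and `max_tokens` small is a common pattern for classification calls: it bounds both latency and output-token spend.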

Batch Processing Pipelines
Mini works well for large batches of short text tasks, including extraction, tagging, summarization, and operational processing where throughput drives infrastructure decisions.
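For batch pipelines, one common shape is to build one request per short text and fan them out with a thread pool. This is a minimal sketch under that assumption; in production each worker would POST its payload to the EvoLink endpoint, which is omitted here so the example stays self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

def build_tagging_request(text: str) -> dict:
    """One OpenAI-compatible tagging request per input text."""
    return {
        "model": "doubao-seed-2.0-mini",
        "messages": [
            {"role": "system",
             "content": "Return a comma-separated list of topic tags."},
            {"role": "user", "content": text},
        ],
        "max_tokens": 32,  # short tag lists keep per-item output cost low
    }

texts = [
    "Order #123 arrived damaged",
    "How do I reset my password?",
]

# Fan out payload construction (and, in a real pipeline, the HTTP calls)
# across a worker pool to drive throughput.
with ThreadPoolExecutor(max_workers=8) as pool:
    payloads = list(pool.map(build_tagging_request, texts))

print(len(payloads), "requests prepared")
```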

Latency-Sensitive Product Features
Choose Mini when your product needs quick replies for UI-level AI features, routing layers, or supporting model calls that should not add heavy latency.
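Latency-sensitive UI features usually stream tokens so partial output renders immediately. The sketch below only shows the payload shape; `stream` is the standard OpenAI-compatible flag, and handling the resulting server-sent-event chunks is left out.

```python
def build_streaming_request(prompt: str) -> dict:
    """Chat payload that asks the server to stream tokens incrementally."""
    return {
        "model": "doubao-seed-2.0-mini",
        "messages": [{"role": "user", "content": prompt}],
        "stream": True,     # server sends incremental chunks instead of one body
        "max_tokens": 256,
    }

req = build_streaming_request("Suggest a subject line for this email")
print(req["stream"])
```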

Why teams choose Doubao Seed 2.0 Mini
Seed 2.0 Mini is the practical variant when you optimize for speed, throughput, and large-scale production execution rather than the highest-quality model output.
Built for fast production paths
Mini is easier to justify in systems where response speed and request volume matter more than premium reasoning.
Lower-friction routing on one gateway
Switch between Mini and other Seed 2.0 variants without rebuilding your API integration or provider logic.
Useful for scalable supporting tasks
Use Mini for routing, preprocessing, and operational text tasks around a larger model stack.
How to integrate Doubao Seed 2.0 Mini API
Use the EvoLink API with the Mini model ID. Migration usually amounts to swapping the base URL and setting `model` to `doubao-seed-2.0-mini` in your existing OpenAI-compatible workflow.
Step 1 — Authenticate
Create an EvoLink API key and send requests with Bearer token authentication from your app, agent, or backend service.
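The Bearer-token headers for Step 1 look like standard HTTP auth. A minimal sketch, assuming the key is read from an environment variable (the variable name `EVOLINK_API_KEY` is illustrative, not documented here):

```python
import os

def auth_headers(api_key: str) -> dict:
    """Standard Bearer-auth headers for an OpenAI-compatible JSON API."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

# Falls back to a dummy key so the sketch runs without configuration.
headers = auth_headers(os.environ.get("EVOLINK_API_KEY", "sk-example"))
print(headers["Authorization"].startswith("Bearer "))
```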
Step 2 — Use the Mini model ID
Set model to `doubao-seed-2.0-mini` when you want the Mini variant on EvoLink.
Step 3 — Tune outputs
Adjust temperature, top_p, max_tokens, stream, and other parameters. Prompt-length tiers and cache billing are applied automatically by the API pricing model.
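The tuning parameters in Step 3 slot into the request body directly. The defaults below are illustrative, not provider recommendations:

```python
def tuned_request(prompt: str, *, temperature: float = 0.7, top_p: float = 0.9,
                  max_tokens: int = 512, stream: bool = False) -> dict:
    """Chat payload with the sampling knobs from Step 3 exposed as arguments."""
    return {
        "model": "doubao-seed-2.0-mini",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,  # higher values produce more varied output
        "top_p": top_p,              # nucleus-sampling cutoff
        "max_tokens": max_tokens,    # hard cap on output length (and spend)
        "stream": stream,
    }

# Low temperature and a tight token cap suit deterministic operational tasks.
req = tuned_request("Summarize this ticket", temperature=0.2, max_tokens=128)
print(req["temperature"], req["max_tokens"])
```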
Core Doubao Seed 2.0 Mini API capabilities
Model facts for the lightweight Seed 2.0 variant, plus EvoLink pricing and access details
High-throughput model profile
Mini is the Seed 2.0 variant built for faster request handling, lighter workloads, and large-scale production traffic.
256K Context Window
Use Mini with long-context support when your system still needs larger prompts but prioritizes speed and throughput.
Prompt-Length Pricing
Mini pricing still scales by prompt length, helping teams optimize both latency and spend.
Cache Hit Billing
Cache hits reduce the cost of repeated prompts in routing, classification, and other repetitive operational flows.
OpenAI-Compatible Access
Use Mini through the same EvoLink gateway with minimal changes to an existing OpenAI-compatible stack.
Low-latency product fit
Mini is a strong fit for routing, classification, batch processing, and latency-sensitive supporting model calls.
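The OpenAI-compatible access above can be sketched with only the Python standard library. The base URL is a placeholder you must replace, and the `/chat/completions` path is an assumption based on OpenAI-compatible conventions rather than documented here; the request is built but not sent.

```python
import json
import urllib.request

def make_chat_request(base_url: str, api_key: str, payload: dict) -> urllib.request.Request:
    """Build (but do not send) a POST to an OpenAI-compatible chat endpoint."""
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = make_chat_request(
    "https://YOUR_EVOLINK_BASE_URL/v1",  # placeholder base URL
    "sk-example",
    {"model": "doubao-seed-2.0-mini",
     "messages": [{"role": "user", "content": "ping"}]},
)
print(req.get_method(), req.full_url)
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) requires a real key and endpoint, which is why this sketch stops at construction.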
Doubao Seed 2.0 Mini API FAQs
Everything you need to know about the product and billing.