
Doubao Seed 2.0 API

Doubao Seed 2.0 is ByteDance's latest model family for production text workloads. EvoLink gives you one OpenAI-compatible endpoint for Pro, Lite, Mini, and Code, so you can choose the right model for quality, cost, latency, or coding tasks without changing providers.
Price: 

$0.028-$0.111 (~2-8 credits) per 1M input tokens; $0.278-$1.111 (~20-80 credits) per 1M output tokens

$0.0056 (~0.4 credits) per 1M cache read tokens

Cache storage charged separately per hour.

Highest stability with guaranteed 99.9% uptime. Recommended for production environments.

Use the same API endpoint for all versions. Only the model parameter differs.
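A minimal sketch of that idea, assuming a placeholder endpoint URL (substitute the one from your EvoLink dashboard): two requests to the same endpoint that differ only in the `model` field.

```python
# Placeholder endpoint -- replace with the URL from your EvoLink dashboard.
BASE_URL = "https://api.evolink.example/v1/chat/completions"  # hypothetical

def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-compatible chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same endpoint, same body shape -- only the model enum changes.
pro_request = build_payload("doubao-seed-2.0-pro", "Summarize this ticket.")
mini_request = build_payload("doubao-seed-2.0-mini", "Summarize this ticket.")
```

Routing between variants is then a one-field change rather than a separate integration.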

Doubao Seed 2.0 Mini API pricing and access

Use Doubao Seed 2.0 Mini through one EvoLink API key. This page focuses on the Mini variant for faster high-throughput inference, lower latency, and production systems that need speed more than premium reasoning depth.


What can you build with Doubao Seed 2.0 Mini?

Real-Time Classification and Routing

Use Seed 2.0 Mini for lightweight intent routing, moderation, classification, and decision layers that need quick responses at scale.


Batch Processing Pipelines

Mini works well for large batches of short text tasks, including extraction, tagging, summarization, and operational processing where throughput drives infrastructure decisions.


Latency-Sensitive Product Features

Choose Mini when your product needs quick replies for UI-level AI features, routing layers, or supporting model calls that should not add heavy latency.


Why teams choose Doubao Seed 2.0 Mini

Seed 2.0 Mini is the practical variant when you optimize for speed, throughput, and large-scale production execution rather than the highest-quality model output.

Built for fast production paths

Mini is easier to justify in systems where response speed and request volume matter more than premium reasoning.

Lower-friction routing on one gateway

Switch between Mini and other Seed 2.0 variants without rebuilding your API integration or provider logic.

Useful for scalable supporting tasks

Use Mini for routing, preprocessing, and operational text tasks around a larger model stack.

How to integrate Doubao Seed 2.0 Mini API

Use the EvoLink API with the Mini model enum. Migration is usually a base URL swap plus `doubao-seed-2.0-mini` in your existing OpenAI-compatible workflow.

Step 1 — Authenticate

Create an EvoLink API key and send requests with Bearer token authentication from your app, agent, or backend service.

Step 2 — Use the Mini model ID

Set model to `doubao-seed-2.0-mini` when you want the Mini variant on EvoLink.

Step 3 — Tune outputs

Adjust temperature, top_p, max_tokens, stream, and other parameters. Prompt-length tiers and cache billing are applied automatically in the API pricing model.
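The three steps above can be sketched as one request builder. This is illustrative only: the endpoint URL is a placeholder, and `EVOLINK_API_KEY` is an assumed environment variable name, not a documented one.

```python
import os

# Placeholder endpoint -- use the URL from your EvoLink dashboard.
ENDPOINT = "https://api.evolink.example/v1/chat/completions"  # hypothetical

def build_mini_request(prompt: str, api_key: str) -> tuple[dict, dict]:
    """Assemble headers and body for a Doubao Seed 2.0 Mini call."""
    headers = {
        "Authorization": f"Bearer {api_key}",  # Step 1: Bearer token auth
        "Content-Type": "application/json",
    }
    payload = {
        "model": "doubao-seed-2.0-mini",       # Step 2: Mini model ID
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.3,                    # Step 3: tune outputs
        "top_p": 0.9,
        "max_tokens": 512,
        "stream": False,
    }
    return headers, payload

headers, payload = build_mini_request(
    "Classify this message: 'refund request'",
    os.environ.get("EVOLINK_API_KEY", "sk-test"),
)
# Send with any HTTP client, e.g.:
# requests.post(ENDPOINT, headers=headers, json=payload, timeout=30)
```

Prompt-length tiers and cache billing need no extra parameters; they are applied server-side at billing time.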

Core Doubao Seed 2.0 Mini API capabilities

Model facts for the lightweight Seed 2.0 variant, plus EvoLink pricing and access details

Throughput

High-throughput model profile

Mini is the Seed 2.0 variant built for faster request handling, lighter workloads, and large-scale production traffic.

Context

256K Context Window

Use Mini with long-context support when your system still needs larger prompts but prioritizes speed and throughput.

Pricing

Prompt-Length Pricing

Mini pricing still scales by prompt length, helping teams optimize both latency and spend.
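A minimal sketch of that tier lookup, assuming the 32K / 128K / 256K prompt-length tiers described in the FAQ below; the per-tier dollar rates themselves come from the pricing table, not this snippet.

```python
def prompt_tier(prompt_tokens: int) -> str:
    """Map a prompt length to its Seed 2.0 pricing tier."""
    if prompt_tokens <= 32_000:
        return "32K"
    if prompt_tokens <= 128_000:
        return "128K"
    if prompt_tokens <= 256_000:
        return "256K"
    raise ValueError("prompt exceeds the 256K context window")
```

Keeping routine prompts inside the 32K tier is the simplest lever for both latency and spend.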

Caching

Cache Hit Billing

Lower repeated prompt cost in routing, classification, and repeated operational flows.
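As a rough illustration of the savings, here is a sketch comparing input-side cost with and without cache hits, using the low-tier rates quoted in the pricing section above ($0.028 per 1M input tokens, $0.0056 per 1M cache-read tokens); hourly cache storage fees are billed separately and omitted.

```python
INPUT_RATE = 0.028 / 1_000_000   # USD per input token (low tier, from pricing above)
CACHE_RATE = 0.0056 / 1_000_000  # USD per cache-read token

def input_cost(total_tokens: int, cached_tokens: int = 0) -> float:
    """Input-side cost when `cached_tokens` of the prompt hit the cache."""
    fresh = total_tokens - cached_tokens
    return fresh * INPUT_RATE + cached_tokens * CACHE_RATE

# A routing workload with an 800K-token reused prefix out of 1M input tokens:
no_cache = input_cost(1_000_000)             # ~0.028 USD
with_cache = input_cost(1_000_000, 800_000)  # ~0.01008 USD, roughly 64% cheaper
```

The more stable the prompt scaffolding, the closer real workloads get to the cached rate.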

Compatibility

OpenAI-Compatible Access

Use Mini through the same EvoLink gateway with minimal changes to an existing OpenAI-compatible stack.

Use Cases

Low-latency product fit

Mini is a strong fit for routing, classification, batch processing, and latency-sensitive supporting model calls.

Doubao Seed 2.0 Mini API FAQs

Everything you need to know about the product and billing.

What is the Doubao Seed 2.0 Mini API?

Doubao Seed 2.0 Mini is the lightweight, high-throughput model in the Seed 2.0 family. On EvoLink it is available through the same OpenAI-compatible API gateway as the other Seed 2.0 variants.

When should I choose Mini?

Choose Mini when your workload is latency-sensitive or high-volume, such as routing, classification, batch processing, and lightweight product features that need quick response times.

How is Seed 2.0 usage priced?

Seed 2.0 uses length-based pricing tiers for input and output. Requests within the 32K prompt-length tier are priced lower than the 128K or 256K tiers, so cost scales with how much context you actually consume. On EvoLink, the pricing section on this page is the canonical place to compare costs by variant.

Does Seed 2.0 support prompt caching?

Seed 2.0 supports cache hit billing. When the same system prompts or prompt prefixes are reused, cached tokens are billed below standard input rates. That makes a visible difference in production workloads with repeated prompt scaffolding, agent prefixes, or stable enterprise instructions.

Which model IDs should I use?

Use these model enums in your API requests: doubao-seed-2.0-pro, doubao-seed-2.0-lite, doubao-seed-2.0-mini, and doubao-seed-2.0-code. EvoLink keeps access unified behind one API gateway, so variant selection happens through the model field rather than a separate vendor integration.

Is the API OpenAI-compatible?

Yes. EvoLink exposes an OpenAI-compatible API endpoint for Seed 2.0. In most integrations, migration means changing the base URL, using your EvoLink API key, and selecting the right Seed 2.0 model enum.

What are the context and output limits?

Seed 2.0 models support a maximum output of 128,000 tokens (128K) per request. The context window supports up to 256K tokens of input.

Which variant fits which workload?

Pro fits premium assistants, long-form generation, and higher-quality general tasks. Lite fits cost-aware chat and content workloads. Mini fits real-time responses, routing, classification, and large-scale throughput. Code fits software engineering, code review, debugging, and developer tooling.
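That variant guidance can be condensed into a small routing table. The task categories below are illustrative labels, not an EvoLink API concept; the model IDs are the enums listed earlier.

```python
# Illustrative task-to-variant routing based on the guidance above.
VARIANT_BY_TASK = {
    "premium_assistant": "doubao-seed-2.0-pro",
    "long_form": "doubao-seed-2.0-pro",
    "cost_aware_chat": "doubao-seed-2.0-lite",
    "content": "doubao-seed-2.0-lite",
    "routing": "doubao-seed-2.0-mini",
    "classification": "doubao-seed-2.0-mini",
    "code_review": "doubao-seed-2.0-code",
    "debugging": "doubao-seed-2.0-code",
}

def pick_variant(task: str) -> str:
    """Return the Seed 2.0 model enum for a task category (default: Mini)."""
    return VARIANT_BY_TASK.get(task, "doubao-seed-2.0-mini")
```

Because all four variants share one gateway, a table like this is the entire routing layer: no per-vendor client code is needed.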