
Doubao Seed 2.0 API

Doubao Seed 2.0 is ByteDance's latest large language model series, available in four variants: Pro, Lite, Mini, and Code. With a context window of up to 256K tokens, tiered pricing by prompt length, and cache-billing support, it delivers strong performance at competitive cost. Access it on EvoLink with the documented model enums.

Doubao Seed 2.0 API for Flexible AI

ByteDance's Seed 2.0 series offers up to a 256K-token context window with tiered pricing based on prompt length (≤32K / ≤128K / ≤256K). Save more with cache-hit billing, and choose from four variants (Pro, Lite, Mini, and Code) to match your exact workload.


PRICING

Plan: Doubao Seed 2.0 Lite (Context Window: 256K, Max Output: 128K)

Prompt Tier    Input     Output    Cache Read
≤32K           $0.083    $0.500    $0.017
≤128K          $0.125    $0.750    $0.017
≤256K          $0.250    $1.500    $0.017
Cache Storage

Cached prompt tokens are billed by storage duration.

$0.002/1M tokens/hour

Pricing note: all prices are in USD per 1M tokens.

Cache hit: the cache-read price applies to cached prompt tokens.
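The cache storage rate above can be turned into a quick cost estimate. A minimal sketch in Python; the `storage_cost_usd` helper and its example figures are illustrative, not part of the official billing API:

```python
# Cache storage: $0.002 per 1M tokens per hour (rate from the note above).
STORAGE_RATE_PER_M_PER_HOUR = 0.002

def storage_cost_usd(cached_tokens: int, hours: float) -> float:
    """Estimate the cost of keeping `cached_tokens` in the prompt cache for `hours`."""
    return cached_tokens / 1_000_000 * STORAGE_RATE_PER_M_PER_HOUR * hours

# Example: a 10K-token system prompt cached for a full day.
print(storage_cost_usd(10_000, 24))
```

Even a day of storage for a sizable prefix costs a fraction of a cent, so the cache-read discount usually dominates the storage charge.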

What can you build with the Doubao Seed 2.0 API?

General-Purpose Conversations

Doubao Seed 2.0 Pro excels at building intelligent chatbots and knowledge-intensive assistants. Its strong instruction-following ability and long context window make it ideal for customer support, enterprise Q&A, and interactive content generation.


Code Generation & Development

Doubao Seed 2.0 Code is purpose-built for software engineering tasks. Use it for code generation, debugging, code review, and technical documentation across multiple programming languages with competitive performance.


Lightweight Inference & Efficiency

Doubao Seed 2.0 Lite and Mini variants deliver efficient performance for high-throughput scenarios. Perfect for batch processing, real-time classification, and cost-sensitive production workloads where speed matters most.


Why teams choose the Doubao Seed 2.0 API

Doubao Seed 2.0 combines high-quality LLM performance with flexible tiered pricing and cache billing, making it one of the most cost-effective model series for production workloads.

High Cost-Effectiveness

Competitive performance at a fraction of the cost of premium models, with four variants to match your budget and quality requirements.

Flexible Tiered Pricing

Pay based on actual prompt length — shorter prompts (≤32K) cost less than longer ones (128K/256K), so you only pay for what you use.

Cache Billing Saves More

Built-in cache hit billing reduces costs for repeated system prompts and prefixes by up to 80%, ideal for production workloads.

How to integrate the Doubao Seed 2.0 API

Use the EvoLink API with the documented model enum. Doubao Seed 2.0 is fully compatible with the OpenAI SDK — just change the base URL and model name.

Step 1 — Authenticate

Create an EvoLink API key and send requests with Bearer token authentication.

Step 2 — Choose your variant

Set model to doubao-seed-2.0-pro, doubao-seed-2.0-lite, doubao-seed-2.0-mini, or doubao-seed-2.0-code based on your use case.

Step 3 — Tune outputs

Adjust temperature, top_p, max_tokens, stream, and other parameters. Pricing tiers are applied automatically based on prompt length.
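The three steps above can be sketched in Python. This is a minimal standard-library sketch; the base URL, endpoint path, and `EVOLINK_API_KEY` environment-variable name are placeholder assumptions (check your EvoLink dashboard), and in practice you would more likely point the OpenAI SDK at the same endpoint:

```python
import json
import os
import urllib.request

# Assumptions: base URL and env-var name are placeholders, not documented values.
BASE_URL = "https://api.evolink.ai/v1"
API_KEY = os.environ.get("EVOLINK_API_KEY", "sk-placeholder")

def build_chat_request(model: str, prompt: str, **params) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request with Bearer authentication."""
    payload = {
        "model": model,  # e.g. doubao-seed-2.0-pro / -lite / -mini / -code
        "messages": [{"role": "user", "content": prompt}],
        **params,        # temperature, top_p, max_tokens, stream, ...
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
    )

req = build_chat_request(
    "doubao-seed-2.0-lite", "Summarize this support ticket.",
    temperature=0.7, max_tokens=1024,
)
# urllib.request.urlopen(req) would send it; the pricing tier is selected
# server-side from the prompt's token count, so nothing extra is needed here.
```

With the OpenAI SDK the same flow is just `OpenAI(base_url=..., api_key=...)` plus a `chat.completions.create(...)` call with the variant name as `model`.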

Core Doubao Seed 2.0 API capabilities

Model facts from ByteDance Volcengine, plus EvoLink access details

Context

256K Context Window

Process long documents, extensive codebases, and complex multi-turn conversations within a single request with up to 256K token context.

Pricing

Tiered Pricing by Length

Smart pricing tiers based on prompt length (≤32K, ≤128K, ≤256K). Shorter prompts cost less — no need to pay premium rates for simple queries.

Caching

Cache Hit Billing

Reduce costs by up to 80% with built-in cache hit billing for repeated system prompts and prefixes, ideal for production workloads.

Variants

4 Model Variants

Choose from Pro (full power), Lite (balanced), Mini (lightweight), and Code (programming-focused) to match your exact workload requirements.

Compatibility

OpenAI SDK Compatible

Fully compatible with the OpenAI SDK. Switch to Seed 2.0 by changing the base URL and model name — no code rewrite needed.

Languages

Multilingual Support

Strong performance in both Chinese and English, with solid capabilities across other major languages for global deployment.

Doubao Seed 2.0 API FAQs

Everything you need to know about the product and billing.

What is Doubao Seed 2.0?

Doubao Seed 2.0 is ByteDance's latest large language model series, available in four variants: Pro (full-featured, best quality), Lite (balanced performance and cost), Mini (lightweight, fastest inference), and Code (optimized for programming tasks). All variants support up to a 256K context window.
How do the four variants differ?

Pro offers the highest quality for complex tasks. Lite provides a good balance of quality and speed. Mini is optimized for high-throughput, low-latency scenarios. Code is specifically tuned for programming tasks like code generation, debugging, and review. Choose based on your quality requirements and budget.
How does tiered pricing work?

Seed 2.0 uses three pricing tiers based on prompt (input) length: ≤32K tokens, ≤128K tokens, and ≤256K tokens. Shorter prompts are billed at lower rates. For example, a prompt under 32K tokens costs less per token than one between 32K and 128K. This ensures you only pay for the context capacity you actually use.
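The tier selection just described can be sketched as a small estimator, using the Lite input rates from the pricing table. A sketch only; actual billing is computed server-side, and the helper name is illustrative:

```python
# Input-price tiers for Doubao Seed 2.0 Lite (USD per 1M tokens),
# taken from the pricing table: <=32K, <=128K, <=256K prompt lengths.
LITE_INPUT_TIERS = [
    (32_000, 0.083),
    (128_000, 0.125),
    (256_000, 0.250),
]

def input_cost_usd(prompt_tokens: int) -> float:
    """Estimate input cost: the whole prompt is billed at its tier's rate."""
    for tier_limit, rate_per_m in LITE_INPUT_TIERS:
        if prompt_tokens <= tier_limit:
            return prompt_tokens * rate_per_m / 1_000_000
    raise ValueError("prompt exceeds the 256K context window")

# A 20K-token prompt lands in the cheapest (<=32K) tier.
print(input_cost_usd(20_000))
```

Crossing a tier boundary changes the per-token rate for the whole prompt, so trimming a prompt from just over 32K to just under it can cut the input bill by roughly a third.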
Does Seed 2.0 support cache billing?

Seed 2.0 supports cache hit billing. When you repeatedly send the same system prompts or prefixes, cached tokens are billed at a significantly lower rate (up to 80% savings) compared to regular input tokens. This is especially beneficial for production workloads with consistent system prompts.
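The savings figure above can be checked against the Lite pricing table (≤32K input at $0.083/1M versus cache read at $0.017/1M). A minimal sketch; the helper name is illustrative:

```python
# Rates from the Lite pricing table (USD per 1M tokens, <=32K tier).
REGULAR_INPUT_RATE = 0.083
CACHE_READ_RATE = 0.017

def cached_prompt_cost(total_prompt_tokens: int, cached_tokens: int) -> float:
    """Input cost when `cached_tokens` of the prompt hit the cache."""
    fresh = total_prompt_tokens - cached_tokens
    return (fresh * REGULAR_INPUT_RATE + cached_tokens * CACHE_READ_RATE) / 1_000_000

discount = 1 - CACHE_READ_RATE / REGULAR_INPUT_RATE
print(f"cache-read discount: {discount:.0%}")  # roughly 80%, as stated above
```

So a request whose system prompt is fully cached pays about a fifth of the regular input rate on those tokens, which is where the "up to 80%" figure comes from.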
Which model names should I use in API requests?

Use these model enums in your API requests: doubao-seed-2.0-pro (Pro), doubao-seed-2.0-lite (Lite), doubao-seed-2.0-mini (Mini), or doubao-seed-2.0-code (Code). EvoLink will route the request through the optimal provider.
Is the API compatible with the OpenAI SDK?

Yes. EvoLink provides an OpenAI-compatible API endpoint. You can use the OpenAI SDK by changing the base URL to your EvoLink endpoint and setting the model to the appropriate Seed 2.0 variant (e.g., doubao-seed-2.0-pro).
What are the context window and maximum output limits?

Seed 2.0 models support a maximum output of 128,000 tokens (128K) per request. The context window supports up to 256K tokens for input.
Which variant fits which use case?

Pro: complex reasoning, long-form content, enterprise assistants. Lite: general chat, content creation, moderate-complexity tasks. Mini: real-time responses, batch classification, high-throughput APIs. Code: code generation, debugging, code review, technical documentation.