
DeepSeek Chat API

DeepSeek Chat API is a high-performance general-purpose chat model built by DeepSeek. With a 128K context window and highly competitive pricing, it delivers strong results across coding, reasoning, and conversational tasks. Access it on EvoLink with the documented model enum.


PRICING

PLAN            CONTEXT WINDOW   MAX OUTPUT   INPUT    OUTPUT   CACHE READ
DeepSeek Chat   128.0K           8.2K         $0.278   $0.417   $0.028

All rates match the official DeepSeek prices.

Pricing Note: Prices are in USD per 1M tokens.

Cache Read: This rate applies to cached prompt tokens (cache hits).
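Under these rates, the cost of a single request is a few lines of arithmetic. A minimal sketch (the token counts below are illustrative, not benchmarks):

```python
# Rates in USD per 1M tokens, taken from the pricing table above.
INPUT_RATE = 0.278
OUTPUT_RATE = 0.417
CACHE_READ_RATE = 0.028

def request_cost(input_tokens, output_tokens, cached_tokens=0):
    """Estimate the USD cost of one DeepSeek Chat request.

    `cached_tokens` is the portion of input tokens served from the
    prompt cache; they are billed at the cache-read rate instead of
    the full input rate.
    """
    fresh = input_tokens - cached_tokens
    return (fresh * INPUT_RATE
            + cached_tokens * CACHE_READ_RATE
            + output_tokens * OUTPUT_RATE) / 1_000_000

# 10K input tokens (none cached) plus 1K output tokens:
print(round(request_cost(10_000, 1_000), 6))  # 0.003197 USD
```

The same request with a fully cached prompt drops to well under a quarter of that, which is why caching matters for high-volume workloads.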

DeepSeek Chat API for Cost-Effective AI

DeepSeek Chat (V3) delivers frontier-level performance at a fraction of the cost. With a 128K context window, prompt caching support, and strong coding and reasoning capabilities, it is an excellent choice for teams that need quality AI without premium pricing.


What can you build with the DeepSeek Chat API?

Conversational AI Assistants

DeepSeek Chat API excels at building intelligent chatbots and virtual assistants. Its strong instruction-following ability and natural language understanding make it ideal for customer support, knowledge bases, and interactive Q&A systems.


Code Generation & Analysis

DeepSeek Chat delivers competitive coding performance across multiple programming languages. Use it for code generation, debugging, code review, and technical documentation — all at a fraction of the cost of premium models.


Content Creation & Summarization

With its 128K context window, DeepSeek Chat can process long documents, generate structured content, and produce accurate summaries. It handles translation, copywriting, and report generation with high quality output.


Why teams choose the DeepSeek Chat API

DeepSeek Chat API combines strong general-purpose performance with highly competitive pricing, making quality AI accessible for teams of all sizes.

Highly Competitive Pricing

DeepSeek Chat offers frontier-level quality at significantly lower cost than comparable models.

128K Context Window

Process long documents, codebases, and multi-turn conversations with a generous 128K token context.

Prompt Caching Support

Reduce costs further with built-in prompt caching for repeated prefixes and system prompts.
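To get a feel for the savings, here is a rough back-of-the-envelope calculation using the input and cache-read rates from the pricing table (the workload numbers are made up for illustration):

```python
# Rates in USD per 1M tokens, from the pricing table on this page.
INPUT_RATE = 0.278
CACHE_READ_RATE = 0.028

def caching_savings(prefix_tokens, requests):
    """USD saved when a shared prompt prefix is cached across requests.

    After the first request, the shared prefix is billed at the
    cache-read rate instead of the full input rate.
    """
    cached_requests = max(requests - 1, 0)
    per_request = prefix_tokens * (INPUT_RATE - CACHE_READ_RATE) / 1_000_000
    return cached_requests * per_request

# A 2,000-token system prompt reused across 1,000 requests:
print(round(caching_savings(2_000, 1_000), 4))
```

The cache-read rate is roughly a tenth of the input rate, so workloads dominated by a long, fixed system prompt see the largest reduction.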

How to integrate the DeepSeek Chat API

Use the EvoLink API with the documented model enum and required fields. DeepSeek Chat is fully compatible with the OpenAI SDK — just change the base URL.


Step 1 — Authenticate

Create an EvoLink API key and send requests with Bearer token authentication.


Step 2 — Set required fields

Provide `model: deepseek-chat` and a `messages` array of objects with `role` and `content` fields.


Step 3 — Tune outputs

Adjust `temperature`, `top_p`, `max_tokens`, `stop`, `stream`, and other parameters for your use case.
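Putting the three steps together, a request can be assembled like this. This is a minimal sketch: the base URL and API key are placeholders, and the real endpoint comes from your EvoLink dashboard.

```python
import json

# Placeholder values -- substitute your actual EvoLink endpoint and key.
BASE_URL = "https://api.evolink.example/v1"
API_KEY = "YOUR_EVOLINK_API_KEY"

def build_request(messages, temperature=0.7, max_tokens=1024, stream=False):
    """Assemble headers and body for a DeepSeek Chat call."""
    headers = {
        "Authorization": f"Bearer {API_KEY}",  # Step 1: Bearer auth
        "Content-Type": "application/json",
    }
    body = {
        "model": "deepseek-chat",              # Step 2: required model enum
        "messages": messages,                  # Step 2: role/content pairs
        "temperature": temperature,            # Step 3: sampling controls
        "max_tokens": max_tokens,
        "stream": stream,
    }
    return headers, body

headers, body = build_request(
    [{"role": "system", "content": "You are a helpful assistant."},
     {"role": "user", "content": "Summarize prompt caching in one line."}]
)
# Send with any HTTP client, e.g.:
#   requests.post(f"{BASE_URL}/chat/completions", headers=headers, json=body)
print(json.dumps(body, indent=2))
```

Because the payload follows the OpenAI chat-completions shape, you can also point the official OpenAI SDK at the same endpoint by changing its base URL and setting the model to `deepseek-chat`.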

Core DeepSeek Chat API capabilities

Model facts from DeepSeek, plus EvoLink access details

Performance

Frontier-Level Performance

DeepSeek Chat (V3) delivers competitive results on major benchmarks, rivaling models that cost significantly more.

Context

128K Context Window

Process long documents, extensive codebases, and complex multi-turn conversations within a single request.

Caching

Prompt Caching

Built-in prompt caching reduces costs for repeated system prompts and prefixes, ideal for production workloads.

Compatibility

OpenAI SDK Compatible

Fully compatible with the OpenAI SDK. Switch to DeepSeek Chat by changing the base URL and model name — no code rewrite needed.

Coding

Strong Coding Ability

Competitive performance on coding benchmarks across Python, JavaScript, TypeScript, and other popular languages.

Languages

Multilingual Support

Strong performance in both English and Chinese, with solid capabilities across other major languages.

DeepSeek Chat API FAQs

Everything you need to know about the product and billing.

What is DeepSeek Chat?

DeepSeek Chat (also known as DeepSeek V3) is a high-performance general-purpose language model developed by DeepSeek. It delivers competitive results on major benchmarks while being significantly more cost-effective than comparable models from OpenAI and Anthropic.

What context window does DeepSeek Chat support?

DeepSeek Chat supports a 128K token context window, allowing it to process long documents, extensive code files, and complex multi-turn conversations in a single request.

Does DeepSeek Chat support prompt caching?

Yes. DeepSeek Chat supports prompt caching, which reduces costs when you repeatedly send the same system prompts or prefixes. Cached tokens are billed at a lower rate than regular input tokens.

Is DeepSeek Chat compatible with the OpenAI SDK?

Yes. EvoLink provides an OpenAI-compatible API endpoint. You can use the OpenAI SDK by changing the base URL to your EvoLink endpoint and setting the model to `deepseek-chat`.

How do I call DeepSeek Chat on EvoLink?

Use the model enum `deepseek-chat` in the request body. EvoLink will route the request to the DeepSeek Chat model through the optimal provider.

What is DeepSeek Chat best suited for?

DeepSeek Chat excels at general-purpose chat, code generation and analysis, content creation, document summarization, translation, and Q&A systems. Its competitive pricing makes it especially suitable for high-volume production workloads.