DeepSeek V4 Is Live in Preview: Official API Models, Pricing, and What Changed
Product Launch

Jessie
COO
April 6, 2026
8 min read
Updated April 24, 2026

This article was originally published as a release-watch update on April 6, 2026. It has now been fully revised to reflect DeepSeek's official V4 preview launch, including public API model IDs, official pricing, 1M context, and the deprecation path for legacy aliases.

If you're searching for DeepSeek V4, the answer is no longer "wait and see." As of April 24, 2026, DeepSeek's official API docs now list deepseek-v4-flash and deepseek-v4-pro as available model IDs, and the official pricing page now documents 1M context, 384K max output, and public API pricing for both variants. Separately, Reuters reported on April 24, 2026 that DeepSeek launched preview versions of V4 and is using that preview period to gather real-world feedback before finalizing the model. (Sources: DeepSeek API Docs; DeepSeek Models & Pricing; Reuters via Investing.com)

That means the correct operating posture has changed:

  • DeepSeek V4 is now publicly available in preview through the API
  • Pricing and model IDs are now officially documented
  • Preview does not mean final GA behavior is locked

If you want the product page and current route details, start with the DeepSeek V4 API page.

What changed since April 6, 2026

On April 6, 2026, the official DeepSeek API docs still listed only deepseek-chat and deepseek-reasoner, both mapped to DeepSeek-V3.2 with a 128K context limit. That is no longer the current state.
As of April 24, 2026, the official updates are:
  • New official API model IDs are listed: deepseek-v4-flash and deepseek-v4-pro now appear directly in DeepSeek's quick-start docs. DeepSeek API Docs
  • Official V4 pricing is now public: the pricing page now includes separate API pricing for Flash and Pro. DeepSeek Models & Pricing
  • Official V4 context and output limits are now public: the pricing page lists 1M context length and 384K max output for both V4 variants. DeepSeek Models & Pricing
  • Legacy aliases now have a deprecation path: deepseek-chat and deepseek-reasoner are marked for deprecation on July 24, 2026, and the docs state that they map to the non-thinking and thinking modes of deepseek-v4-flash for compatibility. DeepSeek API Docs
  • DeepSeek is treating the launch as a preview phase: Reuters reported on April 24, 2026 that the V4 release is a preview and that DeepSeek did not provide a timeline for finalization. Reuters via Investing.com

The short version is simple: the DeepSeek V4 story has moved from release-watch mode to preview-launch mode.

What is officially available now

The table below uses only what is currently documented on official DeepSeek API pages.

| Topic | Officially documented as of April 24, 2026 |
| --- | --- |
| Public API models | deepseek-v4-flash, deepseek-v4-pro |
| Base URL | https://api.deepseek.com |
| Context length | 1M |
| Max output | 384K |
| Thinking mode | Supported |
| Tool calls | Supported |
| FIM completion | Non-thinking mode only |
| deepseek-chat / deepseek-reasoner status | Deprecated on July 24, 2026; compatibility aliases to deepseek-v4-flash modes |
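Assuming the V4 endpoints keep the OpenAI-compatible, chat-completions-style request shape that DeepSeek's existing API uses, a first request against the documented model IDs can be sketched as below. The helper function and payload builder are illustrative, not from DeepSeek's docs; check the official quick-start before relying on any field:

```python
# Model IDs and base URL as documented on the official pages above.
V4_MODELS = {"deepseek-v4-flash", "deepseek-v4-pro"}
BASE_URL = "https://api.deepseek.com"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a chat-completion payload for one of the documented V4 model IDs.

    Rejecting unknown IDs up front catches typos like the old
    deepseek-v3 naming before they hit the network.
    """
    if model not in V4_MODELS:
        raise ValueError(f"not a documented V4 model ID: {model}")
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("deepseek-v4-flash", "Summarize this ticket in one line.")
# POST this as JSON to f"{BASE_URL}/chat/completions" with your API key
# in an Authorization header (endpoint path assumed from the existing API shape).
```

Validating against a fixed allow-list also gives you one obvious place to update when the preview model list changes.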

Official pricing for DeepSeek V4 Flash and Pro

DeepSeek's pricing page now lists public API pricing for both V4 variants.

| Model | Input (cache hit) | Input (cache miss) | Output |
| --- | --- | --- | --- |
| deepseek-v4-flash | $0.028 / 1M tokens | $0.14 / 1M tokens | $0.28 / 1M tokens |
| deepseek-v4-pro | $0.145 / 1M tokens | $1.74 / 1M tokens | $3.48 / 1M tokens |
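Plugging the listed prices into a quick estimator makes the Flash-vs-Pro gap concrete. The traffic profile and cache-hit rate below are illustrative assumptions, not DeepSeek numbers:

```python
# Prices from the table above, in USD per 1M tokens.
PRICES = {
    "deepseek-v4-flash": {"hit": 0.028, "miss": 0.14, "out": 0.28},
    "deepseek-v4-pro":   {"hit": 0.145, "miss": 1.74, "out": 3.48},
}

def monthly_cost(model: str, in_tokens: float, out_tokens: float,
                 cache_hit_rate: float = 0.0) -> float:
    """Estimate monthly spend in USD for a given traffic profile."""
    p = PRICES[model]
    hit = in_tokens * cache_hit_rate          # input tokens billed at cache-hit rate
    miss = in_tokens - hit                    # the rest billed at cache-miss rate
    return (hit * p["hit"] + miss * p["miss"] + out_tokens * p["out"]) / 1_000_000

# Example: 200M input / 40M output tokens per month, 30% cache hits.
flash = monthly_cost("deepseek-v4-flash", 200_000_000, 40_000_000, 0.3)
pro = monthly_cost("deepseek-v4-pro", 200_000_000, 40_000_000, 0.3)
# → roughly $32 for Flash vs $392 for Pro on this traffic profile
```

The cache-hit rate matters: on Pro, the roughly 12x spread between cache-hit and cache-miss input pricing means prompt-caching discipline moves the bill substantially.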

Two practical implications matter for teams:

  1. You no longer need to model V4 costs off V3.2 assumptions.
  2. The decision is now Flash vs Pro, not "wait for V4 pricing."

If you are evaluating which route to use in production, see the current DeepSeek V4 API page for implementation-oriented details.

What "live in preview" means in practice

The current evidence supports a precise framing:

  • Publicly accessible through the API: yes, because DeepSeek now lists V4 model IDs and pricing on its official API docs
  • Final production-stable release with fixed behavior and final roadmap: not yet guaranteed, because Reuters reports the current launch as a preview phase and DeepSeek has not provided a finalization timeline

For engineering teams, that means:

  • you can start real API evaluation now
  • you should still treat behavior, routing, and performance tuning as subject to change during preview
  • you should avoid assuming that preview behavior equals final long-term GA behavior

This is very different from the April 6 situation, where the most responsible guidance was to treat V4 as unreleased.


What to do if you are already using deepseek-chat or deepseek-reasoner

This is the biggest operational change for existing DeepSeek API users.

The official docs now say:

  • deepseek-chat will be deprecated on July 24, 2026
  • deepseek-reasoner will be deprecated on July 24, 2026
  • both names remain available for compatibility
  • they map to the non-thinking and thinking modes of deepseek-v4-flash

That suggests a clean migration plan:

  1. Do not wait until July 24, 2026 to start testing.
  2. Benchmark deepseek-v4-flash directly against your current alias-based route.
  3. Use a rollback path while preview behavior settles.
  4. Separate Flash and Pro use cases instead of treating V4 like a single undifferentiated upgrade.

If your workloads are mostly latency-sensitive and cost-sensitive, Flash is the likely first route to test. If your workloads depend on stronger reasoning quality, Pro is the route to evaluate next.
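One way to stage that cutover is deterministic fractional routing with an instant rollback knob. This is a sketch under the alias mapping the docs describe (deepseek-chat corresponding to deepseek-v4-flash); the function and route names are illustrative:

```python
import hashlib

ROUTES = {
    "legacy": "deepseek-chat",       # scheduled for deprecation on July 24, 2026
    "preview": "deepseek-v4-flash",  # its documented compatibility target
}

def pick_model(request_id: str, preview_fraction: float = 0.1) -> str:
    """Route a stable fraction of traffic to the preview model.

    Hashing the request ID keeps routing deterministic, so a given
    request always takes the same path (easier to debug and benchmark).
    Setting preview_fraction to 0.0 is the rollback switch; 1.0
    completes the migration.
    """
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return ROUTES["preview"] if bucket < preview_fraction * 100 else ROUTES["legacy"]
```

Because routing is keyed on the request ID rather than random sampling, you can re-run the same evaluation traffic through both fractions and compare outputs side by side.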


Updated timeline

Here is the shortest trustworthy timeline now:

  • April 6, 2026: DeepSeek's official API docs still exposed only deepseek-chat and deepseek-reasoner, both tied to DeepSeek-V3.2. DeepSeek Models & Pricing
  • April 24, 2026: DeepSeek's official API docs now list deepseek-v4-flash and deepseek-v4-pro, plus official pricing, 1M context, and 384K max output. (Sources: DeepSeek API Docs; DeepSeek Models & Pricing)
  • April 24, 2026: Reuters reported that DeepSeek launched preview versions of V4 and is using the preview period to gather feedback before finalizing the model. Reuters via Investing.com
  • July 24, 2026: deepseek-chat and deepseek-reasoner are scheduled for deprecation according to the official API docs. DeepSeek API Docs

What this means for teams evaluating DeepSeek V4

The decision is no longer "should we keep waiting for public V4?"

The real decision now is:

  • Should we test Flash or Pro first?
  • Which workloads belong on preview routes today?
  • How do we migrate off alias-based V3.2 naming before July 24, 2026?

The most practical next step is:

  • use official docs as your source of truth
  • test Flash and Pro separately
  • keep your evaluation set focused on your real workloads
  • treat preview as deployable for testing, but not as a promise that every behavior is final

For route and integration details, the best handoff page is the DeepSeek V4 API page.

FAQ

Has DeepSeek V4 launched publicly?
Yes, in preview form. As of April 24, 2026, DeepSeek's official API docs list deepseek-v4-flash and deepseek-v4-pro, and Reuters reported that DeepSeek launched preview versions of V4 on the same date. (Sources: DeepSeek API Docs; Reuters via Investing.com)
Is DeepSeek V4 pricing officially public now?
Yes. DeepSeek's official pricing page now lists public API pricing for both deepseek-v4-flash and deepseek-v4-pro. DeepSeek Models & Pricing
What are the official DeepSeek V4 model IDs?
The official API docs now list deepseek-v4-flash and deepseek-v4-pro. DeepSeek API Docs
Does DeepSeek V4 officially support 1M context?
Yes. DeepSeek's official pricing page lists 1M context length for both V4 variants. DeepSeek Models & Pricing
What is the official max output for DeepSeek V4?
DeepSeek's official pricing page lists 384K as the maximum output for both V4 variants. DeepSeek Models & Pricing
What happens to deepseek-chat and deepseek-reasoner?
DeepSeek's official API docs say both names will be deprecated on July 24, 2026. For compatibility, they correspond to the non-thinking and thinking modes of deepseek-v4-flash, respectively. DeepSeek API Docs
Should teams treat preview as production-ready by default?
Not automatically. The API is publicly usable now, but Reuters reports the current release as a preview phase and DeepSeek has not published a finalization timeline. Teams should test with real workloads, keep rollback paths, and avoid assuming preview behavior is frozen. Reuters via Investing.com
