Wan 2.6 API
Price: $0.075–$0.125 (~5.1–8.52 credits) per second of video
Highest stability with guaranteed 99.9% uptime. Recommended for production environments.
Use the same API endpoint for all versions. Only the model parameter differs.
Wan 2.6 API for Cinematic Short‑Form Video
Use Wan 2.6 API to go from text or images to 1080p multi‑shot clips (up to 15 seconds) with audio output on current routes, ready for TikTok, Reels, YouTube Shorts, and ad creatives without touching a video editor.

Pricing
| Model | Mode | Quality | Price |
|---|---|---|---|
| WAN 2.6 Text to Video | Video Generation | 720p | $0.075/second (5.1 credits) |
| WAN 2.6 Text to Video | Video Generation | 1080p | $0.125/second (8.517 credits) |
If the primary route is down, we automatically fall back to the next cheapest available route, ensuring 99.9% uptime at the best possible price.
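Billing scales with clip length and resolution: price = base price × duration × quality multiplier, as documented in the API reference below. A minimal sketch of that arithmetic, using the published 720p base rate and the 1.67x 1080p multiplier (the table's $0.125/second 1080p rate is the same figure rounded, so treat this as an estimate rather than an exact invoice):

```python
# Estimated cost of one Wan 2.6 text-to-video job, per the documented
# formula: base_price x duration x quality_multiplier.
BASE_PRICE_PER_SECOND = 0.075                  # USD, 720p base rate
QUALITY_MULTIPLIER = {"720p": 1.0, "1080p": 1.67}

def estimate_cost(duration_s: int, quality: str = "720p") -> float:
    """Return the estimated USD cost for a single generated clip."""
    if not 2 <= duration_s <= 15:
        raise ValueError("duration must be 2-15 seconds")
    if quality not in QUALITY_MULTIPLIER:
        raise ValueError(f"unsupported quality: {quality}")
    return BASE_PRICE_PER_SECOND * duration_s * QUALITY_MULTIPLIER[quality]
```

For example, a 5-second 720p clip estimates to about $0.38, and a full 15-second 1080p clip to about $1.88.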
What You Can Do with Wan 2.6 API
Latest cinematic text-to-video entry point
Wan 2.6 API is the latest cinematic tier in the Wan family — built for narrative scenes that need up to 15 seconds of multi-shot motion (2–15s for text/image, 2–10s for reference video), not just a single static beat. Type a short script or scene brief and the Wan 2.6 AI video generator handles framing, camera moves, transitions, and audio output together, so the output reads as a planned shot list rather than a one-off render. This is the entry point teams use when the brief is closer to a brand campaign than a daily UGC clip.

Image and Reference to Video
With Wan 2.6 API you can start from an existing image or reference video and extend it into a moving scene that keeps the same character, style, or product look. This is ideal when you already have brand visuals or a hero shot and want to animate it into dynamic social content without reshooting, while keeping colors, outfits, and framing consistent across multiple clips.

Multi‑Shot Storytelling for Social Media
Wan 2.6 API is built for short, story‑driven videos rather than random one‑off shots. You can describe several beats in one prompt, and the model generates a multi‑shot sequence up to 15 seconds long (2–15s for text/image inputs) with coherent motion and physics. This makes it easy to create trending formats, UGC‑style explainer clips, and narrative ads that feel planned instead of stitched together from unrelated scenes.

Why Choose Wan 2.6 API for Your Videos
Wan 2.6 API helps you create consistent, high‑impact social videos without a studio, editing skills, or a big budget.
Cinematic Quality Without Production Hassle
Wan 2.6 API gives you sharp 1080p short videos with realistic motion, lighting, and physics, so your content looks closer to a professional shoot than a quick template. Instead of coordinating cameras, actors, and locations, you describe the scenes once and let the model handle animation, timing, and audio, freeing you to test more ideas per week.
Premium iteration for narrative campaigns
Wan 2.6 is built for the campaign cycle — the moments when you want premium cinematic output that justifies a brand budget. You can iterate multi-shot concepts, swap hooks or scenes between versions, and use Wan 2.6 Flash for high-volume A/B passes before committing the standard tier to the final hero clip. The mix of standard and Flash variants lets one campaign brief cover both exploration and final delivery without leaving the Wan 2.6 family.
Consistent Brand and Character Stories
Wan 2.6 API helps keep your characters, products, and worlds consistent across dozens of clips, which is hard to do with one‑off stock assets. By reusing prompts and references, you can tell ongoing stories around the same mascot, influencer, or product line, making your feed feel like a connected universe rather than a random collection of videos.
How to Use Wan 2.6 API Step by Step
You do not need to be a developer to plan good prompts, but the Wan 2.6 API makes it easy for teams to plug video generation into real products.
1. Connect Your Evolink AI Account
Sign up with Evolink AI, enable Wan 2.6 API in your dashboard, and grab the API key your app or backend will use for secure video requests.
2. Draft a Social‑Style Prompt or Pick an Image
Write a short brief like a TikTok hook plus scene notes or upload a key visual, then send it through Wan 2.6 API as text‑to‑video or image‑to‑video.
3. Generate, Review, and Publish Your Video
The Wan 2.6 API returns your clip after processing; you review it, save the best versions, and post directly to your social channels or use them in ads.
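For developer teams, the three steps above map onto a single authenticated request. A minimal sketch of assembling it, assuming the `/v1/videos/generations` route and Bearer auth shown in the API reference below (the base URL and key are placeholders; no request is actually sent here):

```python
import json

API_KEY = "YOUR_API_KEY"             # from your Evolink AI dashboard
ENDPOINT = "/v1/videos/generations"  # create-video route from the API reference

def build_request(prompt: str, duration: int = 5, quality: str = "720p",
                  aspect_ratio: str = "9:16") -> tuple[dict, dict]:
    """Return (headers, body) for a Wan 2.6 text-to-video request."""
    if len(prompt) > 1500:
        raise ValueError("prompt is limited to 1500 characters")
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "wan2.6-text-to-video",
        "prompt": prompt,
        "duration": duration,            # 2-15 seconds
        "quality": quality,              # 720p or 1080p
        "aspect_ratio": aspect_ratio,    # 9:16 for vertical feeds
    }
    return headers, body

headers, body = build_request("A barista pours latte art, close-up, warm light")
print(json.dumps(body, indent=2))
```

Send the body with any HTTP client (e.g. `requests.post(base_url + ENDPOINT, headers=headers, json=body)`), then poll the returned task ID until the clip is ready.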
Wan 2.6 API Features Built for Creators
Every Wan 2.6 API feature is tuned around social media, UGC, and SaaS video tools, not studio workflows.
Audio output alongside video on current routes
Wan 2.6 API is tuned for narrative scenes where dialogue, ambient sound, and score matter alongside the visuals. Per current route docs, the audio layer is generated alongside the video output, so longer multi-shot sequences come back with audio attached instead of a separate silent render that needs a second pass.
Multi‑Shot Clips Up to 15 Seconds
Instead of a single random moment, the Wan 2.6 AI video generator creates multi‑shot videos that match the way people actually tell stories on social platforms. You can cover the hook, product demo, and closing scene in one clip, making it much easier to deliver a clear message in the few seconds you have before viewers swipe away.
Text, image, and reference video inputs (r2v)
Wan 2.6 adds reference video (`wan2.6-r2v`) as a first-class input alongside text-to-video (`wan2.6-t2v`) and image-to-video (`wan2.6-i2v`). With reference video you can extract a character's appearance and visual identity from an existing clip and carry them into new scenes — useful for episodic brand mascots, recurring spokespeople, or any campaign that needs the same on-screen identity across multiple shoots without re-casting. Note that reference-video billing follows a separate input-plus-output duration logic, with a 1080p quality multiplier, so plan it as its own line item rather than batching it into standard text-to-video budgets.
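In an integration, routing between the three variants usually comes down to which input the user supplies. A small sketch using the short IDs listed above (check your route docs for the exact model string your endpoint expects, since routes may use longer names like `wan2.6-text-to-video`):

```python
# Map the kind of input on hand to the Wan 2.6 variant named above.
MODEL_BY_INPUT = {
    "text": "wan2.6-t2v",             # prompt only
    "image": "wan2.6-i2v",            # start frame / key visual
    "reference_video": "wan2.6-r2v",  # carry a character identity forward
}

def pick_model(input_kind: str) -> str:
    """Return the Wan 2.6 model ID for a given input kind."""
    try:
        return MODEL_BY_INPUT[input_kind]
    except KeyError:
        raise ValueError(f"unsupported input kind: {input_kind}") from None
```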
Creator‑Friendly Controls, Not Tech Jargon
Rather than burying you in complex settings, Wan 2.6 API focuses on controls creators actually understand, like pacing, mood, and camera feel. You can hide technical parameters inside your app and expose only simple sliders or presets, helping non‑technical teammates generate on‑brand video ideas without training.
Optimized for Social Media Formats
Wan 2.6 API is tuned for short‑form formats, making it straightforward to generate clips that look good on vertical feeds and mobile screens. You can design prompts around hooks, transitions, and call‑to‑action shots, then reuse them across campaigns and channels to build a repeatable video playbook, not one‑off experiments.
Async multi-variant orchestration for campaign work
Wan 2.6 API on Evolink AI is built around async task patterns, which is what you need when one campaign brief requires multiple multi-shot variants generated in parallel and then reviewed together. You can fan out variant generation across the standard Wan 2.6 tier and Wan 2.6 Flash, collect finished clips into your review pipeline as tasks complete, and only commit the hero version to publishing — instead of forcing a single render queue to do both exploration and final delivery.
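The fan-out pattern above can be sketched with a thread pool: explore variants on the Flash tier in parallel, render one hero clip on the standard tier, and collect results as tasks finish. The model names and the `generate_clip` helper below are placeholders; in a real integration `generate_clip` would submit a task and poll its task ID until completion.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def generate_clip(model: str, prompt: str) -> dict:
    """Stub standing in for: submit task, poll task ID, return result."""
    return {"model": model, "prompt": prompt, "status": "completed"}

def fan_out(prompt_variants, draft_model="wan2.6-flash", final_model="wan2.6"):
    """Explore cheaply on the Flash tier; render one hero on the standard tier."""
    jobs = [(draft_model, p) for p in prompt_variants]
    jobs.append((final_model, prompt_variants[0]))
    results = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(generate_clip, m, p) for m, p in jobs]
        for fut in as_completed(futures):
            results.append(fut.result())  # collect clips as tasks complete
    return results

clips = fan_out(["hook A", "hook B", "hook C"])
```

With real API calls, the same structure lets the review pipeline start on early finishers while slower renders are still in flight.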
Explore the Wan API family
Wan 2.6 is the latest cinematic tier with multi-shot storytelling and reference video. See how Wan 2.6 fits alongside Wan 2.5 for daily content volume and Wan Image for text-to-image workflows.
Wan 2.6 API FAQs
Everything you need to know about the product and billing.
API Reference
Authentication
All APIs require Bearer Token authentication.
Authorization: Bearer YOUR_API_KEY

/v1/videos/generations (Create Video)
The WAN 2.6 Text to Video (wan2.6-text-to-video) model supports text-to-video generation with enhanced quality and longer duration options.
Processing is asynchronous; use the returned task ID to query the task status.
Generated video links are valid for 24 hours; please save them promptly.
Request Parameters
model (string, required; default: wan2.6-text-to-video)
Video generation model name.
Example: wan2.6-text-to-video

prompt (string, required)
Text description of the video to generate.
Notes
- Maximum 1500 characters
Example: A majestic eagle soaring through mountain peaks at sunset, cinematic lighting

aspect_ratio (string, optional; default: 16:9)
Video aspect ratio.
| Value | Description |
|---|---|
| 16:9 | Landscape video (default) |
| 9:16 | Portrait video |
| 1:1 | Square video |
| 4:3 | Standard video |
| 3:4 | Portrait standard |
Example: 16:9

quality (string, optional; default: 720p)
Video quality. Higher quality costs more.
| Value | Description |
|---|---|
| 720p | Standard quality (1.0x price) |
| 1080p | High quality (1.67x price) |
Example: 720p

duration (integer, optional; default: 5)
Duration of the generated video in seconds. Supports 2-15 seconds.
Notes
- Range: 2-15 seconds (any integer)
- Price is calculated as: base_price × duration × quality_multiplier
Example: 5

prompt_extend (boolean, optional; default: true)
Whether to enable intelligent prompt rewriting.
Notes
- When enabled, AI will optimize your prompt for better video generation
- Recommended for simple or short prompts
Example: true

model_params.shot_type (string, optional; default: single)
Shot type for video generation.
| Value | Description |
|---|---|
| single | Single continuous shot |
| multi | Multiple camera angles/shots |
Notes
- Only effective when prompt_extend is true
Example: single

audio_url (string, optional)
Audio file URL. The model will use this audio to generate the video.
Notes
- Supported format: mp3
- Duration: 3-30 seconds
- File size: up to 15MB
- If audio exceeds video duration, only the first portion is used
- If audio is shorter than video, the remaining portion will be silent
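The trim-or-silence rules above can be captured in a small helper, useful for warning users before they submit a mismatched track (a sketch of the documented behavior, not the service's own logic):

```python
def audio_fit(audio_s: float, video_s: float) -> tuple[float, float]:
    """Per the rules above: return (seconds of audio used, silent tail seconds)."""
    if not 3 <= audio_s <= 30:
        raise ValueError("audio must be 3-30 seconds")
    used = min(audio_s, video_s)           # longer audio is truncated
    silent = max(0.0, video_s - audio_s)   # shorter audio leaves a silent tail
    return used, silent
```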
Example: https://example.com/audio.mp3

callback_url (string, optional)
HTTPS callback address after task completion.
Notes
- Triggered on completion, failure, or cancellation
- HTTPS only, no internal IPs
- Max length: 2048 chars
- Timeout: 10s, Max 3 retries
https://your-domain.com/webhooks/video-task-completed
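Since a rejected callback URL only surfaces after the task is submitted, it can help to pre-check the documented constraints (HTTPS only, no internal IPs, max 2048 characters) client side. A sketch of such a check; the service's own validation may be stricter:

```python
import ipaddress
from urllib.parse import urlsplit

def valid_callback_url(url: str) -> bool:
    """Pre-check a callback_url against the documented rules."""
    if len(url) > 2048 or not url.startswith("https://"):
        return False
    host = urlsplit(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
        # Literal IPs must not be internal addresses.
        return not (ip.is_private or ip.is_loopback or ip.is_link_local)
    except ValueError:
        # A hostname; DNS-level checks happen on the server side.
        return bool(host)
```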