
Wan 2.6 API

Turn ideas, scripts, or static images into cinematic clips with the Wan 2.6 API, optimized for fast social media content and creator workflows.

Price: $0.075–$0.125 (~5.1–8.52 credits) per second of video

Input video capped at 5s for billing. Billed on total (input + output) duration.

Highest stability with guaranteed 99.9% uptime. Recommended for production environments.

Use the same API endpoint for all versions. Only the model parameter differs.
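
For illustration, a minimal Python sketch of that pattern; the helper name is hypothetical, and the model ids shown are the ones listed elsewhere on this page:

```python
# One endpoint for every Wan version; only the "model" field changes.
ENDPOINT = "/v1/videos/generations"

def payload_for(model: str, prompt: str, **extra) -> dict:
    """Build a request body; swap the model id, keep everything else."""
    return {"model": model, "prompt": prompt, **extra}

# Text-to-video and reference-video requests hit the same route:
t2v = payload_for("wan2.6-t2v", "A person dancing", duration=5)
r2v = payload_for("wan2.6-reference-video", "A person dancing",
                  video_urls=["https://example.com/reference.mp4"])
```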


Upload 1-3 reference videos (2-30s each, max 100MB). Input billing capped at 5s total.



Wan 2.6 API for Cinematic Short‑Form Video

Use Wan 2.6 API to go from text or images to 1080p multi‑shot clips (up to 15 seconds) with audio output on current routes, ready for TikTok, Reels, YouTube Shorts, and ad creatives without touching a video editor.


Billing Rules

  • Input video is capped at 5 seconds for billing
  • Billed per second based on total duration (input + output)
  • Pricing based on selected output quality, not input video resolution
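
These rules reduce to one formula, which can be sanity-checked with a small helper. This is a back-of-envelope sketch using the per-second prices from the pricing section on this page, not the billing engine itself:

```python
# Rough cost estimate per the billing rules: input capped at 5s,
# billed on total duration, priced by selected output quality.
PRICE_PER_SECOND = {"720p": 0.075, "1080p": 0.125}

def estimate_cost(output_seconds: int, input_seconds: float = 0,
                  quality: str = "720p") -> float:
    billable_input = min(input_seconds, 5)   # input capped at 5s for billing
    total = billable_input + output_seconds  # billed on input + output
    return round(PRICE_PER_SECOND[quality] * total, 4)
```

For example, a 5-second output generated from a 12-second reference clip at 720p bills as (5 + 5) × $0.075 = $0.75, because only 5 seconds of the input counts.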

Pricing

WAN 2.6 Reference Video (Video Generation)
  • 720p quality: $0.075/second (5.1 credits)
  • 1080p quality: $0.125/second (8.517 credits)

If a provider goes down, we automatically route to the next cheapest available one, ensuring 99.9% uptime at the best possible price.

What You Can Do with Wan 2.6 API

Latest cinematic text-to-video entry point

Wan 2.6 API is the latest cinematic tier in the Wan family — built for narrative scenes that need up to 15 seconds of multi-shot motion (2–15s for text/image, 2–10s for reference video), not just a single static beat. Type a short script or scene brief and the Wan 2.6 AI video generator handles framing, camera moves, transitions, and audio output together, so the output reads as a planned shot list rather than a one-off render. This is the entry point teams use when the brief is closer to a brand campaign than a daily UGC clip.


Image and Reference to Video

With Wan 2.6 API you can start from an existing image or reference video and extend it into a moving scene that keeps the same character, style, or product look. This is ideal when you already have brand visuals or a hero shot and want to animate it into dynamic social content without reshooting, while keeping colors, outfits, and framing consistent across multiple clips.


Multi‑Shot Storytelling for Social Media

Wan 2.6 API is built for short, story‑driven videos rather than random one‑off shots. You can describe several beats in one prompt, and the model generates a multi‑shot sequence up to 15 seconds long (2–15s for text/image inputs) with coherent motion and physics. This makes it easy to create trending formats, UGC‑style explainer clips, and narrative ads that feel planned instead of stitched together from unrelated scenes.


Why Choose Wan 2.6 API for Your Videos

Wan 2.6 API helps you create consistent, high‑impact social videos without a studio, editing skills, or a big budget.

Cinematic Quality Without Production Hassle

Wan 2.6 API gives you sharp 1080p short videos with realistic motion, lighting, and physics, so your content looks closer to a professional shoot than a quick template. Instead of coordinating cameras, actors, and locations, you describe the scenes once and let the model handle animation, timing, and audio, freeing you to test more ideas per week.

Premium iteration for narrative campaigns

Wan 2.6 is built for the campaign cycle — the moments when you want premium cinematic output that justifies a brand budget. You can iterate multi-shot concepts, swap hooks or scenes between versions, and use Wan 2.6 Flash for high-volume A/B passes before committing the standard tier to the final hero clip. The mix of standard and Flash variants lets one campaign brief cover both exploration and final delivery without leaving the Wan 2.6 family.

Consistent Brand and Character Stories

Wan 2.6 API helps keep your characters, products, and worlds consistent across dozens of clips, which is hard to do with one‑off stock assets. By reusing prompts and references, you can tell ongoing stories around the same mascot, influencer, or product line, making your feed feel like a connected universe rather than a random collection of videos.

How to Use Wan 2.6 API Step by Step

You do not need to be a developer to plan good prompts, but the Wan 2.6 API makes it easy for teams to plug video generation into real products.

1. Connect Your Evolink AI Account

Sign up with Evolink AI, enable Wan 2.6 API in your dashboard, and grab the API key your app or backend will use for secure video requests.

2. Draft a Social‑Style Prompt or Pick an Image

Write a short brief like a TikTok hook plus scene notes or upload a key visual, then send it through Wan 2.6 API as text‑to‑video or image‑to‑video.

3. Generate, Review, and Publish Your Video

The Wan 2.6 API returns your clip after processing; you review the results, save the best versions, and post them directly to your social channels or use them in ads.

Wan 2.6 API Features Built for Creators

Every Wan 2.6 API feature is tuned around social media, UGC, and SaaS video tools, not studio workflows.

Audio output

Audio output alongside video on current routes

Wan 2.6 API is tuned for narrative scenes where dialogue, ambient sound, and score matter alongside the visuals. Per current route docs, the audio layer is generated alongside the video output, so longer multi-shot sequences come back with audio attached instead of a separate silent render that needs a second pass.

Story‑first output

Multi‑Shot Clips Up to 15 Seconds

Instead of a single random moment, the Wan 2.6 AI video generator creates multi‑shot videos that match the way people actually tell stories on social platforms. You can cover the hook, product demo, and closing scene in one clip, making it much easier to deliver a clear message in the few seconds you have before viewers swipe away.

r2v reference video

Text, image, and reference video inputs (r2v)

Wan 2.6 adds reference video (`wan2.6-r2v`) as a first-class input alongside text-to-video (`wan2.6-t2v`) and image-to-video (`wan2.6-i2v`). With reference video you can extract a character's appearance and visual identity from an existing clip and carry them into new scenes — useful for episodic brand mascots, recurring spokespeople, or any campaign that needs the same on-screen identity across multiple shoots without re-casting. Note that reference-video billing follows a separate input-plus-output duration logic, with a 1080p quality multiplier, so plan it as its own line item rather than batching it into standard text-to-video budgets.

Simple controls

Creator‑Friendly Controls, Not Tech Jargon

Rather than burying you in complex settings, Wan 2.6 API focuses on controls creators actually understand, like pacing, mood, and camera feel. You can hide technical parameters inside your app and expose only simple sliders or presets, helping non‑technical teammates generate on‑brand video ideas without training.

Social‑first design

Optimized for Social Media Formats

Wan 2.6 API is tuned for short‑form formats, making it straightforward to generate clips that look good on vertical feeds and mobile screens. You can design prompts around hooks, transitions, and call‑to‑action shots, then reuse them across campaigns and channels to build a repeatable video playbook, not one‑off experiments.

Async orchestration

Async multi-variant orchestration for campaign work

Wan 2.6 API on Evolink AI is built around async task patterns, which is what you need when one campaign brief requires multiple multi-shot variants generated in parallel and then reviewed together. You can fan out variant generation across the standard Wan 2.6 tier and Wan 2.6 Flash, collect finished clips into your review pipeline as tasks complete, and only commit the hero version to publishing — instead of forcing a single render queue to do both exploration and final delivery.
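
A minimal fan-out sketch of that pattern, assuming a caller-supplied submit() that POSTs to /v1/videos/generations and returns a task id; the helper names are hypothetical, and any Flash model id should be taken from the model list rather than this example:

```python
# Generate several prompt variants of one brief, submit them in parallel,
# and collect the resulting task ids for a review pipeline.
from concurrent.futures import ThreadPoolExecutor

def build_variants(brief: str, hooks: list, model: str = "wan2.6-t2v") -> list:
    """One payload per hook line; all variants share the same scene brief."""
    return [{"model": model, "prompt": f"{hook}\n\n{brief}", "duration": 10}
            for hook in hooks]

def fan_out(payloads: list, submit) -> list:
    """submit() is your HTTP call to POST /v1/videos/generations."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(submit, payloads))  # preserves input order
```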

Explore the Wan API family

Wan 2.6 is the latest cinematic tier with multi-shot storytelling and reference video. See how Wan 2.6 fits alongside Wan 2.5 for daily content volume and Wan Image for text-to-image workflows.

Wan 2.6 API FAQs

Everything you need to know about the product and billing.

What is Wan 2.6 and who is it for?

Wan 2.6 is the latest cinematic AI video generation model developed by Alibaba's Tongyi Wanxiang team, featuring multi-shot storytelling, audio output on current routes, and up to 15 second 1080p clips (2–15s for text-to-video and image-to-video, 2–10s for reference video). Evolink AI provides the Wan 2.6 API as a streamlined integration layer, making it easier for developers and creators to call this powerful model in their apps, SaaS tools, or workflows without managing Alibaba Cloud infrastructure directly. It is ideal for indie developers building video features, marketers creating TikTok/Reels ads, and creators producing short-form content quickly and at scale.

Is Wan 2.6 open source?

Alibaba open-sourced earlier Wan releases such as Wan 2.1, while Wan 2.6 is documented as an API-accessible model on Alibaba's DashScope and Model Studio. As of April 2026, we have not found an official Alibaba source confirming Wan 2.6 itself as open source, so for the most current status please check Alibaba's official announcements. To use Wan 2.6 today, you can call it via the Wan 2.6 API on Evolink AI without managing Alibaba Cloud infrastructure directly.

What is Wan 2.6 Flash?

Wan 2.6 Flash is a faster variant of the Wan 2.6 video generation lineup, optimized for shorter inference time at a lower per-second cost. It is well suited for high-volume, iterative workflows such as A/B testing social ad hooks, generating multiple variants of the same prompt, or powering in-app video features where latency matters more than the absolute highest quality. The standard Wan 2.6 API is the right choice when you want the full cinematic output, while Wan 2.6 Flash is the right choice when you want speed and unit economics.

What can you create with the Wan 2.6 AI video generator?

With the Wan 2.6 AI video generator you can create story‑driven clips, product demos, UGC‑style explainers, and imaginative scenes from text or images. The model is especially strong at short narrative videos up to 15 seconds long (2–15s for text/image, 2–10s for reference video) with smooth camera motion, audio output on current routes, and consistent characters. This makes it ideal for TikTok, Reels, YouTube Shorts, social ads, and any app where users want dynamic video from minimal input.

How is Wan 2.6 different from basic text-to-video tools?

Basic text‑to‑video tools often output one static shot with generic motion and no integrated audio, forcing you to edit and sound‑design everything afterward. Wan 2.6 API focuses on multi‑shot storytelling, smoother physics, and audio output alongside video on current routes, so your clips feel more like planned scenes than random AI tests. You also get more control over pacing, viewpoint, and continuity, which matters when you are trying to drive clicks, watch time, or conversions.

Can non-technical creators use the Wan 2.6 API?

Yes, non‑technical creators can still benefit from Wan 2.6 API as long as someone sets up a simple interface or tool around it. Once integrated, you can expose only the fields creators care about, such as prompt, reference image, aspect ratio, and video length. From there, they simply type instructions like they would for a caption or script, click generate, and receive finished clips they can post or lightly edit before publishing.

Is Wan 2.6 API a good fit for ads and brand campaigns?

Wan 2.6 API is well suited for social media ads and brand campaigns because it balances quality and speed. You can prototype many visual concepts, angles, and hooks without booking shoots or motion designers, then promote the versions that perform best. By reusing prompts and reference assets, you keep characters, product shots, and brand styling consistent even while testing different storylines and offers.

How long does video generation take?

Generation time depends on length and traffic, but Wan 2.6 API is designed for short‑form content, so clips are typically ready in minutes rather than hours. This turnaround time is fast enough to support daily content calendars, reactive social posts, and automated video creation flows. You can request multiple generations in parallel through Evolink AI to keep up with higher‑volume publishing schedules.

How should you write prompts for Wan 2.6?

The best Wan 2.6 prompts read like mini shooting briefs instead of single word tags. Mention the setting, subject, camera style, and what happens in order, and include the emotion or mood you want viewers to feel. For example, you can describe the opening hook, the main action shot, and the closing frame with call‑to‑action, which helps the Wan 2.6 AI video generator produce clearer, more watchable stories for your audience.

Can you integrate the Wan 2.6 API into your own product?

You can integrate Wan 2.6 API into your SaaS, automation, or content tools through Evolink AI, which handles routing, scaling, and monitoring for you. This lets you offer in‑app video generation features where users type prompts, upload images, and receive ready‑to‑use clips without leaving your product. It is a straightforward way to add modern AI video capabilities while keeping your own engineering team focused on core features.
POST
/v1/videos/generations

Create Video

The WAN 2.6 Reference Video model (wan2.6-reference-video) supports reference video-to-video generation, extracting character appearance and voice from uploaded reference videos.

Asynchronous processing mode: use the returned task ID to query the task status and result.

Generated video links are valid for 24 hours, please save them promptly.

Request Parameters

model (string, Required; default: wan2.6-reference-video)

Video generation model name.

Example: wan2.6-reference-video

prompt (string, Required)

Text description of the video to generate.

Notes
  • Maximum 1,500 characters

Example: A person dancing

video_urls (array, Required)

Array of reference video file URLs. Used to extract character appearance and voice.

Notes
  • Maximum 3 videos
  • Format: mp4, mov
  • Duration: 2-30 seconds each
  • File size: max 100MB each
  • Input billing capped at 5s total

Example: ["https://example.com/reference.mp4"]

aspect_ratio (string, Optional; default: 16:9)

Video aspect ratio.

  • 16:9: Landscape video (default)
  • 9:16: Portrait video
  • 1:1: Square video
  • 4:3: Standard video
  • 3:4: Portrait standard

Example: 16:9

quality (string, Optional; default: 720p)

Video quality. Higher quality costs more.

  • 720p: Standard quality (1.0x price)
  • 1080p: High quality (1.67x price)

Example: 720p

duration (integer, Optional; default: 5)

Duration of the generated video in seconds. Supports 2-10 seconds.

Notes
  • Range: 2-10 seconds (any integer)
  • Price is calculated as: base_price × (input_duration + output_duration) × quality_multiplier
  • Input duration is capped at 5 seconds for billing

Example: 5

model_params.shot_type (string, Optional; default: single)

Shot type for video generation.

  • single: Single continuous shot
  • multi: Multiple camera angles/shots

Example: single

callback_url (string, Optional)

HTTPS callback address invoked after task completion.

Notes
  • Triggered on completion, failure, or cancellation
  • HTTPS only, no internal IPs
  • Max length: 2048 chars
  • Timeout: 10s; max 3 retries

Example: https://your-domain.com/webhooks/video-task-completed
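
A stdlib-only sketch of a receiver for callback_url. The payload field names ("id", "status") are assumptions modeled on the task response shape on this page; check the callback docs for the exact schema your account receives:

```python
# Minimal webhook receiver for callback_url using only the stdlib.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def parse_event(raw: bytes) -> tuple:
    """Pull the task id and status from a callback body.
    Field names are assumptions based on the task response shape."""
    event = json.loads(raw or b"{}")
    return event.get("id"), event.get("status")

class VideoTaskWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        task_id, status = parse_event(self.rfile.read(length))
        print(f"task {task_id} -> {status}")  # completion, failure, or cancellation
        self.send_response(200)  # reply within 10s or the sender retries (max 3)
        self.end_headers()

# To run: HTTPServer(("", 8000), VideoTaskWebhook).serve_forever()
```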

Request Example

{
  "model": "wan2.6-reference-video",
  "prompt": "A person dancing",
  "video_urls": [
    "https://example.com/reference.mp4"
  ],
  "aspect_ratio": "16:9",
  "quality": "720p",
  "duration": 5,
  "model_params": {
    "shot_type": "single"
  }
}
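
A sketch of sending that payload with Python's standard library. The base URL and the EVOLINK_API_KEY environment variable are assumptions; substitute the host and key from your Evolink dashboard:

```python
# Submit a create-video request; the response includes the task id.
import json
import os
import urllib.request

def build_request(payload: dict, api_key: str,
                  base_url: str = "https://api.evolink.ai"):  # assumed host
    return urllib.request.Request(
        f"{base_url}/v1/videos/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {api_key}"},
        method="POST",
    )

def create_video(payload: dict, api_key: str = "") -> dict:
    key = api_key or os.environ.get("EVOLINK_API_KEY", "")  # assumed env var
    with urllib.request.urlopen(build_request(payload, key)) as resp:
        return json.load(resp)  # keep the "id" field for the status lookup
```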

Response Example

{
  "created": 1757169743,
  "id": "task-unified-1757169743-abc123",
  "model": "wan2.6-reference-video",
  "object": "video.generation.task",
  "progress": 0,
  "status": "pending",
  "task_info": {
    "can_cancel": true,
    "estimated_time": 120
  },
  "type": "video",
  "usage": {
    "billing_rule": "per_second",
    "credits_reserved": 10,
    "user_group": "default"
  }
}
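
Since processing is asynchronous, clients typically poll until the task reaches a terminal status. The GET route below is an assumption (this page only documents task creation); confirm the real status endpoint before relying on it:

```python
# Poll a video task until it finishes, then return the final task object.
import json
import time
import urllib.request

TERMINAL = {"completed", "failed", "cancelled"}

def is_terminal(status: str) -> bool:
    return status in TERMINAL

def wait_for_video(task_id: str, api_key: str,
                   base_url: str = "https://api.evolink.ai",  # assumed host
                   interval: float = 5.0, timeout: float = 600.0) -> dict:
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        req = urllib.request.Request(
            f"{base_url}/v1/videos/generations/{task_id}",  # assumed route
            headers={"Authorization": f"Bearer {api_key}"})
        with urllib.request.urlopen(req) as resp:
            task = json.load(resp)
        if is_terminal(task.get("status", "")):
            return task  # save links promptly: video URLs expire after 24 hours
        time.sleep(interval)
    raise TimeoutError(f"task {task_id} not finished after {timeout}s")
```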