Wan 2.6 API

Turn ideas, scripts, or static images into cinematic clips with the Wan 2.6 API, optimized for fast social media content and creator workflows.


Wan 2.6 API for Cinematic Short‑Form Video

Use Wan 2.6 API to go from text or images to 1080p, sound‑on, multi‑shot clips ready for TikTok, Reels, YouTube Shorts, and ad creatives, without touching a video editor.

What You Can Do with Wan 2.6 API

Text to Video with Native Audio

Wan 2.6 API lets you type a short script or caption and receive a complete short video with motion, camera moves, and synced audio in one go. Instead of stitching clips in an editor, you describe the scene, mood, and pacing, and the Wan 2.6 AI video generator handles framing, transitions, and sound so you can focus on ideas, not timelines.

Image and Reference to Video

With Wan 2.6 API you can start from an existing image or reference video and extend it into a moving scene that keeps the same character, style, or product look. This is ideal when you already have brand visuals or a hero shot and want to animate it into dynamic social content without reshooting, while keeping colors, outfits, and framing consistent across multiple clips.

Multi‑Shot Storytelling for Social Media

Wan 2.6 API is built for short, story‑driven videos rather than random one‑off shots. You can describe several beats in one prompt, and the model generates a 5–15 second, multi‑shot sequence with coherent motion and physics. This makes it easy to create trending formats, UGC‑style explainer clips, and narrative ads that feel planned instead of stitched together from unrelated scenes.

Why Choose Wan 2.6 API for Your Videos

Wan 2.6 API helps you create consistent, high‑impact social videos without a studio, editing skills, or a big budget.

Cinematic Quality Without Production Hassle

Wan 2.6 API gives you sharp 1080p short videos with realistic motion, lighting, and physics, so your content looks closer to a professional shoot than a quick template. Instead of coordinating cameras, actors, and locations, you describe the scenes once and let the model handle animation, timing, and audio, freeing you to test more ideas per week.

Faster Iteration for TikTok, Reels, and Ads

Short‑form platforms reward constant experimentation, and Wan 2.6 API is built for that pace. You can generate multiple versions of the same concept, change only the hook or background, and quickly see what performs. Because everything runs through the API, you can automate this inside your own tools or flows instead of manually exporting and uploading from an editor.

Consistent Brand and Character Stories

Wan 2.6 API helps keep your characters, products, and worlds consistent across dozens of clips, which is hard to do with one‑off stock assets. By reusing prompts and references, you can tell ongoing stories around the same mascot, influencer, or product line, making your feed feel like a connected universe rather than a random collection of videos.

How to Use Wan 2.6 API Step by Step

You do not need to be a developer to plan good prompts, and the Wan 2.6 API makes it easy for teams to plug video generation into real products.

1. Connect Your Evolink AI Account

Sign up with Evolink AI, enable Wan 2.6 API in your dashboard, and grab the API key your app or backend will use for secure video requests.

2. Draft a Social‑Style Prompt or Pick an Image

Write a short brief like a TikTok hook plus scene notes or upload a key visual, then send it through Wan 2.6 API as text‑to‑video or image‑to‑video.

3. Generate, Review, and Publish Your Video

After processing, the Wan 2.6 API returns your clip; review it, save the best versions, and post them directly to your social channels or use them in ads.
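
The three steps above boil down to a single authenticated POST. A minimal Python sketch, assuming a placeholder base URL and Bearer-token auth (neither is confirmed by this page; the exact parameters are documented in the request reference below):

```python
import json
import urllib.request

# Assumed values -- replace with your real Evolink AI endpoint and API key.
BASE_URL = "https://api.evolink.ai"  # hypothetical base URL
API_KEY = "YOUR_API_KEY"

def build_generation_request(prompt: str, image_url: str) -> urllib.request.Request:
    """Build the documented POST /v1/videos/generations request."""
    body = {
        "model": "wan2.6-image-to-video",
        "prompt": prompt,
        "image_urls": [image_url],
        "duration": 5,
        "quality": "720p",
        "prompt_extend": True,
    }
    return urllib.request.Request(
        url=f"{BASE_URL}/v1/videos/generations",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_generation_request("A cat playing piano", "https://example.com/image1.png")
# urllib.request.urlopen(req) would submit the task; the response carries a task id.
```

Because generation is asynchronous, the response is a task object rather than the finished video.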

Wan 2.6 API Features Built for Creators

Every Wan 2.6 API feature is tuned around social media, UGC, and SaaS video tools, not studio workflows.

Sound‑on ready

Native Audio and Lip‑Sync

Wan 2.6 API generates both visuals and sound together, so your scenes arrive with dialogue, ambient audio, and music already synced to the motion. This means you can post sound‑on clips directly to TikTok or Reels without searching for separate tracks or manually matching voiceover timing frame by frame.

Story‑first output

Multi‑Shot, 5–15 Second Clips

Instead of a single random moment, the Wan 2.6 AI video generator creates multi‑shot videos that match the way people actually tell stories on social platforms. You can cover the hook, product demo, and closing scene in one clip, making it much easier to deliver a clear message in the few seconds you have before viewers swipe away.

Flexible inputs

Text, Image, and Reference Inputs

Wan 2.6 API works with plain text, images, or reference videos, so you can start from whatever you already have on hand. Turn product shots into motion, use illustrated characters as video actors, or extend an existing clip into new scenes, all while keeping style and branding consistent across your feed and campaigns.

Simple controls

Creator‑Friendly Controls, Not Tech Jargon

Rather than burying you in complex settings, Wan 2.6 API focuses on controls creators actually understand, like pacing, mood, and camera feel. You can hide technical parameters inside your app and expose only simple sliders or presets, helping non‑technical teammates generate on‑brand video ideas without training.

Social‑first design

Optimized for Social Media Formats

Wan 2.6 API is tuned for short‑form formats, making it straightforward to generate clips that look good on vertical feeds and mobile screens. You can design prompts around hooks, transitions, and call‑to‑action shots, then reuse them across campaigns and channels to build a repeatable video playbook, not one‑off experiments.

Workflow‑ready

Easy Integration into Apps and Workflows

Through Evolink AI, you can plug Wan 2.6 API into SaaS dashboards, content tools, or internal automation without reinventing infrastructure. Trigger generation from forms, queues, or no‑code tools, let the API handle processing, and send finished clips directly into your storage, editors, or publishing pipeline with minimal manual steps.

Wan 2.6 API FAQs

Everything you need to know about the product and billing.

What is Wan 2.6?

Wan 2.6 is the latest cinematic AI video generation model developed by Alibaba's Tongyi Wanxiang team, featuring multi-shot storytelling, native audio sync, lip-sync, and up to 15-second 1080p clips from text, images, or references. Evolink AI provides the Wan 2.6 API as a streamlined integration layer, making it easier for developers and creators to call this powerful model in their apps, SaaS tools, or workflows without managing Alibaba Cloud infrastructure directly. It is ideal for indie developers building video features, marketers creating TikTok/Reels ads, and creators producing short-form content quickly and at scale.

What can you create with the Wan 2.6 AI video generator?

With the Wan 2.6 AI video generator you can create story‑driven clips, product demos, UGC‑style explainers, and imaginative scenes from text or images. The model is especially strong at short, 5–15 second narrative videos with smooth camera motion, native sound, and consistent characters. This makes it ideal for TikTok, Reels, YouTube Shorts, social ads, and any app where users want dynamic video from minimal input.

How is Wan 2.6 different from basic text-to-video tools?

Basic text‑to‑video tools often output one static shot with generic motion and no integrated audio, forcing you to edit and sound‑design everything afterward. Wan 2.6 API focuses on multi‑shot storytelling, smoother physics, and native audio, so your clips feel more like planned scenes than random AI tests. You also get more control over pacing, viewpoint, and continuity, which matters when you are trying to drive clicks, watch time, or conversions.

Can non-technical creators use Wan 2.6 API?

Yes, non‑technical creators can still benefit from Wan 2.6 API as long as someone sets up a simple interface or tool around it. Once integrated, you can expose only the fields creators care about, such as prompt, reference image, aspect ratio, and video length. From there, they simply type instructions like they would for a caption or script, click generate, and receive finished clips they can post or lightly edit before publishing.

Is Wan 2.6 API a good fit for social media ads and brand campaigns?

Wan 2.6 API is well suited for social media ads and brand campaigns because it balances quality and speed. You can prototype many visual concepts, angles, and hooks without booking shoots or motion designers, then promote the versions that perform best. By reusing prompts and reference assets, you keep characters, product shots, and brand styling consistent even while testing different storylines and offers.

How long does video generation take?

Generation time depends on length and traffic, but Wan 2.6 API is designed for short‑form content, so clips are typically ready in minutes rather than hours. This turnaround time is fast enough to support daily content calendars, reactive social posts, and automated video creation flows. You can request multiple generations in parallel through Evolink AI to keep up with higher‑volume publishing schedules.

How should I write prompts for Wan 2.6?

The best Wan 2.6 prompts read like mini shooting briefs instead of single word tags. Mention the setting, subject, camera style, and what happens in order, and include the emotion or mood you want viewers to feel. For example, you can describe the opening hook, the main action shot, and the closing frame with call‑to‑action, which helps the Wan 2.6 AI video generator produce clearer, more watchable stories for your audience.

How can I integrate Wan 2.6 API into my own product?

You can integrate Wan 2.6 API into your SaaS, automation, or content tools through Evolink AI, which handles routing, scaling, and monitoring for you. This lets you offer in‑app video generation features where users type prompts, upload images, and receive ready‑to‑use clips without leaving your product. It is a straightforward way to add modern AI video capabilities while keeping your own engineering team focused on core features.
POST
/v1/videos/generations

Create Video

The WAN2.6 (wan2.6-image-to-video) model supports first-frame image-to-video generation.

Asynchronous processing mode: use the returned task ID to query the task status and retrieve the result.

Generated video links are valid for 24 hours; save them promptly.
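
Because links expire after 24 hours, it is worth persisting each result as soon as the task completes. A minimal sketch with an injectable opener so the download step can be swapped out or tested (the URL and filename here are placeholders):

```python
import urllib.request
from typing import BinaryIO, Callable

def save_video(url: str, path: str,
               opener: Callable[[str], BinaryIO] = urllib.request.urlopen) -> int:
    """Download a generated video URL to disk before the 24-hour link expires.

    Returns the number of bytes written. `opener` defaults to a real HTTP
    fetch but can be injected for testing.
    """
    with opener(url) as resp, open(path, "wb") as out:
        data = resp.read()
        out.write(data)
        return len(data)
```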

Request Parameters

model (string, Required; default: wan2.6-image-to-video)

Model name.

Example: wan2.6-image-to-video
prompt (string, Required)

Prompt describing the video you want to generate.

Notes
  • Limited to 1500 characters
Example: A cat playing piano
image_urls (array, Required)

Reference image URL list for first-frame image-to-video generation.

Notes
  • Single request supports 1 image
  • Image size: no more than 10MB
  • Supported formats: .jpeg, .jpg, .png (transparent channel not supported), .bmp, .webp
  • Image resolution: width and height range is [360, 2000] pixels
  • Image URL must be directly accessible by the server
Example: https://example.com/image1.png
duration (integer, Optional; default: 5)

Specifies the duration of the generated video (in seconds).

Values
  • 5 - 5 seconds duration
  • 10 - 10 seconds duration
  • 15 - 15 seconds duration
Notes
  • Each request is pre-charged based on the duration value; the actual charge is based on the generated video duration
Example: 5
quality (string, Optional; default: 720p)

Video quality. 1080p costs 1.67x more than 720p.

Values
  • 720p - Standard definition, standard price (default)
  • 1080p - High definition, 1.67x price
Example: 720p
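
The quality pricing can be sketched as a small helper. Only the 1080p = 1.67x ratio comes from this page; the per-second base rate, and the assumption that the pre-charge scales linearly with duration, are illustrative placeholders:

```python
# Illustrative cost helper. Only the 1080p = 1.67x 720p ratio is documented;
# base_rate_per_second is a made-up placeholder. Actual billing is pre-charged
# by the requested duration and settled on the generated video duration.
QUALITY_MULTIPLIER = {"720p": 1.0, "1080p": 1.67}

def estimate_relative_cost(duration: int, quality: str = "720p",
                           base_rate_per_second: float = 1.0) -> float:
    if duration not in (5, 10, 15):
        raise ValueError("duration must be 5, 10, or 15 seconds")
    return duration * base_rate_per_second * QUALITY_MULTIPLIER[quality]
```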
prompt_extend (boolean, Optional; default: true)

Whether to enable intelligent prompt rewriting. When enabled, a large model will optimize the prompt, which significantly improves results for simple or insufficiently descriptive prompts.

Values
  • true - Enable intelligent prompt rewriting (default)
  • false - Disable intelligent prompt rewriting
Example: true
model_params (object, Optional)

Model parameter configuration.

model_params.shot_type (string, Optional; default: single)

Specifies the shot type for the generated video.

Values
  • single - Outputs single-shot video (default)
  • multi - Outputs multi-shot video
Notes
  • Only effective when prompt_extend is true
  • shot_type takes priority over shot instructions in the prompt
Example: single
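
The constraints documented above can be pre-checked client-side before submitting. A minimal sketch (field names follow the request example; the helper itself is illustrative, and the server remains authoritative):

```python
def validate_request(payload: dict) -> list[str]:
    """Pre-flight check for the documented wan2.6-image-to-video constraints.

    Returns a list of problems; an empty list means the payload looks valid.
    """
    problems = []
    if len(payload.get("prompt", "")) > 1500:
        problems.append("prompt is limited to 1500 characters")
    if len(payload.get("image_urls", [])) != 1:
        problems.append("a single request supports exactly 1 reference image")
    if payload.get("duration", 5) not in (5, 10, 15):
        problems.append("duration must be 5, 10, or 15 seconds")
    if payload.get("quality", "720p") not in ("720p", "1080p"):
        problems.append("quality must be 720p or 1080p")
    shot_type = payload.get("model_params", {}).get("shot_type", "single")
    if shot_type not in ("single", "multi"):
        problems.append("shot_type must be single or multi")
    if shot_type == "multi" and payload.get("prompt_extend", True) is False:
        problems.append("shot_type only takes effect when prompt_extend is true")
    return problems
```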
audio_url (string, Optional)

Audio file URL. The model will use this audio to generate the video.

Notes
  • Supported format: mp3
  • Duration: 3-30 seconds
  • File size: up to 15MB
  • If audio exceeds video duration, only the first portion is used
  • If audio is shorter than video, the remaining portion will be silent
Example: https://example.com/audio.mp3
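
The audio trimming and padding rules above can be expressed as a tiny helper (the function name is illustrative):

```python
def audio_coverage(audio_seconds: float, video_seconds: int) -> tuple[float, float]:
    """Return (seconds of audio used, seconds of trailing silence), per the
    documented behavior: excess audio is cut, and a shortfall is left silent."""
    used = min(audio_seconds, video_seconds)
    silence = max(0.0, video_seconds - audio_seconds)
    return used, silence
```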
callback_url (string, Optional)

HTTPS callback URL for task completion.

Notes
  • Triggered when task is completed, failed, or cancelled
  • Sent after billing confirmation
  • Only HTTPS protocol is supported
  • Callbacks to internal IP addresses are prohibited
  • URL length must not exceed 2048 characters
  • Timeout: 10 seconds, Max 3 retries
Example: https://your-domain.com/webhooks/video-task-completed
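
A pre-flight check for the callback_url rules above might look like this sketch. It only checks literal IP hosts and does not resolve domain names, and the server-side validation remains authoritative:

```python
import ipaddress
from urllib.parse import urlparse

def validate_callback_url(url: str) -> list[str]:
    """Check a callback_url against the documented rules: HTTPS only,
    at most 2048 characters, and no internal IP addresses."""
    problems = []
    if len(url) > 2048:
        problems.append("URL length must not exceed 2048 characters")
    parsed = urlparse(url)
    if parsed.scheme != "https":
        problems.append("only HTTPS is supported")
    try:
        ip = ipaddress.ip_address(parsed.hostname or "")
    except ValueError:
        ip = None  # hostname is a domain name, not a literal IP
    if ip is not None and (ip.is_private or ip.is_loopback or ip.is_link_local):
        problems.append("callbacks to internal IP addresses are prohibited")
    return problems
```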

Request Example

{
  "model": "wan2.6-image-to-video",
  "prompt": "A cat playing piano",
  "image_urls": [
    "https://example.com/image1.png"
  ],
  "duration": 5,
  "quality": "720p",
  "prompt_extend": true,
  "model_params": {
    "shot_type": "single"
  }
}

Response Example

{
  "created": 1757169743,
  "id": "task-unified-1757169743-7cvnl5zw",
  "model": "wan2.6-image-to-video",
  "object": "video.generation.task",
  "progress": 0,
  "status": "pending",
  "task_info": {
    "can_cancel": true,
    "estimated_time": 120
  },
  "type": "video",
  "usage": {
    "billing_rule": "per_call",
    "credits_reserved": 5,
    "user_group": "default"
  }
}
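
This page does not document the status-polling endpoint itself, so the sketch below takes the fetch step as an injected function; only the pending status from the response example and the terminal states named in the callback notes (completed, failed, cancelled) are assumed:

```python
import time
from typing import Callable

def wait_for_task(task_id: str, fetch_status: Callable[[str], dict],
                  poll_seconds: float = 5.0, timeout_seconds: float = 600.0) -> dict:
    """Poll a video-generation task until it leaves the queue.

    `fetch_status` should GET the task by id (the polling endpoint is not
    documented on this page) and return the task object, e.g.
    {"status": "pending", ...}.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        task = fetch_status(task_id)
        if task.get("status") not in ("pending", "running"):
            return task  # completed, failed, or cancelled
        time.sleep(poll_seconds)
    raise TimeoutError(f"task {task_id} did not finish within {timeout_seconds}s")
```

Remember that the returned video link is only valid for 24 hours, so download the result as soon as the task completes.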