
# How to Use Kling AI: Tutorial and API Documentation Guide (2026)

## TL;DR
- If your goal is how to use Kling AI for API work, start with Kling 3.0 for standard text-to-video and image-to-video.
- The current public EvoLink flow is `POST /v1/videos/generations` followed by `GET /v1/tasks/{task_id}`.
- Kling video generation is asynchronous. You do not wait for the final MP4 in the initial request.
- Current public docs in this repo say generated video links are valid for 24 hours, so save results promptly.
- If you are specifically looking for Kling AI API documentation, the useful docs are the model references linked later in this article, not generic marketing pages.
## What this tutorial covers

This tutorial helps you:

- understand how to use Kling AI through EvoLink
- pick the right Kling model family
- send a first request without guessing the endpoint shape
- find the right Kling AI API documentation for the exact route you need

Model capabilities at a glance:

- Kling 3.0 supports text-to-video and image-to-video
- Kling O3 supports text-to-video, image-to-video, reference-to-video, and video editing
- Kling O1 is the higher-control route for brand-consistent generation and editing workflows
- Kling 3.0 Motion Control is the specialized route for reference-motion transfer
## Step 1: Choose the right Kling model before writing code

The easiest way to fail with Kling is to start coding before choosing the right route.

| Route | Best for | Current pricing | Practical note |
|---|---|---|---|
| Kling 3.0 | standard text-to-video and image-to-video | from $0.075/s | best default starting point |
| Kling O3 | reference-heavy workflows and video editing | from $0.075/s | use when you need more than prompt-first generation |
| Kling O1 | brand consistency and unified subject inputs | $0.1111/s | use when consistency matters more than lowest entry price |
| Kling 3.0 Motion Control | reference motion transfer | from $0.1134/s | use for character motion replication |
## Step 2: Understand the async workflow

The public API references for Kling in this repo document an asynchronous pattern:

- Submit a generation request to `POST https://api.evolink.ai/v1/videos/generations`
- Store the returned `task_id`
- Poll `GET https://api.evolink.ai/v1/tasks/{task_id}`
- Save the finished asset before the result URL expires
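The submit-then-poll loop above can be sketched in shell. The `fake_poll` stub and its status strings are illustrative stand-ins for the real `GET /v1/tasks/{task_id}` call, so the loop structure can be shown without network access:

```shell
# Sketch of the submit-then-poll pattern. In real code, TASK_ID comes
# from the POST response and fake_poll would be a curl call to the
# task endpoint.
TASK_ID="task_demo_123"
ATTEMPT=0

# Stub: pretend the task finishes on the third poll.
fake_poll() {
  if [ "$1" -lt 3 ]; then echo "processing"; else echo "completed"; fi
}

STATUS="processing"
while [ "$STATUS" != "completed" ]; do
  ATTEMPT=$((ATTEMPT + 1))
  STATUS=$(fake_poll "$ATTEMPT")
  echo "poll #$ATTEMPT for $TASK_ID: $STATUS"
  # in real code: sleep a few seconds between polls, and bail out on a
  # failed status instead of looping forever
done
echo "done after $ATTEMPT polls"
```

In production you would also cap the number of attempts and back off between polls rather than hammering the endpoint.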

## Step 3: Send your first Kling 3.0 request

For most first-time users, text-to-video is the fastest path.

### Text-to-video example

```sh
curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "kling-v3-text-to-video",
    "prompt": "A golden retriever running through a sunlit meadow, cinematic slow motion",
    "duration": 5,
    "aspect_ratio": "16:9",
    "quality": "720p"
  }'
```

What is documented in the current reference:

- `model` should be `kling-v3-text-to-video`
- the prompt limit is 2500 characters
- Kling 3.0 supports async task creation through `/v1/videos/generations`
- generated links are time-limited and should be saved promptly
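If you script the create step, you need the `task_id` out of the response. Assuming the creation response is JSON with a top-level `task_id` field (the field name used throughout this article; the sample response below is made up for illustration), a plain `sed` extraction looks like this:

```shell
# RESPONSE is a hard-coded sample here; in real code it would be
# RESPONSE=$(curl -s ... https://api.evolink.ai/v1/videos/generations ...)
RESPONSE='{"task_id":"task_abc123","status":"queued"}'

# Pull out the value of the "task_id" key without extra tooling.
TASK_ID=$(echo "$RESPONSE" | sed -n 's/.*"task_id":"\([^"]*\)".*/\1/p')
echo "$TASK_ID"
```

If `jq` is available, `jq -r .task_id` is the more robust choice than `sed` for real JSON.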
### Image-to-video example

If your workflow starts from a fixed first frame, use the image route instead.

```sh
curl --request POST \
  --url https://api.evolink.ai/v1/videos/generations \
  --header 'Authorization: Bearer YOUR_API_KEY' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "kling-v3-image-to-video",
    "prompt": "The character turns, smiles, and walks toward the camera",
    "image_start": "https://example.com/portrait.jpg",
    "duration": 5,
    "quality": "720p"
  }'
```

The current image-to-video reference documents these constraints:

- `image_start` is required
- supported formats include `.jpg`, `.jpeg`, and `.png`
- image size can be up to 10MB
- image dimensions must be at least 300px
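A cheap pre-flight check against the format and size constraints above can save a failed round trip. This sketch validates a local file's extension and byte size only; the 300px dimension check is omitted because it needs an image library:

```shell
# Validate a local source image before uploading it for image-to-video.
# The 10MB cap and .jpg/.jpeg/.png formats come from the constraints
# listed above.
check_image() {
  file="$1"
  case "$file" in
    *.jpg|*.jpeg|*.png) ;;                     # allowed extensions
    *) echo "unsupported format"; return 1 ;;
  esac
  size=$(wc -c < "$file")                      # size in bytes
  if [ "$size" -gt $((10 * 1024 * 1024)) ]; then
    echo "file exceeds 10MB"; return 1
  fi
  echo "ok"
}
```

Note this checks the extension, not the actual encoding; a renamed file would pass locally and still fail server-side.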
## Step 4: Poll the task result

Once you have submitted the job, poll the task endpoint:

```sh
curl --request GET \
  --url https://api.evolink.ai/v1/tasks/{task_id} \
  --header 'Authorization: Bearer YOUR_API_KEY'
```

A good production habit is to persist:

- `task_id`
- request payload
- user or job metadata
- final output location
- timestamps for retries and completion
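As a minimal sketch of that habit, you could append one JSON line per job to an append-only local log. The file name and field names here are illustrative, not part of the API:

```shell
# Append one JSON record per job so task_ids and outcomes survive
# process restarts. A real system would use a database instead.
LOG="jobs.log"

record_job() {
  task_id="$1"; status="$2"; output_url="$3"
  printf '{"task_id":"%s","status":"%s","output":"%s","ts":"%s"}\n' \
    "$task_id" "$status" "$output_url" "$(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$LOG"
}

record_job "task_abc123" "completed" "https://example.com/out.mp4"
```

Writing a record both at submission time and at completion time gives you enough state to retry or resume polling after a crash.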
## Step 5: Save outputs immediately

Current public docs in this repo say generated video links are valid for 24 hours. That means the safest pattern is:

- poll until status is complete
- copy the result to your own storage
- store the durable URL in your database

If you skip that step, a successful job can still become an operational problem later.
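That copy step can be sketched as follows. `download_result` would normally wrap `curl -o` against the expiring result URL; here it is stubbed with a local copy so the flow is runnable without network access, and all paths are illustrative:

```shell
# Copy a finished result into durable storage before the link expires.
DURABLE_DIR="./durable"
mkdir -p "$DURABLE_DIR"

download_result() {
  # real code: curl -fsSL -o "$2" "$1"
  cp "$1" "$2"
}

# Stand-in for the content behind the expiring result URL.
printf 'fake-mp4-bytes' > result_tmp.mp4
download_result result_tmp.mp4 "$DURABLE_DIR/task_abc123.mp4"
echo "saved to $DURABLE_DIR/task_abc123.mp4"
```

In production, "durable storage" is typically an object store (S3-style) rather than local disk, and the durable URL is what you record in your database.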
## Where to find Kling AI API documentation
- Kling 3.0 Text-to-Video API docs
- Kling 3.0 Image-to-Video API docs
- Kling O3 Reference-to-Video API docs
## Common mistakes in first-time Kling integrations

- choosing O3 or O1 before you know whether simple Kling 3.0 already fits the job
- treating video generation like a synchronous endpoint
- failing to store `task_id`
- forgetting that result links expire
- mixing creator-oriented pricing assumptions with API billing
## Which route should you start with?

Use this short decision table:

| If your workflow is... | Start here |
|---|---|
| pure prompt to video | Kling 3.0 |
| image-guided motion | Kling 3.0 |
| video editing or reference-driven control | Kling O3 |
| stronger consistency across assets | Kling O1 |
| reference motion transfer | Kling 3.0 Motion Control |
## FAQ

### How do I use Kling AI for the first time?

Submit a request to `/v1/videos/generations`, store the `task_id`, poll `/v1/tasks/{task_id}`, and save the result quickly. That is the core pattern behind how to use Kling AI on EvoLink.

### Is this a complete Kling AI tutorial?

It covers the core integration flow: model choice, request shape, async polling, and output handling. Route-specific parameters live in the model references linked above.

### Where can I find Kling AI API documentation?

Use the model references linked earlier in this article, not generic marketing pages.

### Which Kling model should developers start with?

Kling 3.0. It is the best default starting point for standard text-to-video and image-to-video.

### Is Kling generation synchronous?

No. The current public docs in this repo document an asynchronous workflow using task creation plus task polling.

### How long are Kling result links valid?

Current public docs in this repo say 24 hours, so production systems should save the output promptly.

## Sources
- Kling 3.0 text-to-video API reference
- Kling 3.0 image-to-video API reference
- Kling O3 text-to-video API reference
- current route pricing and model metadata as of April 8, 2026


