GPT Image 1 API
Est. Price: $0.0098 - $0.446 (~0.66586 - 30.31603 credits) per image
Highest stability with guaranteed 99.9% uptime. Recommended for production environments.
Use the same API endpoint for all versions. Only the model parameter differs.
GPT Image 1 API Pricing and Editing Access
Run GPT Image 1 on EvoLink for text-to-image, image-to-image, and image editing workflows. Review pricing, quality controls, and async task delivery when you need a flexible route for existing image pipelines.

Billing Model
- Token-based: charged by the token counts reported in the upstream usage object.
- Image Output tokens scale with size and quality (low/medium/high).
- Image Input tokens apply only in image-to-image / edit mode.
- Cached Input is charged only when the upstream usage object reports cached tokens.
- Output Text + Thinking (model reasoning + refined prompt) is returned alongside the image at no extra charge by OpenAI; gpt-image-1.5 bills this dimension separately.
Pricing
| Model | Mode | Token Category | Price |
|---|---|---|---|
| GPT Image 1 | Token-based billing | Image Output | $0.036 / 1K tokens (2.448 Credits) |
| GPT Image 1 | Token-based billing | Image Input | $0.0090 / 1K tokens (0.612 Credits) |
| GPT Image 1 | Token-based billing | Image Cached Input | $0.0022 / 1K tokens (0.153 Credits) |
| GPT Image 1 | Token-based billing | Text Input | $0.0045 / 1K tokens (0.306 Credits) |
| GPT Image 1 | Token-based billing | Text Cached Input | $0.0011 / 1K tokens (0.0765 Credits) |
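To make the token math concrete, here is a minimal sketch of how the cost of one request could be computed from the rates above. The usage numbers are hypothetical; real token counts come from the upstream usage object.

```python
# Per-1K-token rates for GPT Image 1, taken from the pricing table above.
RATES_PER_1K = {
    "image_output": 0.036,
    "image_input": 0.0090,
    "image_cached_input": 0.0022,
    "text_input": 0.0045,
    "text_cached_input": 0.0011,
}

def request_cost(usage: dict) -> float:
    """Sum the cost across all token categories reported in a usage object."""
    return sum(
        tokens / 1000 * RATES_PER_1K[category]
        for category, tokens in usage.items()
    )

# Hypothetical usage for one text-to-image request:
# 1,000 image output tokens and 2,000 text input tokens.
usage = {"image_output": 1000, "text_input": 2000}
print(round(request_cost(usage), 4))  # 0.045
```

Quality and size drive the `image_output` count, which is why high-quality, large-size requests cost noticeably more than low-quality ones.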
If the primary provider is down, we automatically route to the next cheapest available option, ensuring 99.9% uptime at the best possible price.
What You Can Build with GPT Image 1
Editing-Heavy Image Workflows
GPT Image 1 is a useful route when your workflow depends on image-to-image changes, reference-guided edits, and quality tuning instead of only chasing the newest model label. It fits teams that want flexible controls and a known editing surface.

Reference-Based Asset Variations
Send one or more reference images when you need variant generation, style shifts, or compositional updates from an existing source. GPT Image 1 is especially useful when reference handling matters more than purely net-new output.

Legacy-Compatible Production Integrations
GPT Image 1 also works as a compatibility route for teams already familiar with its request model, pricing multipliers, and editing behavior. That makes it a practical option when migration speed matters less than predictability.

Why Use GPT Image 1 Through EvoLink
GPT Image 1 on EvoLink is best framed as a flexible editing and compatibility route: clear pricing logic, multiple request controls, reference image support, and async task handling for teams that still need this model.
A Flexible Route for Editing and Variations
GPT Image 1 is still useful when your main requirement is image editing, image-to-image behavior, and request-level control over quality, size, and reference handling.
Transparent Pricing Multipliers
This route gives teams explicit control over quality, size, and count, making it easier to reason about cost before sending larger or more frequent image jobs.
Async Delivery for Longer Jobs
Requests run through an async task flow, which is a better fit for generation, editing, and batch-style workloads than assuming immediate sync responses.
How to Integrate GPT Image 1
Three steps to send requests, manage task polling, and work with editing-oriented parameters.
Create Your API Key
Sign in to EvoLink and create one API key for playground testing and production requests. You can use the same access path across supported image routes.
Build the Request Payload
Use `gpt-image-1` as the model name, then pass your prompt, quality, size, and optional reference images. This route is especially useful when you need more explicit control over editing-style requests.
Poll the Task and Save the Result
GPT Image 1 runs through an async task flow. Retrieve the task result when ready, then save the completed image promptly because output URLs are temporary.
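The three steps above can be sketched in Python. The generation endpoint and Bearer authentication come from the API reference below; the base URL, task-status path, and response field names (`id`, `status`, `image_urls`) are assumptions, so check the actual response shape before relying on them.

```python
import json
import time
import urllib.request

BASE_URL = "https://api.evolink.ai"  # assumption: substitute your actual base URL
API_KEY = "YOUR_API_KEY"

def build_payload(prompt, size="1024x1024", quality="high", n=1, image_urls=None):
    """Assemble a request body using the documented parameter names."""
    payload = {"model": "gpt-image-1", "prompt": prompt,
               "size": size, "quality": quality, "n": n}
    if image_urls:  # only present for image-to-image / editing requests
        payload["image_urls"] = image_urls
    return payload

def _call(method, path, body=None):
    """Send an authenticated JSON request and decode the JSON response."""
    req = urllib.request.Request(
        f"{BASE_URL}{path}",
        data=json.dumps(body).encode() if body is not None else None,
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method=method,
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)

def generate(prompt, **kwargs):
    # Step 1: submit the async generation task.
    task = _call("POST", "/v1/images/generations", build_payload(prompt, **kwargs))
    task_id = task["id"]  # assumption: task ID field name

    # Step 2: poll until the task settles (hypothetical status path).
    while True:
        status = _call("GET", f"/v1/tasks/{task_id}")
        if status.get("status") in ("completed", "failed"):
            # Step 3: return the temporary URLs; download them promptly,
            # since generated links expire after 24 hours.
            return status.get("image_urls", [])
        time.sleep(2)

# Usage (performs real network calls):
# for url in generate("A beautiful colorful sunset over the ocean"):
#     ...  # save each image before the 24-hour link expiry
```

A webhook via `callback_url` can replace the polling loop for longer-running jobs; the polling version is shown because it needs no public endpoint.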
GPT Image 1 Capabilities
The request controls and editing behavior that still matter for teams using GPT Image 1.
Quality Multipliers
Choose low, medium, or high quality settings depending on the speed, cost, and output fidelity your workflow requires.
Multiple Size Options
Use supported aspect ratios and pixel dimensions to fit different delivery needs without rewriting the route.
Batch Generation Support
Generate multiple outputs in one request when you need iterative variants or bulk image jobs from the same prompt setup.
Reference Image Inputs
Attach up to 16 reference images for image-to-image and editing tasks when your workflow depends on existing source assets.
Async Task Processing
Use task polling to manage longer-running jobs and retrieve results cleanly inside application workflows.
Commercial Usage Rights
Outputs generated through GPT Image 1 on EvoLink are positioned for commercial use cases, making the route suitable for product and business workflows.
Explore the GPT Image family
GPT Image 1 is the older baseline in the GPT Image family on EvoLink. Use the family page when you need to compare this legacy route against GPT Image 1.5 or GPT Image 2 before choosing the best current fit.
API Reference
Authentication
All APIs require Bearer Token authentication.
Authorization: Bearer YOUR_API_KEY

/v1/images/generations (Generate Image)
GPT Image 1 (gpt-image-1) model supports text-to-image, image-to-image, and image editing modes.
Asynchronous processing mode, use the returned task ID to query status.
Generated image links are valid for 24 hours, please save them promptly.
Request Parameters
**model** string · Required · Default: `gpt-image-1`

Image generation model name.

| Value | Description |
|---|---|
| gpt-image-1 | GPT Image 1 model |

Example: `gpt-image-1`

**prompt** string · Required

Prompt describing the image to be generated or how to edit the input image.

Notes
- Limited to 2000 tokens

Example: `A beautiful colorful sunset over the ocean`

**size** string · Optional · Default: `1024x1024`

Size of the generated image; supports two formats.

| Value | Description |
|---|---|
| 1:1 | Square aspect ratio |
| 2:3 | Portrait aspect ratio |
| 3:2 | Landscape aspect ratio |
| 1024x1024 | Square (default) |
| 1024x1536 | Portrait |
| 1536x1024 | Landscape |

Notes
- Aspect ratio format: 1:1, 2:3, 3:2
- Pixel format: 1024x1024, 1024x1536, 1536x1024
- Larger sizes produce more output tokens and higher cost

Example: `1024x1024`

**quality** string · Optional · Default: `high`

Quality of the generated image. Affects pricing.

| Value | Description |
|---|---|
| low | Low quality, faster; fewest output tokens |
| medium | Medium quality; balanced |
| high | High quality, slower (default); most output tokens |

Example: `high`

**image_urls** array · Optional

Reference image URL list for image-to-image and image editing features.

Notes
- Supports 1-16 images per request
- Max size: 50MB per image
- Formats: .jpeg, .jpg, .png, .webp
- URLs must be directly accessible by the server

Example: `https://example.com/image1.png`

**n** integer · Optional · Default: `1`

Number of images to generate.

| Value | Description |
|---|---|
| 1-10 | Range from 1 to 10 images |

Notes
- Each image is billed independently by token usage

Example: `1`

**callback_url** string · Optional

HTTPS callback address called after task completion.

Notes
- Triggered on completion, failure, or cancellation
- Sent after billing confirmation
- HTTPS only, no internal IPs
- Max length: 2048 chars
- Timeout: 10s, max 3 retries
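As a sketch, the documented `callback_url` constraints (HTTPS only, no internal IPs, max 2048 characters) can be checked client-side before submitting a task. The helper name is hypothetical, and server-side validation remains authoritative.

```python
import ipaddress
from urllib.parse import urlparse

def validate_callback_url(url: str) -> bool:
    """Check a callback URL against the documented constraints:
    HTTPS only, no internal IP addresses, max length 2048 characters."""
    if len(url) > 2048:
        return False
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        # Reject literal internal/private addresses such as 10.x or 127.x.
        if ipaddress.ip_address(parsed.hostname).is_private:
            return False
    except ValueError:
        pass  # hostname is a DNS name, not an IP literal
    return True

print(validate_callback_url("https://your-domain.com/webhooks/image-task-completed"))  # True
print(validate_callback_url("http://example.com/hook"))   # False: not HTTPS
print(validate_callback_url("https://10.0.0.5/hook"))     # False: internal IP
```

Note that a DNS name can still resolve to an internal IP; the server enforces that case, so this check only catches obvious mistakes early.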
Example: `https://your-domain.com/webhooks/image-task-completed`

Pricing Information
Billing Model: Token-based — charged by the token counts reported in the upstream usage object.
Token Categories:
- Image Output — scales with size and quality (low/medium/high)
- Image Input — applies only in image-to-image / edit mode
- Image Cached Input — charged when cached tokens are reported
- Text Input — prompt token consumption
- Text Cached Input — charged when cached prompt tokens are reported
See the Pricing tab for per-token rates by user group.