
GPT Image 2 Developer Guide (2026): Official Status, Capability Assessment, and How to Get Started

- As of April 22, 2026, OpenAI now publishes an official public model page for `gpt-image-2`.
- On EvoLink, `gpt-image-2` is already available, and `gpt-image-2-beta` is also available as a secondary testing route.
- For developers, what actually matters is: what OpenAI has officially confirmed, how your provider exposes the model, and how to design your system so migration stays painless later.
So this article will not lead with marketing claims. We will start from the official OpenAI status, then discuss the safest integration strategy on EvoLink.
This guide is for teams building real image workflows: product photo generation, image editing pipelines, creative automation, mockup output, and multi-step AI interactions. We will cover three things clearly:
- What has OpenAI actually confirmed?
- In all the discussion about GPT Image 2, what is still unclear, undocumented, or provider-specific?
- If you need to build image generation workflows now, what is the safest integration and migration strategy?
TL;DR
- As of April 22, 2026, OpenAI publicly documents `gpt-image-2` as an official model.
- OpenAI's official model page describes GPT Image 2 as a state-of-the-art model for image generation and editing.
- OpenAI's public docs now give developers an official model name to anchor on: `gpt-image-2`.
- For single-shot generation or edit jobs, OpenAI recommends the Image API.
- For conversational, multi-step, editable image experiences, OpenAI recommends the Responses API.
- EvoLink currently offers `gpt-image-2` for direct integration, and also keeps `gpt-image-2-beta` available for testing and comparison.
- Want to "prepare for GPT Image 2"? The safest strategy: keep your model-routing layer abstract, and map vendor model names separately from provider-specific route names.
What People Actually Mean When They Search for "GPT Image 2"
Now that OpenAI has publicly documented the model, the real issue is no longer whether the name exists. The real issue is that one keyword still mixes together several very different user needs.
In practice, "GPT Image 2" still covers at least four search intents:
- "Has OpenAI released a new model after GPT Image 1.5?"
- "Has ChatGPT's image system improved again?"
- "Should I switch my API integration to a new model ID?"
- "What architecture should I use now so migration later is easy?"
What OpenAI Has Officially Confirmed
1. gpt-image-2 now has an official public model page
OpenAI's model documentation now lists `gpt-image-2`, which means GPT Image 2 is no longer just a market nickname or a speculative placeholder in API discussions. That matters because it gives developers a clean boundary: what is documented by OpenAI versus what is still provider-specific implementation detail.
2. OpenAI supports two main image API integration paths
The current docs separate image work into two API styles:
- Image API: best for single-shot generation or editing of one image.
- Responses API: best for conversational, multi-step, iteratively editable image experiences.
This choice directly affects system design. Many teams obsess over model names while missing the more fundamental architecture question: are you building a one-shot asset generator or an iterative editing workflow?
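As a rough sketch of how the two styles differ on the wire, the request payloads look like this. The endpoint paths come from OpenAI's docs; the helper names are illustrative only, and the Responses payload mirrors the background-mode example later in this guide:

```typescript
// Image API: one-shot generation or editing of a single image.
// POST https://api.openai.com/v1/images/generations
function imageApiPayload(prompt: string) {
  return { model: "gpt-image-2", prompt };
}

// Responses API: conversational, multi-step image work. Image generation is
// attached as a tool, so later turns can refine earlier output.
// POST https://api.openai.com/v1/responses
function responsesApiPayload(prompt: string) {
  return {
    model: "gpt-4o",
    input: prompt,
    tools: [{ type: "image_generation" as const }],
  };
}
```

A one-shot asset generator only ever needs the first shape; an iterative editor keeps a response ID and sends follow-up turns through the second.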
3. Background mode is documented
OpenAI's docs cover background mode for long-running image jobs: submit the request, get a response ID back, and retrieve the result later instead of holding a connection open while generation runs.
4. Editing and high-fidelity image inputs are already public features
The current docs already cover many of the capabilities people assume require a "next-gen model":
- Image generation and image editing
- Multi-turn editing in Responses API
- High-fidelity preservation of input images
- Mask support in edit workflows
In other words, most of the "next-gen image workflow" story is already available in the current tech stack.
What OpenAI Has Not Fully Clarified
This is the section where teams still need to read carefully. OpenAI has not confirmed:
- That every third-party provider will expose the model under the exact same request name
- That a provider route named `gpt-image-2-beta` is identical in naming semantics to OpenAI's official `gpt-image-2`
- An official migration guide from `gpt-image-1.5` to `gpt-image-2`
- Official latency benchmarks for GPT Image 2
- Performance claims like "40% better text rendering" or "95% success rate"
Any article that flattens these distinctions into "it is all the same everywhere" is taking a credibility risk.
For most teams, the practical approach is: use OpenAI's official docs for vendor-level facts, then treat EvoLink's beta docs as route-specific implementation detail for testing and workflow validation.
EvoLink Access: GPT Image 2 First, Beta Optional
EvoLink currently exposes `gpt-image-2` directly, and also keeps `gpt-image-2-beta` available as an optional testing route. `gpt-image-2` should be the main model name you foreground in product-facing copy. If you want to compare behavior, validate staged changes, or test alternate routing, `gpt-image-2-beta` is there as a secondary option.

What is currently available:
- GPT Image 2 product page - view model capabilities and use cases
- Playground access - test prompts and workflows with zero code
- Full API documentation - guides for current GPT Image 2 routes
- Support for text-to-image, image-to-image, and image editing
- Async task handling - suited for long-running generation jobs
The integration pattern follows the OpenAI-compatible format you are used to:
- Primary request model name: `gpt-image-2`
- Generation endpoint: `/v1/images/generations`
- Async result retrieval via task status flow
- Optional `image_urls` parameter for reference-based editing or image-to-image work
- Optional `callback_url` for HTTPS task-completion callbacks
- Supported aspect ratios: `1:1`, `3:2`, `2:3`, and `auto`
- Returned image links remain valid for 24 hours
- Optional secondary testing route: `gpt-image-2-beta`
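Because returned links expire after 24 hours, persist the bytes as soon as a job completes. Here is a minimal sketch using Node's built-in fetch and fs; the directory layout is an arbitrary choice:

```typescript
import { mkdir, writeFile } from "node:fs/promises";

// Derive a stable local filename from a returned image URL
// (persist the bytes, not the URL, since the link expires in 24 hours).
function localNameFor(url: string): string {
  const base = new URL(url).pathname.split("/").pop() || "image";
  return base.includes(".") ? base : `${base}.png`;
}

// Download the asset and write it to durable storage.
async function persistImage(url: string, dir = "./assets"): Promise<string> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`download failed: ${res.status}`);
  await mkdir(dir, { recursive: true });
  const path = `${dir}/${localNameFor(url)}`;
  await writeFile(path, Buffer.from(await res.arrayBuffer()));
  return path;
}
```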
The default recommendation is to integrate `gpt-image-2` directly. Use `gpt-image-2-beta` only when you specifically want side-by-side testing, staged rollout, or early comparison.

How to Call GPT Image 2 on EvoLink
Here is a minimal call to `gpt-image-2` on the unified image generation endpoint:

```bash
curl --request POST \
  --url https://api.evolink.ai/v1/images/generations \
  --header "Authorization: Bearer $EVOLINK_API_KEY" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "gpt-image-2",
    "prompt": "A premium product photo of a ceramic coffee mug on a marble counter, soft window light, clean ecommerce composition",
    "size": "1:1"
  }'
```

For image-to-image or reference-based editing, add the optional `image_urls` parameter.

The developer flow is straightforward:
- Test your prompt in the GPT Image 2 Playground
- Switch to API calls with `model: "gpt-image-2"`
- Poll the async task result
- Save the returned image URL within 24 hours
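The submit-and-poll steps can be sketched as code. The generation endpoint matches the curl example above, but the task-status route and the response field names (`id`, `status`) are assumptions for illustration; confirm them against EvoLink's API docs:

```typescript
const BASE = "https://api.evolink.ai/v1";

// A task is done once it reaches a terminal status.
function isTerminal(status: string): boolean {
  return status === "succeeded" || status === "failed";
}

// Submit a generation job and return its task id.
async function createImageTask(prompt: string): Promise<string> {
  const res = await fetch(`${BASE}/images/generations`, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.EVOLINK_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "gpt-image-2", prompt, size: "1:1" }),
  });
  const data = await res.json();
  return data.id; // assumed field name; check the actual response schema
}

// Poll until the task settles, then return the final task object.
async function pollTask(taskId: string, intervalMs = 2000) {
  for (;;) {
    const res = await fetch(`${BASE}/tasks/${taskId}`, {
      // assumed status route; check EvoLink's docs for the real path
      headers: { Authorization: `Bearer ${process.env.EVOLINK_API_KEY}` },
    });
    const task = await res.json();
    if (isTerminal(task.status)) return task;
    await new Promise((r) => setTimeout(r, intervalMs));
  }
}
```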
How to Build a Migration-Friendly Architecture
Whether you are using EvoLink's standard GPT Image 2 route or comparing alternative routes, getting these architecture fundamentals right means future model swaps will be painless.
gpt-image-1.5 remains an important comparison baseline
With `gpt-image-2` now publicly documented, `gpt-image-1.5` still matters as a stable reference point for teams comparing capabilities and rollout paths. It already covers many of the core capabilities teams care about:
- Text-to-image generation
- Image editing
- Conversational image workflows through Responses API
- Better text rendering than previous generations
- Higher-fidelity preservation of input images

For teams that need the most conservative baseline today, `gpt-image-1.5` is the safest default choice.

Abstract model routing from day one
This is the real "prepare for GPT Image 2" strategy: do not hardcode model names throughout your codebase. Centralize the routing decision in your service layer.
```typescript
type ImageJobType =
  | "hero_image"
  | "text_heavy_mockup"
  | "product_edit"
  | "creative_iteration";

function selectImageModel(jobType: ImageJobType): string {
  switch (jobType) {
    case "text_heavy_mockup":
      return "gpt-image-1.5"; // conservative choice for legacy doc alignment
    case "hero_image":
    case "product_edit":
    case "creative_iteration":
    default:
      return "gpt-image-2"; // default to the latest model
  }
}
```

When you need to switch models or align with a different provider route, you only change the routing table, not a repo-wide search and replace.
Async architecture is a must
Regardless of which model you use, image generation latency variance is significant. OpenAI's docs explicitly note that complex prompts can take up to 2 minutes, and background mode is the recommended approach.
A production-grade architecture should look like:
- Submit image request
- Return a job ID immediately
- Poll in background
- Store result on completion
- Update UI when the final asset is ready
A minimal polling example with the Responses API:
```typescript
import OpenAI from "openai";

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

export async function submitImageJob(prompt: string) {
  const response = await client.responses.create({
    model: "gpt-4o",
    input: prompt,
    tools: [{ type: "image_generation" }],
    background: true,
  });
  return response.id;
}

export async function waitForImage(responseId: string) {
  let resp = await client.responses.retrieve(responseId);
  while (resp.status === "queued" || resp.status === "in_progress") {
    await new Promise((resolve) => setTimeout(resolve, 2000));
    resp = await client.responses.retrieve(responseId);
  }
  return resp;
}
```

This pattern works regardless of what the model is called in the future.
GPT Image 2 Editing Capabilities
If your workflow is single-shot generation or editing, default to the Image API. If it is conversational and multi-step, consider the Responses API.
OpenAI's current documentation already covers:
- Image edits and multi-turn editing
- High-fidelity input and mask-based edit workflows
So if you want to do background replacement, small-object edits, iterative visual refinement, or brand element preservation (logos, faces, etc.), you can start now - no need to wait.
Pricing Reference: Use Only Verifiable Data
The token-based pricing OpenAI publishes now includes `gpt-image-2`:

| Model | Text input | Cached text input | Image input | Cached image input | Image output |
|---|---|---|---|---|---|
| gpt-image-2 | $5.00 / 1M tokens | $1.25 / 1M tokens | $8.00 / 1M tokens | $2.00 / 1M tokens | $30.00 / 1M tokens |
| gpt-image-1.5 | $5.00 / 1M tokens | $1.25 / 1M tokens | $8.00 / 1M tokens | $2.00 / 1M tokens | $32.00 / 1M tokens |
| gpt-image-1 | $5.00 / 1M tokens | $1.25 / 1M tokens | $10.00 / 1M tokens | $2.50 / 1M tokens | $40.00 / 1M tokens |
Approximate per-image output cost for a 1024x1024 image, by quality:

| Model | Low | Medium | High |
|---|---|---|---|
| gpt-image-1.5 | $0.009 | $0.034 | $0.133 |
| gpt-image-1 | $0.011 | $0.042 | $0.167 |

`gpt-image-1.5` also has token-based pricing:
- Text input: $5.00 / 1M tokens
- Image input: $8.00 / 1M tokens
- Image output: $32.00 / 1M tokens

Output token counts by quality for a 1024x1024 image:
- low: 272
- medium: 1,056
- high: 4,160
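To keep the internal budgeting view tied to verifiable numbers, you can derive per-image output cost directly from the token counts and rates above. Note that the 1024x1024 token counts are documented for `gpt-image-1.5`; reusing them for other models is an estimate, not an official figure:

```typescript
// Output tokens per 1024x1024 image, by quality (documented for gpt-image-1.5).
const OUTPUT_TOKENS_1024: Record<string, number> = {
  low: 272,
  medium: 1056,
  high: 4160,
};

// Output price per 1M tokens, from the pricing table above.
const OUTPUT_RATE_PER_M: Record<string, number> = {
  "gpt-image-2": 30.0,
  "gpt-image-1.5": 32.0,
  "gpt-image-1": 40.0,
};

// Per-image output cost in dollars: tokens * (rate / 1M).
function imageOutputCost(model: string, quality: "low" | "medium" | "high"): number {
  return (OUTPUT_TOKENS_1024[quality] * OUTPUT_RATE_PER_M[model]) / 1_000_000;
}

// e.g. a medium-quality 1024x1024 image on gpt-image-1.5:
// 1,056 tokens * $32 / 1M ≈ $0.034, matching the per-image table above
```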
The practical value of this section is not to pretend one table answers every pricing question. It is to separate three different pricing views:
- the official OpenAI baseline you can publicly verify
- the current EvoLink route pricing you actually integrate against
- the internal budgeting view your team should build for forecasting and routing decisions
If you mix those together, pricing discussions get confusing fast. Treat them as related, but not interchangeable.
Practical Cost Strategy
Pattern 1: Generate once, edit iteratively
- Create the base image with
gpt-image-1.5 - Use edits and multi-turn workflows for refinements
- Avoid full regeneration when only one region needs to change
Pattern 2: Route by job type
- Standard product visuals ->
gpt-image-2 - Product edits ->
gpt-image-2 - Text-heavy mockups (legacy doc alignment) ->
gpt-image-1.5 - Experimental future models -> isolated test bucket
The point is not to predict the next model name. The point is to make future model adoption as cheap as possible.
What This Looks Like in Real Workflows
The article becomes more useful when you translate model discussion into concrete production scenarios.
| Workflow | Better route | Why |
|---|---|---|
| Ecommerce hero image generation | gpt-image-2 | Cleaner primary path for production image generation |
| Background replacement and localized edits | gpt-image-2 | Better fit when you want to wire image editing directly into a live workflow |
| Creative prompt experiments | gpt-image-2-beta | Gives you a separate lane for exploratory testing without changing the main route |
| Agent-driven async image pipeline | gpt-image-2 | Better default for orchestrated jobs, task polling, and callback-based systems |
| Internal A/B evaluation | gpt-image-2 + gpt-image-2-beta | Run the main sample on the primary route and compare against beta when needed |
If you are building a real system rather than testing prompts casually, the first things to get right are:
- async task handling
- routing abstraction
- durable saving of returned image assets
- separation between production and testing lanes
What Teams Should Do Now
At this point, most teams do not need more headlines. They need a clear action sequence.
If you are moving this project forward now, the practical path is:
- Test now - try GPT Image 2 and evaluate whether it fits your use case
- Integrate now - connect it to your development or testing environment
- Switch smoothly later - as OpenAI docs and provider routes continue to settle, adjust routing configuration rather than rewriting application logic
The current GPT Image tech stack already has enough capability to build:
- Image generation pipelines
- Editing workflows
- Iterative refinement loops
- Async job orchestration
- Cost-aware routing
What Is Still Worth Watching
OpenAI has now shipped the official `gpt-image-2` model page. From here, the next signals to watch are:
- Updated image-generation docs listing a new GPT Image family member
- An official pricing table for the new model
- Changelog or release notes
- An official migration guide from current GPT Image models
Until then, keep `gpt-image-2` as the main integration target, and keep `gpt-image-2-beta` only as an optional testing lane.

Production Checklist Before You Go Live
If you are preparing to ship GPT Image 2 in a real product, verify at least these items before launch:
- your model names are centralized in routing config instead of scattered through the codebase
- `gpt-image-2` is the production default rather than accidentally treating beta as the main path
- `gpt-image-2-beta` is behind a controlled switch for testing, not mixed into the default production flow
- your system handles async task status instead of assuming every request returns the final image immediately
- you save returned assets before the 24-hour image link expires
- your team clearly distinguishes OpenAI official model facts from EvoLink route-specific integration details
- you have either polling or callback handling in place for long-running image jobs
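For the callback lane on that last item, a minimal receiver can be sketched with Node's built-in http module. The payload field names (`status`, `image_urls`) are assumptions for illustration; check EvoLink's callback schema before relying on them:

```typescript
import { createServer } from "node:http";

// Decide what to persist from a task-completion callback.
// Field names here are assumed, not documented.
function handleCallback(payload: { status?: string; image_urls?: string[] }) {
  if (payload.status === "succeeded" && payload.image_urls?.length) {
    return { persist: payload.image_urls }; // hand off to durable storage
  }
  return { persist: [] as string[] };
}

const server = createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    const result = handleCallback(JSON.parse(body || "{}"));
    // Ack quickly; do heavy work (downloads, DB writes) off the request path.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ received: result.persist.length }));
  });
});
// server.listen(8080); // enable behind TLS termination; callback_url must be HTTPS
```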
FAQ
Do I still need async architecture now that GPT Image 2 is public?
Yes. OpenAI's docs already note that complex prompts can take up to 2 minutes, and background mode is the recommended approach.
Can I build iterative image editing workflows today?
Yes. OpenAI's current docs cover image edits, multi-turn editing, masks, and high-fidelity image input handling.
Will I need to rewrite my app if model names or provider routes change later?
Not if you abstract model routing now. Future model switches should be a routing-table change, not a full application rewrite.
How should I think about gpt-image-2 vs gpt-image-2-beta on EvoLink?
OpenAI's official model name is `gpt-image-2`. On EvoLink, treat `gpt-image-2` as the main production-facing route and `gpt-image-2-beta` as an optional secondary lane for testing, comparison, or staged validation.
What is the most practical default if I am integrating now?
Use `gpt-image-2` for direct integration. Reach for `gpt-image-2-beta` only when you explicitly need staged testing, side-by-side comparison, or an extra evaluation lane.
Where can I compare the whole GPT Image lineup quickly?
EvoLink's image model comparison page lists the GPT Image family side by side.
Get Started
If you want to build with GPT Image 2 now, EvoLink already offers it directly. The beta route is there if you want extra testing flexibility.
Compare Image Models on EvoLink

Related Articles
- GPT Image Family
- ChatGPT Image 2: official status and where to start
- GPT Image 2 vs GPT Image 1.5
- GPT Image 2 vs Nano Banana 2
- GPT Image 1.5 API production guide
- GPT Image 1.5 complete guide
Sources
- OpenAI Models overview: https://platform.openai.com/docs/models
- OpenAI image generation guide: https://developers.openai.com/api/docs/guides/image-generation
- OpenAI GPT Image 2 model page: https://platform.openai.com/docs/models/gpt-image-2
- OpenAI GPT Image 1.5 model page: https://platform.openai.com/docs/models/gpt-image-1.5
- OpenAI API pricing: https://platform.openai.com/docs/pricing
- OpenAI background mode guide: https://developers.openai.com/api/docs/guides/background


