Gemini 3.0 Pro Preview API
$1.600 - $3.200 (~115.2 - 230.4 credits) per 1M input tokens; $9.600 - $14.400 (~691.2 - 1036.8 credits) per 1M output tokens
$0.160 - $0.319 (~11.5 - 23 credits) per 1M cache-read tokens
Google Search grounding charged separately per query.
Highest stability with guaranteed 99.9% uptime. Recommended for production environments.
Use the same API endpoint for all versions. Only the model parameter differs.
Gemini 3.0 Pro Preview API — OpenAI SDK Compatible
Call gemini-3-pro-preview in the OpenAI SDK format via EvoLink. Responses include a usage object with reasoning_tokens, and you can leverage the 1M-token context window reported in testing.

Capabilities of Gemini 3.0 Pro Preview API
Multimodal Inputs + Grounding
Gemini 3 Pro Preview API accepts text, code, image, video, audio, and PDF inputs with text-only output, and supports Search grounding plus URL context for verifiable answers.

Thinking + Agent Tools
Thinking, function calling, structured outputs, code execution, and file search are supported for agentic reasoning and automation.

1M Context + Ops Controls
Input token limit is 1,048,576 with up to 65,536 output tokens. Caching and Batch API support long-context pipelines.
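The published limits above can be checked client-side before a request is sent. A minimal sketch, assuming a rough 4-characters-per-token heuristic (the real tokenizer will differ, so treat this as a coarse guardrail only):

```python
# Published limits for gemini-3-pro-preview (from the specs above).
MAX_INPUT_TOKENS = 1_048_576
MAX_OUTPUT_TOKENS = 65_536

def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters per token. Illustrative only;
    use a real tokenizer for accurate counts."""
    return max(1, len(text) // 4)

def fits_context(prompt: str, max_output: int = MAX_OUTPUT_TOKENS) -> bool:
    """Check that a prompt stays under the input limit and that the
    requested output budget is within the model's output cap."""
    return (
        estimate_tokens(prompt) <= MAX_INPUT_TOKENS
        and max_output <= MAX_OUTPUT_TOKENS
    )
```

A pre-flight check like this is useful in long-context pipelines, where a single oversized batch item can otherwise fail an entire request.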

Why Use Gemini 3.0 Pro on EvoLink
EvoLink exposes Gemini 3.0 Pro Preview through a familiar OpenAI SDK-style endpoint, with explicit auth and detailed usage stats for production-grade tracking.
OpenAI SDK Format
Call /v1/chat/completions using model + messages. The messages array is required (minimum length 1).
Granular Usage Metrics
Usage includes prompt_tokens, completion_tokens, total_tokens, plus completion_tokens_details.reasoning_tokens.
Model Quality Signal
Vercel reports stronger instruction following, improved response consistency, and solid results in its Next.js evaluations.
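The usage breakdown follows the OpenAI chat-completions response shape. A short sketch of reading it, with made-up token counts for illustration:

```python
# Illustrative response fragment in the OpenAI chat-completions shape;
# the token counts are invented for demonstration.
response = {
    "choices": [{"message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {
        "prompt_tokens": 120,
        "completion_tokens": 480,
        "total_tokens": 600,
        "completion_tokens_details": {"reasoning_tokens": 300},
    },
}

usage = response["usage"]
reasoning = usage["completion_tokens_details"]["reasoning_tokens"]
# Tokens in the visible answer = completion tokens minus hidden reasoning.
visible = usage["completion_tokens"] - reasoning
print(visible)  # 180 with the sample numbers above
```

Splitting completion tokens into reasoning and visible output this way makes per-request cost attribution straightforward, since both are billed as output tokens.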
How to Call Gemini 3.0 Pro Preview
Use the OpenAI SDK format and the gemini-3-pro-preview model string.
Step 1 - Set the Model
Use model: "gemini-3-pro-preview" in the request body.
Step 2 - Send Messages
Provide a messages array with role/content pairs (minimum length 1).
Step 3 - Inspect Output + Usage
Read choices[0].message.content and track usage.prompt_tokens, usage.completion_tokens, and usage.completion_tokens_details.reasoning_tokens.
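The three steps above can be sketched end to end with the standard library. The base URL and API key below are placeholders, not real EvoLink values; with the OpenAI SDK, the same body maps directly to client.chat.completions.create(model=..., messages=...):

```python
import json
import urllib.request

# Placeholders: substitute your actual EvoLink base URL and API key.
BASE_URL = "https://api.evolink.example"
API_KEY = "YOUR_EVOLINK_API_KEY"

def build_request(prompt: str) -> urllib.request.Request:
    """Assemble a chat-completions request (Steps 1 and 2)."""
    body = {
        "model": "gemini-3-pro-preview",            # Step 1: set the model
        "messages": [                               # Step 2: messages, min length 1
            {"role": "user", "content": prompt},
        ],
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def read_reply(response_json: dict) -> tuple:
    """Step 3: pull the content and the reasoning-token count."""
    content = response_json["choices"][0]["message"]["content"]
    reasoning = response_json["usage"]["completion_tokens_details"]["reasoning_tokens"]
    return content, reasoning

# To send: urllib.request.urlopen(build_request("Hello")) and json.loads the body.
```

Keeping request assembly separate from transport makes the payload easy to inspect or log before it is sent, which helps when debugging auth or schema errors.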
Technical Specs
Key details for the Gemini 3.0 Pro Preview API
OpenAI SDK Format
Use the standard /v1/chat/completions interface.
Model String
Set model to gemini-3-pro-preview for this endpoint.
Usage Breakdown
Response includes prompt/completion totals plus detailed token categories.
Reasoning Tokens
completion_tokens_details includes reasoning_tokens for deeper analysis.
Multimodal Reasoning Focus
Vercel notes stronger multimodal reasoning and tool use in testing.
1M Context Window
Vercel reports a 1M context window supporting long agent flows.
Gemini 3.0 Pro API FAQs
Everything you need to know about the product and billing.