Gemini 3 Pro Preview (Deprecated)

Gemini 3 Pro Preview (gemini-3-pro-preview) is an earlier Gemini 3 route. This page is kept as a reference for teams comparing it with EvoLink's newer Gemini 3.1 Pro Preview route.
Price: 

$1.865 - $3.729 (~126.8 - 253.6 credits) per 1M input tokens; $11.182 - $16.774 (~760.4 - 1140.6 credits) per 1M output tokens

$0.187 - $0.374 (~12.7 - 25.4 credits) per 1M cache read tokens

Google Search grounding charged separately per query.

Highest stability with guaranteed 99.9% uptime. Recommended for production environments.

Use the same API endpoint for all versions. Only the model parameter differs.

Gemini 3 Pro Preview — Compare Before You Migrate

If you are still evaluating Gemini 3 Pro Preview, compare it with EvoLink's newer Gemini 3.1 Pro Preview route before standardizing production traffic. In many apps, migration only requires changing the model string and validating outputs.

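To make the comparison concrete, here is a minimal sketch in the OpenAI-compatible format this page describes. The `EVOLINK_API_KEY` environment-variable name is an assumption for illustration; the base URL follows the `api.evolink.ai/v1/chat/completions` endpoint listed in the FAQ. Only the `model` value changes when you evaluate the newer route.

```python
# Build the request body once; only the "model" value differs between routes.
def build_request(model: str, prompt: str) -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("gemini-3-pro-preview", "Summarize this release note.")

if __name__ == "__main__":
    # Uncomment to send the request with the OpenAI SDK (pip install openai).
    # EVOLINK_API_KEY is a hypothetical env-var name for your EvoLink key.
    # import os
    # from openai import OpenAI
    # client = OpenAI(base_url="https://api.evolink.ai/v1",
    #                 api_key=os.environ["EVOLINK_API_KEY"])
    # resp = client.chat.completions.create(**payload)
    # print(resp.choices[0].message.content)
    pass
```

Swapping in `"gemini-3.1-pro-preview"` as the `model` argument is the entire code-side migration.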
What Gemini 3 Pro Preview Can Do

First-Gen Gemini 3 Reasoning

Gemini 3 Pro Preview introduced the Gemini 3 generation's thinking capability. It handles multi-step reasoning, code generation, and analysis with text-only output from multimodal inputs.

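A sketch of how you might track the thinking capability's cost per response. The response dict below is illustrative, not a captured payload; the field layout (`reasoning_tokens` under `completion_tokens_details`) follows what this page documents for the route.

```python
# Illustrative response shape in the OpenAI-compatible format.
sample_response = {
    "choices": [{"message": {"content": "Step 1... Answer: 42"}}],
    "usage": {
        "prompt_tokens": 120,
        "completion_tokens": 380,
        "completion_tokens_details": {"reasoning_tokens": 256},
    },
}

def reasoning_share(resp: dict) -> float:
    """Fraction of completion tokens spent on hidden chain-of-thought."""
    usage = resp["usage"]
    reasoning = usage.get("completion_tokens_details", {}).get("reasoning_tokens", 0)
    return reasoning / usage["completion_tokens"]

answer = sample_response["choices"][0]["message"]["content"]
share = reasoning_share(sample_response)  # 256 of 380 completion tokens
```

Tracking this ratio per request helps you spot prompts where reasoning dominates output cost.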
Tool Use & Grounding

Function calling, structured outputs, code execution, and Google Search grounding are all supported. URL context lets the model reference external pages directly.

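A minimal function-calling sketch in the OpenAI-compatible format. `get_weather` is a hypothetical local function used only for illustration; the `tools` schema and the `tool_calls` shape are the standard OpenAI-style layout, which this page says the route supports.

```python
import json

# Declare one tool the model may call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stubbed local implementation

DISPATCH = {"get_weather": get_weather}

def run_tool_call(tool_call: dict) -> str:
    """Execute one entry from choices[0].message.tool_calls."""
    fn = tool_call["function"]
    args = json.loads(fn["arguments"])
    return DISPATCH[fn["name"]](**args)

# A tool call shaped the way the model would return it:
result = run_tool_call(
    {"function": {"name": "get_weather", "arguments": '{"city": "Oslo"}'}}
)
```

You would pass `tools` in the request body, then feed each tool result back as a `tool` role message.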
1M Context & Batch Processing

Process up to 1,048,576 input tokens per request. Use caching and Batch API to reduce cost on repetitive or high-volume workloads.

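A rough token-budgeting sketch for staying inside the 1,048,576-token input window. The chars/4 ratio is a common English-text heuristic, not an exact tokenizer count; treat the budget as illustrative headroom, not a guarantee.

```python
# Budget against the route's 1,048,576-token input window.
CONTEXT_LIMIT = 1_048_576

def estimate_tokens(text: str) -> int:
    # Heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def chunk_for_context(text: str, budget: int = CONTEXT_LIMIT // 2) -> list[str]:
    """Split text into pieces whose estimated token count fits the budget."""
    max_chars = budget * 4
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

doc = "x" * 10_000_000  # ~2.5M estimated tokens: too large for one request
chunks = chunk_for_context(doc)
```

Chunks like these are natural candidates for the Batch API, and a shared prefix across chunks is where caching pays off.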
How to Migrate from Gemini 3 Pro to 3.1 Pro

If your workflow still references Gemini 3 Pro, here is what to check before moving traffic to a newer Gemini route.

Change One String

Replace gemini-3-pro-preview with gemini-3.1-pro-preview in your API request body. The endpoint, auth, and request format are identical — no other code changes needed.

Check Pricing Tiers

Compare the current listed pricing for each route in the EvoLink dashboard and official Gemini pricing docs before migrating production traffic.
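To make the tier comparison concrete, here is a cost estimator using the standard and long-context rates listed on this page ($2.00/$4.00 per 1M input, $12.00/$18.00 per 1M output, with the higher tier above 200K prompt tokens). The rates are hard-coded for illustration; confirm current pricing before budgeting.

```python
# Pricing tiers as listed on this page; verify before relying on them.
LONG_PROMPT_THRESHOLD = 200_000

def estimate_cost(prompt_tokens: int, output_tokens: int) -> float:
    """USD cost for one request under the listed two-tier pricing."""
    long_prompt = prompt_tokens > LONG_PROMPT_THRESHOLD
    input_rate = 4.00 if long_prompt else 2.00    # $ per 1M input tokens
    output_rate = 18.00 if long_prompt else 12.00  # $ per 1M output tokens
    return (prompt_tokens * input_rate + output_tokens * output_rate) / 1_000_000

standard = estimate_cost(100_000, 10_000)  # stays in the standard tier
long_ctx = estimate_cost(300_000, 10_000)  # crosses the 200K threshold
```

Running this over your real traffic distribution shows how much of your spend sits in the long-context tier.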

Better Performance

Run your own coding, instruction-following, and tool-use prompts against Gemini 3.1 Pro before switching your default route.
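The A/B pass above can be sketched as a small harness that builds identical requests for both routes, so responses can be compared side by side. The prompts are placeholders for your own eval set; sending and scoring the requests is left to your pipeline.

```python
# Build paired requests so each prompt is evaluated on both routes.
ROUTES = ["gemini-3-pro-preview", "gemini-3.1-pro-preview"]
EVAL_PROMPTS = [
    "Write a Python function that reverses a linked list.",
    "Extract the dates from: 'Meeting moved from 2024-01-05 to 2024-02-10.'",
]

def build_eval_requests(prompts: list[str]) -> list[dict]:
    return [
        {"model": model, "messages": [{"role": "user", "content": p}]}
        for p in prompts
        for model in ROUTES
    ]

requests = build_eval_requests(EVAL_PROMPTS)  # 2 prompts x 2 routes
```

Keeping everything except the model string identical isolates the route as the only variable in the comparison.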

How to Call Gemini 3 Pro Preview

Use the OpenAI SDK format and the gemini-3-pro-preview model string.

Step 1 - Set the Model

Use model: "gemini-3-pro-preview" in the request body.

Step 2 - Send Messages

Provide a messages array with role/content pairs (minimum length 1).

Step 3 - Inspect Output + Usage

Read choices[0].message.content and track usage.prompt_tokens, usage.completion_tokens, and usage.completion_tokens_details.reasoning_tokens.
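The three steps above can be sketched in one request. The base URL and the `EVOLINK_API_KEY` env-var name are assumptions based on the endpoint this page lists; the network call is left commented so the sketch stands on its own.

```python
# Step 1: set the model; Step 2: provide the messages array.
payload = {
    "model": "gemini-3-pro-preview",
    "messages": [
        {"role": "user", "content": "Explain CRDTs in two sentences."}
    ],
}

def summarize_usage(usage: dict) -> str:
    """Step 3 helper: format the token counters worth tracking."""
    details = usage.get("completion_tokens_details", {})
    return (f"prompt={usage['prompt_tokens']} "
            f"completion={usage['completion_tokens']} "
            f"reasoning={details.get('reasoning_tokens', 0)}")

if __name__ == "__main__":
    # Requires `pip install openai` and a valid key:
    # import os
    # from openai import OpenAI
    # client = OpenAI(base_url="https://api.evolink.ai/v1",
    #                 api_key=os.environ["EVOLINK_API_KEY"])
    # resp = client.chat.completions.create(**payload)
    # print(resp.choices[0].message.content)          # Step 3: output
    # print(summarize_usage(resp.usage.model_dump()))  # Step 3: usage
    pass
```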

Gemini 3 Pro Preview Specs

Technical details for the original Gemini 3 Pro route

Model

Model ID

gemini-3-pro-preview — the earlier Gemini 3 Pro route, distinct from EvoLink's newer gemini-3.1-pro-preview route.

Pricing

Input Pricing

$2.00 per 1M tokens (standard), $4.00 per 1M for prompts over 200K tokens.

Pricing

Output Pricing

$12.00 per 1M tokens (standard), $18.00 per 1M for prompts over 200K tokens.

Limits

Context Window

1,048,576 input tokens (1M context). Max output: 65,536 tokens.

Data

Knowledge Cutoff

January 2025. For more recent training data, use Gemini 3.1 Pro.

Lifecycle

Status

Earlier Gemini 3 route. For new workloads, compare it with Gemini 3.1 Pro Preview and validate current availability in EvoLink.

Gemini 3 Pro Preview FAQ

Everything you need to know about the product and billing.

How does Gemini 3.1 Pro compare with Gemini 3 Pro?

Gemini 3.1 Pro is EvoLink's newer Gemini Pro route. Test coding quality, agentic tool use, instruction following, pricing, and response compatibility against your own prompts before switching production traffic.

Should I use Gemini 3 Pro Preview for new projects?

For new projects, start by testing the newer Gemini 3.1 Pro route and keep Gemini 3 Pro only when you need compatibility with an existing setup. Confirm current route availability in EvoLink before launching production traffic.

How do I migrate from Gemini 3 Pro to Gemini 3.1 Pro?

Change the model string from gemini-3-pro-preview to gemini-3.1-pro-preview in your API request. Keep the endpoint and auth the same, then test representative prompts to validate output quality, latency, and cost for your use case.

How do I call Gemini 3 Pro Preview?

Use "gemini-3-pro-preview" as the model value. Send requests to api.evolink.ai/v1/chat/completions with Bearer token auth.

How much does Gemini 3 Pro Preview cost?

Standard pricing is listed as $2.00 input / $12.00 output per 1M tokens, with higher tiers for prompts over 200K tokens. Check the EvoLink dashboard and official Gemini pricing docs for current route-specific pricing.

What is the context window?

1,048,576 input tokens (approximately 1M tokens) with up to 65,536 max output tokens. Supports long documents, multi-turn conversations, and large codebases.

Does Gemini 3 Pro Preview support thinking mode?

Yes. Thinking mode is available for multi-step reasoning. The response includes reasoning_tokens in completion_tokens_details so you can track how much compute went into chain-of-thought.

What input and output types are supported?

Text, code, images, video, audio, and PDF. Output is text only; no image or audio generation. Google Search grounding and URL context (up to 20 URLs, 34MB each) are supported for factual verification.

Is Gemini 3 Pro Preview still available?

Check current availability in EvoLink before relying on Gemini 3 Pro for production traffic. For new builds, test Gemini 3.1 Pro first and keep a fallback route configured.

Where Gemini 3 Pro fits now

Gemini 3 Pro is an earlier Gemini 3 route. Before routing new production workloads, compare it with Gemini 3.1 Pro, Flash, and CustomTools, and upgrade to whichever route best fits your use case.