Gemini 2.5 Pro (API)

Google’s top reasoning model with 1M context, strong tools/RAG, and multi-turn planning. Get instant access through EvoLink with flexible pricing tiers.


Gemini 2.5 Pro — million‑token context, strong reasoning

Handle books, codebases, and long conversations with 1M tokens. Improved math, tool use, and multilingual understanding at competitive routed prices.


What can Gemini 2.5 Pro do?

Long‑context RAG

Ingest 1M tokens of docs/code, ground answers, and cite passages.
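A minimal sketch of the grounding pattern, assuming nothing about EvoLink's API: source passages are packed into one long prompt with IDs so the model can cite them. The passage-marker format and document names here are illustrative, not an official convention.

```python
# Hypothetical sketch: pack source documents into one long prompt with
# passage IDs so the model can ground answers and cite them.
# The [PASSAGE id=...] marker format is illustrative, not an official API.

def build_grounded_prompt(docs: dict[str, str], question: str) -> str:
    """Concatenate docs with citation markers, then append the question."""
    parts = []
    for name, text in docs.items():
        parts.append(f"[PASSAGE id={name}]\n{text}\n[/PASSAGE]")
    context = "\n\n".join(parts)
    return (
        f"{context}\n\n"
        f"Question: {question}\n"
        "Answer using only the passages above and cite passage ids."
    )

prompt = build_grounded_prompt(
    {"handbook-ch1": "Refunds are issued within 14 days.",
     "handbook-ch2": "Support is available on weekdays."},
    "What is the refund window?",
)
print(prompt)
```

With a 1M-token window, many corpora fit this way without a retrieval step at all; retrieval is only needed to keep prompts lean (and cheaper) when the corpus is larger still.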


Tool + function calling

Robust JSON/function/tool calls for agents, data pipelines, and workflow automation.
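As a sketch, a tool is typically declared as a JSON-schema function description like the one below. The exact envelope EvoLink expects is an assumption here (this follows the widely used OpenAI-compatible "tools" shape); the tool name and parameters are invented for illustration.

```python
import json

# Hypothetical tool declaration in the common JSON-schema "tools" style.
# Whether EvoLink expects exactly this envelope is an assumption --
# check the provider's API reference before relying on it.
get_invoice_tool = {
    "type": "function",
    "function": {
        "name": "get_invoice",
        "description": "Fetch an invoice by id for a customer.",
        "parameters": {
            "type": "object",
            "properties": {
                "invoice_id": {"type": "string"},
                "include_line_items": {"type": "boolean"},
            },
            "required": ["invoice_id"],
        },
    },
}

print(json.dumps(get_invoice_tool, indent=2))
```

The model then returns the function name plus a JSON arguments object matching this schema, which your agent code validates and executes.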


Code & math reasoning

Handles math proofs and code refactors more reliably, with strong chain-of-thought reasoning behind built-in safety filters.


Why teams pick Gemini 2.5 Pro

Best balance of context, reasoning, and price among Google models, with instant EvoLink access and caching for heavy prompts.

1M context headroom

Avoid chunking—fit books, transcripts, and repos in one window.

Tooling strength

Reliable function calling for agents and data workflows.

Cost controls

Routing + prompt caching keeps large jobs affordable.

How to use Gemini 2.5 Pro

Prepare context, call the unified API, and iterate.

1

Step 1 — Load context

Attach docs/code (up to ~1M tokens) or use retrieval to keep prompts lean.

2

Step 2 — Define tools/format

Provide function schemas or JSON instructions; include safety and style constraints.

3

Step 3 — Generate & cache

Call the API, review the outputs, and enable prompt caching for repeated runs to cut costs.
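The three steps above can be sketched as one request payload. The endpoint path, model id string, and `cache_hint` field below are assumptions for illustration only; consult the EvoLink API reference for the real names before sending anything.

```python
import json

# Sketch of the three steps as a single request payload.
# API_URL, the model id, and "cache_hint" are placeholders/assumptions,
# not confirmed EvoLink field names.
API_URL = "https://api.example.com/v1/chat/completions"  # placeholder

def build_request(context_docs: str, user_question: str) -> dict:
    return {
        "model": "gemini-2.5-pro",
        "messages": [
            # Step 1: load long context (up to ~1M tokens) as reference material.
            {"role": "system", "content": f"Reference material:\n{context_docs}"},
            # Step 2: state the task plus format and style constraints.
            {"role": "user",
             "content": f"{user_question}\n"
                        "Respond in JSON with keys 'answer' and 'citations'."},
        ],
        # Step 3: mark the static context as cacheable for repeated runs
        # (field name assumed for illustration).
        "cache_hint": "static-context-v1",
    }

payload = build_request("...long docs or code here...", "Summarize chapter 3.")
print(json.dumps(payload, indent=2))
# A real call would POST this payload to the API with your key attached.
```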

Key capabilities

Optimized for long, reliable reasoning

Context

1M‑token context

Fits books/repos without aggressive chunking.

Tools

Tool/function calling

Deterministic JSON, strong argument formatting.

Cost

Prompt caching

Cut repeat costs on static context blocks.
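A small sketch of why caching works, under the assumption that the provider reuses byte-identical prompt prefixes: keep the large static block unchanged across calls and vary only the short suffix. The hash here is local bookkeeping to show the prefix is stable, not an official API.

```python
import hashlib

# Illustrative only: caching pays off when the large static block is
# byte-identical on every call, so the provider (or a router like
# EvoLink) can reuse it. The hash is local bookkeeping, not an API call.
STATIC_CONTEXT = "...large unchanging corpus..."  # reused verbatim each call

def cache_key(text: str) -> str:
    return hashlib.sha256(text.encode()).hexdigest()[:16]

def make_prompt(question: str) -> tuple[str, str]:
    """Return (cache key for the static prefix, full prompt)."""
    return cache_key(STATIC_CONTEXT), f"{STATIC_CONTEXT}\n\nQ: {question}"

k1, p1 = make_prompt("First question")
k2, p2 = make_prompt("Second question")
print(k1 == k2)  # True: identical static prefix -> same cacheable block
```

Editing even one character inside the static block changes the prefix and forfeits the cached discount, so keep dynamic content strictly after it.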

Reasoning

Math & code reasoning

Improved CoT for analytics, proofs, refactors.

Language

Multilingual

Robust across English, Chinese, and many other languages.

Trust

Safety & formatting

Model-side filters plus schema guidance.

Gemini 2.5 Pro vs other text models

Choose based on context size and price

| Model | Price | Strength |
| --- | --- | --- |
| Gemini 2.5 Pro | $0.00125 in / $0.01 out per 1K (list) | 1M context, strong tools, caching. |
| Gemini 2.5 Flash | Cheaper than Pro | Speed and cost for shorter tasks; smaller context. |
| Claude Opus 4.1 | Higher per 1K tokens | Very strong reasoning; smaller context than 1M. |

Frequently Asked Questions

Everything you need to know about the product and billing.

What does Gemini 2.5 Pro cost through EvoLink?
List pricing is ~$0.00125 per 1K input tokens and ~$0.01 per 1K output tokens; EvoLink routing may reduce cost and supports caching discounts.

How large is the context window?
Up to ~1,000,000 tokens, ideal for long RAG or whole-repo ingestion.

Does it support function calling?
Yes, with reliable JSON outputs; provide schemas for best results.

Can I reduce costs on repeated prompts?
Yes. Enable caching for static context blocks to lower the cost of repeat calls.

Is it better than earlier Gemini versions?
Yes. Gemini 2.5 Pro improves reasoning and code refactoring over earlier versions.