Gemini 3.1 Pro
Google's most capable AI model — with major advances in reasoning, coding, and agentic intelligence
2× the reasoning of Gemini 3 Pro • #1 on 12 of 18 benchmarks • Native SVG & 3D code rendering
What's New in Gemini 3.1 Pro
Every dimension reimagined — reasoning, coding, multimodal understanding, and speed
2× Abstract Reasoning
Scores 77.1% on ARC-AGI-2 — more than double Gemini 3 Pro. Solves novel visual-logic puzzles requiring multi-step abstraction that previous models could not.
1M Token Context, 65K Output
Process entire codebases, 8.4 hours of audio, 900-page PDFs, or 1 hour of video in a single prompt. Output expanded to 65,536 tokens — no more truncation.
Elite Software Engineering
80.6% SWE-Bench Verified pass rate. LiveCodeBench Pro Elo of 2887 — outperforming GPT-5.2. Handles real-world software issues autonomously.
Native SVG & 3D Code Rendering
Generates, animates, and visually renders SVG graphics and 3D code directly from natural language descriptions — a capability not commonly found in other models.
Long-Horizon Agentic Workflows
Scores 33.5% on APEX-Agents, 69.2% on MCP Atlas for tool coordination, and 85.9% on BrowseComp for autonomous web research. Strong performance across agentic benchmarks.
Three-Tier Thinking System
Choose between Low, Medium, or High compute modes. Balance output speed vs. reasoning depth for every task — from quick answers to complex problem solving.
Gemini 3.1 Pro Benchmark Performance
#1 on 12 of 18 tracked benchmarks across reasoning, coding, multimodal, and agentic tasks
Benchmark categories: Abstract Reasoning, Software Engineering, Multimodal Understanding, Agentic Capabilities
Gemini 3.1 Pro vs Competition
How Gemini 3.1 Pro compares to Claude Opus 4.6, GPT-5.2, and GPT-5.4 across key benchmarks and pricing
Reasoning & Knowledge
| Benchmark | Gemini 3.1 Pro | Claude Opus 4.6 | GPT-5.2 | GPT-5.4 |
|---|---|---|---|---|
| ARC-AGI-2 | 77.1% | 68.8% | 52.9% | 73.3% |
| GPQA Diamond | 94.3% | 91.3% | 92.4% | 92.8% |
| OSWorld (Computer Use) | — | 72.7% | 47.3% | 75.0% |
| GDPval (Knowledge Work) | — | 78.0% | 70.9% | 83.0% |
Coding & Agentic Tasks
| Benchmark | Gemini 3.1 Pro | Claude Opus 4.6 | GPT-5.2 | GPT-5.4 |
|---|---|---|---|---|
| SWE-Bench Verified | 80.6% | 80.8% | 80.0% | — |
| SWE-Bench Pro | 54.2% | — | 55.6% | 57.7% |
| LiveCodeBench Pro Elo | 2887 | — | 2393 | — |
| Terminal-Bench 2.0 | 68.5% | 65.4% | 54.0% | 75.1% |
| MCP Atlas | 69.2% | 59.5% | 60.6% | 67.2% |
Pricing
| Pricing | Gemini 3.1 Pro | Claude Opus 4.6 | GPT-5.2 | GPT-5.4 |
|---|---|---|---|---|
| Input Price / 1M tokens | $2.00 | $15.00 | $1.75 | $2.50 |
| Output Price / 1M tokens | $12.00 | $75.00 | $14.00 | $15.00 |
Gemini 3.1 Pro FAQ
Frequently asked questions about Gemini 3.1 Pro
- What is Gemini 3.1 Pro?
- Gemini 3.1 Pro is Google DeepMind's most advanced AI model, released on February 19, 2026. It delivers a 2x+ reasoning boost over Gemini 3 Pro, ranks #1 on 12 of 18 tracked benchmarks, and supports 1M token context with 65K token output. It is built on a Transformer-based Mixture-of-Experts architecture.
- When was Gemini 3.1 Pro released?
- Gemini 3.1 Pro was released on February 19, 2026. It is the first '.1' increment between major Gemini versions — Google previously used '.5' for mid-cycle updates. The naming change reflects the significant capability jump in reasoning and agentic performance.
- How much does Gemini 3.1 Pro cost?
- Gemini 3.1 Pro costs $2 per 1M input tokens and $12 per 1M output tokens — the same price as Gemini 3 Pro. For prompts over 200K tokens, pricing increases to $4/$18. Context caching can reduce costs by up to 75%. This makes it 7.5x cheaper than Claude Opus 4.6 on input.
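The published rates above can be turned into a simple cost estimator. This is a minimal sketch, assuming the >200K-token tier reprices the whole prompt and that context caching discounts cached input tokens by the full 75% (the "up to 75%" best case); neither billing detail is confirmed here.

```python
def gemini_31_pro_cost(input_tokens: int, output_tokens: int,
                       cached_input_tokens: int = 0) -> float:
    """Estimate a request's cost in USD from the published rates.

    Standard tier: $2 / 1M input, $12 / 1M output.
    Prompts over 200K tokens: $4 / 1M input, $18 / 1M output.
    Cached input is assumed billed at a 75% discount (best case).
    """
    long_prompt = input_tokens > 200_000
    in_rate = 4.00 if long_prompt else 2.00
    out_rate = 18.00 if long_prompt else 12.00
    cache_rate = in_rate * 0.25          # 75% off the input rate
    fresh_tokens = input_tokens - cached_input_tokens
    return (fresh_tokens * in_rate
            + cached_input_tokens * cache_rate
            + output_tokens * out_rate) / 1_000_000
```

For example, a 100K-token prompt with a 10K-token response costs 100,000 × $2/1M + 10,000 × $12/1M = $0.32 under these assumptions.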
- How does Gemini 3.1 Pro compare to GPT-5.4?
- Gemini 3.1 Pro leads on ARC-AGI-2 (77.1% vs 73.3%), GPQA Diamond (94.3% vs 92.8%), and MCP Atlas tool coordination (69.2%). GPT-5.4 leads on Terminal-Bench 2.0 (75.1% vs 68.5%), SWE-Bench Pro (57.7% vs 54.2%), OSWorld computer use (75.0%), and knowledge work (83.0% GDPval). Gemini 3.1 Pro is cheaper at $2/$12 vs $2.50/$15 per 1M tokens.
- How does Gemini 3.1 Pro compare to Claude Opus 4.6?
- Gemini 3.1 Pro leads on ARC-AGI-2 (77.1% vs 68.8%), GPQA Diamond (94.3% vs 91.3%), LiveCodeBench Pro (2887 Elo), and MCP Atlas (69.2% vs 59.5%). Claude Opus 4.6 narrowly leads SWE-Bench Verified (80.8% vs 80.6%) and expert office tasks (GDPval-AA). Gemini 3.1 Pro is 7.5x cheaper on input ($2 vs $15 per 1M tokens).
- What is the context window of Gemini 3.1 Pro?
- Gemini 3.1 Pro supports a 1,048,576 token (1M) input context window and up to 65,536 tokens of output. It can process entire codebases, 8.4 hours of audio, 900-page PDFs, or 1 hour of video in a single prompt.
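A quick pre-flight check against those limits can be sketched as below. The 4-characters-per-token heuristic is a rough assumption for English text, not the model's actual tokenizer; use a real token count where precision matters.

```python
MAX_INPUT_TOKENS = 1_048_576   # 1M-token input context window
MAX_OUTPUT_TOKENS = 65_536     # maximum output length

def fits_context(text: str, chars_per_token: float = 4.0) -> bool:
    """Rough check that a prompt fits the 1M-token input window.

    Assumes ~4 characters per token, which is only a heuristic.
    """
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= MAX_INPUT_TOKENS
```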
- Can Gemini 3.1 Pro generate SVG and 3D code?
- Yes. Gemini 3.1 Pro can generate, animate, and visually render SVG graphics and 3D code directly from natural language descriptions. This native SVG and 3D rendering capability is not commonly found in other frontier models.
- What are the thinking levels in Gemini 3.1 Pro?
- Gemini 3.1 Pro offers three thinking levels: Low (fastest, for simple queries and classification), Medium (balanced, for code review and data analysis), and High (maximum reasoning depth, for complex coding and research). The Medium level is new in 3.1 Pro and helps optimize cost for tasks that don't need full reasoning.
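The three-tier choice above would surface as a request parameter. This is a minimal sketch of building a `generateContent`-style JSON body; the `thinkingConfig`/`thinkingLevel` field names and the exact `"low"`/`"medium"`/`"high"` values are assumptions drawn from the description, not a confirmed API schema.

```python
import json

def build_request(prompt: str, thinking_level: str = "medium") -> str:
    """Build a generateContent-style JSON body with a thinking level.

    The field names under generationConfig are assumed, not confirmed.
    """
    levels = {"low", "medium", "high"}
    if thinking_level not in levels:
        raise ValueError(f"thinking_level must be one of {sorted(levels)}")
    body = {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingLevel": thinking_level},
        },
    }
    return json.dumps(body)
```

In practice you would pick "low" for classification-style calls and reserve "high" for multi-step coding or research tasks, since deeper reasoning costs more output tokens.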
About Gemini 3.1 Pro
Gemini 3.1 Pro is Google DeepMind's most advanced AI model, released on February 19, 2026. Built on a Transformer-based Mixture-of-Experts architecture atop Gemini 3 Pro, it delivers breakthrough improvements across every metric at the same price point: $2 per 1M input tokens and $12 per 1M output tokens.
Important Notice: Gemini3.us is an independent enthusiast community and developer platform. We are not affiliated with, endorsed by, or officially connected to Google LLC. We provide paid access to Google's official Gemini API services to support our infrastructure and operations.
Experience Gemini 3.1 Pro Today
Try Gemini 3.1 Pro through our platform