Gemini 2.5 Flash
Improved across key benchmarks for reasoning, multimodality, code, and long context while becoming even more efficient. Best for fast performance on complex tasks.
Specifications
- Architecture
- Mixture of Experts
- License
- Proprietary
- Context Window
- 1,000,000 tokens
- Type
- Multimodal
- Modalities
- text, image, video, audio
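
The 1,000,000-token context window and mixed input modalities are exposed through the Gemini API. Below is a minimal sketch, assuming the `google-genai` Python SDK (`pip install google-genai`) and the model identifier string `gemini-2.5-flash`; both are assumptions not confirmed by this page, and `photo.jpg` is a hypothetical example file.

```python
# Minimal multimodal request sketch; assumes the google-genai Python SDK
# and the model ID "gemini-2.5-flash".
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

# Read a local image to send alongside a text prompt (hypothetical file).
with open("photo.jpg", "rb") as f:
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Describe what is happening in this image.",
    ],
)
print(response.text)
```
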
Benchmark Scores
- A challenging benchmark of novel problems designed to test the limits of AI capabilities....
- GPQA (Graduate-Level Google-Proof Q&A) evaluates advanced reasoning on graduate-lev...
- Software Engineering Benchmark (SWE-bench) evaluates models on real-world software engineering tasks...
- MMMU, a Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI with 11.5...
- Massive Multitask Language Understanding (MMLU) tests knowledge across 57 subjects including mathema...
- The FACTS Grounding Leaderboard evaluates LLMs' ability to generate factually accurate long-form res...
- Evaluates models on their ability to understand and generate content with specific vibes or styles....
- MRCR (Multi-Round Coreference Resolution) is part of the Michelangelo benchmark suite that evaluates...
Advanced Specifications
- Model Family
- Gemini
- API Access
- Available
- Chat Interface
- Available
- Multilingual Support
- Yes
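
Since both API access and a chat interface are listed as available, a multi-turn session can also be driven programmatically. A short sketch follows, again assuming the `google-genai` Python SDK and the `gemini-2.5-flash` model ID (assumptions, not stated on this page).

```python
# Multi-turn chat sketch; the SDK's chat object keeps conversation history
# so later turns can refer back to earlier ones.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key
chat = client.chats.create(model="gemini-2.5-flash")

first = chat.send_message("Summarise the trade-offs of Mixture of Experts architectures.")
print(first.text)

# Follow-up turn relies on the history stored in the chat session.
follow_up = chat.send_message("Condense that into a single sentence.")
print(follow_up.text)
```
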
Capabilities & Limitations
- Capabilities
- fast performance, reasoning, code, multimodal understanding, long context
- Tool Use Support
- Yes
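
Tool use is listed as supported; a common pattern is function calling, where the model decides when to invoke a declared function. The sketch below is hedged: it assumes the `google-genai` SDK's automatic function calling, and `get_weather` is a made-up stub rather than anything referenced on this page.

```python
# Function-calling sketch; get_weather is a hypothetical stub, and the SDK
# is assumed to accept Python callables as tools.
from google import genai
from google.genai import types


def get_weather(city: str) -> dict:
    """Return a stubbed current-weather report for the given city."""
    return {"city": city, "condition": "sunny", "temp_c": 22}


client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What is the weather like in Zurich right now?",
    config=types.GenerateContentConfig(tools=[get_weather]),
)
# With automatic function calling, the SDK executes get_weather and the
# final text incorporates its result.
print(response.text)
```
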