Gemini 2.5 Flash
Improved across key benchmarks for reasoning, multimodality, code, and long context, while becoming even more efficient. Best suited for fast performance on complex tasks.
Specifications
- Architecture: Mixture of Experts
- License: Proprietary
- Context Window: 1,000,000 tokens
- Type: Multimodal
- Modalities: text, image, video, audio
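The 1,000,000-token context window is the main sizing constraint when packing documents into a single request. A minimal pre-flight sketch using a rough ~4-characters-per-token heuristic (an assumption; real token counts vary by content and should be confirmed with the API's token-counting endpoint):

```python
# Rough pre-flight check against the 1,000,000-token context window.
# CHARS_PER_TOKEN is a heuristic, not the model's actual tokenizer.
CONTEXT_WINDOW = 1_000_000
CHARS_PER_TOKEN = 4

def estimated_tokens(text: str) -> int:
    """Estimate the token count of `text` via the chars-per-token heuristic."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(documents: list[str], reserve_for_output: int = 8_192) -> bool:
    """Return True if the combined documents likely fit, leaving room for the reply."""
    total = sum(estimated_tokens(d) for d in documents)
    return total + reserve_for_output <= CONTEXT_WINDOW

print(fits_in_context(["x" * 400_000]))    # ~100k tokens: fits
print(fits_in_context(["x" * 8_000_000]))  # ~2M tokens: does not fit
```

For exact numbers, count tokens server-side before sending; the heuristic is only a cheap first filter.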
Benchmark Scores
- A challenging benchmark of novel problems designed to test the limits of AI capabilities.
- GPQA (Graduate-Level Google-Proof Q&A) evaluates advanced reasoning on graduate-level science questions in biology, physics, and chemistry.
- American Invitational Mathematics Examination (AIME) 2025: competition-level mathematics problems.
- Software Engineering Benchmark (SWE-bench) evaluates models on real-world software engineering tasks drawn from GitHub issues.
- Massive Multi-discipline Multimodal Understanding (MMMU) evaluates multimodal understanding across 30 subjects spanning six disciplines.
- Massive Multitask Language Understanding (MMLU) tests knowledge across 57 subjects, including mathematics, history, law, and medicine.
- Tests models on their ability to write code in multiple programming languages.
- Evaluates models on their ability to solve coding problems in real time.
- A benchmark of simple but precise questions to test factual knowledge and reasoning.
- Evaluates models on their ability to ground responses in factual information.
- Evaluates models on their ability to understand and generate content with specific vibes or styles.
- MRCR (Multi-Round Coreference Resolution) is part of the Michelangelo benchmark suite, which evaluates long-context recall across multi-turn conversations.
Advanced Specifications
- Model Family: Gemini
- API Access: Available
- Chat Interface: Available
- Multilingual Support: Yes
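Since API access is available, a minimal sketch of a single-turn text request body for a `generateContent`-style REST endpoint may help. The endpoint path and model identifier below follow Google's published API conventions but should be verified against the current documentation:

```python
import json

# Endpoint path and model id follow Google's published conventions;
# verify both against the current API documentation before use.
MODEL = "gemini-2.5-flash"
ENDPOINT = (
    "https://generativelanguage.googleapis.com/v1beta/models/"
    f"{MODEL}:generateContent"
)

def build_request(prompt: str, temperature: float = 0.2) -> dict:
    """Build the JSON body for a single-turn text request."""
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {"temperature": temperature},
    }

body = build_request("Summarize this changelog in two sentences.")
print(ENDPOINT)
print(json.dumps(body, indent=2))
```

The body can be sent with any HTTP client, with the API key supplied per the provider's authentication docs.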
Capabilities & Limitations
- Capabilities: fast performance, reasoning, code, multimodal understanding, long context
- Tool Use Support: Yes
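Tool use works by declaring callable functions in the request so the model can return a structured call instead of free text. A sketch of one declaration in the name/description/JSON-schema-parameters shape used by function-calling APIs (the `get_weather` function and its parameters are invented for illustration):

```python
# Hypothetical tool declaration: `get_weather` is an illustrative function,
# not a real API. The shape (name / description / JSON-schema parameters)
# follows the common function-calling format.
weather_tool = {
    "function_declarations": [
        {
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                    "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                },
                "required": ["city"],
            },
        }
    ]
}

# The declaration is attached to a request under a top-level "tools" list;
# the model may then respond with a structured call naming the function.
request_body = {
    "contents": [{"role": "user", "parts": [{"text": "What's the weather in Oslo?"}]}],
    "tools": [weather_tool],
}
print(request_body["tools"][0]["function_declarations"][0]["name"])
```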