
Claude 2

Anthropic · Proprietary · Verified

Anthropic's most capable system at its release, representing a continuous evolution from previous Claude models. Trained with unsupervised learning, RLHF, and Constitutional AI (using both supervised and reinforcement learning phases). Offers improved helpfulness, honesty, and harmlessness (HHH), stronger coding abilities, longer output generation of up to 4,000 tokens, and better structured-data formatting.

Released 2023-07-11 · ~130B parameters · Transformer · Proprietary

Specifications

Parameters
~130B
Architecture
Transformer
License
Proprietary
Context Window
100,000 tokens
Max Output
4,000 tokens
Training Data Cutoff
2023-01
Type
text
Modalities
text

Benchmark Scores

Code generation: evaluates code generation capabilities by asking models to complete Python functions based on docstrings.

GSM8K: Grade School Math 8K (GSM8K) consists of 8.5K high-quality grade school math word problems.

MMLU: 78.5 · Massive Multitask Language Understanding (MMLU) tests knowledge across 57 subjects, including mathematics.


Advanced Specifications

Model Family
Claude
API Access
Available
Chat Interface
Available
Multilingual Support
Yes
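Since API access is listed as available, here is a minimal sketch of calling Claude 2 through the Anthropic Python SDK's legacy Text Completions endpoint. The `build_prompt` and `ask_claude` helper names are assumptions for illustration, not part of this listing; the Human/Assistant turn format and `max_tokens_to_sample` parameter are from the legacy API.

```python
import os

# Claude 2's completions API expects prompts in the Human/Assistant turn format.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"


def build_prompt(user_message: str) -> str:
    """Wrap a user message in the turn format the completions API expects."""
    return f"{HUMAN_PROMPT} {user_message}{AI_PROMPT}"


def ask_claude(user_message: str, max_tokens: int = 4000) -> str:
    """Illustrative helper (not an official API): send one prompt to claude-2."""
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    completion = client.completions.create(
        model="claude-2",
        max_tokens_to_sample=max_tokens,  # Claude 2 caps output around 4,000 tokens
        prompt=build_prompt(user_message),
    )
    return completion.completion
```

The 100,000-token context window applies to the prompt plus completion, so long documents can be pasted directly into the `Human:` turn.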

Capabilities & Limitations

Capabilities
coding, math, reasoning, long-form writing, document analysis, Constitutional AI, harmless responses, technical documentation processing, creative writing, literary analysis, JSON/XML/YAML formatting, markdown generation, multilingual translation, few-shot prompting, complex instruction following
Known Limitations
confabulation and hallucination, knowledge cutoff in early 2023, no web search capability, weaker performance on low-resource languages, can be jailbroken
Notable Use Cases
conversational AI, coding assistance, content generation, document analysis, technical documentation review, long-form content creation, educational tutoring, business applications, creative and literary writing, legal document support, medical exam preparation, translation services, structured data conversion
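For the structured data conversion use case, a common pattern is to validate the model's JSON output before consuming it, since responses may wrap JSON in prose or a markdown fence. A minimal sketch; the `extract_json` helper is hypothetical, not part of any SDK:

```python
import json


def extract_json(response_text: str) -> dict:
    """Pull a JSON object out of a model response that may include
    surrounding prose or a ```json ... ``` markdown fence."""
    text = response_text.strip()
    if text.startswith("```"):
        # Drop the opening fence line and the closing fence.
        text = text.split("\n", 1)[1].rsplit("```", 1)[0]
    # Take the outermost braces; json.loads raises ValueError on malformed output.
    start, end = text.find("{"), text.rfind("}")
    return json.loads(text[start : end + 1])
```

Wrapping the parse in a `try`/`except ValueError` and re-prompting on failure is a simple way to make JSON-producing pipelines robust.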
