
GLaM (Generalist Language Model)

Google · Proprietary · Pending Verification

A 1.2-trillion-parameter **Mixture-of-Experts** Transformer. Each MoE layer contains 64 experts, of which only 2 are activated per token, so roughly 8% of the parameters (about 97B) are used for any given input, reducing inference cost relative to total model size.
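The sparse-activation idea can be sketched as a top-2 gate over a pool of experts. This is an illustrative toy, not the model's actual routing code: the function names, shapes, and expert count below are assumptions for demonstration.

```python
import numpy as np

def top2_moe(x, gate_w, experts):
    """Toy top-2 Mixture-of-Experts routing for a single token (sketch).

    x:       (d,) token representation
    gate_w:  (d, n_experts) gating weight matrix (hypothetical name)
    experts: list of n_experts callables, each mapping (d,) -> (d,)
    """
    logits = x @ gate_w                     # one gating score per expert
    top2 = np.argsort(logits)[-2:]          # indices of the 2 best-scoring experts
    # Softmax over only the selected logits so the two weights sum to 1
    w = np.exp(logits[top2] - logits[top2].max())
    w = w / w.sum()
    # Only 2 of n_experts run, so per-token compute scales with 2, not n
    return sum(wi * experts[i](x) for wi, i in zip(w, top2))

# Usage: 4 toy "experts" that just scale the input by different factors
rng = np.random.default_rng(0)
d, n = 8, 4
experts = [lambda v, k=i + 1: v * k for i in range(n)]
y = top2_moe(rng.standard_normal(d), rng.standard_normal((d, n)), experts)
```

The key property is that compute per token depends on the number of experts *selected* (here 2), not the number that exist, which is how a 1.2T-parameter model can run far cheaper than a dense model of the same size.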

Released
2021-12-01

Specifications

Parameters
1.2T
Architecture
Sparse Mixture-of-Experts Transformer
License
Proprietary
Context Window
2,048 tokens
Type
text
Modalities
text

Benchmark Scores

Advanced Specifications

Model Family
GLaM
API Access
Not Available
Chat Interface
Not Available

Capabilities & Limitations

Related Models