70B-parameter successor to LLaMA, pretrained on 2 trillion tokens and fine-tuned for helpful dialogue. Released openly under the Llama 2 Community License, which permits commercial use except for services with more than 700 million monthly active users. Performance rivals closed models such as ChatGPT on many benchmarks.
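
Because the weights are openly downloadable, the chat-tuned variant can be loaded with standard tooling. Below is a minimal sketch using Hugging Face `transformers`, assuming gated access to the `meta-llama/Llama-2-70b-chat-hf` repository has been granted and `accelerate` is installed for multi-GPU sharding:

```python
# Minimal sketch: load and query the 70B chat variant via transformers.
# Assumes Hub access to the gated repo and enough GPU memory
# (~140 GB at half precision, i.e. 70e9 params x 2 bytes).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-70b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to halve memory use
    device_map="auto",          # shard layers across available GPUs
)

# Llama 2 Chat expects its instruction format: [INST] ... [/INST]
prompt = "[INST] Explain what a context window is in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same code works for the smaller 7B and 13B checkpoints by swapping the model ID, which is the practical way to experiment before committing to the 70B model's hardware footprint.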