Gemma 3n is a mobile-first AI model built on an architecture optimized for on-device performance. It features Per-Layer Embeddings (PLE) to reduce RAM usage, allowing larger models to run on mobile devices with a memory footprint comparable to that of much smaller models. It supports multimodal understanding across text, images, audio, and video.
Type: multimodal
Parameters: 4B
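A minimal sketch of running the model for image-plus-text input, assuming it is published on Hugging Face under an ID such as "google/gemma-3n-E4B-it" and that the installed transformers version supports it; the model ID, image URL, and pipeline availability are assumptions, not stated in this entry.

```python
from transformers import pipeline

# Build an image-text-to-text pipeline; device_map="auto" places weights
# on whatever accelerator is available (GPU, otherwise CPU).
pipe = pipeline(
    "image-text-to-text",
    model="google/gemma-3n-E4B-it",  # assumed model ID
    device_map="auto",
)

# Chat-style message mixing an image and a text prompt.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/photo.jpg"},  # placeholder image
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

# Generate a short response; the reply is the last message in generated_text.
out = pipe(text=messages, max_new_tokens=64)
print(out[0]["generated_text"][-1]["content"])
```

Audio and video inputs follow the same chat-message pattern in libraries that expose them, but the exact content types depend on the runtime used.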