Google Gemma 2
The new model has achieved MMLU and MBPP scores that are 10 per cent higher than its predecessor.
Google has unveiled its latest AI model, Gemma 2 2B, a lightweight AI that outperforms larger models like GPT-3.5 and Mixtral 8x7B on key benchmarks. Shortly after releasing the best-in-class Gemma 2 models, Google introduced Gemma 2 2B with built-in safety advancements. Additionally, Google announced two new tools, ShieldGemma and Gemma Scope.
“With these additions, researchers and developers can create safer customer experiences, gain insights into our models, and confidently deploy powerful AI on devices, unlocking new possibilities for innovation,” Google stated.
Gemma 2 2B is a lightweight model that achieves impressive results by learning from larger models through distillation. It surpasses all GPT-3.5 models on Chatbot Arena, showcasing exceptional conversational AI abilities.
The model runs efficiently on various hardware, from edge devices and laptops to robust cloud deployments with Vertex AI and Google Kubernetes Engine (GKE). It is optimized with the NVIDIA TensorRT-LLM library to enhance speed and integrates with Keras, JAX, Hugging Face, NVIDIA NeMo, Ollama, and Gemma.cpp. It will soon be available on the MediaPipe platform.
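For developers who want to try the model through the Hugging Face integration mentioned above, the following is a minimal sketch using the standard transformers text-generation API. The model id `google/gemma-2-2b-it` refers to the instruction-tuned variant; access to the weights is gated and requires accepting Google's license on Hugging Face first.

```python
# Sketch: running Gemma 2 2B via Hugging Face transformers.
# Assumes `transformers` and `torch` are installed and that you have
# accepted the Gemma license so the gated weights can be downloaded.

MODEL_ID = "google/gemma-2-2b-it"  # instruction-tuned 2.6B-parameter variant


def generate(prompt: str, max_new_tokens: int = 64) -> str:
    """Lazily load the model and return a completion for `prompt`."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

The import is deferred into the function so the module can be inspected without pulling in the (large) model weights; calling `generate("Explain distillation in one sentence.")` triggers the actual download and inference.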
Gemma 2 2B has 2.6B parameters and was trained on a massive 2 trillion token dataset. On Chatbot Arena, it scored 1130, matching the scores of GPT-3.5 Turbo and Mixtral 8x7B. It achieved an MMLU score of 56.1 and an MBPP score of 36.6, surpassing its predecessor by over 10 per cent. The model's weights are openly available, and developers can download them via Google's announcement page.