Introducing Gemma: Google’s open-source alternative to Gemini, lightning-fast on Groq’s chatbot

1. Google’s open-source AI model Gemma is now available on the Groq chatbot platform, joining Mixtral and Llama 2.
2. Gemma is smaller than comparable models, small enough to install on a laptop, and runs fastest on Groq’s LPU chips.
3. Groq is a chatbot platform designed for speed, with chips optimized for running AI models quickly.

Google has released its open-source AI model Gemma, which is now available through the Groq chatbot platform. Gemma is a smaller language model than the ones behind Gemini or OpenAI’s ChatGPT, but it can be installed almost anywhere, including on laptops. The Language Processing Unit (LPU) chips that power Groq let Gemma respond at 679 tokens per second.
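To put 679 tokens per second in context, here is a back-of-the-envelope estimate of how quickly a typical reply would arrive. The 0.75 words-per-token figure and the 300-word reply length are rule-of-thumb assumptions, not numbers from the article:

```python
# Rough throughput math for Gemma on Groq's LPU chips.
# Assumptions (not from the article): ~0.75 words per token,
# and a 300-word reply as a "typical" chatbot answer.
TOKENS_PER_SECOND = 679   # figure reported for Gemma on Groq
WORDS_PER_TOKEN = 0.75    # common rule-of-thumb estimate

def seconds_for_reply(words: int) -> float:
    """Estimated generation time for a reply of `words` words."""
    tokens = words / WORDS_PER_TOKEN
    return tokens / TOKENS_PER_SECOND

print(f"{seconds_for_reply(300):.2f} s")  # → 0.59 s
```

At that rate, even a long answer streams in well under a second, which is why latency-sensitive uses like voice assistants are a natural fit.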

Gemma is part of the trend of smaller, open-source AI models capable of running on laptops or phones. It is trained similarly to Gemini and is available in two-billion- and seven-billion-parameter versions. Google plans to expand the Gemma family over time, allowing other developers to build on top of the model and adapt it for various applications.
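A quick ballpark shows why the parameter counts above translate into laptop-friendly sizes. The parameter counts come from the article; the bytes-per-parameter figures are standard for common precisions, and the estimate deliberately ignores runtime overhead such as the KV cache:

```python
# Ballpark memory footprint of Gemma's weights at common precisions.
# Parameter counts are from the article; activation and runtime
# overheads are ignored, so real usage is somewhat higher.
GIB = 1024 ** 3

def weight_gib(params: float, bytes_per_param: int) -> float:
    """Approximate size of the model weights in GiB."""
    return params * bytes_per_param / GIB

for name, params in [("Gemma 2B", 2e9), ("Gemma 7B", 7e9)]:
    for precision, nbytes in [("fp16", 2), ("int8", 1)]:
        print(f"{name} @ {precision}: ~{weight_gib(params, nbytes):.1f} GiB")
```

The 2B variant at 8-bit precision needs roughly 2 GiB for weights, comfortably within an ordinary laptop's memory, while the 7B variant still fits on machines with 16 GiB of RAM.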

Groq, the company behind the chatbot platform hosting Gemma, also manufactures AI chips designed for fast inference and low latency. These chips are optimized for generative AI workloads and efficient data flow. Gemma on Groq outperforms other cloud deployments on speed, responding faster than models like ChatGPT or Gemini.

The speed of AI models like Gemma on Groq is crucial for real-time interactions and applications. Connecting Gemma to a fast text-to-speech engine could enable natural conversations and adaptability to interruptions. Developers can access Gemma through Google Cloud’s Vertex AI for integration into apps and products, as well as through the Groq platform for offline use.
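For developers going the Groq route, a minimal sketch of a request might look like the following. The endpoint URL and the model name `gemma-7b-it` are assumptions based on Groq's OpenAI-compatible API and should be checked against current documentation; the request is built but not sent:

```python
# Minimal sketch of calling Gemma through Groq's HTTP API.
# Assumptions (verify against Groq's docs): an OpenAI-compatible
# chat-completions endpoint, and "gemma-7b-it" as the model name.
import json

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"  # assumed endpoint

def build_request(prompt: str, model: str = "gemma-7b-it") -> dict:
    """Build an OpenAI-style chat-completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Summarize Gemma in one sentence.")
print(json.dumps(payload, indent=2))
# Send with any HTTP client, e.g.:
#   requests.post(GROQ_URL, json=payload,
#                 headers={"Authorization": f"Bearer {api_key}"})
```

Because the payload follows the OpenAI chat format, existing client code can usually be pointed at Groq by swapping the base URL and model name.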
