Google has unveiled “Gemma,” a new family of artificial intelligence models aimed at researchers who want to run AI models on their own devices instead of relying on cloud computing services.
According to Google’s statement, Gemma was developed by DeepMind, its UK-based subsidiary, in collaboration with other teams within the company.
Gemma shares some “technical and structural elements” with the larger Gemini model, and Google offers it in two sizes: Gemma 2B and Gemma 7B.
Researchers can access both pre-trained and instruction-tuned Gemma models, either by running them locally on a desktop or laptop computer or by accessing them through cloud computing.
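To illustrate what running one of the smaller checkpoints locally might look like, here is a minimal sketch assuming the Hugging Face Transformers toolchain and the commonly used model identifier for the 2B variant; the article itself does not name a specific library or workflow.

```python
# Minimal sketch (assumption: Hugging Face Transformers; the article does not
# specify a toolchain). Loads the 2B checkpoint and generates a short reply.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b"  # smaller of the two released sizes

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

prompt = "Explain in one sentence what a large language model is."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```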
The models are optimized to run on Nvidia graphics processing unit (GPU) cards and on Google Cloud TPUs, Google’s artificial intelligence processors available on the Google Cloud platform.
Google states that both versions of Gemma are smaller than other large language models (LLMs), yet significantly outperform them on key benchmarks, including the Llama 2 model from Meta.
Google is currently providing Gemma to researchers for free through the Kaggle and Colab platforms, and is also offering them free credits for cloud usage on Google Cloud.
The company launched the Gemini model earlier this month, offering a free application of the same name along with a paid subscription for advanced features at $20 per month. It later introduced the Gemini 1.5 Pro model, which offers improved capabilities over the previous version.