At its annual developer conference, Google announced a series of updates to Gemma, its family of open models that competes with Meta's Llama models and Mistral's models.
Google previewed Gemma 2, the next generation of the openly available models, and plans to launch it in June with a version containing 27 billion parameters.
Google first announced Gemma back in February, and the earlier Gemma models shipped in two-billion- and seven-billion-parameter versions, so the new 27-billion-parameter model marks a clear step up in scale.
Google said that its users have run Gemma models more than a million times across its various services.
The company stated that it has optimized the 27-billion-parameter model to run on next-generation Nvidia graphics processing units, making it easier to deploy.
Parameter count aside, Google has released little detail about Gemma 2 so far, which means its actual performance will only become clear once developers get access to the new model.
These models do not require large amounts of memory or heavy processing power, making them suitable for resource-constrained devices such as smartphones, Internet of Things devices, and personal computers.
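To illustrate how lightweight the smaller Gemma checkpoints are to run, the following is a minimal sketch that loads the instruction-tuned two-billion-parameter model through the Hugging Face transformers library; the google/gemma-2b-it checkpoint name and the bfloat16 setting are assumptions about a typical setup, not details from Google's announcement.

```python
# Minimal sketch: running the 2B instruction-tuned Gemma model locally.
# Assumes the Hugging Face `transformers` library and access to the
# google/gemma-2b-it checkpoint (accepting the model license may be required).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2b-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps memory use modest
    device_map="auto",           # falls back to CPU if no GPU is available
)

prompt = "Summarize what an open-weights language model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```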
Since Gemma's launch earlier this year, Google has added several companion tools and models, including CodeGemma for code completion, RecurrentGemma for improved memory efficiency, and PaliGemma.
Google describes PaliGemma as the first vision-language model in the Gemma series, built for image captioning, image labeling, and visual question answering.
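For a sense of what visual question answering looks like in practice, here is a minimal sketch using the transformers library; the google/paligemma-3b-mix-224 checkpoint name and the "answer en" prompt prefix are assumptions drawn from publicly documented usage rather than from Google's announcement.

```python
# Minimal sketch: asking PaliGemma a question about an image.
# Assumes a recent Hugging Face `transformers` release with PaliGemma support
# and the google/paligemma-3b-mix-224 checkpoint; both are assumptions here.
from PIL import Image
from transformers import AutoProcessor, PaliGemmaForConditionalGeneration

model_id = "google/paligemma-3b-mix-224"
processor = AutoProcessor.from_pretrained(model_id)
model = PaliGemmaForConditionalGeneration.from_pretrained(model_id)

image = Image.open("photo.jpg")               # any local image file
prompt = "answer en What is in the picture?"  # VQA-style prompt
inputs = processor(text=prompt, images=image, return_tensors="pt")

prompt_len = inputs["input_ids"].shape[-1]
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
# Strip the prompt tokens so only the model's answer is printed.
print(processor.decode(output[0][prompt_len:], skip_special_tokens=True))
```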
With around 27 billion parameters, Gemma 2 is expected to deliver more accurate results and stronger performance on demanding tasks.
Running Gemma 2 on a dedicated artificial intelligence chip is needed to keep latency low when handling tasks such as image recognition and natural language processing.