Microsoft has announced the launch of Phi-3, a small artificial-intelligence model capable of running across a range of devices and available in several versions.
The model comes in three versions: Mini, with about 3.8 billion parameters; Small, with approximately 7 billion parameters; and Medium, with around 14 billion parameters. Parameters are the internal variables of an artificial-intelligence system, and parameter counts are commonly used to gauge a model's size and capability.
The launch follows the company's release of Phi-2 at the end of last year, and Microsoft says Phi-3 surpasses newer competing releases such as Llama 3 in efficiency and performance.
The smallest version, with its 3.8 billion parameters, is more capable than the previous Phi-2 model while consuming significantly fewer resources than larger language models. According to Microsoft's own benchmarks for evaluating artificial-intelligence models, Phi-3 Mini outperforms Meta's Llama 3 model with 8 billion parameters as well as OpenAI's GPT-3.5 model.
Because of their small size, Microsoft is aiming the Phi-3 models at low-power devices, in contrast to the larger models typically used in server-based services.
Eric Boyd, Microsoft’s Corporate Vice President, said the latest version “has the ability to directly handle natural language through a smartphone,” making it well suited to modern applications that need artificial-intelligence capabilities everywhere.
Although Microsoft’s new models may stand out among competing releases in the same class, they still cannot match the capabilities of large language models trained on vast amounts of online data. In exchange, their smaller size allows them to deliver improved performance directly on devices.