The Information reported on Monday that Microsoft is developing a new large language model, called MAI-1, to compete with models from OpenAI and from Alphabet, Google's parent company.
According to the report, which cited two Microsoft employees familiar with the matter, the model, known internally as MAI-1, is overseen by the recently hired Mustafa Suleyman. Suleyman is a co-founder of DeepMind, Google's artificial intelligence unit, and previously served as CEO of the AI startup Inflection.
The report said the model's exact purpose has not yet been determined and will depend on how well it performs. Microsoft may preview the model at its developer conference later this month.
Microsoft declined to comment when contacted by Reuters.
The report said MAI-1 will be "significantly larger" than the smaller, open-source models Microsoft has previously trained, which means it will be more expensive to develop.
Last month, Microsoft released a small AI model named Phi-3-Mini, aiming to attract a wider customer base with lower-cost options.
The company has invested billions of dollars in OpenAI and has deployed OpenAI's technology, including ChatGPT, across its productivity software suite, giving it an early lead in generative artificial intelligence.
The report said Microsoft is relying heavily on servers equipped with Nvidia graphics processing units (GPUs), along with large amounts of data, to develop and refine the model.
MAI-1 will have roughly 500 billion parameters; by comparison, OpenAI's GPT-4 is reported to have about one trillion parameters, while Phi-3-Mini has 3.8 billion.
In March, Microsoft named Suleyman head of its new consumer AI division and offered positions to several Inflection employees.
The report said the model is not carried over from Inflection, although it may build on training data provided by the startup.
(Reuters)