Apple has released several open-source artificial intelligence models designed to run directly on devices rather than relying on cloud servers.
The company calls them “Open-Source Efficient Language Models,” abbreviated as OpenELM. They were released on the Hugging Face Hub, a platform for sharing AI model code and weights.
Apple offers eight OpenELM models, which the company says rely on a “layer-wise scaling” strategy aimed at improving their accuracy and efficiency. The models range in size from 270 million to 3 billion parameters, making them relatively small.
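As a rough illustration (not taken from Apple’s documentation), checkpoints published on the Hugging Face Hub are typically loaded with the transformers library; the repository id, tokenizer choice, and trust_remote_code flag below are assumptions about how the release is packaged, included only as a sketch:

```python
# Hypothetical sketch: loading one of the smaller OpenELM checkpoints from the
# Hugging Face Hub. The repository id "apple/OpenELM-270M" and the tokenizer
# repository are assumptions; the tokenizer repo may be gated and require access.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "apple/OpenELM-270M"            # assumed repository id for the 270M model
tokenizer_id = "meta-llama/Llama-2-7b-hf"  # assumed compatible tokenizer

tokenizer = AutoTokenizer.from_pretrained(tokenizer_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Generate a short continuation to confirm the model loaded correctly.
inputs = tokenizer("Apple released OpenELM to", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```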
Apple provided all of the supporting materials for the new models, such as the training code and training logs, rather than only the final trained checkpoints. The company’s researchers say this is intended to accelerate the development of AI systems.
Apple says the release of the OpenELM models is about “empowering and enhancing the open research community” with modern language models, allowing developers and companies to use or modify them.
Recent reports suggest that Apple may intend to run custom AI models on its future devices rather than relying on servers to provide certain features.
Meanwhile, AI companies are striving to develop small, efficient models that run faster and consume fewer resources than large language models, and whose key advantage is the ability to run directly on devices.
Examples of such models include Microsoft’s Phi-3 and Google’s Gemini Nano, among others.