Apple Releases 8 Small AI Language Models to Compete with Microsoft’s Phi-3

Seeing strength in numbers, Apple has made a strategic move in the competitive artificial intelligence market by releasing eight small AI models. The compact tools, collectively called OpenELM, are designed to run on-device and offline, making them well suited for smartphones.

Published on the open source AI community Hugging Face, the models are offered in versions with 270 million, 450 million, 1.1 billion, and 3 billion parameters. Users can also download Apple’s OpenELM in pre-trained or instruction-tuned versions.

Pre-trained models provide a foundation upon which users can fine-tune and build. Instruction-tuned models are already programmed to respond to instructions, making them better suited for conversations and interactions with end users.
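The distinction can be illustrated with Hugging Face’s transformers library. The sketch below is an assumption-laden example, not Apple’s official usage: the repository names follow the naming Apple published on Hugging Face (e.g. apple/OpenELM-270M and apple/OpenELM-270M-Instruct), and loading details such as trust_remote_code may change.

```python
# Sketch: picking a pre-trained vs. instruction-tuned OpenELM checkpoint.
# Requires `pip install transformers`; repo ids assumed from Apple's
# Hugging Face listing at release time.
SIZES = ["270M", "450M", "1_1B", "3B"]  # the four published parameter counts

def openelm_repo(size: str, instruct: bool = False) -> str:
    """Build the Hugging Face repo id for an OpenELM variant."""
    suffix = "-Instruct" if instruct else ""
    return f"apple/OpenELM-{size}{suffix}"

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM

    # Instruction-tuned variant: already tuned to follow instructions,
    # so better suited to chat-style interaction with end users.
    model = AutoModelForCausalLM.from_pretrained(
        openelm_repo("270M", instruct=True),
        trust_remote_code=True,  # OpenELM ships custom modeling code
    )
```

A pre-trained variant (`instruct=False`) would instead serve as a base for further fine-tuning.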

While Apple hasn’t suggested specific use cases for these models, they could be applied to running assistants that analyze emails and texts, or that provide intelligent suggestions based on data. This is similar to the approach adopted by Google, which deployed its Gemini AI model in its Pixel line of smartphones.

The models were trained on publicly available data sets, and Apple is sharing both the CoreNet code (the library used to train OpenELM) and the “recipes” for its models. In other words, users can inspect how Apple built them.

Apple’s launch comes shortly after Microsoft announced Phi-3, a family of small language models capable of running locally. Phi-3 Mini, a 3.8-billion-parameter model trained on 3.3 trillion tokens, can still handle a 128K-token context, making it comparable to GPT-4 and ahead of Llama-3 and Mistral Large in terms of context capacity.

Being open source and lightweight, Phi-3 Mini could potentially replace traditional assistants like Apple’s Siri or Google’s Gemini for some tasks, and Microsoft has already tested Phi-3 on an iPhone, reporting satisfactory results and fast token generation.

While Apple has yet to integrate these new AI language modeling capabilities into its consumer devices, the upcoming iOS 18 update is rumored to include new AI features that use on-device processing to ensure user privacy.

Apple hardware has an advantage for local AI because its unified memory lets the device’s RAM serve as the GPU’s video RAM (VRAM). This means that a Mac with 32GB of RAM (a common configuration on a PC) can use that RAM as VRAM to run AI models. In comparison, Windows devices are limited by device RAM and GPU VRAM separately, so users often need to purchase a powerful GPU with 32GB of VRAM to run large AI models.

However, Apple falls behind Windows/Linux in the area of AI development. Most AI applications revolve around hardware designed and built by Nvidia, which Apple phased out in favor of its own chips. This means there is relatively little AI development native to Apple, and as a result, using AI on Apple products requires layers of translation or other complex procedures.
