Apple unveils MM1 AI model with potential to drive future Siri 2.0

– Apple catching up quickly in the large language model scene
– MM1: a new method for training multimodal models using synthetic data
– Potential for MM1 to power Siri 2.0 with a focus on performance, efficiency, and multimodal capabilities

Apple is joining the large language model (LLM) scene, quickly catching up to Google, Microsoft, and Meta in building powerful AI tools. CEO Tim Cook has hinted at a major AI breakthrough, possibly a new version of Siri powered by an LLM called MM1, comparable to Google's Gemini. MM1 is a new approach to training multimodal models that reportedly uses synthetic data, improving performance and reducing the number of prompts needed to get the desired result. According to Apple, MM1 achieves state-of-the-art pre-training metrics and competitive performance on established benchmarks.

Apple’s MM1 is a family of AI models with up to 30 billion parameters, smaller than many rival models but still effective. The breakthrough lies in vision analysis: interpreting images and other visual content, not just text. MM1’s distinct architecture, approach to pre-training, and use of a mixture-of-experts design set it apart from competing models. There is speculation that MM1 could power Siri 2.0, with a focus on performance, efficiency, and multimodal capabilities, potentially running on-device on iPhones while preserving user privacy.

Apple’s move to develop powerful in-house AI models for Siri, alongside reports that it may also incorporate Gemini and ChatGPT, points to a multi-faceted approach to the AI push Cook promised investors. While the full potential of MM1 and its impact on Siri remains to be seen, Apple’s advances in AI technology hold promise for the future of consumer tech.
