TL;DR
- Apple used Google’s Tensor Processing Units (TPUs) to train two AI models for its upcoming Apple Intelligence feature.
- The models are called Apple Foundation Model (AFM)-on-device and AFM-server.
- AFM-on-device was trained on 2,048 TPUv5p chips, while AFM-server was trained on 8,192 TPUv4 chips.
- The paper makes no mention of Nvidia’s GPUs, which dominate about 80% of the AI chip market.
- Choosing Google’s chips over Nvidia’s is seen as a notable move within the industry.
Apple has chosen to use Google’s chips instead of industry leader Nvidia’s to train two key artificial intelligence (AI) models for its upcoming Apple Intelligence feature. This revelation came from an Apple research paper published on Monday, shedding light on the tech giant’s AI development process.
The two models in question are the Apple Foundation Model (AFM)-on-device, which will operate on iPhones and other Apple devices, and AFM-server, a larger server-based language model. These models form crucial components of Apple’s AI software infrastructure, which will power a suite of new AI tools and features.
According to the research paper, Apple used Google’s Tensor Processing Units (TPUs) to train these models. Specifically, AFM-on-device was trained on 2,048 TPUv5p chips, while AFM-server was trained on 8,192 TPUv4 processors. This choice is particularly noteworthy because Nvidia is widely recognized as the leader in AI computing, commanding about 80% of the AI chip market.
Apple’s decision to rely on Google’s cloud infrastructure rather than Nvidia’s hardware has raised eyebrows in the tech industry. While the paper doesn’t explicitly state that no Nvidia chips were used, the description of the hardware and software infrastructure lacks any mention of Nvidia products.
Google’s TPUs are custom-developed application-specific integrated circuits (ASICs) designed to accelerate machine learning workloads. Unlike Nvidia’s graphics processing units (GPUs), which are sold as standalone products, Google’s TPUs are accessible through the Google Cloud Platform. This means that customers, including Apple, must build their software through Google’s cloud platform to use these chips.
The revelation about Apple’s use of Google’s chips comes as the company prepares to roll out its Apple Intelligence feature. Introduced at the Worldwide Developers Conference 2024, Apple Intelligence is described as a personal intelligence system integrated into iOS 18, iPadOS 18, and macOS Sequoia. It consists of multiple generative models designed to be fast and specialized for users’ everyday tasks.
Apple plans to give software developers access to Apple Intelligence for early testing as soon as this week through iOS 18.1 and iPadOS 18.1 betas. This early access is intended to help identify and fix bugs, ensuring a smooth launch. The full rollout to customers is expected by October, a few weeks after the release of the new iPhone and iPad software in September.
In the research paper, Apple’s engineers noted that Google’s chips would make it possible to build even larger and more sophisticated models than the two discussed. This suggests that Apple’s reliance on Google for AI training infrastructure could extend beyond the current models.
While Apple’s stock saw a slight dip of 0.1% to $218.24 in regular trading following this news, the long-term impact of this decision on both Apple’s AI strategy and the wider AI chip market remains to be seen.