For the past two years, as artificial intelligence rampaged into the public consciousness, Apple has been very quiet. Apple is a 48-year-old company — mature by Silicon Valley standards. Ancient by AI ones. To its younger competitors, the company’s lack of an AI strategy was a delightful sign of slippage. Wall Street more or less agreed. It wasn’t just silence and age stirring doubt, but Apple’s corporate essence. The company is a $3 trillion control freak. It takes all the messy bits of technology and subdues them with engineering and design until they feel ordered, inevitable and, crucially, profitable. How do you do that with AI — a technology that incinerates cash and that no one can completely explain, let alone control?
At the company’s Worldwide Developers Conference (WWDC) on Monday, Apple provided its answer. It’s not a sexy answer or a risky one. It’s so consistent with the values of a prosperous 48-year-old that the whole presentation could have been delivered in a pair of Bonobos. But it’s the first rational theory of AI for the masses that I’ve heard, and it does what all great corporate strategy is supposed to: identify a gaping hole in the marketplace and make sure it overlaps precisely with your strengths.
Let’s start with the theory. Apple believes that much of the conversation around AI these past few years has been categorically insane. “We’re trying to help people in their daily life,” John Giannandrea, Apple’s senior vice president of machine learning and AI strategy, told me after WWDC wrapped. “We’re not trying to make a sentient being or some nonsense. Talking about AI as a new species” — as the CEO of Microsoft AI recently did — “seems like complete nonsense to me. This is a technology, and we’re trying to apply it in the most practical, helpful way.”
Early in the development process, Apple bet the franchise that most people do not want a trillion-parameter neural network, because most people do not know what any of those words mean. They want AI that can shuttle between their calendar and email to make their day a little more coordinated. They want Siri to do multistep tasks, like finding photos of their kid in a pink coat at Christmas and organizing them into a movie with music that flatters their taste. If AI is going to generate any original visuals, they’d prefer emojis based on descriptions of their friends rather than deepfakes. And of course they want all of Apple’s usual privacy guarantees.
Apple calls these kinds of AI-driven tasks “personal context.” Each is a meaningful improvement to the iPhone, which is where more than 1 billion people do the bulk of their computing and where Apple makes the bulk of its profits. They also happen to require relatively small bursts of computing power, which is where AI generates the most expense. By constraining itself, Apple says it’s able to run most of these functions on a 3 billion-parameter AI model that’s completely contained within the device — meaning no communication with an outside server and therefore no privacy risk. This sounds easy and is all kinds of hard from an engineering perspective, unless you make your own silicon and run your own supply chain and train your own AI models on licensed high-quality data. The benefits of being a control freak.
There’s a conviction among the disrupters that Apple doesn’t deserve credit for this strategy. They insist Apple was caught off guard by the sudden emergence of large AI models and is only relying on small ones because it lacked the foresight to make its own enormous foundation model like OpenAI’s ChatGPT. There’s a lot of Silicon Valley pride at stake, to which Apple says: Hush now.
Still, Apple concedes there are some tasks too complicated to be handled by a relatively small AI model. For midsize jobs that require web search, the company built an array of private cloud servers that it can ping for help, where it says user data will never be stored or made accessible to Apple or anyone else. And for the biggest AI tasks, Apple has integrated an outside product into the iPhone’s user experience: ChatGPT.
Wait, what? Isn’t Apple the geezer that OpenAI hopes to topple? Isn’t OpenAI the chaos kid whose willingness to move fast and break stuff has Apple threatening to turn this car around?
When a Silicon Valley partnership seems contradictory, it usually means each side is temporarily using the other. The Apple executives I spoke with weren’t exactly thrilled by OpenAI’s recent run of self-inflicted PR head wounds, but they conceded that ChatGPT is the best and most powerful consumer AI on the market. (GPT-4 has approximately 1.5 trillion parameters; it’s an 18-wheeler compared with the tricycle running on the iPhone.) If an iPhone user wants to analyze a thick PDF or do some generative writing or coding, integrating ChatGPT — for free, without the need to create an account — is a pretty sweet perk. And before referring any query to ChatGPT, the iPhone’s operating system will ask for a user’s permission.
For OpenAI, access to Apple’s user base is literally priceless. If it can insinuate itself into the lives of just a fraction of the iPhone’s 1.4 billion users — while gaining the tacit endorsement of tech’s most respected gatekeeper — it could expand the company’s already broad horizons. My hunch is that both sides will get something out of the deal and go their separate ways.
For Apple, that path appears set. It’s going to use AI to be the life hacker that improves emails and saves time and makes little generative delights that take users ever deeper into their Apple devices. It’ll be safe, profitable, inevitable — so inevitable that all friction will be removed. It won’t even be called artificial intelligence. In a sublime act of marketing hubris, Apple has decided to market this new frontier of products as something else: Apple Intelligence. Killer dad joke.