Private cloud and on-device AI calculations: the keys to Apple Intelligence

At its Worldwide Developers Conference on Monday, Apple unveiled for the first time its vision for powering its product range with artificial intelligence. The centerpiece, which will reach virtually the entire lineup, is Apple Intelligence: a set of AI-based capabilities that promises to deliver personalized AI services while keeping sensitive data secure.

It represents Apple’s biggest leap forward in using our private data to help AI perform tasks for us. To prove it can do this without sacrificing privacy, the company says it has created a new way to manage sensitive data in the cloud.

Apple says its privacy-focused system will first attempt to perform AI tasks locally, on the device itself. If any data is exchanged with cloud services, it will be encrypted and then deleted afterward. The company also says that the process, which it calls Private Cloud Compute, will be open to verification by independent security researchers.

The approach draws an implicit contrast with companies like Alphabet, Amazon, or Meta, which collect and store enormous amounts of personal data. Apple states that any personal data sent to the cloud will be used solely for the AI task at hand and will not be retained or made accessible to the company, even for debugging or quality control, once the model has completed the request.

Simply put, Apple is saying that people can trust it to analyze incredibly sensitive data (photos, messages, and emails that contain intimate details of our lives) and offer automated services based on what it finds there, without actually storing the data online or making it vulnerable.

Apple showed some examples of how this will work in upcoming versions of iOS. Instead of scrolling through your messages for that podcast a friend sent you, for example, you could ask Siri to find it and play it for you. Craig Federighi, Apple’s senior vice president of software engineering, described another scenario: you get an email pushing back a work meeting, but your daughter is performing in a play that night. Your phone can now find the PDF with information about the performance, predict local traffic, and tell you whether you’ll make it on time. These capabilities will extend beyond apps made by Apple, allowing developers to take advantage of Apple’s AI as well.

Because the company makes its money largely from hardware and services rather than advertising, it has fewer incentives than others to collect personal data online, which allows it to position the iPhone as the most private device. Even so, Apple has already found itself in the crosshairs of privacy advocates. Security flaws led to leaks of explicit photos from iCloud in 2014. In 2019, contractors were found to be listening to intimate Siri recordings for quality control. Disputes over how Apple handles data requests from law enforcement agencies are ongoing.

The first line of defense against privacy violations, according to Apple, is to avoid cloud computing for AI tasks whenever possible. “The cornerstone of the personal intelligence system is on-device processing,” Federighi says, meaning that many of the AI models will run on iPhones and Macs rather than in the cloud. “It’s aware of your personal data without collecting your personal data.”

That presents some technical obstacles. Two years into the AI boom, making requests to models for even simple tasks still requires enormous amounts of computing power. Achieving that with the chips used in phones and laptops is difficult, which is why only the smallest of Google’s AI models can run on the company’s phones, with everything else handled in the cloud. Apple claims its ability to handle AI computations on-device comes from years of research into chip design, leading to the M1 chips it began rolling out in 2020.

However, not even Apple’s most advanced chips can handle the full spectrum of tasks the company promises to deliver with AI. If you ask Siri to do something complicated, it may have to pass that request, along with your data, to models that are available only on Apple’s servers. This step, security experts say, introduces a host of vulnerabilities that could expose your information to outside bad actors, or at least to Apple itself.
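As a rough illustration of that device-first pattern, here is a minimal sketch in Swift. The type names and the word-count heuristic are hypothetical; Apple has not published how its routing actually decides.

```swift
// Hypothetical sketch of the "try on-device, fall back to the cloud" pattern.
// None of these types are Apple APIs; the word-count heuristic is invented.

protocol AIModel {
    func respond(to prompt: String) -> String
}

struct OnDeviceModel: AIModel {
    // Assumed budget: a small local model only accepts short prompts.
    let maxPromptWords = 512

    func canHandle(_ prompt: String) -> Bool {
        prompt.split(separator: " ").count <= maxPromptWords
    }

    func respond(to prompt: String) -> String {
        "on-device answer for: \(prompt)"
    }
}

struct CloudModel: AIModel {
    func respond(to prompt: String) -> String {
        // In Apple's description, a real request would be encrypted
        // before leaving the device (see the sketch further below).
        "server answer for: \(prompt)"
    }
}

// Route each request to the local model when it can cope, else to the server.
func route(_ prompt: String, local: OnDeviceModel, remote: CloudModel) -> String {
    local.canHandle(prompt) ? local.respond(to: prompt) : remote.respond(to: prompt)
}

print(route("Will I make it to the play on time?", local: OnDeviceModel(), remote: CloudModel()))
```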

“I always warn people that as soon as data leaves the device, it becomes much more vulnerable,” says Albert Fox Cahn, executive director of the Surveillance Technology Oversight Project and resident professor at the Information Law Institute of the New York University School of Law.

Apple claims to have mitigated this risk with its new Private Cloud Compute system. “For the first time ever, Private Cloud Compute extends the industry-leading security and privacy of Apple devices into the cloud,” Apple’s security experts write in their announcement, stating that personal data is “not accessible to anyone other than the user, not even to Apple.” How does it work?

Apple has historically encouraged people to opt in to end-to-end encryption (the same type of technology used in messaging apps like Signal) to protect sensitive iCloud data. But that doesn’t work for AI. Unlike messaging apps, where a company like WhatsApp does not need to see the content of your messages to deliver them to your friends, Apple’s AI models need unencrypted access to the underlying data to generate responses. This is where Apple’s privacy process comes in. First, Apple says the data will be used only for the task at hand. Second, the process will be verified by independent researchers.

Needless to say, the architecture of this system is complicated, but you can picture it as an encryption protocol. If your phone determines that it needs the help of a larger AI model, it packages a request containing the prompt and the specific model to be used, and then puts a lock on that request. Only the specific AI model being invoked holds the right key.
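To make that lock-and-key analogy concrete, here is a minimal Swift sketch using CryptoKit public-key agreement. The CloudAIRequest type, the key distribution, and the model identifier are invented for illustration; this is not Apple’s real Private Cloud Compute protocol.

```swift
import CryptoKit
import Foundation

// Illustrative only: CloudAIRequest and this key-exchange flow are assumptions
// made for the analogy, not Apple's actual Private Cloud Compute protocol.
struct CloudAIRequest: Codable {
    let prompt: String
    let modelID: String
}

do {
    // The server-side model holds a private key; devices receive the public half.
    let modelKey = Curve25519.KeyAgreement.PrivateKey()
    let modelPublicKey = modelKey.publicKey

    // Device side: derive a symmetric key that only this model can re-derive.
    let deviceKey = Curve25519.KeyAgreement.PrivateKey()
    let secret = try deviceKey.sharedSecretFromKeyAgreement(with: modelPublicKey)
    let symmetricKey = secret.hkdfDerivedSymmetricKey(
        using: SHA256.self, salt: Data(), sharedInfo: Data(), outputByteCount: 32)

    // "Put a lock on the request": seal the prompt plus the target model's ID.
    let request = CloudAIRequest(prompt: "Find the podcast my friend sent me",
                                 modelID: "server-model-v1")
    let sealed = try ChaChaPoly.seal(JSONEncoder().encode(request), using: symmetricKey)

    // Only the holder of modelKey (plus the device's public key) can re-derive
    // the same symmetric key and open the box; in transit the request is opaque.
    let serverSecret = try modelKey.sharedSecretFromKeyAgreement(with: deviceKey.publicKey)
    let serverKey = serverSecret.hkdfDerivedSymmetricKey(
        using: SHA256.self, salt: Data(), sharedInfo: Data(), outputByteCount: 32)
    let opened = try ChaChaPoly.open(sealed, using: serverKey)
    print(String(decoding: opened, as: UTF8.self))
} catch {
    print("crypto error: \(error)")
}
```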

Asked by MIT Technology Review whether users will be notified when a particular request is sent to cloud-based AI models instead of being handled on-device, an Apple spokesperson said there will be transparency for users but gave no further details.

Dawn Song, co-director of the Center for Responsible Decentralized Intelligence at the University of California, Berkeley, and an expert in private computing, says Apple’s announcement is encouraging. “The list of goals they have announced is well thought out,” she says. “Of course, there will be some challenges in meeting those goals.”

Cahn says that, judging from what Apple has revealed so far, the system appears far more privacy-protective than other AI products on the market today. That said, the common refrain in this space is “trust but verify.” In other words, we won’t know how well these systems protect our data until independent researchers can verify Apple’s claims, as the company promises they will be able to, and until it responds to their findings.

“Opening up to independent review by researchers is a big step,” he says. “But that doesn’t determine how you’re going to respond when researchers tell you things you don’t want to hear.” Apple did not respond to questions from MIT Technology Review about how the company will evaluate the researchers’ comments.

The bargain between privacy and AI

Apple is not the only company betting that many of us will grant AI models nearly unlimited access to our private data if it means they can automate tedious tasks. OpenAI’s Sam Altman described his dream AI tool to MIT Technology Review as one “that knows everything about my life, every email, every conversation I’ve ever had.” At its own developer conference in May, Google announced Project Astra, an ambitious effort to build a “universal AI agent that is useful in everyday life.”

It’s a gamble that will force many of us to consider for the first time what role, if any, we want AI models to play in the way we interact with our data and devices. When ChatGPT first appeared on the scene, it wasn’t a question we had to ask ourselves. It was just a text generator that could write us a birthday card or a poem, and the questions it raised — like where its training data came from or what biases it perpetuated — didn’t seem all that personal.

Now, less than two years later, Big Tech is betting billions of dollars that we trust the security of these systems enough to hand over our private information. It is not yet clear whether we know enough to make that decision, or to what extent we could opt out even if we wanted to. “I’m concerned that this artificial intelligence arms race will put more and more of our data in the hands of others,” says Cahn.

Apple will soon launch beta versions of its Apple Intelligence features, starting this fall with the iPhone 15 and the new macOS Sequoia, which can run on Macs and iPads with M1 or newer chips. Apple CEO Tim Cook says: “We think Apple Intelligence is going to be indispensable.”

 