Apple Announcing Private Cloud Compute

Apple has announced Private Cloud Compute (PCC), which uses Confidential Computing to ensure user data privacy in cloud AI processing, setting a new standard in data security.

Raphaël Millet,
Corentin Lauverjat

Apple's announcement at WWDC last week about using Confidential Computing to protect users’ data is a key moment for the Confidential AI ecosystem. Here is our take on this breakthrough.

TLDR

  • Apple's Private Cloud Compute (PCC) uses Confidential Computing (CC) to ensure user data privacy in cloud AI processing, setting a new bar in data security. While Apple has traditionally focused on on-device data processing for privacy, the size of the most sought-after AI models necessitated a shift to cloud processing. Apple's PCC protects data during AI model analysis by decrypting it only within trustworthy compute nodes, ensuring it remains confidential and inaccessible even to administrators.
  • Apple has shown a strong commitment to this approach and has promised to make its PCC stack partially verifiable by publishing most binary artifacts and a subset of the security-critical source code. However, this raises concerns about the extent of transparency and the challenges third-party researchers will face in fully auditing the system. Key questions about the robustness of their privacy guarantees remain unanswered.
  • Apple's use of CC is likely to drive widespread adoption of this technology, setting a new industry benchmark.
  • Mithril offers an open-source toolkit for deploying Private Cloud Compute with the same high standards of security and privacy as Apple’s PCC, compatible with various existing hardware.

Why Apple built it

Privacy has always been a core value for Apple. For example, Apple introduced App Tracking Transparency in 2021, a feature that requires apps to get user permission before tracking their data, a far more constraining approach for app providers than the Android ecosystem's.

Apple needed a way to combine data protection with cloud-based AI. Apple's technical approach to protecting privacy has been to maximize on-device processing and ensure that data transferred to and stored in the Apple cloud is always encrypted. However, with the rise of AI in everyday applications and the efficiency gains it brings, it became clear that Apple needed to position itself in this domain. Yet the encryption Apple uses to protect data at rest and in transit cannot protect data while a model is analyzing it in the cloud. Many observers wondered how Apple would reconcile its stringent data security measures with large AI models that require the cloud because they cannot run efficiently on individual devices.

Today, we have no choice but to trust the AI model provider when we use their cloud-based services. While the usage of AI has skyrocketed in the past year, serious concerns have been raised about the lack of transparency and security. A key concern is the inadvertent leakage of confidential data: data sent to an AI service may be used by the provider to further train their models, and the model may then reproduce a segment of this input data as output to other users, thus leaking the confidential data. Incidents of accidental proprietary data leakage, such as the Samsung incident in 2023, highlight the potential impact of these risks. Without additional technical protections and guarantees, users' main protection against data misuse today is the legal framework; there is no way to know for sure what happens to the data.

Apple's solution to this dilemma is to use Confidential Computing (a hardware technology that keeps data encrypted in transit and at runtime) to protect user data, decrypting it only within secure and isolated environments known as enclaves. Apple refers to these environments as Private Cloud Compute. Access to these hardened environments (which offer no admin access) is controlled by a key decided by the user, meaning the data is analyzed in what can be considered a "Virtual Private Cloud," a term we at Mithril used in 2022 to describe this technology. This ensures data confidentiality and full code integrity, demonstrating that data can be sent to AI providers without any exposure, not even to the AI provider's admins. In Apple's case, it also protects against breaches and compromises on the part of the AI ops teams that administer their cloud. This approach addresses the growing concerns about data breaches and unauthorized access, providing a robust framework for secure AI deployment. It is a solution that marries the benefits of advanced AI with Apple's stringent privacy standards, setting a precedent for the industry.
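
To make this flow concrete, here is a minimal client-side sketch of how confidential inference typically works. The helper functions (`fetch_attestation_report`, `verify_attestation`, and so on) are hypothetical placeholders, not Apple's or Mithril's actual APIs: the client first verifies a hardware-signed attestation proving which code runs inside the enclave, and only then encrypts its data to a key held exclusively by that enclave.

```python
# Hypothetical client-side flow for confidential AI inference.
# All helper functions are illustrative placeholders, not a real SDK.

def confidential_inference(prompt: str, server_url: str, expected_measurements: dict) -> str:
    # 1. Fetch the enclave's attestation report: a hardware-signed statement
    #    of exactly which code and configuration it is running.
    report = fetch_attestation_report(server_url)

    # 2. Verify the hardware signature and compare the reported code
    #    measurements against the values we expect (e.g. published hashes).
    if not verify_attestation(report, expected_measurements):
        raise RuntimeError("Enclave is not running the expected code; aborting.")

    # 3. Establish a session key bound to the attested enclave, so only that
    #    enclave, and not the cloud provider's admins, can decrypt the payload.
    session_key = establish_secure_channel(report.enclave_public_key)

    # 4. Send the encrypted prompt; it is decrypted and processed only inside
    #    the enclave, and the response comes back encrypted with the same key.
    encrypted_answer = send_inference_request(server_url, session_key.encrypt(prompt.encode()))
    return session_key.decrypt(encrypted_answer).decode()
```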

How It Works

Apple uses a cryptographic module present on the hardware to create trusted environments called enclaves. The goal is to ensure that the AI model's administrator never has access to data in the clear and that the model performs only inference, without leaking data. The specific "Confidential Computing" hardware also allows for the physical isolation of this trusted environment.

These enclaves leverage Apple silicon, yet we do not know for now exactly which enclave hardware technology they rely on; this is a real area for improvement in terms of transparency. The Apple silicon nodes run a hardened subset of iOS and macOS as their base. This ensures that user data is processed securely and privately, with technical guarantees enforced by the hardware itself. The use of Secure Boot and Code Signing ensures that only authorized code runs on these enclaves, protecting data integrity and defending against some physical attacks.
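
As an illustration of what Code Signing buys (a conceptual sketch, not Apple's actual signing scheme, whose details are not public), the snippet below only accepts a firmware image if it carries a valid signature from the vendor's key. Secure Boot applies this check at every stage of the boot chain, with the verifying key rooted in hardware.

```python
# Conceptual illustration of code signing, not Apple's actual scheme.
# Requires the `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def is_authorized(firmware_image: bytes, signature: bytes, vendor_pubkey_bytes: bytes) -> bool:
    """Return True only if the image was signed by the vendor's key.

    In a Secure Boot chain, the verifying key is baked into hardware (or into
    an earlier, already-verified boot stage), so tampering with the image
    without the vendor's private key produces an invalid signature.
    """
    vendor_key = Ed25519PublicKey.from_public_bytes(vendor_pubkey_bytes)
    try:
        vendor_key.verify(signature, firmware_image)
        return True
    except InvalidSignature:
        return False
```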

Apple's approach to transparency and traceability is significant, but there are notable limitations. Unlike the Confidential Computing stack of most cloud providers, which often lacks transparency, Apple's stack promises a level of verifiability. Transparency is crucial in Confidential Computing because it allows users and researchers to verify the integrity and security of the system. Apple has committed to publishing most binary artifacts of its CC stack, including a subset of the security-critical PCC source code. While Apple's decision to release raw firmware such as the iBoot bootloader is unprecedented and a positive step, it falls short of the comprehensive transparency provided by fully open-source solutions like those offered by Mithril Security. Third-party security researchers face greater challenges in auditing closed-source software, making it more difficult to detect backdoors or vulnerabilities and thereby limiting the level of trust that can be placed in the PCC nodes.
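
To make the verifiability point concrete, here is a minimal sketch, under the assumption that a provider publishes the binaries it deploys, of how an outside researcher could check that the measurement a PCC-style node reports in its attestation actually corresponds to a published, auditable binary. The function names are hypothetical; Apple's actual transparency log format is not described here.

```python
import hashlib

def digest_of(published_artifact: bytes) -> str:
    """SHA-256 digest of a binary artifact, as a hex string."""
    return hashlib.sha256(published_artifact).hexdigest()

def node_runs_published_code(reported_measurement: str, published_artifacts: list[bytes]) -> bool:
    """True if the attested measurement matches one of the published binaries.

    A researcher downloads the artifacts the provider has published, hashes
    them locally, and checks the measurement reported in the node's
    attestation against that list. Any measurement not in the list means
    the node runs code nobody outside the provider has been able to inspect.
    """
    return reported_measurement in {digest_of(artifact) for artifact in published_artifacts}
```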

In contrast, Mithril Security's approach of using open-source components and making all source code available offers superior auditability. This commitment ensures that security guarantees can be independently verified, building greater trust in our platform. 

Key Elements Still Awaiting Clarification from Apple

  • OpenAI Models Deployment: Apple Intelligence will use Private Cloud Compute (PCC). However, Apple has also announced a partnership with OpenAI to integrate ChatGPT into Siri and Writing Tools. The use of ChatGPT will not provide the same level of privacy. As part of its agreement with Apple, OpenAI has agreed not to store any prompts from Apple users or collect their IP addresses, but this falls short of the technical guarantees that Apple implements in their PCC. To mitigate this privacy issue, the use of ChatGPT will be on an opt-in basis.  Will Apple compete with OpenAI by developing its own ChatGPT-like AI to avoid reliance on OpenAI and thus provide better privacy guarantees for its users? Will this push OpenAI to accelerate the deployment of enclave-based technologies to offer similar privacy assurances? 
  • The protection of Apple users’ data using OpenAI models is currently a hot topic, notably led by Elon Musk.
  • “Confidential Computing Hardware” Stack: What specific Confidential Computing hardware is Apple using? Are they opting for the latest Confidential Computing capabilities of Nvidia's H100 GPUs (Mithril will release a new version of our Confidential AI inference server compatible with them in July), or are they using vTPMs as we did in the first version of BlindLlama?

Our best guess on this topic is that they do not use Nvidia H100s. They mention "custom-built server hardware" and Apple silicon, so they may have designed their own GPUs or ML accelerators.

Based on early documentation, what they do is more akin to Trusted Computing than Confidential Computing (a bit like our own Mithril OS and its reliance on a TPM). For instance, a major part of the guarantees comes from Secure Boot and an external chip, which they call the "Secure Enclave" but which is mostly used as a Root of Trust, very similar to a TPM.
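
The Root of Trust pattern this points to is close to TPM-style measured boot: each boot stage hashes the next one and folds that hash into a register that can only be extended, never overwritten, so the final value commits to the entire boot chain. A minimal sketch of that extend operation (using SHA-256, as TPM 2.0 PCR banks commonly do):

```python
import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """TPM-style PCR extend: new_value = H(old_value || H(component)).

    Because the register can only ever be extended, never overwritten,
    its final value commits to the whole ordered chain of measured stages.
    """
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

# Measured boot: each stage measures the next one before handing over control.
pcr = bytes(32)  # registers start at all zeros
for stage in [b"boot ROM", b"bootloader", b"kernel", b"inference stack"]:
    pcr = extend(pcr, stage)

# The final value can later be attested (via a quote signed by the Root of
# Trust) and compared against the value expected for a trusted boot chain.
print(pcr.hex())
```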

  • Further Insights Needed: We are awaiting more details on these aspects to fully understand the scope and impact of Apple's PCC.

Why It’s a Key Moment for the Confidential Computing Ecosystem

Apple's move to deploy AI in confidential environments is a significant shift, as it marks the first major tech player to embrace secure AI with trusted environments. While we will wait to test the solution before judging it fully, Apple's strong commitment to Confidential Computing is a game-changer for the global adoption of this security technology. At Mithril, we've been convinced since our inception three years ago that CC is the right solution to combine the adoption of large models, which need cloud deployment, with user data security. What was missing was a visible example that could serve as a benchmark for the industry. Apple's PCC could be that example, potentially triggering widespread adoption.

This initiative also sets a new industry standard, encouraging other tech giants to follow suit. By establishing a high data security and privacy benchmark, Apple is pushing the entire industry towards more secure and transparent AI practices. This shift is crucial as more businesses and users rely on cloud-based AI services, highlighting the need for robust security measures. Now, let’s see how you can develop your own Private Cloud Compute.

How to Deploy AI in Your Own Private Cloud Compute with Mithril

At Mithril, we develop open-source software modules to create your own Private Cloud Compute using hardware already available from most cloud providers, including the recent Confidential Computing capabilities of Nvidia's H100 GPUs. If you want to offer an AI solution with the same security guarantees as Apple's PCC, you can achieve this with our toolkit. Our toolkit consists of various software components, each tailored for specific Confidential Computing hardware. The first, BlindAI, compatible with Intel SGX, was successfully audited by Quarkslab last year.
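
To give a flavor of what querying such a deployment looks like from the client side, here is a hypothetical sketch. The identifiers below (`connect`, `expected_policy`, `generate`) are illustrative only and are not the actual BlindAI API, which is documented in our repositories.

```python
# Hypothetical client-side usage of a confidential inference service built
# with such a toolkit. Names are illustrative only, not the real BlindAI API.

def query_confidential_llm(prompt: str) -> str:
    # Connect to the enclave and verify its attestation against a policy
    # file describing the exact code measurements we expect it to run.
    client = connect(
        address="enclave.example.com",   # hypothetical endpoint
        expected_policy="policy.toml",   # expected enclave measurements
    )
    # The prompt is encrypted client-side and decrypted only inside the
    # attested enclave; the provider's admins never see it in the clear.
    return client.generate(prompt)
```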

Our framework already covers most of the features of PCC:

  • Stateless computation on personal user data: Available within Mithril framework. Our framework ensures that data is used only for the duration of the computation and is not stored or logged, aligning with Apple's stateless data processing principles (see the sketch after this list).
  • Enforceable guarantees: Available within Mithril framework. By leveraging hardware-backed security features, our framework provides strong enforceable guarantees that user data remains private and secure during processing.
  • No privileged runtime access: Available within Mithril framework. Our design eliminates privileged access, ensuring that even administrators cannot bypass security measures to access user data.
  • Non-targetability: Not available within Mithril framework. An attacker should not be able to compromise personal data belonging to specific, targeted Private Cloud Compute users without attempting a broad compromise of the entire PCC system. We might implement this feature in the future, yet it is quite complex and not feasible for some use cases because we do not control the entire supply chain as Apple does.
  • Verifiable transparency: Available within Mithril framework. Our framework includes open-source components and validation tools, allowing for independent verification of security guarantees.
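
For instance, the stateless-computation property above comes down to a discipline that the enclave-side code must follow, and that clients can verify through the attested code measurements: user data lives in memory only for the duration of one request and is never written to disk or to logs. A simplified, illustrative handler (not our actual server code) respecting that discipline:

```python
# Illustrative enclave-side request handler, not Mithril's actual server code.

def handle_request(encrypted_payload: bytes, session_key, model) -> bytes:
    """Process one request statelessly: nothing persisted, nothing logged."""
    # Decrypt only inside the enclave and keep the plaintext in local variables.
    user_data = session_key.decrypt(encrypted_payload)

    # Run inference; no copy of user_data is written to disk, forwarded to a
    # logging pipeline, or kept in any cache that outlives this call.
    result = model.run(user_data)

    # Return only the encrypted result; the plaintext buffers go out of scope
    # here and are never stored anywhere.
    return session_key.encrypt(result)
```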

With Mithril, you do not need to trust the Confidential AI stack provider. Our toolkit allows you to deploy a Private Cloud Compute with the same high standards of security and privacy as Apple's PCC, except that you do not have to trust us the way you have to trust Apple with PCC. Indeed, a big difference between what we build and Apple's PCC is that we do not design our own silicon. This creates a reassuring "separation of duties" between the CPU and RoT designer (responsible for designing the hardware that generates the critical attestation report) and us (the Confidential Computing solution provider). In Apple's case, they own the attestation keys, which users must trust for system integrity; this somewhat defeats the purpose of ensuring Apple cannot access the data. In our case, we do not own the attestation keys, which strengthens trust in the system's integrity.
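
The difference can be reduced to one question a verifying client asks: who roots the attestation signing chain? A purely conceptual sketch of that check (the `report` object and its fields are hypothetical):

```python
# Purely conceptual check of who roots the attestation trust chain; the
# `report` object and its fields are hypothetical.

INDEPENDENT_SILICON_ROOTS = {"AMD", "Intel", "NVIDIA"}  # hardware vendors' root CAs

def attestation_rooted_in_hardware_vendor(report) -> bool:
    """True if the attestation signing chain ends at an independent silicon vendor.

    When one company designs the chip, holds the attestation keys, and operates
    the service (as with PCC), users must trust that single party. When the
    chain is rooted in an independent silicon vendor, the service operator
    cannot forge attestations about its own infrastructure.
    """
    return report.root_certificate_issuer in INDEPENDENT_SILICON_ROOTS
```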

You can ensure that user data remains protected and processed within secure enclaves, meeting the highest data privacy standards. Our solutions are designed to be flexible and compatible with a wide range of existing hardware, making it easy for businesses to adopt these advanced security measures without significant infrastructure changes. Whether you are using Intel SGX, AMD SEV, or the latest Confidential Computing GPUs, our software is designed to provide seamless integration and robust security.

Join our community to learn more about how to deploy confidential AI, or contact us directly to learn more about our solutions.