AI privacy

Corentin Lauverjat

Another Intel SGX Security Flaw? Our Analysis of the SGX Fuse Key Extraction Claim

A recent discovery reveals a weakness in older Intel CPUs affecting SGX security. Despite the alarm, the extracted keys are encrypted and unusable. Dive in to learn more.

Raphaël Millet

Choosing your GPU stack to deploy Confidential AI

In this article, we offer a few pointers on choosing a stack for building a confidential AI workload on GPUs, with protections that safeguard both data privacy and the confidentiality of model weights.

Raphaël Millet

Mithril Security Summer Update: Progress and Future Plans

Mithril Security's latest update outlines advancements in confidential AI deployment, emphasizing innovations in data privacy, model integrity, and governance for enhanced security and transparency in AI technologies.

Daniel Huynh

Mithril Security is supported by the OpenAI Cybersecurity Grant Program to build Confidential AI

Mithril Security has been awarded a grant from the OpenAI Cybersecurity Grant Program. This grant will fund our work on developing open-source tooling to deploy AI models on GPUs with Trusted Platform Modules (TPMs) while ensuring data confidentiality and providing full code integrity.

Laura Yie

Technical collaboration with the Future of Life Institute: developing hardware-backed AI governance tools

The article unveils AIGovTool, a collaboration between the Future of Life Institute and Mithril Security that employs Intel SGX enclaves for secure AI deployment. It addresses misuse concerns by enforcing governance policies, protecting model weights, and controlling how models are consumed.

Raphaël Millet

BlindChat - Our Confidential AI Assistant

Introducing BlindChat, a confidential AI assistant prioritizing user privacy through secure enclaves. Learn how it addresses data security concerns in AI applications.

Daniel Huynh

Privacy Risks of LLM Fine-Tuning

This article explores privacy risks in using large language models (LLMs) for AI applications. It focuses on the dangers of data exposure to third-party providers during fine-tuning and the potential disclosure of private information through LLM responses.

Daniel Huynh

Our Journey To Democratize Confidential AI

This article offers insights into Mithril Security's journey to make AI more trustworthy, its perspective on addressing privacy concerns in AI, and its vision for the future.

Daniel Huynh

Confidential Computing: A History

Here, we provide a deep dive into Confidential Computing: how it can protect data privacy, and where it comes from.

Daniel Huynh

Attacks on AI Models: Prompt Injection vs. Supply Chain Poisoning

A comparison of prompt injection and supply chain poisoning attacks on AI models, illustrated with a bank assistant. Prompt injection has a limited impact on individual sessions, while supply chain poisoning propagates to every downstream user of a compromised model, posing far more severe risks.

Daniel Huynh

Open Source Is Crucial for AI Transparency but Needs More Tooling

AI model traceability is crucial, but open-source practices alone are inadequate. Combining new software and hardware-based tools with open sourcing offers potential solutions for a secure AI supply chain.

Daniel Huynh

The AI Act: 9 Key Answers to Get Onboard

On the 14th of June, the AI Act was passed by the EU Parliament. We gathered information on this complex piece of legislation for you. Let's break down how the EU wants to regulate Artificial Intelligence in 9 questions.