AI privacy

Daniel Huynh

Mithril Security is supported by the OpenAI Cybersecurity Grant to build Confidential AI.

Mithril Security has been awarded a grant from the OpenAI Cybersecurity Grant Program. This grant will fund our work on developing open-source tooling to deploy AI models on GPUs with Trusted Platform Modules (TPMs) while ensuring data confidentiality and providing full code integrity.

Laura Yie

Technical collaboration with the Future of Life Institute: developing hardware-backed AI governance tools

The article unveils AIGovTool, a collaboration between the Future of Life Institute and Mithril Security that employs Intel SGX enclaves for secure AI deployment. It addresses concerns about misuse by enforcing governance policies, protecting model weights, and controlling model consumption.

Raphaël Millet

BlindChat - Our Confidential AI Assistant

Introducing BlindChat, a confidential AI assistant prioritizing user privacy through secure enclaves. Learn how it addresses data security concerns in AI applications.

Daniel Huynh

Privacy Risks of LLM Fine-Tuning

This article explores privacy risks in using large language models (LLMs) for AI applications. It focuses on the dangers of data exposure to third-party providers during fine-tuning and the potential disclosure of private information through LLM responses.

Daniel Huynh

Our Journey To Democratize Confidential AI

This article recounts Mithril Security's journey to make AI more trustworthy, the company's perspective on addressing privacy concerns in AI, and its vision for the future.

Daniel Huynh

Confidential Computing: A History

Here, we provide a deep dive into Confidential Computing: how it can protect data privacy and where it comes from.

Daniel Huynh

Attacks on AI Models: Prompt Injection vs. Supply Chain Poisoning

A comparison of prompt injection and supply chain poisoning attacks on AI models, illustrated with a bank assistant example. Prompt injection has a limited impact on individual sessions, while supply chain poisoning compromises the entire supply chain, posing far more severe risks.

Daniel Huynh

Open Source Is Crucial for AI Transparency but Needs More Tooling

AI model traceability is crucial, but open-source practices alone are inadequate. Combining new software and hardware-based tools with open sourcing offers potential solutions for a secure AI supply chain.

Daniel Huynh

The AI Act: 9 Key Answers to Get Onboard

On the 14th of June, the AI Act was passed by the European Parliament. We gathered information on this complex piece of legislation for you. Let's break down how the EU wants to regulate Artificial Intelligence in 9 questions.

Raphaël Millet

Mithril X Tramscribe: Confidential LLMs for Medical Voice Notes Analysis

How we partnered with Tramscribe to leverage LLMs for medical voice note analysis.

Raphaël Millet

Mithril x Avian: Zero Trust Digital Forensics and eDiscovery

How we partnered with Avian to deploy sensitive forensic services using zero-trust Elasticsearch.

Daniel Huynh

Mithril Security Joins the Confidential Computing Consortium

Mithril Security joins the Confidential Computing Consortium to accelerate open-source, privacy-friendly AI.