Data protection

Daniel Huynh

Mithril Security is supported by an OpenAI Cybersecurity Grant to build Confidential AI.

Mithril Security has been awarded a grant from the OpenAI Cybersecurity Grant Program. This grant will fund our work on developing open-source tooling to deploy AI models on GPUs with Trusted Platform Modules (TPMs) while ensuring data confidentiality and providing full code integrity.

Laura Yie

Technical collaboration with the Future of Life Institute: developing hardware-backed AI governance tools

This article unveils AIGovTool, a collaboration between the Future of Life Institute and Mithril Security that employs Intel SGX enclaves for secure AI deployment. It addresses concerns about misuse by enforcing governance policies, protecting model weights, and controlling model consumption.

Raphaël Millet

BlindChat - Our Confidential AI Assistant

Introducing BlindChat, a confidential AI assistant prioritizing user privacy through secure enclaves. Learn how it addresses data security concerns in AI applications.

Daniel Huynh

Introducing BlindChat Local: Full In-Browser Confidential AI Assistant

Discover BlindChat, an open-source privacy-focused conversational AI that runs in your web browser, safeguarding your data while offering a seamless AI experience. Explore how it empowers users to enjoy both privacy and convenience in this transformative AI solution.

Daniel Huynh

Confidential Computing: A History

Here, we provide a deep dive into Confidential Computing: how it can protect data privacy and where it comes from.

Daniel Huynh

The Enterprise Guide to Adopting GenAI: Use Cases, Tools, and Limitations

Generative AI is transforming enterprises by improving efficiency and customer satisfaction. This article explores real-world applications and deployment options, including SaaS, on-VPC commercial foundation models (FMs), and on-VPC open-source FMs, emphasizing the need for data protection.

Daniel Huynh

Attacks on AI Models: Prompt Injection vs. Supply Chain Poisoning

A comparison of prompt injection and supply chain poisoning attacks on AI models, illustrated with a bank assistant example. Prompt injection has a limited impact, confined to individual sessions, while supply chain poisoning compromises the model for every downstream user, posing far more severe risks.

Daniel Huynh

Discover Confidential Computing by Coding Your Own KMS Inside an Enclave

Discover confidential computing with our tutorials. Fill the knowledge gap, become proficient in secure enclaves, and craft applications that leverage their strengths. Join us to become a Confidential Computing wizard! Dive into our content and start your journey today.

Raphaël Millet

Mithril X Tramscribe: Confidential LLMs for Medical Voice Notes Analysis

How we partnered with Tramscribe to leverage LLMs for medical voice note analysis.

Daniel Huynh

Introducing BastionLab - A Simple Privacy Framework for Data Science Collaboration

BastionLab is a simple privacy framework for data science collaboration. It lets data owners protect the privacy of their datasets by enforcing that only privacy-friendly operations are performed on the data and that only anonymized outputs are shown to the data scientist.

Raphaël Millet

Deploy a Zero-Trust Diagnostic Assistant for Hospitals

Improving Hospital Diagnoses: How BlindAI and BastionAI Could Assist

Maxime Pontey

What To Expect From the EU AI Regulation?

A view on the key upcoming EU regulations and how they are likely to affect data and AI industry practices.