Get the latest from Mithril Security
Mithril Security Blog
Laura Yie
Members Public

Technical collaboration with the Future of Life Institute: developing hardware-backed AI governance tools

This article unveils AIGovTool, a collaboration between the Future of Life Institute and Mithril Security that employs Intel SGX enclaves for secure AI deployment. It addresses misuse concerns by enforcing governance policies, protecting model weights, and controlling model consumption.

Raphaël Millet
Members Public

BlindChat - Our Confidential AI Assistant

Introducing BlindChat, a confidential AI assistant prioritizing user privacy through secure enclaves. Learn how it addresses data security concerns in AI applications.

Daniel Huynh
Members Public

Privacy Risks of LLM Fine-Tuning

This article explores privacy risks in using large language models (LLMs) for AI applications. It focuses on the dangers of data exposure to third-party providers during fine-tuning and the potential disclosure of private information through LLM responses.

Daniel Huynh
Members Public

Our Journey To Democratize Confidential AI

This article provides insights into Mithril Security's journey to make AI more trustworthy, the company's perspective on addressing privacy concerns in the world of AI, and its vision for the future.

Daniel Huynh
Members Public

Our Roadmap for Privacy-First Conversational AI

In September 2023, we released the first version of BlindChat, our confidential Conversational AI. We were delighted with the collective response to the launch: BlindChat has been gaining traction on Hugging Face over the past few weeks, and we've had more visitors than ever before. But this local, fully in-browser

Daniel Huynh
Members Public

Introducing BlindChat Local: Full In-Browser Confidential AI Assistant

Discover BlindChat, an open-source privacy-focused ChatGPT alternative that runs in your web browser, safeguarding your data while offering a seamless AI experience. Explore how it empowers users to enjoy both privacy and convenience in this transformative AI solution.

Daniel Huynh
Members Public

Introducing BlindLlama, Zero-Trust AI APIs With Privacy Guarantees & Traceability

Introducing BlindLlama: an open-source Zero-Trust AI API. Learn how BlindLlama ensures confidentiality and transparency in AI deployment.

Daniel Huynh
Members Public

Confidential Computing: A History

Here, we provide a deep dive into Confidential Computing: how it can protect data privacy and where it comes from.

Daniel Huynh
Members Public

The Enterprise Guide to Adopting GenAI: Use Cases, Tools, and Limitations

Generative AI is revolutionizing enterprises with enhanced efficiency and customer satisfaction. The article explores real-world applications and deployment options like SaaS, on-VPC commercial FMs, and on-VPC open-source FMs, emphasizing the need for data protection.

Daniel Huynh
Members Public

Attacks on AI Models: Prompt Injection vs. Supply Chain Poisoning

A comparison of prompt injection and supply chain poisoning attacks on AI models, illustrated with a bank assistant example. Prompt injection has a limited impact confined to individual sessions, while supply chain poisoning compromises every deployment of the tampered model, posing far more severe risks.

Daniel Huynh
Members Public

Open Source Is Crucial for AI Transparency but Needs More Tooling

AI model traceability is crucial, but open-source practices alone are inadequate. Combining new software and hardware-based tools with open sourcing offers potential solutions for a secure AI supply chain.

Daniel Huynh
Members Public

PoisonGPT: How We Hid a Lobotomized LLM on Hugging Face to Spread Fake News

We will show in this article how one can surgically modify an open-source model, GPT-J-6B, and upload it to Hugging Face to make it spread misinformation while being undetected by standard benchmarks.