Mithril Security Blog
Get the latest news from Mithril Security
Daniel Huynh

How Python Data Science Libraries Can Be Hijacked (and What You Can Do About It)

Hackers can easily hijack the data science libraries you use every day and get full access to the datasets you are working with. Data owners need tools to prevent it from happening.

Charles Chudant

Jupyter Notebooks Are Not Made for Sensitive Data Science Collaboration

When collaborating remotely on sensitive data, the usually amazing interactivity and flexibility of Jupyter notebooks need safeguards, or whole datasets can be extracted in a few lines of code.

Daniel Huynh

Introducing BastionLab - A Simple Privacy Framework for Data Science Collaboration

BastionLab is a simple privacy framework for data science collaboration. It lets data owners protect the privacy of their datasets by enforcing that only privacy-friendly operations are run on the data and that only anonymized outputs are shown to the data scientist.

Daniel Huynh

Our Roadmap to Build a Simple Privacy Toolkit for Data Science Collaboration

A year and a half later, Mithril Security’s roadmap has transformed significantly, but our initial goal has stayed the same: democratizing privacy in data science.

Raphaël Millet

Deploy a Zero-Trust Diagnostic Assistant for Hospitals

Improving Hospital Diagnoses: How BlindAI and BastionAI Could Assist

Daniel Huynh

Mithril Security Joins the Confidential Computing Consortium

Mithril Security joins the Confidential Computing Consortium to accelerate open-source, privacy-friendly AI.

Daniel Huynh

Presenting Mithril Cloud, the First Confidential AI as a Service Offering

Discover how BlindAI Cloud enables you to deploy and query AI models with privacy, in just two lines of Python code. Try our solution by deploying a ResNet model.

Daniel Huynh

Large Language Models and Privacy: How Can Privacy Accelerate the Adoption of Big Models?

We look at why security and privacy might facilitate the adoption of Large Language Models, as the complexity of deploying these vast models at scale pushes towards centralisation.

Daniel Huynh

Insights from Porting Hugging Face Rust Tokenizers to WASM

Learn how Rust and WASM can be used to port server-side logic, like Hugging Face Tokenizers, to the client for security and performance.

Daniel Huynh

Build a Privacy-By-Design Voice Assistant With BlindAI

Discover how BlindAI can make AI voice assistants privacy-friendly!

Daniel Huynh

Introducing BlindAI, an Open-Source, Privacy-Friendly AI Deployment in Rust

Discover BlindAI, an open-source solution for privacy-friendly AI deployment in Rust!

Maxime Pontey

What To Expect From the EU AI Regulation?

An overview of the key upcoming EU regulations and how they are likely to affect data and AI industry practices.