PoisonGPT: How We Hid a Lobotomized LLM on Hugging Face to Spread Fake News
In this article, we show how one can surgically modify an open-source model, GPT-J-6B, and upload it to Hugging Face so that it spreads misinformation while evading detection by standard benchmarks.
The AI Act: 9 Key Answers to Get Onboard
On the 14th of June, the AI Act was passed by the EU Parliament. We gathered information on this complex piece of legislation for you. Let's break down how the EU wants to regulate Artificial Intelligence in 9 questions.
AI-Assisted Code Generation With Privacy Guarantees: Securely Deploy SantaCoder With BlindBox on Azure
In this article, we'll demonstrate how you can efficiently analyze code at scale while maintaining privacy. We'll use BlindBox, our open-source secure enclave tooling, to serve SantaCoder with privacy guarantees on Azure.
Discover Confidential Computing by Coding Your Own KMS Inside an Enclave
Discover confidential computing with our tutorials: bridge the knowledge gap, become proficient with secure enclaves, and build applications that take advantage of their strengths. Join us and become a Confidential Computing wizard. Dive into our content and start your journey today.
Mithril x Tramscribe: Confidential LLMs for Medical Voice Note Analysis
How we partnered with Tramscribe to leverage LLMs for medical voice note analysis.
Mithril Security Raised €1.2 Million to Protect LLM Users' Data
With BlindBox, you can use Large Language Models without any intermediary or model owner seeing the data sent to the models. This type of solution is critical today, as the newfound ease of use of generative AI (GPT-4, Midjourney, GitHub Copilot…) is already revolutionizing the tech industry.
Announcing BlindBox, Secure Infrastructure Tooling to Deploy LLMs, Available on Confidential Containers on Azure Container Instances
We are excited to introduce BlindBox, our latest open-source solution designed to enhance SaaS deployment security. Our tooling enables developers to wrap any Docker image with isolation layers and deploy them inside Confidential Containers.
BlindAI Passes an Independent Security Audit by Quarkslab
We take security and open-source data privacy seriously at Mithril Security. So we're very proud that our original confidential computing solution, BlindAI, was successfully audited by Quarkslab!
Identifying a Critical Attestation Bypass Vulnerability in Apache Teaclave
This vulnerability could be used to mount a man-in-the-middle attack. We proposed a fix, which Teaclave implemented following the release of this article.
Mithril x Avian: Zero Trust Digital Forensics and eDiscovery
How we partnered with Avian to deploy sensitive forensic services thanks to Zero Trust Elasticsearch.
Rust: How We Built a Privacy Framework for Data Science
We could have built our privacy framework BastionLab in any language, such as Python, data science's favorite. But we chose Rust for its efficiency and security features. Here are the reasons we loved doing so, along with some challenges we encountered along the way.
Data Science: The Short Guide to Privacy Technologies
If you're wondering about the benefits and weaknesses of differential privacy, confidential computing, federated learning, and other privacy technologies, and how they can be combined to improve artificial intelligence and data privacy, you've come to the right place.