Discover BlindChat, an open-source privacy-focused ChatGPT alternative that runs in your web browser, safeguarding your data while offering a seamless AI experience. Explore how it empowers users to enjoy both privacy and convenience in this transformative AI solution.
Here, we provide a deep dive into Confidential Computing: what it is, how it can protect data privacy, and where it comes from.
AI model traceability is crucial, but open-source practices alone are inadequate. Combining new software and hardware-based tools with open sourcing offers potential solutions for a secure AI supply chain.
On June 14th, the AI Act was passed by the European Parliament. We gathered information on this complex piece of legislation for you. Let’s break down how the EU wants to regulate Artificial Intelligence in 10 questions.
In this article, we'll demonstrate how you can efficiently analyze code at scale while maintaining privacy. We'll use BlindBox, our open-source secure enclave tooling, to serve StarCoder with privacy guarantees on Azure.
We are excited to introduce BlindBox, our latest open-source solution designed to enhance SaaS deployment security. Our tooling enables developers to wrap any Docker image with isolation layers and deploy them inside Confidential Containers.
We take security and open-source data privacy seriously at Mithril Security. So we're very proud that our historical confidential computing solution, BlindAI, was successfully audited by Quarkslab!
This vulnerability can be used to mount a Man-in-the-Middle attack. We found a fix, which Teaclave implemented following the release of this article.
Discover how we partnered with Avian to deploy sensitive forensic services thanks to Zero Trust Elasticsearch.
If you’re wondering about the benefits and weaknesses of differential privacy, confidential computing, federated learning, and related techniques, and how they can be combined to improve artificial intelligence and data privacy, you’ve come to the right place.
Tools for collaborating remotely on sensitive data offer amazing interactivity and flexibility, but they need safeguards, or whole datasets can be extracted in a few lines of code.
We will see why security and privacy may facilitate the adoption of Large Language Models, as the complexity of deploying these vast models at scale pushes towards centralisation.