Common ways of cheating AI tests include swapping models or overfitting on known test sets. A solution would be a secure infrastructure for verifiable tests that protects the confidentiality of both model weights and test data.
A year ago, Mithril Security exposed the risk of AI model manipulation with PoisonGPT. To address this, we developed AICert, a cryptographic tool that creates tamper-proof model cards, ensuring transparency and detecting unauthorized changes to AI models.
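The core idea behind a tamper-evident model card can be illustrated with a minimal sketch: hash the model weights and sign the digest, so any later modification becomes detectable. This is only an illustration of the general principle, not AICert's actual implementation; the file name `model.safetensors` is hypothetical.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint_model(path: str) -> bytes:
    """Hash a weights file in chunks to produce a tamper-evident digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.digest()

# Sign the digest so any later change to the weights invalidates the signature.
private_key = Ed25519PrivateKey.generate()
digest = fingerprint_model("model.safetensors")  # hypothetical file name
signature = private_key.sign(digest)

# A verifier holding the public key can confirm the weights were not altered;
# verify() raises cryptography.exceptions.InvalidSignature on any mismatch.
public_key = private_key.public_key()
public_key.verify(signature, digest)
```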
In this article, we offer a few hints on choosing a stack for building a confidential AI workload that leverages GPUs, with the goal of safeguarding data privacy and the confidentiality of model weights.
Mithril Security's latest update outlines advancements in confidential AI deployment, emphasizing innovations in data privacy, model integrity, and governance for enhanced security and transparency in AI technologies.
Apple has announced Private Cloud Compute (PCC), which uses Confidential Computing to ensure user data privacy in cloud AI processing, setting a new standard in data security.
Introducing BlindChat, a confidential AI assistant prioritizing user privacy through secure enclaves. Learn how it addresses data security concerns in AI applications.
How we partnered with Tramscribe to leverage LLMs for medical voice note analysis.
How we partnered with Avian to deploy sensitive forensic services with Zero Trust Elasticsearch.
Improving Hospital Diagnoses: How BlindAI and BastionAI Could Assist
A look at key upcoming EU regulations and how they are likely to affect data and AI industry practices.