Mithril Security is proud to announce that BlindAI Core has successfully passed an independent security audit by Quarkslab!
BlindAI is our open-source confidential computing solution for querying and deploying AI models while guaranteeing data privacy. BlindAI Core uses Intel Software Guard Extensions (Intel SGX) under the hood to protect user data during remote machine learning processing.
- Our open-source AI inference product, BlindAI, has successfully undergone a cybersecurity audit conducted by Quarkslab.
- Quarkslab is a reputable cybersecurity firm with prior knowledge of Intel SGX.
- During the comprehensive 40-day audit, no major issues or vulnerabilities were identified. The full report can be accessed here.
Security and transparency
When developing BlindAI, we recognized the importance of creating a secure and trustworthy solution, so we placed security as a top priority throughout the design and implementation process. However, ensuring data privacy is easier said than done.
Designing a security solution on top of Intel SGX presents numerous challenges. These include implementing attestation to ensure the trustworthiness of the enclave, revisiting established security practices in the context of SGX, and understanding the new threats SGX introduces and implementing countermeasures against them. It is also essential to mitigate side-channel attacks.
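To give an idea of what the attestation step involves, here is a minimal, simplified sketch in Python of the kind of check an SGX client performs before trusting an enclave. The names and the measurement scheme are illustrative assumptions, not BlindAI's actual code: real attestation verifies a quote signed by Intel's quoting infrastructure, whereas this sketch only compares a reported enclave measurement (MRENCLAVE) against an expected value and rejects debug-mode enclaves.

```python
import hashlib

# Hypothetical expected enclave measurement (MRENCLAVE), e.g. published
# alongside a reproducible build of the enclave binary.
EXPECTED_MRENCLAVE = hashlib.sha256(b"enclave-binary-v0.0.2").hexdigest()


def verify_attestation(reported_mrenclave: str, debug_mode: bool) -> bool:
    """Accept the enclave only if its measurement matches the expected
    build and it is not running in debug mode (debug mode disables the
    enclave's memory protections)."""
    if debug_mode:
        return False
    return reported_mrenclave == EXPECTED_MRENCLAVE


# A client would run this check before sending any model or tensor data.
ok = verify_attestation(EXPECTED_MRENCLAVE, debug_mode=False)
```

In a production flow, a failed check means the client refuses to upload anything, since the remote party may be running a tampered or debuggable enclave.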
We believe transparency speaks louder than words, so our code is open source and security professionals are encouraged to examine our security claims. Realistically, however, only a few users have the time and expertise to audit our solution themselves, so we asked independent security researchers to try to crack our security.
Going with Quarkslab
We hired Quarkslab to carry out an independent security assessment of our product. Quarkslab is a cybersecurity firm renowned for the quality of its auditing and R&D work, with a strong team of security researchers covering software security, cryptography, and reverse engineering. Quarkslab has performed audits for clients in various industries, such as banking and finance, government, media and entertainment, and public services.
In our search for an auditing firm, we were impressed by the quality of the audits Quarkslab carried out for the Open Source Technology Improvement Fund (OSTIF), reviewing critical open-source projects such as OpenSSL, VeraCrypt, and OpenVPN. On top of that, Quarkslab had previously worked on Intel SGX (we read their informative blog series on Intel SGX internals when we first started experimenting with SGX), which put them in the perfect position for the job.
No major issues or vulnerabilities
The audit lasted 40 days and was conducted from January to March 2023. The results were very positive: the audit uncovered no major issues or vulnerabilities. In particular, no vulnerabilities were found that could compromise the confidentiality or integrity of user data (i.e., models and tensors). The audit was conducted on BlindAI-preview v0.0.2, and the project has since been integrated into the main BlindAI repository as BlindAI Core.
This is an important step in the path towards production use of BlindAI and shows that our engineering approach of considering security at every step is successful. As part of our engagement towards transparency, we are making the audit report publicly available here.
Following the audit, we have prepared a technical document that discusses the audit results and explains what actions we have taken in response. The document is mainly intended for security researchers and IT professionals:
We would like to thank Quarkslab, especially their security auditors Damien Aumaitre and Dahmun Goudarzi, for their work and the detailed report they produced.
If you have any questions or comments regarding the security audit of BlindAI Core, you can reach us on our Discord.
Image credits: Edgar Huneau