Announcing BlindAI 0.5 with Cloud support

BlindAI 0.5 is now available with many new features such as Cloud support!

We are proud to announce that BlindAI 0.5 is now available! Several major changes have been introduced, the biggest one being Cloud support. Users can now rely on Mithril Cloud by default, which abstracts away the provisioning of secure enclaves and lets you start testing BlindAI much faster: only the Python client is required to get started.

Many other changes have been made to support additional models, such as OpenAI Whisper, along with performance improvements.

Find out more below and don’t forget to star the BlindAI repo!

  • Cloud support

Have you been curious about testing BlindAI, but don’t have the appropriate hardware?

Good news! We are now hosting a fully managed version of BlindAI, allowing you to test out the service quickly!

All you need to do is create an account on our cloud website and get an API key to get started. You will also find example models there, so you can get going with only a few lines of code.

The current version of the BlindAI client has everything you need to get started, including the cloud code signature. Connecting to the cloud requires only one line of code, without the hassle of specifying any technical details!

Only you can access your models: the server has a namespace mechanism in place, allowing you to try as many models as you want, as long as each model does not exceed 600 MB.
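As a rough illustration of the namespace and size limit described above, here is a minimal sketch. The class and method names are hypothetical and are not the actual BlindAI API; the point is simply that models are keyed per user and rejected past 600 MB.

```python
MAX_MODEL_SIZE = 600 * 1024 * 1024  # 600 MB upload limit on Mithril Cloud

class ModelStore:
    """Toy per-user namespace: each API key only sees its own models."""

    def __init__(self):
        self._store = {}  # (api_key, model_name) -> model bytes

    def upload(self, api_key, model_name, model_bytes):
        if len(model_bytes) > MAX_MODEL_SIZE:
            raise ValueError("model exceeds the 600 MB limit")
        self._store[(api_key, model_name)] = model_bytes

    def get(self, api_key, model_name):
        # A user can only retrieve models uploaded under their own key.
        return self._store[(api_key, model_name)]

store = ModelStore()
store.upload("alice-key", "resnet", b"\x00" * 1024)
print(store.get("alice-key", "resnet") == b"\x00" * 1024)  # True
```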

On-premise deployment on your own infrastructure is of course still possible.

  • Model sealing

To avoid endless model uploads, BlindAI now saves uploaded models to disk. The models are not stored in the clear, however: they are encrypted using the Intel SGX data sealing mechanism, so that only the server can recover the data. A unique encryption/decryption key is derived from the server's code signature, meaning only the current server can decrypt the data.

Any change to the server would change its signature and thus permanently lock the previously uploaded models, protecting them from any potential leak or misuse.
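The key derivation described above can be pictured with a short sketch: the sealing key is bound to the enclave's code signature, so a modified server derives a different key and can no longer unseal old data. This is a deliberate simplification of SGX sealing, not the real mechanism; the HMAC-based derivation below is illustrative only.

```python
import hashlib
import hmac

def derive_sealing_key(code_signature: bytes, platform_secret: bytes) -> bytes:
    """Toy stand-in for SGX sealing: bind the key to the enclave measurement."""
    return hmac.new(platform_secret, code_signature, hashlib.sha256).digest()

platform_secret = b"fused-into-the-cpu"  # in real SGX, never leaves the processor

key_v1 = derive_sealing_key(b"enclave-code-v1", platform_secret)
key_v2 = derive_sealing_key(b"enclave-code-v2", platform_secret)

# Any change to the server changes its signature, hence the sealing key:
print(key_v1 != key_v2)  # True: v2 cannot decrypt data sealed by v1
```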

  • Auto input & output fact detection with multiple inputs/outputs

In order to make BlindAI as easy to use as possible, you no longer need to specify the model's input facts when uploading your model. The server now detects the input & output facts automatically, allowing you to focus on your workflow.

What if I have a model that takes multiple inputs? No problem: our latest changes in BlindAI now cover this case, allowing you to use your own custom versions of DistilBERT models, or others.
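To make "fact" concrete (in tract's vocabulary, a fact is roughly a tensor's datatype plus shape), here is a hedged sketch of inferring one fact per input, including the multiple-input case. The helpers below are hypothetical illustrations, not the server's actual detection code.

```python
def tensor_fact(t):
    """Infer a (dtype, shape) 'fact' from a nested-list tensor."""
    shape = []
    while isinstance(t, list):
        shape.append(len(t))
        t = t[0]
    return type(t).__name__, tuple(shape)

def detect_facts(*inputs):
    """A model may take several inputs; detect a fact for each one."""
    return [tensor_fact(x) for x in inputs]

# e.g. a DistilBERT-like model taking input_ids and an attention mask:
facts = detect_facts([[101, 2054, 102]], [[1, 1, 1]])
print(facts)  # [('int', (1, 3)), ('int', (1, 3))]
```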

  • Unlocking more models

Thanks to recent changes in tract, the inference engine BlindAI relies on, new models such as Whisper, GPT-2 and YOLOv5 are now fully supported. This is due to the addition of new operators such as NonMaxSuppression, Multinomial and Einsum, great additions that cover even more workloads.

  • No more CBOR

We completely reworked the serialization/deserialization process on the server & client. CBOR (serde_cbor & cbor2) is no longer used; we now rely on protobuf serialization/deserialization directly.

This is a first step towards supporting clients in other languages & platforms (Node.js, Java/Android…) more easily. In addition, this change brought a nice performance gain (around 20% faster) when uploading models or data.
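One way to see where such a gain can come from is to compare a self-describing text encoding with a compact, fixed-layout binary one. The framing below is NOT BlindAI's actual protobuf schema, just a sketch of why binary encodings shrink tensor payloads:

```python
import json
import struct

# A tensor payload: 1,000 float32 values.
values = [float(i) for i in range(1000)]

# Text-based encoding (a stand-in for a generic self-describing format):
text_payload = json.dumps(values).encode()

# Compact binary framing: a 4-byte length prefix followed by raw float32s
# (illustrative of what a protobuf-style wire format buys, not BlindAI's format):
binary_payload = struct.pack("<I", len(values)) + struct.pack(
    f"<{len(values)}f", *values
)

print(len(binary_payload) < len(text_payload))  # True: binary is much smaller
```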

  • Better error messages & exception handling

We completely reworked the exceptions on the client & server to give a better idea of what is going on. Any error during a prediction is properly sent to the client with the correct exception, allowing users to take the appropriate action for their workload.

What’s more, a proper exception is raised if the server reached does not match the expected policy, allowing users to quickly retrieve the expected and measured signatures (and, for advanced users, the whole attestation object with all the security flags!).
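The policy-mismatch case can be sketched on the client side as an exception carrying both signatures plus the full attestation object. The class and function names here are hypothetical, not BlindAI's actual exception hierarchy:

```python
class AttestationError(Exception):
    """Hypothetical client-side exception for an enclave/policy mismatch."""

    def __init__(self, expected, measured, attestation=None):
        super().__init__(
            f"enclave signature mismatch: expected {expected}, got {measured}"
        )
        self.expected = expected        # signature from the policy file
        self.measured = measured        # signature reported by the enclave
        self.attestation = attestation  # full object, for advanced users

def check_policy(expected, measured, attestation=None):
    """Raise a descriptive error instead of failing with an opaque one."""
    if expected != measured:
        raise AttestationError(expected, measured, attestation)

try:
    check_policy("abc123", "def456", attestation={"debug": False})
except AttestationError as e:
    print(e.expected, e.measured, e.attestation["debug"])
```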