AIs as Trusted Third Parties

This is a really interesting paper: “Trusted Machine Learning Models Unlock Private Inference for Problems Currently Infeasible with Cryptography.” The core idea is that AIs can act as trusted third parties:

Abstract: We often interact with untrusted parties. Prioritizing privacy can limit the effectiveness of these interactions, as achieving certain goals requires sharing private data. Traditionally, this challenge has been addressed either by seeking trusted intermediaries or by constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computation or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in the size and complexity of the applications they can support. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach to scaling secure computation, in which capable machine learning model(s) interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to strike a balance between privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases enabled by TCMEs, and show that even some simple classic cryptographic problems can already be solved this way. Finally, we outline current limitations and discuss the path forward for deploying them.
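The three TCME properties the abstract names, input/output constraints, explicit information flow control, and statelessness, can be sketched in a few lines. This is an illustrative toy, not the paper's implementation; `run_tcme`, `toy_model`, and every name in it are invented for the example:

```python
from typing import Callable, Set


def run_tcme(model: Callable[[str, str, str], str],
             secret_a: str, secret_b: str, question: str,
             allowed_outputs: Set[str]) -> str:
    """Toy Trusted Capable Model Environment.

    - Statelessness: the model is invoked exactly once, and nothing
      is persisted between calls.
    - Information flow control: both secrets go in, but only an
      answer from an agreed-upon output set is allowed to leave.
    """
    answer = model(secret_a, secret_b, question)
    if answer not in allowed_outputs:
        # Block any attempt to leak the inputs through the output channel.
        raise ValueError("output not in the agreed-upon set")
    return answer


# A stand-in for the capable model both parties have vetted.
def toy_model(a: str, b: str, question: str) -> str:
    return "A" if len(a) > len(b) else "B"


print(run_tcme(toy_model, "a long secret", "short", "whose is longer?", {"A", "B"}))  # -> A
```

The point of the wrapper is the output constraint: even a misbehaving model cannot exfiltrate a secret, because anything outside the agreed-upon answer set is rejected before it leaves the environment.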

When I wrote Applied Cryptography back in 1993, I talked about human trusted third parties (TTPs). This paper speculates that AIs could someday fill the role of a human TTP, with added benefits like (1) the ability to audit their processing, and (2) the ability to delete the data and erase their memory when the job is done. The possibilities are vast.

Here’s a TTP scenario. Alice and Bob want to know whose income is higher, but neither wants to reveal their income to the other. (Assume both want the true answer, so neither has an incentive to lie.) A human TTP solves this easily: Alice and Bob each whisper their income to the TTP, who announces the answer. But now that human knows the data. There are cryptographic protocols that can handle this particular problem. But it’s easy to imagine more complicated questions that cryptography can’t handle: “Which of these two novel manuscripts has more sex scenes?” “Which of these two business plans is the riskier investment?” If Alice and Bob can agree on an AI model they both trust, they can feed it the data, ask the question, get the answer, and then delete the model. And it’s plausible for them to trust a model with questions like these: they can test it in their lab, running it through endless scenarios until they’re convinced it’s fair, accurate, or whatever else they need it to be.
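The income comparison is the classic millionaires' problem from cryptography. A minimal sketch of the trusted-third-party version, with the TTP replaced by a stateless function (the function name and output strings are illustrative, not from the paper):

```python
def compare_incomes(alice_income: int, bob_income: int) -> str:
    """A stateless trusted third party: it sees both private inputs,
    reveals only which is higher, and retains nothing afterward."""
    if alice_income > bob_income:
        return "Alice earns more"
    if bob_income > alice_income:
        return "Bob earns more"
    return "Incomes are equal"


# Each party submits privately; only the comparison result is released.
print(compare_incomes(90_000, 75_000))  # -> Alice earns more
```

The cryptographic protocols compute exactly this comparison without any party seeing both inputs; the paper's claim is that a vetted, stateless model can play the same role for fuzzier questions, like the manuscript and business-plan examples above, that no existing protocol can express.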

The paper contains lots of examples where an AI TTP provides real value. This is mostly still a future possibility, but it’s a fascinating thought experiment.

