Fascinating study: “How to Securely Implement Cryptography in Deep Neural Networks.”
Summary: The wide adoption of deep neural networks (DNNs) raises the question of how we can equip them with a desired cryptographic capability (for instance, to decrypt an encrypted input, to verify that this input is authorized, or to embed a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special kind of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This mismatch between the discrete and continuous computational models raises two questions: what is the best way to implement standard cryptographic primitives as DNNs, and do DNN implementations of secure cryptosystems remain secure in this new setting, in which an attacker can ask the DNN to process a message whose “bits” are arbitrary real numbers?
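To make the mismatch concrete, here is a minimal sketch (my own illustration, not code from the paper; the names relu and xor_gadget are mine) of how a single key-dependent XOR gate can be expressed exactly with one linear layer and ReLUs, via the identity XOR(x, k) = |x − k| for x, k ∈ {0, 1}:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def xor_gadget(x, k):
    """XOR of an input 'bit' x with a fixed key bit k, written as a tiny
    ReLU network: XOR(x, k) = |x - k| = ReLU(x - k) + ReLU(k - x).
    Exact whenever x and k are genuine bits in {0, 1}."""
    return relu(x - k) + relu(k - x)

# On Boolean inputs the gadget agrees with the Boolean gate.
for x in (0.0, 1.0):
    for k in (0.0, 1.0):
        assert xor_gadget(x, k) == float(int(x) ^ int(k))
```

A whole cipher assembled from such gadgets agrees with the digital circuit on every Boolean input; the open question is what the network computes on all the other real-valued inputs.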
In this paper, we lay the foundations of this new theory, defining what correctness and security mean for implementations of cryptographic primitives as ReLU-based DNNs. We then show that the natural implementations of block ciphers as DNNs can be broken in linear time by exploiting such nonstandard inputs. We tested our attack on full-round AES-128 and achieved a high success rate in recovering randomly chosen keys. Finally, we develop a new method for implementing any desired cryptographic functionality as a standard ReLU-based DNN in a provably secure and correct way. Our protective technique adds very little overhead (a constant number of additional layers and a linear number of additional neurons) and is entirely practical.
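To convey the flavor of both results, here is a toy continuation of the sketch above (again my own illustration: the probe value, the clamp01 and sanitize helpers, and the slope parameter are assumptions, and this reproduces neither the actual attack on AES-128 nor the paper's provably secure construction). A single real-valued query outside {0, 1} reads off the key bit of the XOR gadget, the kind of per-bit leakage that makes a linear-time break plausible, while a steep ReLU rounding layer placed in front of the network snaps nonstandard inputs back to honest bits at constant extra depth and linear extra width:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def xor_gadget(x, k):
    # Key-XOR gadget from the sketch above: XOR(x, k) = |x - k|.
    return relu(x - k) + relu(k - x)

# Attack flavor: one nonstandard query x = 2 reads off the key bit,
# since |2 - k| is 2.0 when k = 0 but 1.0 when k = 1. One such probe
# per key bit is what a linear-time break amounts to.
assert xor_gadget(2.0, 0.0) == 2.0
assert xor_gadget(2.0, 1.0) == 1.0

def clamp01(y):
    # Piecewise-linear clamp to [0, 1], built from two ReLUs.
    return relu(y) - relu(y - 1.0)

def sanitize(x, slope=1e6):
    """Hypothetical hardening layer (an illustration, not the paper's
    construction): a steep ReLU ramp that outputs exactly 0 or 1 for
    every input outside a band of width 1/slope around 0.5, while
    leaving genuine bits untouched. Cost: a constant number of extra
    layers and a constant number of extra neurons per input bit."""
    return clamp01(slope * (x - 0.5) + 0.5)

# After sanitization the nonstandard probe collapses onto the legitimate
# Boolean query x = 1, so it leaks nothing the Boolean cipher would not.
assert xor_gadget(sanitize(2.0), 0.0) == 1.0
assert xor_gadget(sanitize(2.0), 1.0) == 0.0
```

The appeal of such a front layer is that it preserves every legitimate Boolean query, so correctness is unaffected, while an attacker's real-valued probes are collapsed onto queries the Boolean cipher was already designed to withstand.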