
Large language models (LLMs) are becoming increasingly useful, from managing schedules to summarizing long documents. Some LLM-based agents can also interact with external software, such as calendar systems or flight-booking applications, which raises privacy and security concerns.
To address this concern, Umar Iqbal, an assistant professor of computer science and engineering in the McKelvey School of Engineering at Washington University in St. Louis, and Yuhao Wu, a PhD student in Iqbal's lab, developed IsolateGPT, a technique that keeps external tools isolated from one another while they run within the system, so users can get the benefits of these applications without the risk of exposing their data. Other collaborators include Ning Zhang, an associate professor of computer science and engineering at WashU, and Franziska Roesner and Tadayoshi Kohno, both computer scientists at the University of Washington.
The researchers presented their findings at the Network and Distributed System Security (NDSS) Symposium, held Feb. 24-28.
“These systems are very powerful and can do a lot of things on a user’s behalf, but right now users can’t trust them because they are not secure,” Iqbal said. “We know there is a lot of value in letting these tools talk to each other, so we define the interfaces where they can connect and make sure the user knows that whatever is doing the interfacing is a trusted component.”
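IsolateGPT’s full design is laid out in the NDSS paper, but the core idea described above can be illustrated with a short sketch: each external app runs in its own sandbox with its own state, and a trusted hub mediates every cross-app request, surfacing it to the user before any data flows between tools. The sketch below is a simplified illustration under those assumptions, not the authors’ implementation; the class names, the `approve` callback, and the demo apps are all hypothetical.

```python
# Illustrative sketch (hypothetical names): isolated apps plus a
# trusted hub that mediates all cross-app communication with user
# consent. Not the IsolateGPT codebase.

from dataclasses import dataclass, field
from typing import Callable, Dict


@dataclass
class IsolatedApp:
    """An external tool in its own sandbox: it can only touch its
    own state and has no direct reference to any other app."""
    name: str
    handler: Callable[[str], str]                 # the app's entry point
    state: Dict[str, str] = field(default_factory=dict)

    def run(self, query: str) -> str:
        return self.handler(query)


class Hub:
    """Trusted mediator: the only path between apps. Every
    cross-app request is shown to the user before data flows."""

    def __init__(self, approve: Callable[[str, str, str], bool]):
        self._apps: Dict[str, IsolatedApp] = {}
        self._approve = approve                   # user-consent callback

    def register(self, app: IsolatedApp) -> None:
        self._apps[app.name] = app

    def route(self, caller: str, callee: str, query: str) -> str:
        # Apps never call each other directly; the hub checks
        # user consent, then forwards the request.
        if not self._approve(caller, callee, query):
            raise PermissionError(f"user denied {caller} -> {callee}")
        return self._apps[callee].run(query)


def demo_approve(src: str, dst: str, query: str) -> bool:
    # Stand-in for a real consent prompt: log and approve.
    print(f"[consent] {src} -> {dst}: {query!r} (user approved)")
    return True


if __name__ == "__main__":
    hub = Hub(approve=demo_approve)
    hub.register(IsolatedApp("calendar", lambda q: "free on Friday"))
    hub.register(IsolatedApp("flights", lambda q: f"booked: {q}"))
    # The flight app cannot read the calendar directly; it must go
    # through the hub, so the user sees exactly what is shared.
    availability = hub.route("flights", "calendar", "next free day?")
    print(hub.route("user", "flights", f"book for {availability}"))
```

The design choice the sketch highlights is the same one the quote describes: tools keep their value by talking to each other, but only across defined interfaces, with a trusted component in the middle and the user informed of each exchange.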
Read more on the McKelvey Engineering website.