Time-of-Check Time-of-Use Attacks Against LLMs


This is an excellent piece of research: “Mind the Gap: Time-of-Check to Time-of-Use Vulnerabilities in LLM-Enabled Agents”:

Abstract: Large Language Model (LLM)-enabled agents are rapidly emerging across a wide range of applications, but their deployment introduces vulnerabilities with security implications. While prior research has examined prompt-based attacks (e.g., prompt injection) and data-oriented threats (e.g., data exfiltration), time-of-check to time-of-use (TOCTOU) remains largely unexplored in this context. TOCTOU arises when an agent validates external state (e.g., a file or an API response) that is later modified before use, enabling practical attacks such as malicious configuration swaps or payload injection. In this work, we present the first study of TOCTOU vulnerabilities in LLM-enabled agents. We introduce TOCTOU-Bench, a benchmark of 66 realistic user tasks designed to evaluate this class of vulnerabilities. As countermeasures, we adapt detection and mitigation techniques from systems security to this setting and propose prompt rewriting, state integrity monitoring, and tool-fusing. Our study highlights challenges unique to agentic workflows, in which we achieve up to 25% detection accuracy with automated detection methods, a 3% decrease in vulnerable plan generation, and a 95% reduction in the attack window. Combining all three techniques reduces the TOCTOU vulnerabilities in an executed trajectory from 12% to 8%. Our results open a new research direction at the intersection of AI safety and systems security.
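
To make the vulnerability class concrete, here is a minimal Python sketch of the check/use gap the paper describes, plus one possible integrity guard. This is my own illustration, not code from the paper: the function names (run_task_vulnerable, run_task_guarded, execute_with_config) and the config-file scenario are hypothetical, and the guard only approximates the spirit of state integrity monitoring by hashing the checked state and re-verifying it at time of use.

```python
import hashlib
from pathlib import Path

def execute_with_config(config_text: str) -> str:
    # Hypothetical stand-in for whatever downstream tool the agent drives.
    return f"ran task with {len(config_text)}-byte config"

# Vulnerable pattern: the agent checks the config, then uses it later.
def run_task_vulnerable(config_path: Path) -> str:
    checked = config_path.read_text()                 # time of check
    if "allow_shell = true" in checked:
        raise PermissionError("config failed safety check")
    # ...planning, other tool calls, LLM latency: the attack window.
    # An attacker who swaps the file here defeats the check above.
    return execute_with_config(config_path.read_text())  # time of use

# Guarded pattern: re-verify state integrity at time of use.
def run_task_guarded(config_path: Path) -> str:
    checked = config_path.read_bytes()                # time of check
    if b"allow_shell = true" in checked:
        raise PermissionError("config failed safety check")
    digest = hashlib.sha256(checked).hexdigest()      # pin the checked state
    # ...same attack window as above...
    current = config_path.read_bytes()                # time of use
    if hashlib.sha256(current).hexdigest() != digest:
        raise RuntimeError("config modified between check and use")
    return execute_with_config(current.decode())
```

A complementary design, closer in spirit to what the abstract calls tool-fusing, is to collapse the check and the use into a single atomic tool call, so the window between them never opens at all.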


