In this input integrity assault on an AI system, researchers managed to deceive AIOps tools:

AIOps pertains to the deployment of LLM-driven agents to collect and examine application telemetry, encompassing system logs, performance indicators, traces, and alerts, to identify issues and subsequently propose or execute remedial measures. Entities such as Cisco have implemented AIOps within a conversational interface that administrators can utilize to inquire about system performance. Certain AIOps tools can address such inquiries by automatically enacting fixes or recommending scripts that can resolve issues.

Nevertheless, these agents can be misled by fraudulent analytics data into undertaking detrimental actions, including reverting an installed package to a susceptible version.

The study: “When AIOps Become “AI Oops”: Subverting LLM-driven IT Operations via Telemetry Manipulation“:

Abstract: AI for IT Operations (AIOps) is revolutionizing how organizations oversee intricate software systems by automating anomaly detection, incident diagnosis, and mitigation. Contemporary AIOps solutions increasingly depend on autonomous LLM-based agents to decode telemetry data and undertake corrective actions with minimal human involvement, promising quicker response times and reduced operational expenses.

In this research, we conduct the inaugural security assessment of AIOps solutions, revealing that once again, AI-driven automation incurs significant security vulnerabilities. We illustrate that adversaries can manipulate system telemetry to misdirect AIOps agents into executing actions that jeopardize the integrity of the infrastructure they oversee. We present methods to effectively inject telemetry data using error-inducing requests that alter agent behavior via a strategy of adversarial reward-hacking; feasible yet inaccurate interpretations of system errors that influence the agent’s decision-making process. Our attack framework, AIOpsDoom, is entirely automated—integrating reconnaissance, fuzzing, and LLM-driven adversarial input generation—and functions without any prior knowledge of the target system.

To mitigate this risk, we propose AIOpsShield, a protective mechanism that purifies telemetry data by taking advantage of its structured characteristics and the limited involvement of user-generated content. Our experiments indicate that AIOpsShield consistently obstructs telemetry-based assaults without hindering standard agent performance.

Ultimately, this research reveals AIOps as an emerging attack vector for system compromise and highlights the critical need for security-conscious AIOps design.


Leave a Reply

Your email address will not be published. Required fields are marked *

Share This