3 Questions: Modeling adversarial intelligence to exploit AI’s security vulnerabilities

If you’ve watched cartoons like Tom and Jerry, you’ll notice a recurring motif: a sly target evades a more powerful foe. This game of “cat-and-mouse,” literal or otherwise, involves chasing something that narrowly slips away with each attempt.

In much the same manner, escaping tenacious hackers is an ongoing struggle for cybersecurity teams. To help them keep pace with an adversary that stays just beyond their grasp, MIT researchers are developing an AI approach called “artificial adversarial intelligence” that mimics attackers of a device or network to test network defenses before real breaches occur. Other AI-driven defensive measures help engineers further harden their systems to fend off ransomware, data theft, and other cyber intrusions.

Here, Una-May O’Reilly, a principal investigator at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), who heads the Anyscale Learning For All Group (ALFA), elaborates on how artificial adversarial intelligence shields us from cyber threats.

Q: In what ways can artificial adversarial intelligence play the part of a cyber attacker, and in what ways does it portray a cyber defender?

A: Cyber aggressors range across a spectrum of skill levels. At the most basic level, there are individuals known as script-kiddies, or threat actors who deploy widely known exploits and malware in hopes of discovering a network or device that lacks good cyber practices. In the intermediate range are cyber mercenaries who possess better resources and are organized to target businesses with ransomware or extortion tactics. Finally, there are elite groups that may have state backing and can launch the most challenging-to-detect “advanced persistent threats” (APTs).

Consider the specialized, malicious intelligence that these attackers wield — that is adversarial intelligence. The attackers build highly technical tools that let them hack into code, they choose the tool that suits their objective, and their attacks unfold in multiple stages. At each stage, they learn something, fold it into their contextual understanding, and then decide their next move. For sophisticated APTs, they may strategically pick their targets and devise a stealthy, low-profile plan so discreet that it slips past our defensive measures. They can even plant deceptive evidence that points investigators to a different hacker!
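As a rough illustration of that stage-by-stage loop (observe, fold the observation into context, then decide the next move), here is a minimal Python sketch of a hypothetical multi-stage attacker agent. The stage names, the toolkit, and the random decision rule are invented placeholders for illustration, not the agents developed in this research.

```python
import random

# Minimal sketch of a hypothetical multi-stage attacker agent loop.
# All stage names, tools, and the decision rule are illustrative placeholders.

STAGES = ["reconnaissance", "initial_access", "lateral_movement", "exfiltration"]

# Hypothetical toolkit: candidate actions the agent can choose from at each stage.
TOOLKIT = {
    "reconnaissance": ["port_scan", "dns_enumeration"],
    "initial_access": ["phishing_payload", "known_cve_exploit"],
    "lateral_movement": ["credential_reuse", "remote_service_abuse"],
    "exfiltration": ["dns_tunnel", "encrypted_upload"],
}

def run_campaign(seed: int = 0) -> dict:
    """Walk the stages, accumulating knowledge and choosing one tool per stage."""
    rng = random.Random(seed)
    knowledge = {}  # contextual understanding built up over the campaign
    for stage in STAGES:
        observation = f"finding_{stage}_{rng.randint(0, 9)}"  # stand-in for real telemetry
        knowledge[stage] = observation        # incorporate what was learned at this stage
        tool = rng.choice(TOOLKIT[stage])     # decide the next move given the context so far
        print(f"{stage}: learned {observation!r}, chose {tool!r}")
    return knowledge

if __name__ == "__main__":
    run_campaign()
```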

My research goal is to emulate this specific kind of offensive, or attacking, intelligence: an intelligence that is intrinsically adversarial, the kind that human threat actors rely on. I use AI and machine learning to design cyber agents and simulate the adversarial behavior of human attackers. I also model the learning and adaptation that characterize the ongoing cyber arms race.

It is important to note that cyber defenses are quite intricate. They have evolved in response to increasingly sophisticated attack capabilities. These defensive systems involve designing detectors, analyzing system logs, triggering appropriate alerts, and then feeding those alerts into incident response protocols. They must remain vigilant across a vast attack surface that is hard to monitor and changes rapidly. On the defender’s side of the attacker-versus-defender dynamic, my team and I likewise develop AI to support these various defensive fronts.
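To give a flavor of one of those defensive fronts, the sketch below scans a toy system log and flags rare event types as alert candidates. The log format, the frequency-based rule, and the threshold are assumptions made purely for illustration; real detectors and incident response pipelines are far more elaborate.

```python
from collections import Counter

# Toy log-scanning detector: flag event types that occur rarely as alert candidates.
# The log format and threshold are illustrative assumptions, not a production design.

def alert_on_rare_events(log_lines: list[str], min_count: int = 2) -> list[str]:
    """Return event types that appear fewer than min_count times."""
    event_types = [line.split()[0] for line in log_lines if line.strip()]
    counts = Counter(event_types)
    return [event for event, n in counts.items() if n < min_count]

if __name__ == "__main__":
    sample_log = [
        "LOGIN user=alice status=ok",
        "LOGIN user=bob status=ok",
        "LOGIN user=alice status=ok",
        "PRIV_ESCALATION user=bob target=root",  # rare event type -> alert candidate
    ]
    for event in alert_on_rare_events(sample_log):
        print(f"ALERT: unusual event type {event!r}; route to incident response")
```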

Another notable aspect of adversarial intelligence: Both Tom and Jerry learn from their rivalry! Their skills sharpen as they compete with one another, and the result is an arms race. One improves, which pushes the other to improve in self-preservation, and this back-and-forth keeps escalating over time! We aim to recreate cyber versions of these arms races.

Q: What are some examples in our daily lives where artificial adversarial intelligence has safeguarded us? How can we employ adversarial intelligence agents to stay ahead of threat actors?

A: Machine learning has been applied in many ways to strengthen cybersecurity. There are detectors tuned to different threats, for instance to known malware families or to anomalous behavior. AI-powered triage systems are also in place. Some of the spam protection tools on your mobile device are AI-driven!

With my team, I devise AI-enabled cyber aggressors that replicate the actions of threat actors. We innovate AI to equip our cyber agents with expert computer skills and programming proficiency, allowing them to process various cyber intelligence, plan attack actions, and make informed choices throughout a campaign.

Adversarially intelligent agents (such as our AI cyber attackers) can serve as practice partners when testing network defenses. A great deal of effort goes into assessing a network’s resilience to breaches, and AI can help with that. In addition, when we add machine learning to our agents and to our defenses, they play out an arms race that we can inspect, analyze, and use to anticipate which countermeasures to deploy when we defend our assets.
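As a toy picture of that arms-race dynamic, the sketch below alternates between a learning attacker and a learning defender: whichever side loses a round adjusts its behavior. The numeric “stealth” and “coverage” values and the fixed update steps are invented assumptions for illustration; the learning agents and defenses described above are far richer.

```python
import random

# Toy attacker-defender arms race: two scalar "skills" adapt in alternation.
# The numeric model and update rules are illustrative assumptions only.

def arms_race(rounds: int = 10, seed: int = 1) -> None:
    rng = random.Random(seed)
    attacker_stealth = 0.3    # chance an attack goes unnoticed, before defenses react
    defender_coverage = 0.3   # chance the defense catches an attack it observes

    for r in range(1, rounds + 1):
        breach = rng.random() < attacker_stealth * (1 - defender_coverage)
        if breach:
            # Breach observed: the defender studies it and broadens detection coverage.
            defender_coverage = min(1.0, defender_coverage + 0.05)
            outcome = "breach"
        else:
            # Attack detected: the attacker adapts and becomes stealthier.
            attacker_stealth = min(1.0, attacker_stealth + 0.05)
            outcome = "detected"
        print(f"round {r}: {outcome}  stealth={attacker_stealth:.2f}  coverage={defender_coverage:.2f}")

if __name__ == "__main__":
    arms_race()
```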

Q: What new vulnerabilities are they adapting to, and what methods do they employ?

A: There appears to be no end to the introduction of new software and the creation of new system configurations. With each new release, vulnerabilities arise that attackers can exploit. These may either be existing known weaknesses in code or new, undocumented issues.

New configurations carry the risk of introducing mistakes or new attack vectors. Ransomware wasn’t on our radar back when we were contending with denial-of-service attacks. Now we are juggling cyber espionage and ransomware alongside intellectual property theft. All of our critical infrastructure, including telecommunications, finance, health care, municipal services, energy, and water systems, is a vulnerable target.

Fortunately, significant attention is being directed towards safeguarding critical infrastructure. We will need to translate these efforts into AI-driven products and services that automate a portion of that work. Additionally, we must continue innovating ever-more intelligent adversarial agents to keep us agile, or assist us in practicing how to defend our cyber resources.

