Report on the Malicious Uses of AI

OpenAI has just released its annual report on malicious uses of AI.

Using AI as a force multiplier for our specialized investigative teams, in the three months since our last report we have detected, disrupted, and exposed harmful activity including social engineering, cyber espionage, deceptive employment schemes, covert influence operations, and scams.

These operations originated in many parts of the world, used many different tactics, and targeted many different victims. A significant number appeared to originate in China: four of the ten cases in the report, spanning social engineering, covert influence operations, and cyber threats, likely had a Chinese origin. But we have also disrupted abuses from many other countries: the report includes case studies of a likely task scam from Cambodia, comment spamming apparently originating in the Philippines, covert influence attempts potentially linked to Russia and Iran, and deceptive employment schemes.

Reports like this one offer only a brief window into how malicious actors around the world are using AI. I say "brief" because last year the models were not good enough for these sorts of activities, and next year threat actors will be running their AI models locally, leaving us without this kind of visibility.

Wall Street Journal article (also here). Slashdot thread.

