University of Georgia faculty member Ari Schlesinger has been honored with a 2025 Google Academic Research Award. The 2025 awards will support 56 projects led by 84 researchers in 12 countries conducting cutting-edge research in computing and technology.
Schlesinger, an assistant professor in the UGA School of Computing, was recognized in the Trust, Safety, Security and Privacy Research category, which supports research to improve digital trust, safety, privacy, and security across the online landscape.
Schlesinger leads a project titled "Cultivating AI Safety Literacy for University Computing Students" in collaboration with Nick Falkner, an associate professor in the School of Computer and Mathematical Sciences at the University of Adelaide in Australia.

AI safety is a multidisciplinary field that aims to prevent harm arising from AI technologies. The research aims to create and evaluate a replicable AI safety literacy program, then assess how well it transfers across different contexts and environments.
According to the researchers, while AI already powers safety features we use every day, such as email spam filters, modern AI systems pose new safety challenges for the technology workforce. To prevent unanticipated harms, university computer science students need multidisciplinary training in building digital safety into AI systems from the start.
“Risks from issues like deepfakes, misinformation, and scams have been a concern for years, but today’s AI systems, including large language models, amplify the scope and scale of online dangers,” Schlesinger said. “We want to achieve the benefits of technological systems for communication and efficiency, and that requires designing technologies where the benefits are not overshadowed by the risks.”
A replicable literacy program and assessment methodology will help close the AI safety literacy skills gap in technology careers.
“With new technology comes new risks, but we can mitigate those risks through a multidisciplinary approach to safety,” Schlesinger said. “Cybersecurity, privacy, and content moderation all contribute to this process. Safety provides a comprehensive framework for thinking about and addressing digital risks. We need experts in cybersecurity, but we also need people trained to address broader societal issues and interpersonal harms, alongside mitigating malicious behaviors.”
The project centers on a series of research studies assessing the effects of an AI safety literacy program designed to develop undergraduate and graduate computer science students' abilities in:
- comprehending sociotechnical harm
- recognizing and evaluating safety risks
- applying safe design principles within the AI development lifecycle
- assessing safe AI implementation
- communicating AI safety principles and practices to external stakeholders
By carrying out this research across two countries and in a variety of university and course contexts, the team will gain insight into the opportunities and obstacles in fostering AI safety literacy in real-world learning environments.
Each award recipient receives up to $100,000 in funding to support their work. Recipients are also paired with a Google research sponsor, creating a direct connection to Google’s research community and encouraging long-term collaboration.
The post UGA computing professor wins Google Research Award to develop AI safety appeared first on UGA Today.
