In 15 TED Talk-style presentations, MIT faculty recently shared groundbreaking research spanning social, ethical, and technical considerations, each project backed by a seed grant from the Social and Ethical Responsibilities of Computing (SERC), a cross-disciplinary initiative of the MIT Schwarzman College of Computing. The call for proposals last summer attracted nearly 70 applications, and a committee with representatives from every MIT school and the college selected the winning projects, each of which received up to $100,000 in funding.
“SERC is dedicated to propelling advancement at the intersection of computing, ethics, and society. The seed grants aim to stimulate bold and innovative thinking regarding the complex challenges and opportunities within this field,” said Nikos Trichakis, co-associate dean of SERC and the J.C. Penney Professor of Management. “With the MIT Ethics of Computing Research Symposium, we believed it was essential to not only highlight the breadth and depth of the research influencing the future of ethical computing but also to invite the community to engage in the discussion.”
“What you are witnessing here represents a collective judgment by the community about the most intriguing research on the social and ethical responsibilities of computing being conducted at MIT,” said Caspar Hare, co-associate dean of SERC and professor of philosophy.
The full-day symposium on May 1 centered on four primary themes: responsible healthcare technology, governance and ethics of artificial intelligence, technology in society and civic involvement, and digital inclusion and social justice. Speakers gave presentations on a wide array of topics, including algorithmic bias, data privacy, the societal implications of artificial intelligence, and the evolving interaction between humans and machines. The event also hosted a poster session, where student researchers presented projects they developed over the year as SERC Scholars.
Notable highlights from the MIT Ethics of Computing Research Symposium in each thematic area, many of which can be viewed on YouTube, included:
Improving fairness in the kidney transplant system
Policies governing the organ transplant system in the United States are set by a national committee that can take more than six months to devise a policy and then years to implement it, a timeline that many on the waiting list simply do not have.
Dimitris Bertsimas, vice provost for open learning, associate dean of business analytics, and Boeing Professor of Operations Research, presented his recent work in analytics for fair and efficient kidney transplant allocation. Bertsimas’ new algorithm, which weighs factors such as geographic location, mortality, and age, evaluates a policy scenario in just 14 seconds, a remarkable shift from the standard six hours.
Bertsimas and his team collaborate closely with the United Network for Organ Sharing (UNOS), a nonprofit organization that manages the majority of the national donation and transplantation system through a contract with the federal government. During his talk, Bertsimas showcased a video from James Alcorn, senior policy strategist at UNOS, who succinctly summarized the impact of the new algorithm:
“This optimization significantly alters the turnaround time for assessing these various simulations of policy scenarios. It used to take us a couple of months to review a few different policy scenarios, and now it takes just minutes to analyze thousands of scenarios. We can implement these changes much more swiftly, ultimately enabling us to enhance the system for transplant candidates at a much faster pace.”
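The talk did not walk through implementation details, but a minimal sketch can convey what “simulating a policy scenario” means computationally. Everything below is hypothetical: the candidate attributes, scoring rule, and weights are invented for illustration and do not reflect the actual UNOS allocation rules or Bertsimas’ optimization model. The core idea is that a policy can be expressed as a set of factor weights, so evaluating a new scenario reduces to re-scoring and re-ranking the waitlist, which is fast enough to sweep through thousands of candidate policies.

```python
# Toy policy-scenario simulation for organ allocation.
# Deliberately simplified and hypothetical: NOT the actual UNOS rules
# or the optimization model described in the talk.
from dataclasses import dataclass
import random

@dataclass
class Candidate:
    id: int
    wait_years: float      # time already spent on the waiting list
    distance_km: float     # distance from the donor hospital
    mortality_risk: float  # estimated risk of dying while waiting (0-1)

def score(c: Candidate, w: dict) -> float:
    """Higher score = higher priority under a given policy's weights."""
    return (w["wait"] * c.wait_years
            + w["risk"] * c.mortality_risk
            - w["dist"] * c.distance_km / 100.0)

def simulate(policy: dict, waitlist: list, n_organs: int) -> list:
    """Allocate n_organs to the top-scoring candidates under a policy."""
    ranked = sorted(waitlist, key=lambda c: score(c, policy), reverse=True)
    return [c.id for c in ranked[:n_organs]]

random.seed(0)
waitlist = [Candidate(i, random.uniform(0, 8), random.uniform(5, 800),
                      random.random()) for i in range(1000)]

# Two hypothetical policy scenarios: one favoring medical urgency,
# one favoring time already served on the waiting list.
policy_a = {"wait": 1.0, "risk": 2.0, "dist": 0.5}
policy_b = {"wait": 2.0, "risk": 1.0, "dist": 0.5}
print(simulate(policy_a, waitlist, 5))
print(simulate(policy_b, waitlist, 5))
```

Because each scenario is just a different weight vector, comparing policies amounts to re-running a cheap ranking, which is why scenario analysis that once took hours can collapse to seconds at this level of abstraction.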
The ethics surrounding AI-generated social media content
As AI-generated content increasingly saturates social media platforms, what are the consequences of disclosing (or not disclosing) that any component of a post was produced by AI? Adam Berinsky, Mitsui Professor of Political Science, and Gabrielle Péloquin-Skulski, PhD candidate in the Department of Political Science, investigated this issue in a session focused on recent studies regarding the influence of various labels on AI-generated content.
Through a series of surveys and experiments involving labels on AI-generated posts, the researchers examined how specific phrases and descriptions affected users’ perceptions of deception, their willingness to engage with the post, and whether they deemed the post true or false.
“The primary insight from our initial findings is that a universal approach does not work,” said Péloquin-Skulski. “We discovered that tagging AI-generated images with a process-oriented label diminishes belief in both false and true posts. This is quite troubling, as labeling aims to reduce beliefs in false information, not necessarily in verifiable information. This indicates that labels combining both process and truthfulness might be more effective in combating AI-generated misinformation.”
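As a rough illustration of the experimental design, here is a minimal sketch that simulates belief responses under different label conditions crossed with post veracity. The condition names, baseline belief rates, and effect sizes are entirely made up; only the qualitative pattern it mimics, that a process-oriented label lowers belief in true and false posts alike, comes from the findings described above.

```python
# Toy simulation of a label experiment: label condition x post veracity.
# All numbers are hypothetical and chosen only to mimic the qualitative
# pattern described in the talk, not the study's actual data or results.
import random
from statistics import mean

random.seed(1)
CONDITIONS = ["no_label", "process_label", "veracity_label"]

def simulate_response(condition: str, post_is_true: bool) -> int:
    """Return 1 if a simulated participant believes the post, else 0."""
    base = 0.7 if post_is_true else 0.5  # hypothetical baseline belief rates
    if condition == "process_label":
        base -= 0.15                     # lowers belief in ALL posts
    elif condition == "veracity_label" and not post_is_true:
        base -= 0.25                     # assumed to target false posts only
    return int(random.random() < base)

# Cross each label condition with true and false posts, then
# compare average belief rates across simulated participants.
for cond in CONDITIONS:
    for is_true in (True, False):
        rate = mean(simulate_response(cond, is_true) for _ in range(5000))
        print(f"{cond:15s} true={is_true!s:5s} belief rate = {rate:.2f}")
```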
Utilizing AI to enhance civil discourse online
“Our research seeks to address how individuals increasingly desire to have a voice in the organizations and communities they are part of,” Lily Tsai noted during a session about experiments in generative AI and the future of digital democracy. Tsai, Ford Professor of Political Science and director of the MIT Governance Lab, is conducting ongoing research with Alex Pentland, Toshiba Professor of Media Arts and Sciences, along with a larger team.
Online deliberative platforms have surged in popularity across the United States in both the public and private sectors. Tsai explained that technology now makes it feasible for everyone to have a voice, yet that involvement can be overwhelming or even feel unsafe. First, there is an excess of information available; second, online discourse has grown increasingly “uncivil.”
The team focuses on “how we can enhance existing technologies and improve them through rigorous, interdisciplinary research, and how we can innovate by incorporating generative AI to maximize the advantages of online spaces for deliberation.” They have created their own AI-integrated platform for deliberative democracy, DELiberation.io, and launched four initial modules. All studies have been conducted in the lab up to this point, but they are also planning a series of upcoming field studies, the first in collaboration with the government of the District of Columbia.
Tsai conveyed to the audience, “If you take away nothing else from this presentation, I hope you’ll remember this — we should all insist that the technologies being developed are evaluated to determine if they yield positive downstream outcomes, rather than just focusing on maximizing user numbers.”
A public think tank examining all facets of AI
When Catherine D’Ignazio, associate professor of urban science and planning, and Nikko Stevens, postdoctoral researcher at the Data + Feminism Lab at MIT, originally submitted their funding proposal, they did not aim to create a think tank but rather a framework, one describing how artificial intelligence and machine learning could incorporate community strategies and participatory design.
Ultimately, they established Liberatory AI, which they refer to as a “dynamic public think tank addressing all aspects of AI.” D’Ignazio and Stevens assembled 25 researchers from a wide range of institutions and disciplines who authored over 20 position papers delving into the latest academic literature on AI systems and engagement. They intentionally categorized the papers into three distinct themes: the corporate AI environment, limitations, and paths forward.
“Instead of waiting for OpenAI or Google to invite us to participate in the development of their products, we’ve united to challenge the status quo, think more broadly, and reorganize resources in this system with the hope of achieving greater societal transformation,” said D’Ignazio.