During a session of class 6.C40/24.C40 (Ethics of Computing), Professor Armando Solar-Lezama presents the same challenging inquiry to his students that he frequently grapples with in the research he conducts with the Computer Assisted Programming Group at MIT:
“How can we ensure that a machine fulfills our intentions, and only those intentions?”
In today's era of generative AI, this may seem like an urgent new question. But Solar-Lezama, the Distinguished Professor of Computing at MIT, is quick to point out that the struggle is as old as humanity itself.
He recounts the Greek myth of King Midas, the ruler granted the godlike power to turn anything he touched into solid gold. Predictably, the wish backfired when Midas accidentally turned everyone he loved into gilded statues.
“Be cautious of what you wish for, as it may be granted in unexpected ways,” he warns his students, many of whom are aspiring mathematicians and coders.
Delving into MIT's archives to display slides of grainy black-and-white images, he traces the evolution of programming: from the 1970s Pygmalion machine, which demanded extraordinarily detailed instructions, to late-'90s software that took teams of engineers years of labor and an 800-page manual to develop.
While impressive in their day, these approaches took too long to serve users well. They left no room for spontaneous discovery, play, or innovation.
Solar-Lezama discusses the risks of building modern machines that don't always respect a programmer's directives or boundaries, and that are as capable of doing harm as of saving lives.
Titus Roesler, a senior studying electrical engineering, nods in understanding. Roesler is finalizing a paper on the ethics of self-driving vehicles and contemplating who is morally responsible when, hypothetically, one hits and kills a pedestrian. His argument examines the assumptions behind technological advances and weighs multiple valid viewpoints, drawing on the philosophical theory of utilitarianism. Roesler explains, "Roughly, utilitarianism says that the morally right action is the one that produces the greatest good for the greatest number of people."
MIT philosopher Brad Skow, who collaborated with Solar-Lezama to develop and co-teach the course, leans forward to take notes.
A class that requires both technical and philosophical acumen
Ethics of Computing, debuting in Fall 2024, was established through the Common Ground for Computing Education, an initiative of the MIT Schwarzman College of Computing that unites various departments to create and instruct new courses and launch programs that integrate computing with other fields.
The instructors alternate between lecture days. Skow, the Laurance S. Rockefeller Professor of Philosophy, provides insights into the broader implications of current ethical dilemmas, while Solar-Lezama, who additionally serves as the associate director and chief operating officer of MIT’s Computer Science and Artificial Intelligence Laboratory, presents viewpoints from his own discipline.
Both professors attend each other’s lectures and adapt their subsequent class sessions accordingly. The incorporation of learning from one another in real-time has led to more dynamic and responsive discussions during class. A recitation dedicated to unpacking the week’s topic with graduate students from philosophy or computer science enriches the course content.
“An outsider might assume that this is going to be a class ensuring that these new programmers leaving MIT always make the right decisions,” Skow remarks. Yet, the course is purposely structured to impart a different skill set to the students.
Eager to create a meaningful semester-long curriculum that offered more than just lectures on right or wrong, philosophy professor Caspar Hare conceived the idea for Ethics of Computing while serving as an associate dean of the Social and Ethical Responsibilities of Computing. Hare enlisted Skow and Solar-Lezama as the primary instructors, confident that they could achieve something more profound.
“Engaging deeply with the questions that arise in this course requires both technical and philosophical acumen. There aren’t any other classes at MIT that juxtapose both perspectives,” Skow states.
This is precisely what attracted senior Alek Westover to enroll. The mathematics and computer science double major articulates, “Many people speculate about the trajectory of AI in the coming five years. I felt it was vital to take a class that would facilitate deeper thinking about that.”
Westover mentions that he is drawn to philosophy due to an interest in ethics and the desire to differentiate between right and wrong. In his math courses, he has learned to articulate a problem statement and receive immediate confirmation regarding the success of his solution. However, in Ethics of Computing, he has acquired the skills to craft written arguments for “complex philosophical questions” that may not yield a single correct response.
For instance, "One question might be, what happens if we build powerful AI agents that can do any job a human can?" Westover asks. "If we're interacting with these AIs to that degree, should we be paying them a salary? How much should we care about what they want?"
There are no simple answers, and Westover anticipates encountering many other challenges in his future career.
“So, is the internet ruining the world?”
The semester commenced with an in-depth exploration of AI risks, questioning “whether AI represents an existential threat to humanity,” analyzing concepts such as free will, the science behind our decision-making processes under uncertainty, and discussions surrounding the long-term responsibilities and regulation of AI. A subsequent, more extensive unit focused on “the internet, the World Wide Web, and the societal consequences of technical choices.” As the term approaches its conclusion, themes of privacy, bias, and free expression will be examined.
A class discussion provocatively broached the question: “So, is the internet ruining the world?”
Senior Caitlin Ogoe is majoring in Course 6-9 (Computation and Cognition). Being in an environment where she can explore these kinds of questions is exactly why the self-described "technology skeptic" enrolled in the course.
Having grown up with a hard-of-hearing mother and a younger sister with a developmental disability, Ogoe naturally took on the role of the family member responsible for contacting service providers for tech support or programming iPhones. She parlayed her skills into a part-time role repairing cell phones, which fueled her deep interest in computation and ultimately led her to MIT. However, a prestigious summer fellowship in her first year prompted her to contemplate the ethical implications of how technology impacts consumers.
“Every interaction I’ve had with technology has been from the viewpoint of people, education, and personal relationships,” Ogoe states. “This is a niche I am passionate about. Pursuing humanities classes focusing on public policy, technology, and culture is one of my greatest interests, yet this is the first course I have taken that also incorporates a philosophy professor.”
The following week, Skow lectures on biases present in AI, and Ogoe, who is poised to enter the workforce next year while planning to attend law school to focus on relevant regulatory matters, raises her hand to pose questions or share counterarguments four times.
Skow delves into analyzing COMPAS, a controversial AI software that uses an algorithm to predict the likelihood that people accused of crimes will re-offend. According to a 2016 ProPublica investigation, COMPAS was more likely to flag Black defendants as future criminals, generating false positives at twice the rate it did for white defendants.
The class period centers on evaluating whether the article justifies the claim that the COMPAS system is biased and should be abolished. To do this, Skow introduces two distinct theories of fairness:
“Substantive fairness relates to whether a specific outcome is fair or unfair,” he elaborates. “Procedural fairness concerns whether the method through which an outcome is obtained is equitable.” Various conflicting fairness standards are then examined in class, leading to discussions about their plausibility and the conclusions they allow regarding the COMPAS system.
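The disparity at the heart of the COMPAS debate turns on a simple statistic: the false positive rate, i.e., the share of people who did not re-offend but were nevertheless flagged as high risk. A minimal sketch of how that rate is computed per group, using made-up illustrative data rather than ProPublica's actual dataset:

```python
def false_positive_rate(predicted_high_risk, reoffended):
    """FPR = (flagged but did not re-offend) / (all who did not re-offend)."""
    false_positives = sum(
        flagged and not actual
        for flagged, actual in zip(predicted_high_risk, reoffended)
    )
    negatives = sum(not actual for actual in reoffended)
    return false_positives / negatives

# Hypothetical predictions (True = flagged high risk) and outcomes
# (True = re-offended) for two groups; the numbers are invented.
group_a_pred = [True, True, False, True, False]
group_a_out = [False, True, False, False, False]
group_b_pred = [False, True, False, False, False]
group_b_out = [False, True, False, False, False]

fpr_a = false_positive_rate(group_a_pred, group_a_out)  # 2 of 4 non-reoffenders flagged
fpr_b = false_positive_rate(group_b_pred, group_b_out)  # 0 of 4 non-reoffenders flagged
```

A gap between `fpr_a` and `fpr_b` is an example of what the class would evaluate under a substantive-fairness standard; whether the algorithm's procedure for producing the scores was itself equitable is a separate, procedural question.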
Subsequently, the two professors ascend to Solar-Lezama’s office to reflect on how the day’s exercise had unfolded.
“Who can tell?” remarks Solar-Lezama. “Perhaps five years from now, everyone will find humor in how concerned people were about the existential risks posed by AI. Yet, a recurring theme I observe throughout this course is the importance of engaging in these debates beyond media narratives to rigorously contemplate these issues.”