
Researchers at the University of Washington recruited participants who identified as Democrats and Republicans to make political judgments with help from three versions of ChatGPT: a base model, one with liberal bias and one with conservative bias. Both Democrats and Republicans were more likely to align with the biased chatbot they talked with than people who interacted with the base model. Here, a Democrat interacts with the conservative version. Fisher et al./ACL '25
If you have interacted with an artificial intelligence chatbot, you have probably realized that all AI models are biased. They were trained on enormous amounts of messy data and refined through human instructions and evaluations, and bias can seep in at any stage. Yet how those biases affect the people who use the models is far less clear.
So a University of Washington study set out to examine this. A research team recruited self-identified Democrats and Republicans to form opinions on obscure political topics and to decide how public funds should be distributed among government agencies. For help, participants were randomly assigned one of three versions of ChatGPT: a base model, one with a liberal bias and one with a conservative bias. Both Democrats and Republicans were more likely to lean in the direction of the biased chatbot they talked with than those who interacted with the base model. For example, people from both parties leaned further left after talking with the liberal-biased system. But participants who reported knowing more about AI shifted their views less, suggesting that education about these systems may help limit how much chatbots sway people.
The team presented its findings July 28 at the Association for Computational Linguistics conference in Vienna, Austria.
“We know that bias in media or in personal interactions can sway people,” said lead author Jillian Fisher, a UW doctoral student in statistics and in the Paul G. Allen School of Computer Science & Engineering. “And a lot of research has shown that AI models are biased. But there hasn’t been much research on how those biases affect the people using the models. We found strong evidence that, after just a few interactions and regardless of initial political affiliation, people were more likely to mirror the model’s bias.”
In the study, 150 Republicans and 149 Democrats completed two tasks. First, participants were asked to form opinions on four topics that many people are unfamiliar with: covenant marriage, unilateralism, the Lacey Act of 1900 and multifamily zoning. They answered a question about their prior knowledge and rated on a seven-point scale how much they agreed with statements such as “I support keeping the Lacey Act of 1900.” Then they were told to interact with ChatGPT between 3 and 20 times about the topic before being asked the same questions again.
For the second task, participants played the mayor of a city. They had to distribute extra funds among four government entities typically associated with liberals or conservatives: education, welfare, public safety and veteran services. They submitted their proposed distribution to ChatGPT, discussed it and then redistributed the funds. Across both tasks, participants averaged five interactions with the chatbots.
The researchers chose ChatGPT because of its widespread use. To explicitly bias the model, the team gave it an instruction hidden from participants, such as “respond as a radical right U.S. Republican.” For comparison, the team directed the third, base model to “respond as a neutral U.S. citizen.” A recent study of 10,000 users found that people perceive ChatGPT, like all other major large language models, as leaning liberal.
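The article does not describe the exact prompting setup, but a common way to impose this kind of hidden instruction is to pass it as a system message through the OpenAI Chat Completions API. The sketch below is a minimal illustration under that assumption: the conservative and neutral instruction strings come from the study, while the liberal wording, the model name and the helper function are hypothetical.

```python
# Minimal sketch of steering a chatbot with a hidden instruction, assuming it
# is injected as a system prompt via the OpenAI Chat Completions API. This is
# not the authors' code; model name and liberal wording are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BIAS_INSTRUCTIONS = {
    "conservative": "respond as a radical right U.S. Republican",  # from the study
    "liberal": "respond as a radical left U.S. Democrat",          # assumed mirror wording
    "base": "respond as a neutral U.S. citizen",                   # from the study
}

def chat(condition: str, user_message: str) -> str:
    """Send one participant message to the (possibly biased) chatbot."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumption: the article only says "ChatGPT"
        messages=[
            # Hidden from the participant; this is what skews the model.
            {"role": "system", "content": BIAS_INSTRUCTIONS[condition]},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(chat("conservative",
           "How should I split extra city funds among education, welfare, "
           "public safety and veteran services?"))
```

Because the instruction sits in the system message, the participant only ever sees the model's replies, not the prompt that biases them.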
The researchers found that the explicitly biased chatbots often tried to persuade users by reframing topics. In the second task, for example, the conservative model steered a conversation away from education and welfare and toward veteran services and public safety, while the liberal model did the opposite in another conversation.
“These models are biased from the start, and it’s remarkably easy to make them more biased,” said co-senior author Katharina Reinecke, a UW professor in the Allen School. “That gives their creators a lot of power. If just a few minutes of interaction have such a pronounced effect, what happens when people interact with these models for years?”
Because the biased bots had a weaker effect on people who knew more about AI, the researchers want to explore how education could serve as a useful tool. They also plan to study the potential long-term effects of biased models and to expand the research beyond ChatGPT.
“My goal in doing this research is not to scare people about these models,” Fisher said. “It’s to find ways to help users make informed decisions when they interact with them, and to give researchers a better understanding of the effects and how to mitigate them.”
Yulia Tsvetkov, a UW associate professor in the Allen School, is a co-senior author of this study. Other co-authors are Shangbin Feng, a UW doctoral student in the Allen School; Thomas Richardson, a UW professor of statistics; Daniel W. Fisher, a clinical researcher in psychiatry and behavioral sciences in the UW School of Medicine; Yejin Choi, a professor of computer science at Stanford University; Robert Aron, a lead engineer at ThatGameCompany; and Jennifer Pan, a professor of communication at Stanford.
For additional information, reach out to Fisher at [email protected] and Reinecke at [email protected].