Feeling unheard: Marginalized voices ring out against emotion AI

Illustrative concept of AI emotion detection. Image credit: Nicole Smith, created with Midjourney

Artificial intelligence is transforming numerous industries, but a recent study from the University of Michigan indicates that not all groups view it as beneficial.

Emotion AI, or emotion recognition, which claims that machines can read human emotions, has generated unease, especially among marginalized populations, according to a survey of a diverse sample of U.S. respondents.

This unease extends to settings such as health care, the workplace, automotive applications, and even children's toys.

Underrepresented communities—including minorities and people with disabilities—show markedly lower levels of comfort, suggesting significant societal and ethical ramifications, according to the study.

The research revealed that while individuals tend to feel slightly more at ease with AI identifying emotions such as happiness and surprise, a broad sense of discomfort remains, particularly regarding usage in social media, job interviews, and market research.

“These variations in comfort indicate an urgent need to take identity into account while evaluating the societal impact of emotion AI,” stated Nazanin Andalibi, assistant professor at the School of Information and lead author of the study.

Drawing on responses from nearly 600 participants, the study documents widespread apprehension toward emotion AI across key areas: public spaces, health care, work environments, job interviews, market research, border control, social media, children's toys, education, automotive applications, and individual pursuits. Comfort levels remained low across all 11 scenarios, with even the most favorable context (health care) reflecting low comfort, a noteworthy finding given that the use of emotion AI in health care is frequently lauded, according to the researchers.

“Emotion AI purports to interpret our most intimate feelings. Even if these interpretations are inaccurate—which many specialists assert they are—their increasing presence still poses profound privacy issues, as evidenced by individuals’ discomfort across various deployment scenarios,” noted study co-author Alexis Shore Ingber, a research fellow at the School of Information.

Notably, the analysis shows that a closer look at identity offers deeper insight into these comfort levels: people of color reported lower comfort with emotion AI than white individuals in most situations, with exceptions in contexts such as public spaces and job interviews.

The findings urge developers and policymakers in the United States to take immediate action: acknowledging this discomfort and implementing robust regulations to safeguard "emotion data" is critical as emotion AI expands into high-stakes environments.

“Emotion AI and emotion data must be regulated in the United States,” Andalibi asserted. “The European Union has recently prohibited the use of emotion AI in workplaces and educational settings; while not perfect, it is a progressive move, and I hope the U.S. can improve.”

These results will be presented at the 2025 Conference on Human Factors in Computing Systems in Yokohama, Japan.

