Imagine you were shown evidence that an AI tool makes accurate forecasts about some stocks you own. How would you feel about using it? Now suppose you are interviewing for a job at a firm whose human resources department uses an AI system to screen resumes. Would you be comfortable with that?
A recent study finds that people are neither uniformly enthusiastic nor uniformly wary of AI. Rather than splitting into camps of techno-optimists and skeptics, individuals weigh the practical implications of using AI on a case-by-case basis.
“We suggest that AI appreciation arises when AI is viewed as more capable than humans and personalization is seen as unnecessary within a particular decision context,” states MIT Professor Jackson Lu, a co-author of a newly released paper outlining the findings of the study. “AI aversion occurs when either of these criteria is unmet, whereas AI appreciation emerges only when both are satisfied.”
The publication, “AI Aversion or Appreciation? A Capability-Personalization Framework and a Meta-Analytic Review,” is featured in Psychological Bulletin. The study has eight co-authors, including Lu, who serves as the Career Development Associate Professor of Work and Organization Studies at the MIT Sloan School of Management.
New framework enhances understanding
Reactions to AI have been the subject of extensive discussion, often leading to seemingly conflicting conclusions. A notable 2015 study on “algorithm aversion” found that people are less tolerant of errors made by AI than of those made by humans, while a widely cited 2019 study on “algorithm appreciation” indicated that individuals preferred advice from AI over that from humans.
In an effort to reconcile these contradictory findings, Lu and his collaborators carried out a meta-analysis of 163 previous studies comparing preferences for AI versus human input. The researchers sought to determine if the data supported their suggested “Capability–Personalization Framework” — the concept that both the perceived competence of AI and the perceived need for personalization affect our preferences for either AI or humans in any specific situation.
Across the 163 studies, the research team examined over 82,000 responses spanning 93 distinct “decision contexts” — for example, whether participants were comfortable with AI involvement in cancer diagnostics. The analysis confirmed that the Capability–Personalization Framework effectively explains individuals’ preferences.
“The meta-analysis validated our theoretical framework,” Lu explains. “Both aspects are crucial: Individuals consider whether AI outperforms humans in a specific task, and whether the task requires personalization. People will favor AI only if they believe it to be more capable than humans and the task doesn’t necessitate personalization.”
He emphasizes: “The main takeaway is that high perceived capability by itself doesn’t ensure AI appreciation. Personalization is significant too.”
For instance, people often prefer AI for tasks like fraud detection or handling large datasets — domains where AI’s abilities surpass human performance in terms of speed and volume, and where personalization isn’t essential. In contrast, they show more resistance to AI in contexts such as therapy, job interviews, or medical diagnostics, where they feel a human is more adept at understanding their specific situations.
“Individuals possess an intrinsic need to perceive themselves as unique and distinct from others,” Lu notes. “AI is frequently seen as impersonal and mechanical. Even with extensive data training, people doubt AI’s ability to comprehend their personal situations. They desire a human recruiter or doctor who can recognize their individuality.”
Context also plays a role: From tangibility to job loss
The study further identified other elements affecting preferences for AI. For example, AI appreciation is stronger for physical robots than for abstract algorithms.
Economic conditions matter as well: in countries with lower unemployment, AI appreciation tends to be more pronounced.
“This makes intuitive sense,” Lu states. “If you’re concerned about being supplanted by AI, you are less inclined to welcome it.”
Lu continues to study the intricate and changing attitudes people hold towards AI. While he doesn’t consider the current meta-analysis the final word on the topic, he hopes the Capability–Personalization Framework provides a meaningful perspective on how individuals evaluate AI across various scenarios.
“We do not assert that perceived capability and personalization are the only two important dimensions, but according to our meta-analysis, these factors encompass much of what influences people’s preferences for AI versus humans across a diverse array of studies,” Lu concludes.
Alongside Lu, the paper’s co-authors include Xin Qin, Chen Chen, Hansen Zhou, Xiaowei Dong, and Limei Cao from Sun Yat-sen University; Xiang Zhou from Shenzhen University; and Dongyuan Wu from Fudan University.
The research received partial support from grants to Qin and Wu from the National Natural Science Foundation of China.