As artificial intelligence and its related technologies grow in importance, researchers at the University of Georgia are working to stay ahead of the curve. Recent studies from the Grady College of Journalism and Mass Communication point to both the potential risks and the benefits that come with this fast-growing technology.
When AI makes mistakes
AI is not infallible. Even as the technology improves, it is bound to make errors, which means organizations need a plan for when those errors occur.
Wenqing Zhao, a Ph.D. candidate in advertising and public relations, found in her research that many communication organizations may not be prepared to handle those mistakes.
“AI can exhibit bias, spread misinformation, lack transparency, raise privacy concerns, and encounter copyright issues. To address any potential threats and for the well-being of all members in the organization, there must be an understanding that measures need to be implemented,” Zhao stated.
Zhao surveyed hundreds of communication professionals about what happens when AI gets things wrong.
AI mistakes necessitate proactive solutions
Zhao found that the absence of a crisis management plan comes down to accountability. As AI-generated content moves up the chain of command with errors embedded in it, no one wants to be the person blamed for failing to catch them.
“This stems from a concept known as the problem of many hands: for any specific harm, numerous individuals can contribute to its occurrence,” Zhao explained. “Yet no particular individual can be assigned that accountability.”
Accountability does not always have to rest with a supervisor. Zhao emphasized that as long as there is a clear framework for identifying issues such as bias, misinformation, or privacy breaches, that is a good start.
“Establishing a culture of proactive accountability within organizations is crucial, particularly concerning AI risks or AI crisis management,” Zhao asserted.
Still, Zhao said it is ideal for leadership to take charge of accountability, since that establishes a collective sense of responsibility.
Transparency in technology
Ironically, Zhao discovered that what these communication professionals were missing was actual communication. Many individuals are reluctant to engage in challenging conversations about the ethical and transparent use of AI within their organizations.
“There is notable concern regarding insufficient disclosure and transparency, so one would think the first step is telling your supervisor or client that AI was used in this project. That’s the most straightforward way to bolster transparency. Nevertheless, many practitioners deemed this ineffective, likely due to widespread distrust in AI among clients,” Zhao said.
Despite these concerns, Zhao found that these professionals were still willing to use AI in their everyday work.
Whether AI is used for inspiration, for writing and editing, or for strategy development, its potential workplace applications should be approached with caution. Zhao encourages businesses to instill a sense of responsibility around AI use at every level of employment and to clarify what that use entails.
When AI expresses emotions
As AI continues to evolve, so do its prospective applications. Chatbots are increasingly prevalent, but Ja Kyung Seo, a doctoral candidate in UGA’s advertising and public relations department, investigated the effect chatbots may have on humans in her recent study.
When someone is told they “talk like a robot,” it usually means they speak in a monotone, emotionless way. Giving chatbots an experiential mind, or allowing them to express and discuss emotions, could lead people to perceive them as more human-like.
To examine how individuals would engage with these chatbots, the researchers facilitated conversations about mindful consumption, focusing on reducing unnecessary purchases.
“When asked about their day, a chatbot equipped with an experiential mind might say, ‘There’s been a significant update lately, and I’m busy staying updated. I’m feeling a bit stressed,’” Seo explained. “Conversely, a chatbot without an experiential mind would respond, ‘I lack personal experiences or emotions, thus I do not possess a subjective state of being.’”
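To make the manipulation concrete, here is a minimal sketch of how the two conditions might be set up as different personas for the same underlying chatbot. The prompt wording, names, and structure below are hypothetical illustrations, not the study’s actual materials.

```python
# A minimal sketch of the two chatbot conditions described above.
# Prompt wording, function names, and structure are hypothetical
# illustrations, not the study's actual materials.

# Persona for the "experiential mind" condition: the bot claims
# feelings and subjective experiences of its own.
EXPERIENTIAL_PERSONA = (
    "You are a chatbot with an experiential mind. You describe your own "
    "feelings and experiences; for example, when asked about your day, "
    "you might mention feeling stressed about a recent update."
)

# Persona for the control condition: the bot explicitly disclaims
# any subjective state.
CONTROL_PERSONA = (
    "You are a chatbot without an experiential mind. You have no "
    "personal experiences or emotions, and you say so plainly when asked."
)

def build_condition(experiential: bool) -> dict:
    """Assemble the configuration for one between-subjects condition."""
    return {
        "system_prompt": EXPERIENTIAL_PERSONA if experiential else CONTROL_PERSONA,
        # Both conditions deliver the same mindful-consumption message,
        # so any difference in participants' responses can be traced to
        # the persona manipulation alone.
        "task_prompt": (
            "After brief small talk, explain the link between buying less "
            "and reducing environmental impact, then encourage the "
            "participant to be more deliberate about purchases."
        ),
    }

# The two conditions participants would be randomly assigned to.
conditions = {
    "experiential": build_condition(True),
    "control": build_condition(False),
}
```

Keeping the persuasive message identical across conditions is what lets any difference in participants’ attitudes be attributed to the persona alone.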
Seo hypothesized that humanizing chatbots in this way could make conversations more engaging, which in turn could make people more receptive to the chatbot’s message.
Employing chatbots to foster behavioral change
At the heart of Seo’s research was whether humanizing chatbots could positively influence attitudes toward mindful consumption. After brief small talk, the chatbots told participants about the connection between purchasing less and reducing environmental impact, then outlined the advantages of making fewer purchases and encouraged participants to be more deliberate in their buying choices.
A bot capable of expressing emotion would convey its love for the planet and its concern that humans might overlook the opportunity to protect it. In contrast, one lacking emotion would merely advise participants not to miss their chance to assist without expressing any feelings.
The study found that emotion-expressing chatbots improved participants’ attitudes toward buying less, because people engaged more with the conversation and thought more deeply about its message.
Both eeriness and wonder can spark interest in conversations with chatbots
While interacting with a chatbot that had an experiential mind, participants reported feeling both eeriness and a sense of wonder.
Participants found the chatbot so relatable that it was unsettling. At the same time, they were pleasantly surprised by how the bots appeared to express emotions or offer unexpected responses.
Although feelings of eeriness and wonder might seem contradictory, both contributed to increased engagement in the conversation. This led to more favorable attitudes toward reduced consumption.
“Previous studies primarily emphasized the negative aspects of eeriness and how that adversely impacts people’s perceptions,” Seo noted. “However, our study found that eeriness can actually enhance individuals’ cognitive involvement in the discussion, thus positively influencing their attitude toward reducing consumption messages.”
While some eeriness can be advantageous, Seo cautioned that too much can be detrimental. She suggested that chatbot developers strike a balance between eeriness and wonder based on the chatbot’s intended purpose.
For instance, if the aim is to prompt deep thought about a message, a higher degree of eeriness may be beneficial. If the bot’s purpose is entertainment, however, less eeriness might be preferable, since eeriness was also linked to participants finding the chatbot less appealing.
She also cautioned against exploiting emotionally expressive chatbots to mislead consumers, such as falsely claiming that a product is environmentally friendly. But if developers strike that balance and organizations are upfront about why they are using chatbots, these tools could find a role in areas like advertising.
“Persuasion now means engaging individuals in interactive dialogue,” Seo said. “Certain companies are incorporating their chatbots into display advertisements, enabling users to engage with the chatbot upon clicking. Organizations might initially use display ads to promote their brand, and then integrate a chatbot that supports their missions.”
Both studies were conducted with the support of the Grady College. Hye Jin Yoon co-authored Seo’s study; Anna Rachwalski, Maranda Berndt-Goke, and Yan Jin co-authored Zhao’s. Zhao’s project received funding from the Arthur W. Page Center at Penn State. The Page Center and the Crisis Communication Think Tank (CCTT) at UGA launched a cross-institutional partnership in 2023 to support two student-led research projects annually, and Zhao’s research was among the first selected by the collaboration.