CSU researchers compare social media algorithms to Shakespearean villains
(Colorado News Connection) The regularity of news stories about individuals being misled or even radicalized by social media led two Colorado State University researchers to compare social media algorithms to villains in classic tragedies such as Shakespeare's "Othello."
In a paper published last fall, the researchers examine how algorithms can transform a person's view of reality in ways that lead to harmful actions. Platforms track user engagement with content and then feed users more of what they like.
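The engagement loop described here can be sketched in a few lines of code. This is a hypothetical illustration only, not the platforms' actual ranking systems; the topics, scores, and weights are all invented.

```python
# Hypothetical sketch of engagement-driven feed ranking: each post is
# scored by how much the user has engaged with similar content before,
# and the feed shows only the top-scoring posts. All data is invented.

def rank_feed(posts, engagement_by_topic, feed_size=2):
    """Return the posts predicted to maximize this user's engagement."""
    scored = sorted(
        posts,
        key=lambda post: engagement_by_topic.get(post["topic"], 0),
        reverse=True,
    )
    return scored[:feed_size]

# A user whose history shows heavy engagement with outrage content...
history = {"outrage": 0.9, "sports": 0.2, "local_news": 0.1}
posts = [
    {"id": 1, "topic": "local_news"},
    {"id": 2, "topic": "outrage"},
    {"id": 3, "topic": "sports"},
    {"id": 4, "topic": "outrage"},
]

# ...is shown almost nothing but more outrage content.
feed = rank_feed(posts, history)
```

The feedback is self-reinforcing: whatever the user engages with most is exactly what crowds everything else out of the feed.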
Hamed Qahri-Saremi, assistant professor of computer information systems at Colorado State University and co-author of the paper, said even if you are following a news outlet such as CNN or Fox, you will not see every post from those outlets, only what the feeding algorithm thinks will maximize your engagement.
"It's not about the source, even," Qahri-Saremi explained. "It's about what these feeding algorithms are showing to you. So if you just go onto social media to get your news, most likely you're going to be very polarized. You see the world differently, because a big part of the picture, the true picture of the world, is going to be eliminated, is going to be masked from you because that's the job of the feeding algorithms."
The authors compare algorithms to the Shakespearean character Iago, who uses lies and manipulation to mislead Othello into murdering his wife.
The paper illustrated how platforms learn about users directly, by observing their behavior, including which posts they spend time with and like, and indirectly, by identifying and verifying the platform users most similar to them. The authors refer to this as a "matching mechanism," and users can see its effects in platform suggestions of whom to follow or connect with.
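One common way such a matching mechanism is built is to represent each user as a vector of engagement across topics and rank other users by similarity. This is a hedged sketch of that general technique, not the paper's or any platform's actual method; the users and vectors are invented.

```python
import math

# Hypothetical sketch of a "matching mechanism": each user is a vector
# of engagement with topics, and the most similar users are surfaced as
# follow/connect suggestions. All names and numbers are invented.

def cosine_similarity(a, b):
    """Standard cosine similarity between two engagement vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def suggest_follows(target, others):
    """Rank other users by how closely their profiles match the target's."""
    ranked = sorted(
        others.items(),
        key=lambda item: cosine_similarity(target, item[1]),
        reverse=True,
    )
    return [name for name, _ in ranked]

# Engagement over topics [politics, sports, cooking]:
alice = [0.9, 0.1, 0.0]
others = {"bob": [0.8, 0.2, 0.1], "carol": [0.0, 0.1, 0.9]}

# Alice, who engages mostly with politics, is matched with Bob first.
suggestions = suggest_follows(alice, others)
```

The indirect learning the paper describes falls out of this design: whatever the platform knows about your closest matches, it can apply to you.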
When offering content to users, platforms use social signaling to drive engagement by showing them which friends liked or commented on a post. Qahri-Saremi noted when misinformation is presented, social signals increase the likelihood users will engage.
"The person who sees that misinformation on social media is not just any random person, it's a person that the algorithm has selected and probably have added some social signals to it," Qahri-Saremi pointed out. "This significantly increases the power of this misinformation content."
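The compounding effect of social signals can be illustrated with a toy model in which each visible friend reaction raises the predicted chance of engagement. The base rate and boost factor here are invented for illustration and are not from the paper.

```python
# Hypothetical sketch of social signaling: the predicted probability
# that a user engages with a post rises with each friend shown as
# having liked or commented on it. All numbers are invented.

def engagement_probability(base_rate, friend_signals, boost=0.15):
    """Each visible friend reaction multiplies the chance of engagement."""
    p = base_rate * (1 + boost) ** friend_signals
    return min(p, 1.0)

# The same post, with and without friends' reactions attached:
alone = engagement_probability(0.10, friend_signals=0)
with_friends = engagement_probability(0.10, friend_signals=5)
```

Under this toy model, five visible friend reactions roughly double the chance of engagement, which is the amplification dynamic Qahri-Saremi describes for misinformation.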
Platform algorithms can select, from the many millions of pieces of content circulating on social media, the ones that drive individual user engagement most. With social media platforms primarily in the business of selling advertising, Qahri-Saremi emphasized that the granular data algorithms gather about users makes the platforms some of the most profitable companies around.
"These are some of the best algorithms," Qahri-Saremi stressed. "That's why social media companies are so wealthy. They can sell ads like nobody else; they can customize ads like nobody else. So now the same machine is being used to disseminate misinformation."
The paper suggested methods to combat misinformation, among them an "endorsing accuracy" prompt, such as "I think this news is accurate," connected to the sharing function.
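The intervention could work something like the sketch below: the share only completes once the user has answered the accuracy prompt. The function names and flow are invented for illustration; the paper specifies only the prompt and its connection to sharing.

```python
# Hypothetical sketch of an "endorsing accuracy" prompt tied to the
# share button: a share completes only if the user endorses the post's
# accuracy. Function names and return shapes are invented.

def share_post(post, user_confirms_accuracy):
    """Complete the share only if the user endorsed the post's accuracy."""
    if not user_confirms_accuracy:
        return {"shared": False, "reason": "accuracy not endorsed"}
    return {"shared": True, "post": post}

# A user who declines to endorse accuracy does not share the post.
blocked = share_post("dubious headline", user_confirms_accuracy=False)
shared = share_post("verified report", user_confirms_accuracy=True)
```

The design bet is that forcing a moment of reflection about accuracy, at exactly the point of sharing, slows the spread of content the user does not actually believe.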