Two-thirds of the population relies on social media as a primary source of local and global news. Users of Facebook, Twitter, YouTube, and Instagram frequently disseminate and digest a variety of both serious and frivolous content, much of which is inaccurate or subjective. Studies show that people spread information on these platforms despite knowing that said platforms are not reputable news outlets. This questionable content subsequently influences the formation of offline opinions, interactions, and policies. “People are increasingly dependent on online social networks as sources of information,” Heather Zinn Brooks of the University of California, Los Angeles, said. “I think it’s very important that we look at this by building models to study dynamics in a tractable but realistic way.” During a minisymposium presentation at the 2019 SIAM Conference on Applications of Dynamical Systems, currently taking place in Snowbird, Utah, Brooks presented a theoretical dynamical model of content spread and opinion dynamics on a social network.
The majority of online news content is spread via social media users themselves. Users go online, encounter stories that are either objective or slanted, and circulate the posts if they confirm/support any personal biases — regardless of the material’s reputability. There is also growing concern about the manipulation of content by bots, cyborgs, sockpuppet accounts, and the like. To better understand how online content quality affects its dispersal on social networks, Brooks turned to a dynamical model with influencer nodes representing media accounts; she defines media accounts as those that do not follow other accounts. Her model consists of two parts: (i) a social network structure and (ii) content updating dynamics, which symbolize how online information changes over time. Brooks began by presenting a toy example of a network graph where vertices denote online accounts and edges denote followership. “For example, account \(I\) follows account \(J\) if there is a directed edge from \(I\) to \(J\),” she said.
Figure 1. Schematic of content updating rule.
Next, Brooks employed a bounded confidence mechanism for the purposes of content updating. She assumed that all accounts within a network disperse content at each timestep, and that the ideology of this content updates regularly because of its fluid nature. At every timestep, each node examines the posts of the accounts it follows and updates its own ideology accordingly, provided the material is sufficiently close to its personal opinion (see Figure 1). “At each step, the accounts are reading content, processing it, and determining what they think is valid and what they think is not,” Brooks said.
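A bounded-confidence update of this kind can be sketched in a few lines. The snippet below is a minimal illustration, not Brooks’s actual model: the confidence bound `C`, the update weight `MU`, the network size, and the synchronous averaging rule are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters; Brooks's exact values are not given in the article.
N = 200    # number of accounts
C = 0.3    # confidence bound: content farther than this from one's own opinion is ignored
MU = 0.5   # how strongly an account moves toward accepted content

# Directed followership: follows[i] lists the accounts that account i follows.
follows = [rng.choice(N, size=10, replace=False) for _ in range(N)]

# One-dimensional ideology for each account, drawn uniformly from [-1, 1].
x = rng.uniform(-1.0, 1.0, size=N)

def step(x):
    """One synchronous bounded-confidence update: each account reads the
    posts of the accounts it follows and moves toward the mean of the
    posts that lie within its confidence bound."""
    x_new = x.copy()
    for i in range(N):
        posts = x[follows[i]]
        accepted = posts[np.abs(posts - x[i]) < C]
        if accepted.size > 0:
            x_new[i] = x[i] + MU * (accepted.mean() - x[i])
    return x_new

for _ in range(50):
    x = step(x)
```

Because each account only averages over posts inside its confidence bound, opinions tend to coalesce into separate clusters rather than a single consensus, which is the behavior the bounded-confidence mechanism is designed to capture.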
She then presented and compared simulations of a trial with a one-dimensional ideology space and varied levels of media presence. The first scenario featured 11 media accounts with 225 followers per account, while the second comprised 21 media accounts with 675 followers per account. When the number of accounts increased beyond a certain point, opinions of the content began to split rather quickly and a smaller proportion of online networks ultimately entrained to the media opinion. To further quantify the media’s influence on her network, Brooks complemented this observation with a heat map that plotted followers per media account on the horizontal axis and number of media accounts on the vertical axis. “There seems to be this optimal zone where the media is most impactful in our network,” she said. The presence of too many media accounts caused a decrease in impact level.
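The entrainment experiment can be mimicked with the same bounded-confidence rule plus fixed-ideology media nodes. The sketch below is again an assumption-laden stand-in for Brooks’s simulations: the network sizes, the media ideologies, the entrainment threshold of 0.05, and the random followership are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stand-ins for the article's scenarios (e.g., 11 media
# accounts with 225 followers each); these numbers are not Brooks's.
N_MEDIA = 11      # media accounts (they follow no one, so their ideology is fixed)
FOLLOWERS = 50    # followers per media account
N = 500           # ordinary accounts
C, MU, T = 0.3, 0.5, 100

media_x = np.linspace(-1.0, 1.0, N_MEDIA)   # fixed media ideologies
x = rng.uniform(-1.0, 1.0, size=N)

# Each ordinary account follows 10 random peers; each media account is
# additionally followed by FOLLOWERS randomly chosen ordinary accounts.
follows = [list(rng.choice(N, size=10, replace=False)) for _ in range(N)]
media_follow = [[] for _ in range(N)]
for m in range(N_MEDIA):
    for i in rng.choice(N, size=FOLLOWERS, replace=False):
        media_follow[i].append(m)

for _ in range(T):
    x_new = x.copy()
    for i in range(N):
        posts = np.concatenate([x[follows[i]], media_x[media_follow[i]]])
        accepted = posts[np.abs(posts - x[i]) < C]
        if accepted.size:
            x_new[i] = x[i] + MU * (accepted.mean() - x[i])
    x = x_new

# One crude measure of entrainment: the fraction of ordinary accounts
# whose final ideology lands near some media ideology.
entrained = np.mean(np.min(np.abs(x[:, None] - media_x[None, :]), axis=1) < 0.05)
print(f"fraction entrained: {entrained:.2f}")
```

Sweeping `N_MEDIA` and `FOLLOWERS` over a grid and recording `entrained` for each pair is the natural way to reproduce a heat map of the kind Brooks described.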
Brooks proceeded to generate a heat map of many possible network structures to determine whether the aforementioned result was specific to this particular media structure or indicative of a greater trend. Upon finding that it is actually a generic effect in Facebook friendship networks, she went one step further and examined media entrainment in synthetic networks (see Figure 2). These heat maps were consistent with the previous maps, thus affirming the existence of an optimal zone at which media is most influential. Having confirmed that this is an inherent property of her model, Brooks sought to make her model more realistically applicable to real-world instances of biased media.
Figure 2. Heat maps representing media entrainment in synthetic networks.
She slightly modified her model to account for content quality while retaining the same updating mechanism for content ideology. As expected, accounts were more willing to spread low-quality information if that information was close to their actual opinion. Brooks quantified data from the Ad Fontes Media Bias Chart—which categorizes media sources based on their quality of original, fact-based reporting versus ideological bias (conservative or liberal leanings)—to create a media distribution in ideology-quality space. Further analysis revealed two primary communities of content. “We see the emergence of two prominent echo chambers when we run these simulations,” she said. One chamber embodied a moderate ideology with high-quality content, while the other featured more conservative ideology with lower-quality (more opinion-based) posts.
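One way to express the quality effect described above is a sharing probability that rises with content quality but is rescued by ideological proximity. The functional form below is purely illustrative; the article does not specify how Brooks parameterized quality, so every constant here is an assumption.

```python
def share_probability(content_ideology, content_quality, account_ideology,
                      confidence=0.3):
    """Probability that an account passes content along (illustrative only).

    High-quality content (quality near 1) spreads broadly; low-quality
    content (quality near 0) spreads mainly when it sits close to the
    account's own ideology, echoing the behavior the article describes.
    """
    distance = abs(content_ideology - account_ideology)
    if distance >= confidence:
        return 0.0  # outside the confidence bound: the content is ignored
    proximity = 1.0 - distance / confidence  # 1 at perfect agreement, 0 at the bound
    # Low quality is compensated by proximity to the account's opinion.
    return content_quality + (1.0 - content_quality) * proximity

# A low-quality post spreads far more readily among like-minded accounts:
p_near = share_probability(0.8, 0.1, 0.75)
p_far = share_probability(0.8, 0.1, 0.55)
assert p_near > p_far
```

Coupling a rule like this to the bounded-confidence dynamics, with media ideologies and qualities drawn from an empirical distribution such as the Ad Fontes chart, gives the two-dimensional ideology-quality setting in which the echo chambers emerged.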
Brooks’ model ultimately reveals that one can maximize media influence over a social network by tuning both the number of media accounts and the number of followers per account to an optimal threshold. This maximization is dependent upon the network’s structural features. However, there is still work to be done. She plans to conduct mathematical analysis of media entrainment, incorporate account heterogeneity, and account for structural homophily. “We want to eventually generalize to multilayer networks,” Brooks said, since content spread happens on multiple networks across various social media platforms. “We are excited about the idea of quantifying this from real data and doing some sentiment analysis.”
Lina Sorg is the associate editor of SIAM News.