SIAM News Blog

The Election Interference Game

By Jenny Morber

Election interference, in which one country conspires to elect a favored candidate in another country, is a classic example of a non-cooperative game. Game theory pits two opponents against one another to achieve a goal, wherein one player’s path to success is contingent upon the strategy of the other. Therefore, when news of suspected Russian meddling in the 2016 U.S. presidential election made headlines, economist and mathematician David Dewhurst set out to model the allegations as optimal game play.

Critics of game theory note that it only works when both sides act rationally, whereas people often do not. Dewhurst acknowledges this incongruity but mentions an additional consideration. “If you ever wanted to look for something that is a rational actor when taken as a whole, a foreign or domestic intelligence agency is probably that,” he said. “If any group has the job of acting rationally, it is them.”

In a paper published earlier this year in Physical Review E, Dewhurst and collaborators Christopher Danforth and Peter Dodds model a two-player game in which one country, designated as “Red,” wishes to influence the outcome of a two-candidate election in another country, designated as “Blue” [1]. Red’s goal is the election of preferred candidate A, whereas Blue’s goal is an election that is free from interference — a win that is much more difficult to define. These uneven objectives put the defensive country at a disadvantage. “If you’re the FBI, you don’t want Hillary Clinton to win and you don’t want Donald Trump to win,” Dewhurst said of the 2016 election. “You just want a free and fair election. The issue is that if you take that strategy, Red always wins. So if you want to stop Red from interfering, you actually have to interfere on behalf of one of the other candidates.”

When crafting their model, the researchers considered an election between two candidates, denoted A and B, that was decided by majority vote with no Electoral College. They assume that a public poll \(Z_t \in [0,1]\) represents the election process at any time \(t \in [0,T]\). The model’s dynamics occur in a latent space that is related to the polling process \(Z_t= \phi(X_t)\), where \(\phi\) is the sigmoidal function \(\phi(x)=\frac{1}{1+e^{-x}}\). In this space, \(X_t<0\) represents values that favor candidate A and \(X_t>0\) represents values that favor candidate B.
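The sigmoidal link between the latent state and the observed poll can be sketched in a few lines; the numerical values below follow directly from the definition of \(\phi\), not from the paper's code:

```python
import math

def phi(x):
    """Sigmoidal link: maps the latent state X_t to the poll Z_t in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

# x < 0 favors candidate A (poll share below one half);
# x > 0 favors candidate B (poll share above one half).
low, mid, high = phi(-3.0), phi(0.0), phi(3.0)
```

Evaluating \(\phi\) at \(x = \pm 3\) gives poll shares of roughly 4.7 and 95.3 percent, which is where the boundary values used later in the numerical solution come from.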

The functions by which Red and Blue attempt to influence or deflect influence on the election are one-dimensional continuous-time stochastic processes, denoted by \(u_\textrm{R}(t)\) and \(u_\textrm{B}(t)\). Dewhurst and his team interpret these “control policies” as expenditures on interference operations. Under the influence of both countries’ policies, the election dynamics become

\[\textrm{d}X_t=F(u_\textrm{R}(t), u_\textrm{B}(t)) \textrm{d}t + \sigma \textrm{d}\mathcal{W}_t, \:\:\:\:\: X_0=y.\tag1\]

Assuming that \(F\) is at least twice continuously differentiable, the researchers approximate the state equation to first order as

\[\textrm{d}X_t=[u_\textrm{R}(t) + u_\textrm{B}(t)]\textrm{d}t + \sigma \textrm{d}\mathcal{W}_t, \:\:\:\:\: X_0=y.\tag2\]
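A minimal Euler–Maruyama simulation illustrates these dynamics. The constant control policies and the values of \(\sigma\), \(T\), and \(y\) below are placeholder choices for illustration, not the equilibrium strategies derived later:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, T, n_steps, y = 1.0, 1.0, 1000, 0.0   # illustrative parameters: a tied race
dt = T / n_steps

u_R = lambda t: -0.5   # placeholder policies: Red pushes toward candidate A (negative),
u_B = lambda t: 0.2    # Blue pushes back (positive)

X = np.empty(n_steps + 1)
X[0] = y
for k in range(n_steps):
    drift = u_R(k * dt) + u_B(k * dt)
    X[k + 1] = X[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

Z = 1.0 / (1.0 + np.exp(-X))   # observed polling process Z_t = phi(X_t)
```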

Red and Blue then seek to minimize the expected costs of their own control policies, where \(C_\textrm{R}\) and \(C_\textrm{B}\) respectively represent the running cost or benefit of conducting election interference operations. The expected total costs are

\[E_{u_{R}, u_{B},X}\left\{\Phi_\textrm{R}(X_T)+ \int^T_0 C_\textrm{R}(u_\textrm{R}(t), u_\textrm{B}(t)) \textrm{d}t\right\}\tag3\]

and 

\[E_{u_{R}, u_{B},X}\left\{\Phi_\textrm{B}(X_T)+ \int^T_0 C_\textrm{B}(u_\textrm{R}(t), u_\textrm{B}(t)) \textrm{d}t\right\}.\tag4\]

Here, the cost functions take the form \(C_i(u_\textrm{R}, u_\textrm{B})= u^2_i - \lambda_i u^2_{\neg i}\) for \(i \in \{\textrm{R,B}\}\). The notation \(\neg i\) indicates the set of all other players. Therefore, if \(i=\textrm{R}\), \(\neg i = \textrm{B}\) and \(\lambda_i\) parameterizes the utility that player \(i\) gains by observing player \(\neg i\)'s effort. If \(\lambda_i>0\), player \(i\) gains utility when player \(\neg i\) expends resources.
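This running cost is a direct transcription of the formula above (a sketch; the function and variable names are mine):

```python
def running_cost(u_own, u_other, lam):
    """C_i(u_R, u_B) = u_i^2 - lambda_i * u_{not-i}^2: a player's own effort is
    costly, while the opponent's expenditure yields utility when lambda_i > 0."""
    return u_own**2 - lam * u_other**2
```

For example, with \(\lambda_i = 0.5\), a player spending 1 while the opponent spends 2 has running cost \(1 - 0.5 \cdot 4 = -1\): the opponent's outlay more than offsets the player's own.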

To find optimal play, Dewhurst and his colleagues define final conditions that specify the costs that Red and Blue incur, then solve the game backward through time. Because Red wants to influence the election’s outcome in favor of candidate A, its final cost function \(\Phi_\textrm{R}\) must satisfy \(\Phi_\textrm{R}(x)<\Phi_\textrm{R}(y)\) for all \(x<0\) and \(y>0\). Blue’s objective is much harder to formalize, so the researchers considered three possibilities for its final condition. One possibility is that Blue accepts the election result as long as it does not deviate “too far” from the initial expected value. A discontinuous condition that represents this preference is given by \(\Phi_\textrm{B}(x)= \Theta(|x|- \Delta)-\Theta(\Delta-|x|)\), where \(\Delta>0\) is Blue’s accepted margin of error and \(\Theta(\cdot)\) is the Heaviside step function.
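This discontinuous final condition is straightforward to evaluate numerically. The sketch below uses NumPy's `heaviside` with the half-maximum convention at zero, a choice made here for illustration:

```python
import numpy as np

def phi_B(x, delta):
    """Blue's final cost: -1 if the election lands within delta of the initial
    expectation (an acceptable result), +1 if it drifts farther away."""
    return np.heaviside(np.abs(x) - delta, 0.5) - np.heaviside(delta - np.abs(x), 0.5)
```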

Applying the dynamic programming principle to equations \((2)\)-\((4)\) yields the following system of coupled Hamilton-Jacobi-Bellman equations for the Red and Blue value functions:

\[-\frac{\partial V_\textrm{R}}{\partial t}=\min\limits_{u_\textrm{R}}\left\{\frac{\partial V_\textrm{R}}{\partial x}[u_\textrm{R}+u_\textrm{B}]+u^2_\textrm{R}-\lambda_\textrm{R}u^2_\textrm{B}+\frac{\sigma^2}{2}\frac{\partial^2 V_\textrm{R}}{\partial x^2}\right\},\tag5\]

and

\[-\frac{\partial V_\textrm{B}}{\partial t}=\min\limits_{u_\textrm{B}}\left\{\frac{\partial V_\textrm{B}}{\partial x}[u_\textrm{R}+u_\textrm{B}]+u^2_\textrm{B}-\lambda_\textrm{B}u^2_\textrm{R}+\frac{\sigma^2}{2}\frac{\partial^2 V_\textrm{B}}{\partial x^2}\right\}.\tag6\]

Minimizing the right-hand sides of these equations with respect to the control variables then produces the Nash equilibrium control policies:

\[u_\textrm{R}(t)= -\frac{1}{2}\frac{\partial V_\textrm{R}}{\partial x}\bigg|_{(t,X_t)}\tag7\]

and 

\[u_\textrm{B}(t)= -\frac{1}{2}\frac{\partial V_\textrm{B}}{\partial x}\bigg|_{(t,X_t)},\tag8\]

as well as the exact functional forms of equations \((5)\) and \((6)\). When the resulting system is solved over the entire state space, these controls form the strategies of a subgame perfect Nash equilibrium:

\[-\frac{\partial V_\textrm{R}}{\partial t}=-\frac{1}{4}\Big(\frac{\partial V_\textrm{R}}{\partial x}\Big)^2-\frac{1}{2}\frac{\partial V_\textrm{R}}{\partial x}\frac{\partial V_\textrm{B}}{\partial x}- \frac{\lambda_\textrm{R}}{4}\Big(\frac{\partial V_\textrm{B}}{\partial x}\Big)^2+\frac{\sigma^2}{2}\frac{\partial^2V_\textrm{R}}{\partial x^2}, \:\:\:\:\: V_\textrm{R}(x,T)=\Phi_\textrm{R}(x)\tag9\]

and

\[-\frac{\partial V_\textrm{B}}{\partial t}=-\frac{1}{4}\Big(\frac{\partial V_\textrm{B}}{\partial x}\Big)^2-\frac{1}{2}\frac{\partial V_\textrm{B}}{\partial x}\frac{\partial V_\textrm{R}}{\partial x}- \frac{\lambda_\textrm{B}}{4}\Big(\frac{\partial V_\textrm{R}}{\partial x}\Big)^2+\frac{\sigma^2}{2}\frac{\partial^2V_\textrm{B}}{\partial x^2}, \:\:\:\:\: V_\textrm{B}(x,T)=\Phi_\textrm{B}(x).\tag{10}\]

Dewhurst and his collaborators employ backward iteration to numerically identify the value functions \(V_\textrm{R}(x,t)\) and \(V_\textrm{B}(x,t)\), enforcing a Neumann boundary condition at \(x= \pm 3\); this corresponds to bounding candidate B’s polling popularity between roughly 4.7 percent and 95.3 percent.
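The backward iteration can be sketched as an explicit finite-difference solver for equations \((9)\) and \((10)\). The grid resolution, time step, and \(\sigma\) below are illustrative choices rather than the paper's exact setup; the final conditions \(\Phi_\textrm{R}(x)=x\) and \(\Phi_\textrm{B}(x)=\frac{1}{2}x^2\Theta(-x)\) with \(\lambda_\textrm{R}=\lambda_\textrm{B}=2\) match the paper's simulated example:

```python
import numpy as np

sigma, lam_R, lam_B, T = 1.0, 2.0, 2.0, 1.0   # sigma and T are illustrative
nx, nt = 101, 500
x = np.linspace(-3.0, 3.0, nx)                 # latent space; x = +/-3 maps to ~4.7% / ~95.3% polls
dx, dt = x[1] - x[0], T / nt                   # dt satisfies the explicit-scheme stability bound

# Final conditions: Phi_R(x) = x, Phi_B(x) = x^2/2 * Theta(-x)
V_R = x.copy()
V_B = 0.5 * x**2 * (x < 0)

def derivs(V):
    """First and second spatial derivatives with Neumann conditions at x = +/-3."""
    Vx = np.gradient(V, dx)
    Vx[0] = Vx[-1] = 0.0
    return Vx, np.gradient(Vx, dx)

for _ in range(nt):                            # march backward from t = T to t = 0
    VRx, VRxx = derivs(V_R)
    VBx, VBxx = derivs(V_B)
    rhs_R = -0.25 * VRx**2 - 0.5 * VRx * VBx - 0.25 * lam_R * VBx**2 + 0.5 * sigma**2 * VRxx
    rhs_B = -0.25 * VBx**2 - 0.5 * VBx * VRx - 0.25 * lam_B * VRx**2 + 0.5 * sigma**2 * VBxx
    V_R, V_B = V_R + dt * rhs_R, V_B + dt * rhs_B   # V(t - dt) = V(t) + dt * RHS

# Nash equilibrium controls at t = 0: u_i = -(1/2) dV_i/dx
u_R = -0.5 * derivs(V_R)[0]
u_B = -0.5 * derivs(V_B)[0]
```

With Red's final condition increasing in \(x\), the recovered \(u_\textrm{R}\) is negative on average, i.e., Red pushes the latent state toward candidate A.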

Figure 1. Examples of control policies for Red \((u_\textrm{R})\) and Blue \((u_\textrm{B})\) in a simulated election game. 1a. Optimal expenditures by Red and Blue. 1b. Election results over time. Even when Blue plays optimally to resist, Red is able to influence election results and hinder candidate B. Image courtesy of [1].
Equations \((7)\) and \((8)\) provide the closed-loop control policies for Red \((u_\textrm{R})\) and Blue \((u_\textrm{B})\), given the current state \(X_t\) and time \(t\). Figure 1 displays examples of \(u_\textrm{R}\), \(u_\textrm{B}\), and the electoral process \(Z_t\). In this example, the researchers simulate the game with parameters \(\lambda_\textrm{R}=\lambda_\textrm{B}=2\), \(\Phi_\textrm{R}(x)=x\), and \(\Phi_\textrm{B}(x)=\frac{1}{2}x^2\Theta(-x)\). The control policies (the amount of resources spent attempting to win the election) are plotted in Figure 1a; the thick curves illustrate the average values. Figure 1b shows the path of the electoral process. Red’s optimal play begins with a large amount of interference that decreases on average over time. Blue is at a clear disadvantage: even when both players act optimally, candidate B receives fewer votes than in an election with no interference. Although Blue resists Red’s interference, candidate A wins the election and Red accomplishes its objective. This analysis suggests that Blue must invest resources in electing candidate B to achieve results that imitate a free and fair election.

Dewhurst’s team then conducts a coarse parameter sweep over \(\lambda_\textrm{R}\), \(\lambda_\textrm{B}\), \(\Phi_\textrm{R}\), and \(\Phi_\textrm{B}\) to explore the game’s qualitative behavior. Holding Blue’s final condition of \(\Phi_\textrm{B}(x)=\frac{1}{2}x^2\Theta(-x)\) constant, they compare the means and standard deviations of the Nash equilibrium strategies \(u_\textrm{R}(t)\) and \(u_\textrm{B}(t)\) across values of the coupling parameters \(\lambda_\textrm{R}, \lambda_\textrm{B} \in [0,3]\) as Red’s final condition changes from \(\Phi_\textrm{R}(x)=\tanh(x)\) to \(\Phi_\textrm{R}(x)=\Theta(x)-\Theta(-x)\). 

Some combinations of these parameters lead to an “arms race,” in which the Nash equilibrium strategies drive super-exponential growth in both players’ control policies toward the end of the game. In short, an all-or-nothing mindset causes Red and Blue to spend large amounts of money for no change in outcome. Blue may therefore choose to let the attacking country interfere a little to avoid an arms race and gain additional intelligence benefits. For Red, a partial win might tighten the election and leave candidate B feeling less secure and supported.

In 2015 and 2016, Russian intelligence operatives sought to interfere in the U.S. presidential election in favor of Donald Trump. Evidence shows that they worked in part through social media platforms like Facebook and Twitter. When this attack was discovered, Twitter shut down accounts associated with Russian government (Red team) activity and collected and analyzed the corresponding data.

To test their model, Dewhurst and his colleagues tapped this publicly available data. Since they could not observe Red or Blue’s control policies directly, they used tweets sent by Twitter accounts associated with Russia’s Internet Research Agency as a proxy. The group downloaded nearly three million tweets from 2,848 unique Twitter handles, which were collected by Darren Linvill and Patrick Warren of Clemson University and hosted by FiveThirtyEight [2]; 1,107,361 of these tweets were sent in the year before the 2016 election. The researchers binned the tweets by day to form a daily-count time series from which to infer \(u_\textrm{R}\), and utilized RealClearPolitics poll data to represent the electoral process, ignoring the minor effects of other parties. They then identified the model parameters that best explained these inferred control policies and ultimately demonstrated that their model offers a sound explanation of the inferred variables.
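The day-binning step can be sketched in plain Python; the timestamps below are toy stand-ins for the real dataset:

```python
from collections import Counter
from datetime import datetime

def daily_tweet_counts(timestamps):
    """Bin tweet timestamps (ISO-format strings) by calendar day; the daily
    tweet volume serves as a proxy for Red's control policy u_R(t)."""
    days = [datetime.fromisoformat(ts).date() for ts in timestamps]
    return dict(sorted(Counter(days).items()))

# Toy stand-in for the ~1.1 million pre-election tweets
toy = ["2016-10-01 08:00", "2016-10-01 21:15", "2016-10-02 09:30"]
counts = daily_tweet_counts(toy)
```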

This work represents a simplified construct of a complex and messy system and therefore contains inherent limitations. For example, it does not account for the U.S. Electoral College or minor political parties. The random walk assumption ignores the deterministic influence of past polls on future choices. The strategies do not account for other major news events, and the model does not consider tactics that discourage voting altogether. Yet despite its simplicity, the model provides valuable qualitative insight into U.S. and Russian strategies surrounding the 2016 presidential election and may be informative for future two-candidate political races.


References
[1] Dewhurst, D.R., Danforth, C.M., & Dodds, P.S. (2020). Noncooperative dynamics in election interference. Phys. Rev. E, 101, 022307.
[2] Linvill, D.L., & Warren, P.L. (2018). Troll factories: The internet research agency and state-sponsored agenda building. Resource Centre on Media Freedom in Europe. Retrieved from http://pwarren.people.clemson.edu/Linvill_Warren_TrollFactory.pdf.

Jenny Morber holds a B.S. and Ph.D. in materials science and engineering from the Georgia Institute of Technology. She is a professional freelance science writer and journalist based out of the Pacific Northwest.
