# Understanding Knowledge Networks in the Brain

One strength of the human mind is its ability to find patterns and draw connections between disparate concepts, a trait that often enables science, poetry, visual art, and a myriad of other human endeavors. In a more concrete sense, the brain assembles acquired knowledge and links pieces of information into a network. Knowledge networks also seem to have a physical aspect in the form of interconnected neuron pathways in the brain.

During her invited address at the 2018 SIAM Annual Meeting, held in Portland, Ore., last July, Danielle Bassett of the University of Pennsylvania illustrated how brains construct knowledge networks. Citing early 20th-century progressive educational reformer John Dewey, she explained that the goal of a talk—and learning in general—is to map concepts from the speaker or teacher’s mind to those of his or her listeners. When the presenter is successful, the audience gains new conceptual networks.

More generally, Bassett explored how humans acquire knowledge networks, whether that process can be modeled mathematically, and how such models may be tested experimentally. Fundamental research on brain networks can potentially facilitate the understanding and treatment of conditions as diverse as schizophrenia and Parkinson’s disease.

In mathematical terms, a network is a type of graph: a set of points connected by lines. The particular model for knowledge networks is based on the assumption that humans essentially experience phenomena as discrete events or concepts arranged sequentially in time. Each of these is modeled as a node in a graph, with lines called “edges” linking them together. The edges represent possible transitions between the events or concepts; a particular graph thus describes how knowledge networks interconnect ideas. The first big question for this model is whether optimal pathways of connectivity that maximize learning exist.

### Hot Thoughts and Modular Thinking

To address this problem, Bassett and her collaborators constructed a knowledge network consisting of nodes that represent random stimuli. Each node was connected to the same number of other nodes—creating a \(k\)-regular graph, in graph theory terms—to ensure equal transition probabilities between nodes.

The researchers tested human reactions, assigning key presses or abstract “avatars” to every node. Test subjects “learned” the graph by performing set tasks, and Bassett’s team quantified these performances based on the amount of time the subjects spent responding to each task. The experiments involved two types of graphs: graphs in which nodes clustered together in groups and graphs with evenly-connected nodes in a more lattice-like structure. For example, consider a four-regular graph of 15 nodes where each node is connected to four others. The modular graph could simply cluster the nodes into three linked groups of five (see Figure 1), while the lattice-like graph lacks modules [3].
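One way to make this concrete is to build such a graph in code. The sketch below constructs a 15-node, 4-regular modular graph of three clusters (one plausible wiring, assuming each cluster is a clique minus one internal edge whose endpoints instead link neighboring clusters; the exact construction in the experiments may differ) and then generates a stimulus sequence as a uniform random walk, which gives the equal transition probabilities the experiments require.

```python
import random
from itertools import combinations

def modular_graph(n_clusters=3, cluster_size=5):
    """One possible wiring of a 4-regular modular graph: each cluster is a
    clique minus the edge between its two 'boundary' nodes, which instead
    link neighboring clusters. (A sketch; the published construction may
    differ in detail.)"""
    edges = set()
    for c in range(n_clusters):
        nodes = [c * cluster_size + i for i in range(cluster_size)]
        for u, v in combinations(nodes, 2):
            if (u, v) != (nodes[0], nodes[-1]):  # drop one intra-cluster edge
                edges.add((u, v))
        # last node of this cluster links to the first node of the next one
        edges.add((nodes[-1], ((c + 1) % n_clusters) * cluster_size))
    return edges

edges = modular_graph()
neighbors = {}
for u, v in edges:
    neighbors.setdefault(u, []).append(v)
    neighbors.setdefault(v, []).append(u)

# Every node has degree 4, so each transition out of a node is equally
# likely under a uniform random walk.
assert all(len(ns) == 4 for ns in neighbors.values())

# A stimulus sequence is a random walk on the graph.
random.seed(0)
node, walk = 0, [0]
for _ in range(20):
    node = random.choice(neighbors[node])  # uniform over the 4 neighbors
    walk.append(node)
```

Because every node has the same degree, any difference in subjects’ reaction times must come from the graph’s topology rather than from the transition statistics.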

**Figure 1.** Test subjects had better reaction times (RTs) for switching tasks arranged on nodes within clusters on a graph than for tasks across clusters (left). The graph topology also affected performance, with better performance on clustered nodes than on nodes with a lattice-like topology (right), even when both graphs had the same number of edges per node. Figure adapted from [4].

Although both graphs were \(k\)-regular, subjects learned the modular graphs more efficiently and performed faster on transitions between nodes within a module than on transitions between modules, regardless of the nature of the task. Transitions along edges between clusters elicited slower reaction times than those within modules, an effect known as “cross-cluster surprisal.”

These results suggest that human minds “lump” concepts together, a premise that is borne out by other psychological tests. Despite having equal transition probabilities for all graph edges, the brain evidently distinguishes between topological “distances” by the type of transition performed in the graph. This indicates that humans implicitly recognize the graph’s topology.

To quantify how well test subjects recalled the learned material, Bassett and her colleagues assigned a probability to the time interval \(\Delta t\) between when an event actually happened and when the subjects thought it occurred. Drawing on thermal physics, the team associated \(\Delta t\) with the concept of cognitive “free energy,” which the brain minimizes to conserve computational resources and reduce recall errors [4].

In this language, the probability \(P\) of recall for a given time interval \(\Delta t\) after an event is

\[P (\Delta t)= \frac{1}{Z}\textrm{e}^{-\beta \Delta t},\]

where \(Z\) is the partition function

\[Z = \sum_{\Delta t} \textrm{e}^{- \beta \Delta t}\]

and \(\beta\) is the parameter that sets the distribution scale. Bassett suggested that humans operate at a particular “temperature” \(T \thicksim 1/\beta\) that changes over the course of their lives (this is similar to how temperature is defined in information theory). High “temperatures” (limit as \(\beta \rightarrow 0\)) result in a flat probability distribution, which means poor graph recall. Low “temperatures” (\(\beta \rightarrow \infty\)) ensure that the probability drops precipitously for nonzero \(\Delta t\) values, thus corresponding to an accurate memory. For moderate “temperatures,” the subjects reproduce the basic graph — but with some errors (see Figure 2).
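The effect of this “temperature” on recall is easy to see numerically. The sketch below evaluates the distribution above over a finite set of integer lags (the function name and lag cutoff are illustrative choices, not from the paper): a near-zero \(\beta\) yields an almost flat distribution, while a large \(\beta\) concentrates nearly all probability at \(\Delta t = 0\).

```python
import math

def recall_prob(beta, max_dt=10):
    """Boltzmann-style recall distribution P(dt) = exp(-beta * dt) / Z
    over integer lags dt = 0..max_dt (a finite-lag sketch of the
    formula in the text)."""
    weights = [math.exp(-beta * dt) for dt in range(max_dt + 1)]
    Z = sum(weights)  # the partition function
    return [w / Z for w in weights]

# High "temperature" (beta near 0): nearly flat distribution -- poor recall.
flat = recall_prob(beta=0.01)

# Low "temperature" (large beta): mass piles up at dt = 0 -- accurate recall.
peaked = recall_prob(beta=5.0)
```

Intermediate values of \(\beta\) interpolate between these two regimes, matching the “moderate temperature” case in which subjects reproduce the graph with occasional errors.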

**Figure 2.** Recall of tasks can be modeled using statistical mechanics, where the inverse “temperature” parameterizes the probability of recall over time. High temperatures (low \(\beta\)) produce a messy graph recall, while lower temperatures represent increased accuracy. The darkness of the graphs’ edges indicates how well subjects remember the transitions between tasks on the graph topology from Figure 1. Figure courtesy of [4].

Continuing with the physics metaphor, this model is akin to the difference between materials at various temperatures. Solids are often highly ordered at low temperatures, with atoms arranged in predictable patterns; raising temperatures destroys the order, giving rise to random and time-variable fluid arrangements.

### Networks and Learning

Bassett then presented a larger question: Is it possible to design optimal knowledge networks that help people learn? Cognitive function obviously varies considerably from one individual to another. At the same time, the mind’s ability to reconstruct modular patterns seems to indicate a general, shared means of operation. Bassett posed the question as follows: Are the actual, physical neural networks in a brain modular?

To explore this hypothesis, she and her collaborators tested subjects while they were inside a magnetic resonance imaging (MRI) machine. They found that real brain networks are multilayered and dynamic, as opposed (for example) to one static group of cells that routinely corresponds to learning.

More to the point, the flexibility of physical brain networks was apparently linked to individual cognitive capabilities. Low flexibility limited subjects’ ability to learn and retain information, but Bassett’s team noted that certain test subjects with schizophrenia exhibited very high flexibility along with other deficits in function, leading the group to hypothesize an optimal range of flexibility for supporting cognitive performance [1].

Bassett and her colleagues were interested in connecting their mathematical model with the MRI results to examine a possible correlation between cognitive control and brain dynamics. MRI studies indicate that network control increases as children develop, approaching an asymptotic maximum roughly at age 20 in healthy brains [2].

When discussing cognitive abilities and mental health, researchers must always be cautious about ethical complications. Bassett and her collaborators are prudent to argue that one should handle issues involving brain control with caution in order to prevent misuse [5]. Understanding how the mind works leads to questions of how and when modifying its control is possible or advisable. While control structures can be beneficial in treating some cognitive conditions resulting from a lack of internal control, they can give rise to possible misuses as well (and not just science-fiction-style whole-brain hijacking scenarios).

Bassett presented a therapeutic argument for this avenue of research: if human brains control their network functions in particular ways, introducing external controls to change or enhance certain behaviors might be possible. For conditions like epilepsy or Parkinson’s disease, these modifications could be extremely helpful.

Ultimately, network models facilitate one’s understanding of how the mind, and possibly the brain itself, functions. Thus, researchers could extend many areas of applied mathematics—currently used in information theory, network control, and thermodynamics—to the study of the mind. Such connections satisfy the human impulse to find and transform patterns into something new.

*Bassett’s presentation is available from SIAM either as slides with synchronized audio or a PDF of slides only.*

**References**

[1] Braun, U., Schäfer, A., Walter, H., Erk, S., Romanczuk-Seiferth, N., Haddad, L.,…Bassett, D.S. (2015). Dynamic reconfiguration of frontal brain networks during executive cognition in humans. *PNAS, 112*(37), 11678-83.

[2] Gu, S., Pasqualetti, F., Cieslak, M., Telesford, Q.K., Yu, A.B., Kahn, A.E.,… Bassett, D.S. (2015). Controllability of structural brain networks. *Nat. Comm., 6*, 8414.

[3] Kahn, A., Karuza, E.A., Vettel, J.M., & Bassett, D.S. (2018). Network constraints on learnability of probabilistic motor sequences. *Nat. Hum. Behav., 2*, 936-947.

[4] Lynn, C.W., Kahn, A.E., & Bassett, D.S. (2018). Structure from noise: Mental errors yield abstract representations of events. Preprint, *arXiv:1805.12491*.

[5] Medaglia, J.D., Zurn, P., Sinnott-Armstrong, W., & Bassett, D.S. (2017). Mind control as a guide for the mind. *Nat. Hum. Behav., 1*, 0119.

**Further Reading**

Kim, J.Z., Soffer, J.M., Kahn, A.E., Vettel, J.M., Pasqualetti, F., & Bassett, D.S. (2018). Role of graph architecture in controlling dynamical networks with applications to neural systems. *Nat. Phys., 14*, 91-98.