
The Mathematics of Keeping a Musical Beat

By Amitabha Bose, Áine Byrne, and John Rinzel

When you recall a favorite song from your past, you may “hear” it playing in your mind. How is this song stored in your memory, and how can you recall it so quickly years later? Although many such queries remain unanswered, a growing number of studies are addressing questions about how our brains process musical rhythms. Unlike most animals, humans can recognize and reproduce rhythmic sound patterns. We tap our feet and sway to music in sync with the rhythm. Our brain’s extraction of a rhythmic pattern is known as beat perception. A diverse set of brain regions is active during this process [1], including auditory processing areas and motor areas [2]. One major theory of beat perception, based on neural entrainment [3], relies on principles of dynamical systems and postulates that music drives and entrains the activity of neurons, allowing them to oscillate at the beat frequency.

Beyond beat perception, we wonder how our brains produce musical rhythms. For example, how do we know that we are not playing too quickly or too slowly? What mechanisms allow us to make judgments about elapsed time and predict future events? We can recast these types of questions into mathematical ones. Consider the most basic beat produced by a metronome, a set of tones evenly spaced in time. For such a temporally periodic rhythm, we ask what kinds of neuronal systems can produce limit cycle oscillations whose periodicity marks the beat. How are the desired properties of these limit cycles created by the network? For instance, when keeping a beat we may ignore distractors or small deviants; modeling-wise, how does a neuronal system produce a limit cycle oscillation that is robust to perturbations? Or how does the limit cycle’s period easily and quickly adjust to changes in beat tempo? Using tools from mathematical modeling and dynamical systems, we have begun to study how the brain forms an internal metronome, a process we call beat generation. We refer to the neuronal representation of this as a beat generator (BG).

To produce simple metronome-like rhythms, our brains must estimate the timing of successive beats. By comparing running estimates of consecutive interbeat intervals, we presumably possess the ability to adjust beat timing at the next cycle. This strategy is referred to as an error-correction scheme. A convenient way to assess our ability to generate and maintain a beat is to measure an individual’s ability to finger tap to a metronome. This mathematically amounts to a two-dimensional map — an iteration scheme in which information about the current intertap interval length \(T_n\) and phase of tapping \(\theta_n\) relative to a tone can predict \(T_{n+1}\) and \(\theta_{n+1}\). This type of scheme was formulated at an algorithmic level by Jiří Mates [4] and utilized in a variety of subsequent studies [5-6]. However, such studies do not identify the neuronal mechanisms that implement error-correction schemes in real time. And the mechanisms by which beat-keeping becomes a learned behavior remain unknown.
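To make this concrete, the short Python sketch below iterates a generic linear error-correction map of this flavor. It is an illustration rather than the specific formulation in [4]; the correction gains alpha and beta, and the use of the tap-to-tone asynchrony as a stand-in for the phase \(\theta_n\), are our own simplifying assumptions.

```python
# A minimal sketch (ours, not the exact scheme of [4]): a linear two-variable
# error-correction map.  The state is the current intertap interval T_n and the
# tap-to-tone asynchrony e_n, which plays the role of the phase theta_n.
# alpha and beta are hypothetical phase- and period-correction gains.

def error_correction_step(T_n, e_n, S, alpha=0.5, beta=0.1):
    """One iteration of the map; S is the metronome period (e.g., 0.5 s at 2 hertz)."""
    T_next = T_n - beta * e_n                 # period correction
    e_next = (1.0 - alpha) * e_n + (T_n - S)  # phase (asynchrony) correction
    return T_next, e_next

# Example: start tapping 40 ms late with a slightly long internal period.
T, e, S = 0.52, 0.04, 0.50
for n in range(10):
    T, e = error_correction_step(T, e, S)
    print(f"tap {n + 1}: interval = {T * 1000:.1f} ms, asynchrony = {e * 1000:.1f} ms")
```

For these illustrative gain values the map contracts toward the metronome period and zero asynchrony, which is exactly the fixed point a successful error-correction scheme should have.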

We recently derived a framework based on biophysical principles for an internal neuronal metronome [7]. We sought to identify basic neuronal mechanisms that may underlie our capacity to learn repeated time intervals, i.e., the ability to judge the length of a single time interval, compare that judgment with prior time intervals, and make necessary corrections to the subsequent interbeat interval. In short, we demonstrated how one could implement error-correction schemes with a network of neurons. The framework consists of two main components: a beat generator that learns the frequency and phase of an external sound stimulus, and a gamma count comparator that provides a neuronal mechanism to refine the estimate of repetitive time intervals. We suggest ways in which the network makes adjustments to biophysical parameters associated with the BG to meet the following two main criteria: (1) the BG quickly learns to match a rhythmic external sound stimulus across a range of frequencies (learning a beat), and (2) the BG maintains that frequency even after the external sound stimulus is removed (keeping the beat).

For the BG to meet these criteria, we proposed the existence of different learning processes based on the simple notion of counting, whereby longer intervals are subdivided into counts of shorter reference intervals. In the context of music, the time intervals of interest are short (100-2000 milliseconds) and the recall and comparison processes occur in real time. Thus, our model needs a means of counting that relies on subdivisions much smaller than a typical reference unit of, say, one second. Our model takes into account several of the ongoing brain rhythms that occur when we are awake and active, including gamma frequency oscillations (30-90 hertz). Suppose we use a 40-hertz gamma oscillator with one cycle every 25 milliseconds. We first postulate that counters exist to keep track of the number of such gamma oscillations between important events ([8, 9] indicate that count-selective neuronal networks operate in various contexts). We would thus estimate an interval of 500 milliseconds with a gamma counter recording 19, 20, or 21 oscillations, depending on the counter’s frequency.
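The toy calculation below (our illustration, not code from [7]) makes this counting explicit: it counts how many cycles of a noisy 40-hertz gamma oscillator fit inside a 500-millisecond interval, and small frequency fluctuations are enough to return 19, 20, or 21.

```python
import random

def gamma_count(interval_ms, mean_freq_hz=40.0, jitter_hz=1.0, seed=None):
    """Count the complete gamma cycles that fit inside an interval.

    jitter_hz is an assumed standard deviation for the oscillator's frequency,
    drawn anew on each cycle; it is illustrative, not a value from [7].
    """
    rng = random.Random(seed)
    elapsed_ms, count = 0.0, 0
    while True:
        freq = rng.gauss(mean_freq_hz, jitter_hz)
        elapsed_ms += 1000.0 / freq      # one gamma cycle, roughly 25 ms
        if elapsed_ms > interval_ms:
            return count
        count += 1

# A 500 ms interval read out at ~40 hertz gives roughly 20 counts, give or take one.
print([gamma_count(500.0, seed=s) for s in range(5)])
```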

Figure 1. Stationary behavior of the model after learning a fixed metronomic stimulus frequency of two hertz. 1a. Spikes of the beat generator (BG) are aligned with the metronome (black ticks at -40) for the duration of the simulation. 1b. The upper panel shows how the value of a biophysical parameter \(I_{\text{bias}}\) evolves and remains near the gray solid line, which would yield exactly two hertz oscillations. The lower panel indicates the timing error between BG spikes and metronome ticks. Dashed lines depict accuracy within one gamma cycle. During drifting, the timing error systematically changes while the parameter \(I_{\text{bias}}\) remains constant. During correcting, the value of \(I_{\text{bias}}\) updates and the timing errors are brought closer to zero.
Second, we postulate that our brains can compare integer estimates from different gamma counters; this comparison is the job of our model’s gamma count comparator. If the count between BG spikes is less than the count between metronome tones, then the BG is too fast and a biophysical parameter is adjusted to slow it down. In the opposite scenario, a speed-up signal is issued. By tracking the difference in counts, we hypothesize that the brain can adjust the BG frequency to minimize the count difference and thereby estimate a time interval and the phase of stimulus onset events. In this way, the BG implements error correction and is ultimately able to learn and keep a beat.
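In pseudocode terms, the comparator logic might look like the following sketch. The adjustable drive parameter i_bias, the learning step delta, and the one-count tolerance are our own stand-ins for the biophysical quantities in [7].

```python
def compare_and_adjust(bg_count, metronome_count, i_bias, delta=0.05):
    """Sketch of the gamma count comparator: compare the number of gamma cycles
    between BG spikes with the number between metronome tones and nudge a
    hypothetical drive parameter i_bias (assumed: larger i_bias = faster BG)."""
    diff = bg_count - metronome_count
    if abs(diff) <= 1:
        return i_bias             # within one gamma cycle: leave the BG alone ("drifting")
    if diff < 0:
        return i_bias - delta     # fewer counts: BG is too fast, so slow it down
    return i_bias + delta         # more counts: BG is too slow, so speed it up
```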

Our model is novel in suggesting how our brains handle errors in beat timing that are smaller than our perceptual ability to discern them. When timing errors fall below this perceptual threshold, our brains cannot detect them and make subsequent corrections; from a musical point of view, we thus do not generally keep a perfect beat. We quantify this in our model by saying that the BG is keeping a beat if it produces timing errors of less than one gamma cycle (\(\sim\)25 milliseconds). This error monitoring is ongoing even when there is no apparent adjustment to the BG spike timing (see Figure 1). Indeed, we most likely experience moments where we believe we are keeping a beat and therefore do not consciously make adjustments (see “drifting” in Figure 1b). Yet in other moments, we realize that we are about to go off beat and start actively correcting the error (see “correcting” in Figure 1b).

Our work raises several mathematical questions centered within dynamical systems theory. For example, our model is a hybrid dynamical system that incorporates both continuous-time flows (e.g., the membrane potential of a neuron) and discrete components (e.g., the integer counts provided by the gamma count comparator). This leads to the study of higher-dimensional hybrid maps for error correction. Because our model tolerates errors within gamma cycle accuracy, the solutions of interest are not necessarily fixed points or periodic orbits. They may or may not be chaotic, and categorizing their specific properties remains an open question. At a much broader scale, it is likely that large sets of neurons from diverse areas of the brain participate in beat generation. Deriving mathematical models for these networks with appropriate, biophysically based learning rules will be a challenging endeavor.
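As a toy illustration of such a hybrid system (again ours, and much simpler than the model in [7]), the sketch below couples a continuous leaky-integrator voltage, which spikes when it crosses a threshold, to a discrete update of its drive at every metronome tone; all parameter names and values are assumptions chosen only to make the example run.

```python
def simulate_hybrid_bg(stim_period=0.5, n_tones=100, dt=1e-3,
                       tau=0.1, v_thresh=1.0, gain=0.05):
    """Toy hybrid dynamical system: continuous flow (leaky-integrator voltage v)
    plus discrete events (drive I is error-corrected at each metronome tone).
    Parameter names and values are illustrative, not taken from [7]."""
    v, I = 0.0, 1.2                       # continuous state and adjustable drive
    t, last_spike, last_period = 0.0, 0.0, None
    next_tone, tones_heard = stim_period, 0
    while tones_heard < n_tones:
        v += dt * (-v + I) / tau          # continuous flow: dv/dt = (-v + I) / tau
        t += dt
        if v >= v_thresh:                 # BG "spike": reset and record its period
            last_period, last_spike, v = t - last_spike, t, 0.0
        if t >= next_tone:                # discrete event: adjust the drive
            tones_heard += 1
            next_tone += stim_period
            if last_period is not None:
                # If the BG period is too long, increase the drive to speed it
                # up, and vice versa (assumed sign convention).
                I += gain * (last_period - stim_period)
    return I, last_period

I_final, period_final = simulate_hybrid_bg()
print(f"learned drive: {I_final:.3f}, BG period: {period_final * 1000:.0f} ms")
```

Even in this caricature, the interesting objects are not classical periodic orbits of a smooth flow but trajectories of a map that interleaves continuous evolution with event-triggered parameter jumps.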

With regard to neuroscience, we study how the brain learns repeating sequences in music. Our work raises important questions that we plan to address and hope other researchers will also be compelled to pursue. For instance, how do we form the circuits that represent these sequences in our brain? How do we store these rhythmic memories? Our error-correction approach relies on “learning rules”; perhaps these are special rules that only humans and a few non-human animals possess. How can one ascertain this? Do impairments in rhythm detection or generation tell us something about neuronal deficits? Music is often used as a treatment for movement-related disorders, such as Parkinson’s disease. Studying the mechanisms responsible for beat generation may help us understand the success of these treatments, as well as the deficits in beat perception and generation in such clinical populations.


Amitabha Bose presented this work during a minisymposium at the 2019 SIAM Conference on Applications of Dynamical Systems, which took place earlier this year in Snowbird, Utah. 

References 
[1] Grahn, J.A. (2012). Neural mechanisms of rhythm perception: Current findings and future perspectives. Top. Cog. Sci., 4(4), 585-606. 
[2] Grahn, J.A., & Brett, M. (2007). Rhythm perception in motor areas of the brain. J. Cog. Neurosci., 19(5), 893-906.
[3] Large, E.W., Herrera, J.A., & Velasco, M.J. (2015). Neural networks for beat perception in musical rhythm. Front. Syst. Neurosci., 9(159), 1-14.
[4] Mates, J. (1994). A model of synchronization of motor acts to a stimulus sequence. I. Timing and error corrections. Biolog. Cybernet., 70(5), 463-473.
[5] Repp, B.H. (2005). Sensorimotor synchronization: A review of the tapping literature. Psychon. Bullet. & Rev., 12(6), 969-992.
[6] van der Steen, M.C.M., & Keller, P.E. (2013). The Adaptation and Anticipation Model (ADAM) of sensorimotor synchronization. Front. Human Neurosci., 7(253), 1-15.
[7] Bose, A., Byrne, Á., & Rinzel, J. (2019). A neuromechanistic model for rhythmic beat generation. PLoS Comput. Biol., 15(5), e1006450.
[8] Naud, R., Houtman, D.B., Rose, G.J., & Longtin, A. (2015). Counting on dis-inhibition: a circuit motif for interval counting and selectivity in the anuran auditory system. J. Neurophysiol., 114, 2804-2815.
[9] Chamberland, S., Timofeeva, Y., Evstratova, A., Volynski, K., & Tóth, K. (2018). Action potential counting at giant mossy fiber terminals gates information transfer in the hippocampus. Proc. Natl. Acad. Sci. USA, 115(28), 7434-7439.

Amitabha Bose is a professor of mathematical sciences at the New Jersey Institute of Technology. 
Áine Byrne is a Swartz postdoctoral fellow at New York University, working in the Center for Neural Science. She recently accepted a faculty position at University College Dublin in the School of Mathematics and Statistics.
John Rinzel is a professor at New York University jointly appointed in the Center for Neural Science and the Courant Institute of Mathematical Sciences. 